5 Responses

  1. Bob Warner, September 10, 2016 at 5:04 pm

    The table doesn’t seem to capture any of ACIAR’s impact evaluations – is there a reason for that? Are they not considered to be methodologically sound? There are quite a few for the Pacific – see here

    1. Anthony Swan, September 12, 2016 at 10:44 am

      At least one paper in that ACIAR series uses the term “impact evaluation” in a way that does not seem consistent with 3ie’s criteria; other papers in the series are about “impact assessments”, which also do not seem consistent with 3ie’s criteria. From limited sampling, it is likely there is an issue with the methodology from a 3ie perspective, although I’m not saying there is anything wrong with the methodology given the objectives of the ACIAR papers. Alternatively, the ACIAR series may simply not meet 3ie’s publication criteria.

  2. John Gibson, September 9, 2016 at 12:26 pm

    It is not for me to tell 3ie how to categorize things, but as an author of several of those 18 papers I wouldn’t consider many of them to be “impact evaluations”. By my definition, perhaps only the financial literacy study qualifies, since it had explicit investigator-driven randomization.

    We would hope that all econometric studies seeking to uncover causal effects would use good research designs that are not (too) prone to omitted variable bias (aka selection effects). Thus, I would think that the work we did on transport access and poverty in PNG using an IV strategy would qualify, despite 3ie omitting it. When randomization is available, as with the migration lotteries, it is good to use it. But it is hardly the silver bullet that too many development people think it is. “What works” should more properly be called “what worked, in that particular context, when implemented by those particular people”, because absent a theory as to why it worked there is no guarantee that it will work elsewhere – e.g. Rozelle et al. (Stanford REAP) have new results on deworming that show no effect on school performance in rural Western China, contrary to the much-publicized effects in Kenya.

    1. Anthony Swan, September 12, 2016 at 11:25 am

      Your reminder that these impact evaluations at best tell us “what worked” rather than “what works” is a good one. We should be looking to draw evidence from many evaluations rather than a very small number. That is why 18 (plus or minus a few) for PNG and the Pacific isn’t going to tell us very much.

  3. Anthony Swan, September 9, 2016 at 11:33 am

    For those interested, here is a discussion paper on impact evaluations from AusAID / ODE.
    The paper is from 2012 and shows that greater use of impact evaluations was on the radar. Perhaps the embers of the Impact Evaluation Working Group are still warm?
