Of Jeffrey Sachs, the Millennium Villages Project, and evidence

Jeffrey Sachs, arguably the world’s most influential development economist, is no stranger to criticism. From the right, academics such as William Easterly have been attacking his advocacy of aid for at least a decade. On the left his opponents have been just as strident in critiquing his advocacy of privatisation, structural adjustment and trade liberalisation for even longer (for archetypal examples see this review in the Left Business Observer and critical opinion in the Nation Magazine here). Yet the latest round of criticism of Sachs feels different. It comes in the form of an article in Foreign Policy magazine that takes aim at the Millennium Villages Project of integrated village-level aid assistance that Sachs has very publicly championed.

The critique feels different in part because it takes the form of a news article, canvassing the views of a range of development thinkers, rather than an op-ed-style attack from just one opponent. And it feels different in part because it is published in the sort of US establishment journal that has been pretty friendly to Sachs in the past. But most importantly, to me at least, it feels different because it has little to do with ideology (the terrain of much previous Sachs sparring) and a lot to do with evidence: in particular, the fact that Sachs failed to set up gold-standard impact evaluations to help assess whether his Millennium Villages work did in fact work, and that he and his co-authors have also made errors in academic work that purported to show the project’s successes. In the words of the article:

These days, though, Sachs is increasingly on the defensive, assailed by a growing number of critics for what they say are fundamental methodological errors that have arguably rendered his Millennium Villages Project (MVP) — now consisting of 14 village clusters scattered across Africa and covering half a million people — worthless as a showcase for what can lift the poorest of the poor out of their misery. In May 2012, shortly after an editorial in Nature, the influential science journal, scolded Sachs and his colleagues for unreliable analysis, Sachs and his team were forced to admit they had committed a basic error in an academic paper intended to prove their project’s effectiveness. “The project’s approach has potential, but little can be said for sure yet about its true impact,” Nature stated.

This strikes me as progress. I’m not opposed to ideology (like everyone I have my own). And I’m not opposed to Jeffrey Sachs, whose suggestions are often interesting and sometimes inspired (an MVP response to the Foreign Policy article can be found here). I don’t think randomised controlled trials (the sort of impact evaluation that should have been established at the outset of the Millennium Villages Project) are a panacea or without their own issues (for an interesting critique see this blog post by Paul Farmer). And I’m not the sort of naive empiricist who thinks that research is sufficient on its own to bring us the answers we seek in development. Yet for decades aid and development work has been the battleground of big, bold ideas, none with particularly wonderful track records in practice. And for much of that time we’ve done too little to gather good systematic evidence of what works, where it works, and why it works. Ideology will always be with us, and the battle of ideas is necessary, but maybe now all of this will be augmented with more and better evidence? I hope so.


Terence Wood

Terence Wood is a Fellow at the Development Policy Centre. His research focuses on political governance in Western Melanesia, and Australian and New Zealand aid.


  • It strikes me as odd that the “battle of ideas” and the call for evidence in the development industry/academia seems to take priority over cooperation and collaboration in order to end the suffering and deaths of millions in extreme poverty. Reducing poverty, after all, is our key motivation, right? It’s also interesting to hear from critics of the MVP who have never traveled to the MVP sites. If they had, they would have found that prior to the MVP, it was living hell. This was extreme poverty at its worst, with no sanitation, no access to clean water, and people dying daily of disease, malnutrition and hunger. After the MVP, one finds an entirely different world: people moving around, working, going to school — in essence, living. So when I hear “where’s the evidence?” from development critics, I find it ludicrous.

    • Thanks Susana,

      I’m definitely in favour of cooperating to end suffering and death. The trouble is, we don’t have that good an idea of what works best in doing this. Aid isn’t guaranteed to work, and aid work has as many failures to its name as it does successes. This is why I think we should look for evidence.

      For the record, I think the MVP was an inspired idea and definitely worth doing, but also worth testing thoroughly. We do know that things have gotten better in MVP villages, but in the case of at least some of the villages — and this is the heart of the Clemens critique — we also know that things have improved significantly in surrounding areas as well. So it is very hard to say for certain that the MVP interventions caused the improvements. That is an evidential weakness.

      To the credit of the people behind MVP they are, as I understand it, working to develop better impact evaluations.

      Which is great. Progress in the direction of evidence-based practice — something that I’m celebrating in this post.

      Thanks again for your comment.

      • Terence, thanks for your response. I would just add that many, if not all, of the MVPs are on track to meet the Millennium Development Goals (MDGs), which is not necessarily the case outside of the MVPs for the country as a whole, although many have made progress on the MDGs.


        • Thank you Susana,

          It’s good to hear of the progress towards meeting the MDGs and hopefully with time we will be clearer as to the processes involved.


    • Susana, the comments you make here and elsewhere actually do not speak to the issue here. We have limited funds to alleviate poverty – how do we use them best? Sure the MVP has done great work. But if I had flown over the village and thrown the equivalent amount of money out of the window, would it have done as much good, if not more? Aid interventions are expensive, often involve lots of expensive foreign experts, and often don’t fix the problem over the longer term. Unless we measure carefully, and know our impact is what is making a difference, not the overall economy or other factors, our scaled-up interventions may not work at all and simply be wastes of money that could have made a bigger impact if used in other ways.

      Remember, every dollar spent on the MVP is a dollar not spent on e.g. immunization, de-worming, maternal health, education etc. somewhere else — somewhere it may have done even more good, somewhere we have the evidence to know it WOULD have made a difference. Unless we measure, we won’t know — and we won’t be solving the challenge of poverty.

      • Thank you Stephanie,

        Your point about counterfactuals is well taken and very well put.


      • Stephanie, thanks for your reply. I understand the importance of measuring. In this case, no one else was doing what the MVP was doing. No similar project existed and the MVP brought in additional funders (private enterprise, new partnerships etc) that did not exist before. The bigger question is “how do we know what we know?” Measuring is important, but it is only ONE of the tools in epistemology. Other ways to know whether things work include: historical knowledge, practicing knowledge (which should be further developed in methods), experience, working with local communities and local leaders, etc. I agree that all of these other “ways of knowing” should be as rigorously applied as formal measurement methods, but they are just as valuable ways of determining what works.

  • Thanks Stephanie,

    It’s interesting, I think, that in the battle of ideas, the strength of belief it takes to propel one’s particular idea to the top is almost inevitably at odds with the sort of self-doubt that leads one to be more likely to want to test and measure and check.

    On One Laptop Per Child: you may have already seen it, but the Boston Review had a great discussion of ICT for development (including OLPC) a couple of years ago:


  • This is a fascinating debate and one I’ve been following over a number of years. One Laptop Per Child is the other big anti-poverty campaign being run by high-profile academics that, like the MVP, refuse to countenance any form of robust assessment. Instead, in each case we have senior academics and public figures somehow convincing themselves that the burden of proof they would expect from anyone else doesn’t apply to themselves, because they are doing “good works” – the path to (development) hell being paved, as we know, with good intentions.

    It would have been interesting to look at the big-bang intervention concept properly to see if it works. It has some logic to it — after all, a company can’t grow if it doesn’t attract sufficient capital, so it seems likely that a village won’t either. Unfortunately Sachs never let his ideas be put to a rigorous test — which does suggest that perhaps he doubted they would stand up to it.
