If you’ve ever thought carefully about international development you will be tormented by shoulds. Should the Australian government really give aid rather than focus on domestic poverty? Should I donate more money personally? And if so, what sort of NGO should I give to?
The good news is that William MacAskill is here to help. MacAskill is an associate professor in philosophy at the University of Oxford, and in Doing Good Better he wants to teach you to be an Effective Altruist.
Effective Altruism is an attempt to take a form of consequentialism (a philosophical viewpoint in which an action is deemed right or wrong on the basis of its consequences) and plant it squarely amidst the decisions of our daily lives. MacAskill’s target audience isn’t limited to people involved in international development, but almost everything he says is relevant.
Effective Altruists contend we should devote as much time and as many resources as we reasonably can to help those in greater need. They also want us to avoid actions that cause, or will cause, suffering. Taken together, this means promoting vegetarianism, (probably) taking action on climate change, and–of most interest to readers of this blog–giving a lot of aid. That’s the altruism. As for effectiveness, MacAskill argues that when we give we need to focus on addressing the most acute needs, while carefully choosing what works best.
What’s not to like? Quite a lot it turns out, at least when it comes to aid. (I also have issues with his views on climate change and fair trade, but I’ll stick to aid here.)
But before outlining the issues with MacAskill’s argument, it is worth focusing on what he does well, because at times this book is superb. It’s brilliantly written, which you’ll appreciate if you’ve spent years battling development texts. There are no unnecessary chapters. There is no sedative effect. No shards of jargon. There aren’t even any thickets of equations. Instead, there is lucid prose, sound thinking and powerful examples. He’s at his best in early chapters where, for example, he uses survey data on income and happiness to argue the moral case for prioritising helping poor people in developing countries. There’s a philosophical argument involved, but he makes it adroitly, making his case seem like nothing more than common sense. (Indeed, while philosophy guides MacAskill, it sits gently in the background throughout. If you want an explicit treatment of a view similar to MacAskill’s you could try the chapters on global poverty in recent editions of Peter Singer’s Practical Ethics).
MacAskill also makes two powerful practical points about aid work: not all good deeds deliver benefits, even when well-intended; and, even when they do work, not all good deeds deliver the same amount of good. Following from this, MacAskill argues that we need to gather good evidence on efficacy, and prioritise aid work that delivers the most benefit.
This is absolutely correct in principle. Alas, however, it is hard in practice, and it’s on the practical issue of evidence that MacAskill stumbles. The evidence MacAskill wants is much harder to get than he thinks. This is because he wants Randomised Control Trials (RCTs), and preferably meta-analyses of RCTs. He makes the case for RCTs very well (if you read nothing else, read pages 70-74 where he uses the reality TV show Scared Straight to do this). And RCTs per se aren’t the problem: certain types of aid work lend themselves well to RCTs, and we should undertake RCTs much more often than we do for these types of work. The problem is that much aid work simply cannot be evaluated this way. MacAskill seems to know this. Yet it doesn’t stop him from recommending that aspiring Effective Altruists (and presumably donor governments) focus their aid on a small subset of NGOs that undertake work that has been evaluated favourably through RCTs (usually medical work in Africa), while at the same time sniffing (p. 120) that:
“You won’t find mega-charities like World Vision or Oxfam or UNICEF on…[lists of recommended charities]… These charities run a variety of different programs, and for that reason they are very difficult to evaluate. I also think it’s unlikely that, even if we were able to evaluate them in depth, we would conclude that they are as effective as the charities I list here.”
Really? Let’s take Oxfam Great Britain (I’m purposefully avoiding Australian NGOs, so as not to be seen to be showing favouritism). It is quite true that none of their work on global inequality or climate change could be evaluated with an RCT. And maybe all their work in these areas will be for naught. But I would say there’s a reasonable chance it has been, or may some day be, a contributor to huge improvements in human well-being. Likewise, the work that many NGOs devote to advocacy about government aid programmes can’t be evaluated with an RCT. Yet in most countries the aid governments give is much larger than privately funded NGO work, meaning the quality of government aid matters more, and without advocacy groups it would probably be worse. Randomised control trials are great, but in instances like these, if you want to do good you will simply have to take a leap of faith. This shouldn’t be blind faith, and we should always try to gather evidence, but working out whether or not particular aid endeavours deserve our attention is not nearly as tidy as MacAskill imagines it to be.
The irony of this is that in other parts of the book MacAskill happily lets his standards slip. His endorsement of carbon offsetting comes on the basis of an NGO’s website and some calculations on the back of an envelope. I don’t think MacAskill is wrong in breaking out the envelope in that instance. It’s just a pity that a clear-headed thinker, who focuses a lot of his thought on international development, has failed to think carefully through the challenges of gathering evidence in the difficult world of aid.
Nevertheless, read the book. MacAskill needs to learn more about the realities of aid work. But that won’t stop you from learning from his way of thinking in the meantime.
Terence Wood is a Research Fellow at the Development Policy Centre. He undertakes research on Australian and New Zealand aid, and Melanesian politics.
Thanks Terence for this review. I am the social accountability adviser for World Vision (citizen engagement in service delivery). World Vision and other NGOs, including CARE, Oxfam, Plan and Save the Children, do undertake rigorous evaluations, including randomised control trials, when circumstances permit. See here an Oxford-led study of our social accountability work in Uganda. However, RCTs are rare because the lag times and costs of such studies (between $500K and $1m) make them a prohibitive evaluation option except in very large projects. Even then, RCTs are such a specialised field that finding a researcher interested in the particular intervention can be difficult. I was recently at a ‘match making’ workshop run by the University of California, advocating a very interesting evaluation proposal: testing whether giving politicians information – as we do in our social accountability work – actually stimulates them to successfully advocate to District governments for better services on behalf of their constituents, as we are seeing happen across very different contexts including Indonesia, PNG and Uganda. We would have gone ahead but for the giant sample needed, which only a government could provide. Unfortunately, donors also appear to be less willing to fund high-quality researchers to do rigorous qualitative or mixed-method evaluations that can be better suited to international development interventions. Beyond this, M&E consultants in international development are paid an absolute pittance, so it is no wonder we struggle so hard to get good evaluations.
Thanks Sue, it is very interesting to get an NGO’s perspective. And also to hear of events such as the one at the University of California.
For what it’s worth (and conceding you have greater practical experience than I) I do still think aid organisations (NGOs, governments, and multilaterals) should be doing more and better impact evaluations (not necessarily RCTs; there are ‘silver standard’ alternatives, which might suffice). However, the point about cost that you make is a very good one (I think RCTs can be done for <$500K, but good ones will naturally cost more than most other evaluation alternatives). High costs mean there’s a trade-off: every dollar spent on an RCT is a dollar that could be spent on aid work. And choosing how to spend that dollar isn’t clear cut.
It would be great, I think, if government aid donors could come to the party here more often in their NGO work and allocate special NGO funding to cover the cost of best-possible evaluations in instances where the learning is important. There’s still a trade-off, but it would be nice if it wasn’t borne by a cash-strapped NGO.
Since you mention OxfamGB, worth noting that they openly publish impact evaluations of mature projects as well as other effectiveness reports here.
They’re not necessarily RCTs but, as you say, not everything is going to be amenable to that.
Thanks CDH, that is a good point. They do RCTs where they can, and at times couple them with interesting methodologies such as process-tracing; where they can’t, they still evaluate their work. For a large NGO working across a range of types of work, this seems pretty much gold star.
You accuse the author of failing to think carefully, yet the nub of your argument seems to be that his requirement for objective evidence of good fails to support some of the activities you personally like based on your “leap of faith” thesis. You might genuinely want to consider what principled approach you are advocating here.
An interesting comment and a useful chance to elaborate.
I’m a utilitarian (or, at least, I think that’s the least-worst political philosophy). MacAskill is not (see endnote page 215), but he is very close, meaning our views are almost identical in principle. The issue is to do with practice.
Here, the question, when concerned with consequences, is how one makes choices given uncertainty about both probability (of success, or risk of occurrence, etc.) and magnitude of effect.
As I state in the review, MacAskill is aware how hard this can make decision making (i.e. “this is not exactly physics”, p. 180). However, he does not seem to understand that this issue is as true for aid-NGO work as it is for many of the other issues he covers in the book. This, in turn, has him coming up with lists of recommended NGOs, whilst effectively dismissing the work of Oxfam, UNICEF and World Vision (presumably the UK versions of these NGOs). The certainty in pages 120 and 121 is striking, and my read of these pages is that MacAskill is very confident he knows how priorities should be set in aid.
I differ. As a Leap of Faith Utilitarian (I like your term, thanks) I believe that we should gather as much evidence as possible (“we should always try to gather evidence”; “Randomised control trials are great”), and should not make decisions on blind faith (unless perhaps in crisis circumstances). However, there are many types of work an aid NGO could do, which clearly have value if they work, but which cannot be RCT’d.
In the case of my examples, I think they are important, not because I had a whim after my second coffee last Friday, but on the basis of evidence:
1. Climate change (why worry: because the best available scientific models suggest there is a chance it will be utterly catastrophic).
2. Economic inequality (why worry: because it is high (good evidence) and it is growing (good evidence); there is also tonnes of evidence of diminishing marginal utility, meaning unnecessary inequality is welfare inefficient; meanwhile there is little evidence that quite significant increases in equality would harm growth in many countries).
3. Poor government aid (why worry: because the history of aid giving is festooned with examples of poor government aid and government aid is much larger than private aid in most countries.)
To do aid work in the way MacAskill wants would mean aid-NGOs avoiding these areas. To do aid work in the way I want would mean tolerating some uncertainty of efficacy in exchange for allowing ourselves to tackle very important issues.
Thanks for elaborating Terence. But I would still suggest that in a world of limited resources (the only kind we are likely to know), evidence that a problem is important gets us part of the way there; we should still want some evidence of (positive) results before we devote our resources to that problem.
Sadly there may be some insoluble but important problems, or at least those we don’t have the wherewithal to solve now, that we must logically pass over to ensure our resources go where they can do good.
I agree we live in a world of limited resources and that efficacy matters. It naturally follows that I think evidence is important too. My main difference with MacAskill when it comes to aid is simply that we shouldn’t be (overly) guided in choosing the work we do by the **type** of evidence we can obtain about the efficacy of our work.