AusAID’s first Annual Review of Aid Effectiveness (part 1): is our aid program really that good?

On January 25, AusAID released the first Annual Review of Aid Effectiveness (ARAE), for 2011-12. This is a welcome development. The publication of this review takes Australia’s aid program another step ahead of most other bilateral aid donors (and most other Australian government programs for that matter) in the transparency and accountability stakes. We’ve had a careful, if still preliminary, look at it. There’s much about it that could have been done better—but we want to emphasise at the outset that this is a good process that can get better over time.

There’s a lot that needs to be said about the ARAE, too much for one post, so we’ve divided our comments into three parts. This initial post provides the context, and focuses on the ARAE’s treatment of aid results.

First, a reminder of where this beast came from and what it is for. The 2011 Independent Review of Aid Effectiveness (IRAE, in which one of the authors of this post, Stephen Howes, participated as a panel member) noted that the release of the Annual Review of Development Effectiveness (ARDE), which AusAID’s Office of Development Effectiveness had been producing, had been increasingly delayed, sometimes by over a year. It recommended that the ARDE be discontinued, and replaced by the ARAE. This wasn’t just a name change. The ARAE, unlike the ARDE, would feed directly into Cabinet’s yearly consideration of progress against the government’s four-year budget strategy, as articulated in the May 2012 Comprehensive Aid Policy Framework (CAPF). And, importantly, the ARAE would make no pretence of being independent. Rather, it would be AusAID’s own corporate product, and thus less subject to sensitivities and vast delays. This change in tack seems to have worked. Though, as we previously commented, even the ARAE was late by a couple of months (it should have been released at the end of October 2012), this delay is relatively trivial.

So far, so good. What about content? The Aid Review recommended that “an annual assessment of aid effectiveness of all of ODA should be prepared using the three-tier system. This should inform the annual reviews of the four-year strategy provided to Cabinet” (Rec 35). And indeed, the three-tier system does provide the organising framework for the ARAE, which cascades down, looking in turn at (1) progress against development goals, (2) the contribution of Australian aid, and (3) operational and organisational effectiveness.

So far, still so good. But the ARAE is in fact very different from what the review panel had in mind. The panel’s fundamental concern was that there was “no single, easily comprehensible scorecard on the effectiveness of the Australian aid program as a whole” (IRAE, p. 75). Thus the heart of the ARAE would be a scorecard providing “traffic light” ratings of effectiveness in the three dimensions given above, with several key criteria identified under each dimension (IRAE, p. 296). Subsidiary scorecards would be prepared for major country programs and for partnerships with multilateral organisations and NGOs. There was a good model for this sort of approach: the Asian Development Bank’s annual performance scorecard, first produced in 2008 and published each year as part of the ADB’s Development Effectiveness Review. The World Bank started using a similar scorecard in 2011. (The ADB and World Bank scorecards use a four-tier system, but only because they treat operational and organisational categories separately; they map readily onto the three tiers above.)

And this is where the problems start. The ARAE looks nothing like the ADB or World Bank scorecards—and that’s not just because there are no traffic lights to be seen, red or otherwise. The ARAE is much more akin to the traditional departmental annual report: a listing of things done at a highly aggregated level, with a smattering of examples, and no evidence base. Classrooms built, schoolbooks distributed, children immunised, kilometres of road constructed, summits held, reviews undertaken, strategies published, and so on. Essentially the approach is to tick off progress against the very high-level targets set in the CAPF. The adoption of high-level output targets is itself progress, and no doubt represents a lot of hard work. But one can gain no sense from the ARAE of how effective Australia’s bilateral aid is in particular country contexts, or of how effective some of AusAID’s major multilateral partners are, or of what is really going pear-shaped. In fact the ARAE is pitched at such a general level, and is so rosy (“strong results achieved against each of Australia’s five strategic goals”), that it is hard to see how it would have any bearing at all on Cabinet’s consideration of future aid priorities.

AusAID’s performance reporting is much more realistic at the country level. Take the four biggest recipients of Australian aid: Indonesia, Papua New Guinea, Solomon Islands and Afghanistan. The annual country performance reports produced for these and other recipients of Australian aid do use a traffic-light system to rate achievement of the aid program’s objectives. For 2011, Indonesia does best, but even here only five of the aid program’s nine objectives are given a green light, meaning that they are “on track” to be achieved. In Papua New Guinea, it’s just three out of 10. In Solomon Islands, it’s only one out of six, and in Afghanistan it is zero out of four. This seems to us a much more realistic depiction of the aid program’s achievements, and of the difficulties faced in delivering effective aid, than the picture painted by the ARAE. The ARAE’s Section I on second-tier performance doesn’t mention any of these country program assessments, but instead lists 28 program-wide results (nearly all of them in line with the pre-defined targets), 57 examples of success, and just 12 “emerging issues”, only some of which convey even the mildest sense that perhaps not everything is working as well as it could.

Drawing on the aid program’s own country-level reports would provide the ARAE with a much stronger evidence base and avoid the current disconnect between it and other more fine-grained performance reporting. In our next post, we will explore the relationship between the ARAE and the Annual Report and descend from the second tier to the third: AusAID’s operational and organisational effectiveness. Stay tuned.

The second part of our analysis is now available here, and the third here.

Stephen Howes is Director of the Development Policy Centre. Robin Davies is Associate Director of the Centre.


