This is the second installment of our three-part analysis of the Annual Review of Aid Effectiveness. You can find the first installment here and the final installment here.
Because of its emphasis on highly aggregated results and examples (as outlined in our previous post on this topic), it is no surprise that the just-released 2011-12 Annual Review of Aid Effectiveness (ARAE) duplicates much of AusAID's 2011-12 Annual Report (AR), which appeared in October 2012. Many of the same statistics appear in both reports. A suggestion from the Independent Review of Aid Effectiveness (IRAE) that the two reports be combined has been ignored. In theory the ARAE's coverage is broader than the AR's, since it extends to the aid effectiveness of the more than 60 government agencies involved in delivering the aid program. In practice, though, the ARAE's coverage of these other agencies is sparse, with more promised for future editions.
In many ways the AR, which has improved greatly over the years, provides a lot more useful detail than the ARAE. In both you can find that the world as a whole is making good progress against the MDGs, but it is only in the AR that you can find that PNG, Australia’s second largest aid recipient, is unlikely to achieve any of these goals.
The main value-add of the ARAE relative to the AR is the release of more information at the third-tier level—which relates to AusAID’s operational and organisational effectiveness. Some would argue that assessing aid results is so difficult that the best we can do is judge whether an agency is approaching the task in the right way. Even if you don’t go that far, operational and organisational effectiveness is no doubt critical for achieving aid impact, and information about it has been hard to come by for Australian aid.
Unfortunately, at the third tier both the substance and the approach are worrying and confusing. The Aid Review's top recommendation was consolidation, and the Comprehensive Aid Policy Framework (CAPF) subsequently adopted an ambitious target of reducing the number of aid projects by 25% by 2015-16. In fact, as the ARAE reports, the number of projects actually increased by 10% in 2011-12 (the exact points of comparison are October 2011 and July 2012). That's a worry.
The way in which project ratings are (or are not) used in the ARAE is confusing. So hold on. For years now, the proportion of projects rated by AusAID to be satisfactory has been put forward in the AR as a key indicator of success: the target is 75% and the most recent figure (for 2011-12) is 87%. Project ratings are also critical data for, and are heavily mined by, the Asian Development Bank's and the World Bank's development effectiveness reviews. But read the ARAE and, oddly, you can't even find the 87% figure, let alone any discussion of it in terms of trends or regional or sectoral disaggregation.
What you can find is that the Office of Development Effectiveness (ODE) thinks that 85% of AusAID's project ratings are "appropriate." Presumably the other 15% are biased upwards, since upward bias is the key risk in any self-rating system. It's odd to offer this commentary on rating accuracy without providing the headline rating itself. Putting the numbers from the two reports together gives 74% (85% of 87%), which prima facie implies that performance is actually a little shy of the 75% target.
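To make the implied arithmetic explicit (this is our own back-of-envelope combination of the AR's 87% satisfactory rate with ODE's 85% "appropriate" finding, not a figure either report publishes):

$$0.85 \times 87\% \approx 74\%,$$

which sits just below the 75% target.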
To add to the confusion, the CAPF gives priority to a different target again. It promises that 75% of projects rated unsatisfactory will be cancelled or improved within a two-year period. But, although the ARAE mentions this target twice, it is one of the few for which no performance information is given. AusAID is presumably going to wait for the second year before saying anything on this, but the effect of the silence is only to confirm the vacuum in the ARAE around the critical area of project performance management.
We could go on about the third tier. There's mixed news on staff churn, often identified as a major barrier to aid effectiveness: there is less of it, but still more than AusAID's target allows. And the ARAE trumpets that there has been compliance with the Adviser Remuneration Framework without exception, thereby achieving a CAPF target. But it is far from obvious that cutting off access to expensive sources of advice is in every case desirable. Helping countries raise more tax revenue, for example, requires expensive advisers, yet is very cost-effective.
In summary, what's newest and most valuable about the ARAE, the third-tier information, is also what raises the most questions. Is AusAID managing its projects effectively? Is it consolidating its portfolio? To the limited extent that the ARAE answers these questions, the answer is either "we don't know" or "no".
That's the second installment of our reaction to the inaugural ARAE. Our third and final post on it will offer some practical suggestions for next time. We want to end this post as we began our first: by reiterating that the ARAE is a good process that can, and indeed must, get better over time.
Stephen Howes is Director of the Development Policy Centre. Robin Davies is Associate Director of the Centre.