ARDE 2009: An important (but overlooked) report

By Ian Anderson
11 January 2011

In this post for the Aid Open Forum, Ian Anderson looks at the strengths and weaknesses of the latest Annual Review of Development Effectiveness. The other articles in the series can be found here.

The Annual Review of Development Effectiveness 2009 (ARDE), released late and with little fanfare, is important and deserves wider reading and debate. Important, because it is a serious assessment of the development effectiveness of Australia’s aid program, which at the time totalled $3.8 billion. Deserving wider reading and debate, because it provides a candid assessment of the challenges in rapidly scaling up an aid program in which around half of Australia’s aid goes to “fragile” or otherwise difficult operating environments overseas.

The ARDE is the third annual review of the Australian aid program. The largely self-assessed “health check” concludes that 88% of activities were rated “satisfactory” in terms of ‘implementation progress’ and ‘achieving objectives’. Around three quarters of activities were satisfactory in terms of ‘sustainability’. ‘Monitoring and evaluation’ had the lowest rating, with just over 70% of activities rated ‘satisfactory’, although this was still an improvement on previous years.

Three strengths of the report

First, the ARDE is analytical and candid about challenges. It acknowledges the disturbing proliferation of fragmented activities in the Australian aid program: some 2000 separate aid activities in 2006, three times the number a decade before, even though the aid program had grown by only 1.6 times in real terms over the period (page 20). It acknowledges that program objectives are often unclear, with many objectives not adequately linked to broader strategies (partly because only half of Australia’s bilateral programs had a country strategy at the time) or to broader policy dialogue. It is candid about corruption in partner countries undermining development effectiveness (page 42) and about deficiencies in AusAID’s own program design and implementation (page 54). It provides a thoughtful discussion of the risks of gender being further marginalised as AusAID moves increasingly from projects to Sector Wide Approaches (pages 51–52).

Second, the report identifies some readily understandable examples of aid effectiveness. The number of confirmed cases of TB in Kiribati fell to less than half the previous year’s number because of improved detection and treatment funded by Australia. 751 junior secondary schools were completed in Indonesia with Australian support. “Procurement Watch” was used in the Philippines to monitor the quality and quantity of AusAID-funded chairs being delivered to schools; around one third were found to be substandard – a common problem in the Philippines – and could therefore be rectified. True, such examples tend to be descriptive pieces, with little analytical rigour underpinning them, or at least reported. But the descriptions are in clear plain English, avoiding the jargon and development waffle so common in comparable reports from other agencies.

Third, the decision to combine the usual “health check” of the overall program with a more detailed analysis of one particular theme – in this case improving basic service delivery of education, health, and water and sanitation for the poor – has paid off. Pages 42–56 in particular offer substantive and thoughtful lessons about planning and implementing service delivery in difficult circumstances. The discussion combines analytical insight with practical experience. Disappointingly, the ARDE 2009 does not draw out the obvious and substantive interactions and synergies between education, health, and water and sanitation.

Some shortcomings

First, it is still not particularly clear how robust, or meaningful, the findings are. Self-reporting of activity performance is obviously fraught with problems for any agency. Yet an independent spot check of the accuracy of the reporting is passed over in just one paragraph, tucked away in Appendix C of the report. There also appears to be no assessment of the possible counterfactual in assessing the impact and development effectiveness of activities in the ARDE, or of the extent to which Australian aid was genuinely “additional” to partner government efforts: two fundamental factors in assessing the true value and contribution of aid. Some figures seem unusual. Is it realistic that only 6% of all activities (page 16) will fail to achieve their strategic objectives, particularly when around half of Australia’s program is delivered in “fragile” states and particularly difficult development environments?

Second, while the ARDE is subtitled “Improving Basic Services for the Poor”, there is very little in the report that specifically measures – or even discusses – the impact of aid interventions on the poor as such. Perhaps impact on the poor is subsumed under other headings such as “Achieving Objectives”. But with over a quarter of Australia’s $3.8 billion directed to education, health, and water and sanitation in the year of the review, it would be helpful to have some sense of how much actually reached the bottom two quintiles, or specific vulnerable, marginalised, or remote groups.

The third shortcoming is that this ARDE is already becoming stale and dated. The analysis covers activities from 2008–09, yet the ARDE 2009 report was not released until December 2010. These delays undermine the usefulness and credibility of the report. The largely descriptive discussion on environment and climate change, for example, does not refer to the Copenhagen Conference or its fallout.

Conclusion, and next year’s report

The challenges discussed in the report – proliferation of activities, achieving sustainability, mainstreaming gender, achieving (and attributing) impact – are common to all development agencies. However, unlike the majority of development agencies, AusAID has used an annual report on development effectiveness to report candidly and thoughtfully on what is working well and what needs to improve. ARDE 2010 should offer a better analysis of the robustness of self-reporting. It should give more attention to measuring the impact of interventions on the poor and vulnerable. ARDE 2010 should also be published more promptly.

Ian Anderson is a consultant and a Research Associate with the Development Policy Centre. He has recently completed almost 25 years at AusAID. Ian specialises in the economics and financing of the health-related MDGs.

About the author/s

Ian Anderson
Ian Anderson is an associate at the Development Policy Centre. He has a PhD from the Crawford School of Public Policy, Australian National University; over 25 years of experience at AusAID; and since 2011 has been an independent economics consultant.
