In this post for the Aid Open Forum, Ian Anderson looks at the strengths and weaknesses of the latest Annual Review of Development Effectiveness. The other articles in the series can be found here.
The Annual Review of Development Effectiveness 2009 (ARDE), released late and with little fanfare, is important and deserves wider reading and debate. Important, because it is a serious assessment of the development effectiveness of Australia’s aid program, which at the time was worth $3.8 billion. Deserving wider reading and debate, because it provides a candid assessment of the challenges of rapidly scaling up an aid program in which around half of Australia’s aid goes to “fragile” or otherwise difficult operating environments overseas.
The ARDE is the third annual review of the Australian aid program. The largely self-assessed “health check” concludes that 88% of activities were rated “satisfactory” in terms of “implementation progress” and “achieving objectives”. Around three quarters of activities were satisfactory in terms of “sustainability”. “Monitoring and evaluation” had the lowest rating, with just over 70% of activities rated “satisfactory”, although this was still an improvement on previous years.
Three strengths of the report
First, the ARDE is analytical and candid about challenges. It acknowledges the disturbing proliferation of fragmented activities in the Australian aid program: some 2000 separate aid activities in 2006, three times the number a decade before, despite the fact that the aid program had grown by 1.6 times in real terms over the period (page 20). It acknowledges that program objectives are often unclear, with many objectives not adequately linked to broader strategies (partly because only half of Australia’s bilateral programs had a country strategy at the time), or broader policy dialogue. It is candid about corruption in partner countries undermining development effectiveness (page 42) and deficiencies in AusAID’s own program design and implementation (page 54). It provides a thoughtful discussion of the risks of gender being further marginalised as AusAID moves increasingly from projects to Sector Wide Approaches (pages 51–52).
Second, the report identifies some readily understandable examples of aid effectiveness. The number of confirmed cases of TB in Kiribati fell to less than half the number the previous year because of improved detection and treatment funded by Australia. 751 junior secondary schools were completed in Indonesia with Australian support. “Procurement Watch” was used in the Philippines to monitor the quality and quantity of AusAID-funded chairs being delivered to schools; around one third were found to be substandard – a common problem in the Philippines – and could therefore be rectified. True, such examples tend to be descriptive pieces, with little analytical rigour underpinning them, or at least reported. But the descriptions are in clear plain English, avoiding the jargon and development waffle so common in comparable reports from other agencies.
Third, the decision to combine the usual “health check” of the overall program with a more detailed analysis of one particular theme – in this case improving basic service delivery of education, health, and water and sanitation for the poor – has paid off. Pages 42–56 in particular offer substantive and thoughtful lessons about planning and implementing service delivery in difficult circumstances. The discussion combines analytical insight with practical experience. Disappointingly, however, the ARDE 2009 does not draw out the obvious and substantive interactions and synergies between education, health, and water and sanitation.
Three shortcomings of the report
First, it is still not particularly clear how robust, or meaningful, the findings are. Self-reporting of activity performance is obviously fraught with problems for any agency. Yet the independent spot check of the accuracy of the reporting is passed over in just one paragraph, tucked away in Appendix C of the report. There also appears to be no assessment of the counterfactual in assessing the impact and development effectiveness of activities in the ARDE, nor of the extent to which Australian aid was genuinely “additional” to partner government efforts: two fundamental factors in assessing the true value and contribution of aid. Some figures seem unusual. Is it realistic that only 6% of all activities (page 16) will fail to achieve their strategic objectives, particularly when around half of Australia’s program is delivered in “fragile” states and particularly difficult development environments?
Second, while the ARDE is subtitled “Improving Basic Services for the Poor”, there is very little in the report that specifically measures – or even discusses – the impact of aid interventions on the poor as such. Perhaps impact on the poor is subsumed under other headings such as “Achieving Objectives”. But with over a quarter of Australia’s $3.8 billion directed to education, health, and water and sanitation in the year of the review, it would be helpful to have some sense of how much actually reached the bottom two quintiles, or specific vulnerable, marginalised or remote groups.
The third shortcoming is that this ARDE is already stale and dated. The analysis covers activities from 2008–09, yet the ARDE 2009 report was not released until December 2010. These delays undermine the usefulness and credibility of the report. The largely descriptive discussion of environment and climate change, for example, does not refer to the Copenhagen Conference or its fallout.
Conclusion, and next year’s report
The challenges discussed in the report – proliferation of activities, achieving sustainability, mainstreaming gender, achieving (and attributing) impact – are common to all development agencies. However, unlike the majority of development agencies, AusAID has used an annual report on development effectiveness to report candidly and thoughtfully on what is working well and what needs to improve. ARDE 2010 should provide a better analysis of the robustness of self-reporting. It should give more attention to measuring the impact of interventions on the poor and vulnerable. It should also be published more promptly.
Ian Anderson is a consultant and a Research Associate with the Development Policy Centre. He has recently completed almost 25 years at AusAID. Ian specialises in the economics and financing of the health-related MDGs.
ARDE 2009 Review – Budget Support
The report properly notes that the integration of Australia’s programs into developing country systems through budget support implementation mechanisms avoids duplicating effort and establishing parallel systems. The report also notes that 40% of all Australian aid is being distributed through developing countries’ public financial management systems, but only 23% is subject to developing countries’ procurement systems.
Recently, AusAID approved a $500 million Australian aid package, the Basic Education Program, for schooling in Indonesia using a budget support implementation mechanism.
What other significant aid packages are being delivered through budget support implementation mechanisms?
Budget support has become an important modality for development cooperation over the last decade, and it has a different basis for accountability than project support. It requires strong partner country financial accountability and public expenditure management, including an ‘open and transparent’ public procurement system. General budget support is budget support that is not earmarked for a specific sector of government spending. Sector budget support is budget support that is earmarked for use in a specific sector or budget line, e.g. health or education.
Many factors besides the expected benefits can affect a donor’s decision to use a developing country’s systems: the assessed quality of the system; the donor’s legal framework, historic practices or tolerance for risk; the partner country’s own preferences; and related intangibles such as perceptions of corruption or poor governance.
It is important to understand how AusAID assesses its risk in using budget support implementation mechanisms, and what its rules and conditions are for approving the use of such a mechanism. The European Commission’s (EC) rules and conditions for budget support are specific: direct budgetary assistance in support of macroeconomic or sectoral reforms will be granted where:
• Public expenditure management is sufficiently transparent, accountable and effective.
• Well-defined macroeconomic or sectoral policies, established by the country itself and agreed by its main donors, are in place.
• Public procurement is open and transparent.
What are AusAID’s rules and conditions for the use of budget support?
Assessment of a developing country’s Public Finance Management (PFM) system is normally done through an integrated fiduciary assessment, i.e. a Public Expenditure and Financial Accountability (PEFA) assessment. A PEFA assessment evaluates the strengths and weaknesses of the PFM system and provides a performance benchmark against which to measure progress with PFM reforms and to judge whether a reorientation of the ongoing PFM effort is needed. The PEFA methodology provides a framework for developing country governments and other stakeholders to assess a country’s PFM system. The assessment is based on a standardised format and indicator set developed by a group including the World Bank (WB), the International Monetary Fund (IMF), the EC and several bilateral development partners. The PEFA indicators are a set of 28 high-level performance indicators that measure the strengths and weaknesses of a government’s PFM system.
WB Country Procurement Assessments (CPAs) and other assessments can be used to examine the performance of public procurement systems. A WB CPA is an intensive, comprehensive assessment and diagnostic tool offering analysis of a particular country’s public procurement regime. Specifically, the CPA identifies existing risks and vulnerabilities in the regime and assists the developing country in strengthening the institutional, legal and policy framework underpinning its procurement regime. A CPA is typically a joint effort between the WB and its client country. These assessments are also conducted when the public procurement system in a particular country changes or undergoes reform.
The Organisation for Economic Co-operation and Development (OECD) Methodology for Assessment of National Procurement Systems (MAPS) provides a common tool that developing countries and donors can use to assess the quality and effectiveness of national procurement systems. The assessment uses 12 indicators, which can become a key element of, and improve, the existing CPA process.
What assessment methodologies does AusAID use for developing country public finance management and public procurement systems?
A recent review of the websites of the WB and other donors suggests that few PEFA, CPA and/or MAPS assessments have been undertaken in East Asia, Papua New Guinea and the Pacific Island countries in the past three years, the period during which AusAID’s use of budget support implementation mechanisms has increased.
What is AusAID’s program to assess developing country public finance management and public procurement systems to inform their suitability for budget support?
Where AusAID is using budget support mechanisms and risk is high, are complementary public finance management and public procurement improvement programs in place?
E. John Blunt
15 Parkview Crescent, Hampton East, Victoria, 3188
Telephone: 03 9553 1182
Mr. E. John Blunt is a public sector management and procurement expert with extensive experience leading public procurement reforms in a variety of international development environments, including in Indonesia, Nauru, Pakistan, Papua New Guinea and the Solomon Islands, with the Asian Development Bank, the Australian Agency for International Development, the European Commission, the United Nations Development Programme and the governments of Australia, Swaziland and Timor-Leste. He also has commercial procurement experience in Australia, China, Hong Kong, Papua New Guinea, the Philippines and Thailand.