
AusAID’s latest performance review: opportunity for constructive feedback lost
By Richard Curtain
8 February 2012
Another in our series on AusAID’s latest performance reports: for others, see here, here, here and here.
In response to the Independent Review of Aid Effectiveness, AusAID’s Office of Development Effectiveness (ODE) no longer undertakes the often belated Annual Review of Development Effectiveness. That publication has been replaced by two reports on the quality of Australian aid. But in the process, has ODE changed its approach from constructive critic to agency advocate?
The following is a review of one of these reports: ODE’s assessment of AusAID’s internal activity and program performance reports for 2009–10. Stephen Howes has provided a good critique of the other report [pdf] and its use of international indicators, sourced from the Brookings Institution.
An important conclusion of the former report, titled ‘The quality of Australian aid: an internal perspective’ [pdf], is the following (p9):
The evidence presented in the 2010 Annual Program Performance Reports suggests that increasingly, aid is less about the transfer of resources and more about ideas, institutions and being a catalyst for change. It is about political as much as technical issues. AusAID’s main challenge is to ensure the agency has the capacity and systems to operate in this context.
How well are AusAID’s capacities and systems rising to this challenge? The best possible spin is placed on the results. The key finding highlighted is that ‘the integrity of the performance and reporting system is improving steadily’.
Of the 20 major programs that produced an Annual Program Performance Report for 2009–10, the report notes that 92.5 per cent of program objectives are expected to be either fully or partly achieved. However, we have to guess from a graph (below) what proportion of programs is likely to fully achieve its objectives – it appears to be only about one in three. We are not told what the 20 country or regional programs are, or where they can be found on AusAID’s website.
ODE explains away the decline in the proportion of major programs likely to achieve their objectives between 2009 and 2010 in the following terms: ‘rather than indicating a decrease in actual performance, this result may well reflect the fact that the performance reporting system itself is producing more sophisticated information’ (p2).
These Annual Program Performance Reports have had major limitations as self-assessments. A footnote (p3) refers to an unpublished independent quality review of the 2009 reports which identified a range of major problems: inadequate information systems, a lack of clarity on what program performance means, weak or absent performance assessment frameworks, limited staff capacity in the area of program results, and insufficient incentives to change work practices.
ODE, however, claims that there have been marked improvements in the quality of the reports in 2010. These are said to include: a deeper understanding of the links between activities and strategic objectives; improved capacity to report on how Australian aid is aiming to ‘make a difference’; greater use of partner government results frameworks; increased reflection on aid effectiveness issues; identifying critical data gaps to inform sector programs; and offering more considered ratings of progress.
Unfortunately, no figures are presented from the content analysis to show how well the problems identified in the 2009 review have been addressed, or how widespread and deep these changes are across the 20 program performance reports.
ODE identifies seven challenges to Australia’s aid program, but few are directed at AusAID itself. The first challenge is ‘weak or inconsistent partner ownership and low commitment to reform’, but the issue of how AusAID can improve the way it engages in policy dialogue with partner governments is not addressed. The need for AusAID to concentrate more on particular sectors is highlighted, but again no indicators are presented on the current state of play to provide a baseline for future assessments. The use of partner systems is identified as a challenge, but no statistics are offered. The challenge of streamlining coordination and harmonisation processes is reported, based on performance report feedback, but no ways of responding to these problems are suggested. Engaging with emerging donors (e.g. China) is the final challenge listed, but again no responses are proposed.
In summary, the main weakness of the ODE assessment of the quality of Australian aid delivery is its heavy reliance, in identifying key issues, on the on-the-ground assessments of the AusAID staff who wrote the individual country and regional Program Performance Reports. The absence of independent performance data on aid delivery gives the report a narrow and defensive tone, offering AusAID little guidance or incentive to change.
The Government’s response to the Independent Review of Aid Effectiveness noted that ‘the Office of Development Effectiveness (ODE) produces an annual report that critically assesses what Australia has achieved and what we can do better’ (p12). The latest annual review reports from ODE have largely failed to show how aid can be delivered more effectively.
Richard Curtain is a Melbourne-based public policy consultant who spent 18 months in Timor-Leste in 2008 and 2009, working on projects funded by USAID, UNICEF and AusAID. His current work for two major multilateral agencies in the region relates to Timor-Leste and to Pacific island countries. For Richard’s review of the previous Annual Review of Development Effectiveness, written a year ago, click here.
About the author/s
Richard Curtain
Richard Curtain is a research associate, and recent former research fellow, with the Development Policy Centre. He is an expert on Pacific labour markets and migration.