The Office of Development Effectiveness (ODE) was established in 2006 in part to “build stronger evidence for more effective aid”. In the past, ODE undertook an “Annual Review of Development Effectiveness” (ARDE). However, last year the Independent Aid Review recommended that it be discontinued due to its “limited success” and delayed release. Given the scope of the Aid Review, which among other things focused on AusAID’s development effectiveness, ODE released two separate short reports in December: one titled “The Quality of Australian Aid, an Internal Perspective”, and the other “The Quality of Australian Aid, an International Perspective”. This blog post looks at the former (see here for a review of the latter), which aims to assess AusAID’s own performance reporting system and the robustness of its internal reporting.
Unfortunately, the title and the core objective of the report are at odds with each other. A strong performance reporting system is a necessary but not sufficient condition for quality Australian aid. Earlier ARDEs looked at both performance reporting and performance itself. This one is focused just on performance reporting.
The tone of the report is overwhelmingly positive. The paper lists numerous areas of improvement – “improved capacity for reporting”, “greater use of partner government frameworks”, “increased reflection on aid effectiveness” to name a few. The ODE website summarizes the report’s “key finding” as: “that the integrity of the performance reporting system is improving steadily and that Australia’s aid program is more focused on managing for results than ever before”. There appears to be only one critical comment in the entire body of the report. This relates to AusAID staff not discussing in Quality at Implementation reports whether a program’s development logic is validated by implementation. This brief statement is subsequently caveated with the claim that staff’s capacity to effectively rate programs is improving.
Footnote 9 of the report contains a list of weaknesses in relation to performance reporting identified through an independent quality assessment in 2009. These sound serious (“inadequate information systems, a lack of clarity on what program performance means, weak or absent performance assessment frameworks, limited staff capacity in the area of program results, and insufficient incentives to change work practices”) but there is no reference to them in the main text except for a worrying caution that some of them persist.
There is a section in the report about challenges, but these focus on the challenges of providing aid, not on challenges relating to the quality of performance reporting, which is the report’s stated objective.
Admittedly, the contents of this 11-page report are not directly comparable to those found in the much longer ARDEs. Nevertheless, the tones are comparable; the unrelentingly positive tone of this report contrasts with earlier ARDEs, which seem to have provided a much more balanced account of achievements and problems.
Is AusAID’s performance reporting really that good? The independent evaluation groups (IEGs) of the World Bank Group and ADB are consistently and constructively critical. This encourages management to readily enact change. For example, a recent report by IEG on the World Bank Group’s private sector arm, titled “Assessing IFC’s Poverty Focus and Results”, concluded among other things that “IFC’s evaluation framework does not quantify benefits to poor and vulnerable groups and thus has no specific indicator for measuring a project’s poverty effects.” This statement was a damning critique of the organisation’s monitoring systems. While it caused discomfort to management, it has also spurred the IFC M&E team into action. There is no reason why ODE cannot follow a similar path, unless of course our aid agency’s performance reporting really is that good.
Dinuk Jayasuriya is a Post-Doctoral Fellow at the Development Policy Centre. He was most recently a Monitoring and Evaluation Officer with the IFC at the World Bank Group.