Unleashing the potential of AusAID’s performance data

Recent Devpolicy blogs have been critical of the latest offerings from AusAID’s Office of Development Effectiveness. Its latest annual report, released just before Christmas 2011, was published in two parts: one providing an international comparative perspective (and summarized in this blog), the other drawing on and assessing internal performance reporting. My critique of the former can be accessed here (and see here for a reply); critiques of the latter are here and here.

The common criticism is that these reports emphasize the positive and minimize the negative. As Richard Curtain puts it, they leave an impression of ODE as an advocate for the aid program. The charge is a serious one, but in this blog let me provide some balance by pointing to some positive features of the second, “internal assessment” report which ODE and AusAID could build upon, and by offering a couple of constructive suggestions.

The first positive is that the internal assessment highlights the aggregate project ratings from AusAID’s annual quality rating cycle.

The numbers show that 83% of AusAID’s projects achieve a rating of satisfactory or better. A series of media articles over the last couple of years has highlighted fraud as a problem for the aid program, but subsequent analysis has shown that this was a beat-up: revealed fraud amounted to much less than 0.1% of the aid spend. Compare this to the 17% of the program (an amount 170 times bigger) which, by AusAID’s own admission, isn’t performing satisfactorily, and you can see where the attention should be focused: not on whether funds are being lost to fraud, but on why, despite the absence of fraud, we aren’t achieving better results with our aid funds.
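To make that comparison concrete, here is the back-of-the-envelope arithmetic, written as a minimal Python sketch (the fraud share is an upper bound, so the true ratio is, if anything, larger):

```python
# Back-of-the-envelope comparison of fraud losses with unsatisfactory spend.
# The fraud figure is an upper bound ("much less than 0.1% of the aid spend"),
# so the ratio printed here understates the real gap.
fraud_share = 0.001            # at most 0.1% of aid spend lost to fraud
unsatisfactory_share = 0.17    # 17% of projects rated below satisfactory

ratio = unsatisfactory_share / fraud_share
print(f"Unsatisfactory spend is at least {ratio:.0f} times larger than fraud losses")
# -> Unsatisfactory spend is at least 170 times larger than fraud losses
```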

Not that 83% is by any means a bad result. Aid is a difficult business, and not every project should be expected to succeed. But of course a good aid program will always be trying to improve. While it is to AusAID’s credit that it collects and publishes this data, it is unfortunate that there is no attempt to analyze it. The sort of questions that need answering are: Why has performance fallen over time (from 88% in 2009 to 83% in 2010)? What sorts of projects do better, and what sorts worse? What are the most common reasons for failure? Which regions do better and which worse? And so on.

And we need to know not just which projects do satisfactorily (with a score of at least 4 out of 6), but which do very well (with a 5 or a 6) and which very badly (with a 1 or a 2). Nor does the latest assessment make any mention of the ratings other than the effectiveness one, even though AusAID also rates projects for monitoring and evaluation, sustainability, and gender equity.
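To give a sense of how straightforward this kind of analysis would be if the underlying data were released, here is a sketch in Python/pandas. The file name and column names (effectiveness, region, sector, year) are hypothetical, since AusAID does not publish project-level ratings in this form:

```python
# Hypothetical sketch of the kind of breakdown the post calls for, assuming a
# project-level ratings file with columns "effectiveness", "region", "sector"
# and "year". No such file is currently published.
import pandas as pd

ratings = pd.read_csv("aqc_ratings.csv")  # hypothetical data file

# Ratings run from 1 to 6; 4 or better counts as satisfactory.
ratings["band"] = pd.cut(
    ratings["effectiveness"],
    bins=[0, 2, 3, 4, 6],
    labels=["very poor (1-2)", "unsatisfactory (3)", "satisfactory (4)", "very good (5-6)"],
)

# Share of projects in each band, broken down by region and by sector.
print(ratings.groupby("region")["band"].value_counts(normalize=True).unstack())
print(ratings.groupby("sector")["band"].value_counts(normalize=True).unstack())

# Year-on-year movement in the satisfactory-or-better share.
print(ratings.assign(ok=ratings["effectiveness"] >= 4).groupby("year")["ok"].mean())
```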

A second very positive feature is the spot checks ODE carries out of these ratings (which are assigned by managers) to assess their reliability.  Obviously, in any system of self-assessment one has to be worried about upward bias, and the spot checks help us get a handle on this.

The internal assessment gives a brief taste of the spot check results. It says that 50 projects were randomly selected this year to have their ratings checked. It reports that “the proportion of effectiveness ratings assessed as ‘reasonable’ in 2010 rose to 78 per cent, up from 72 per cent in 2009 and 68 per cent in 2008.” This is good news, and perhaps this trend towards more realistic ratings explains why self-reported performance has gone down. Note, though, that there is still a very high level of (presumably upward) bias: according to the spot check, over 20% of projects are wrongly rated.

Again, however, ODE reports only a small amount of the data it collects. The previous Annual Review of Development Effectiveness, released in December 2010, noted that only 56% of projects should have been rated ‘satisfactory’ for monitoring and evaluation, well below the 70% so rated by managers. But the follow-up internal assessment released in December 2011 makes no mention of whether there has been any improvement in this dimension of reporting. That’s a surprising omission in a report dedicated to assessing AusAID’s performance reporting. It would also be good to know what happens to the 83% satisfactory number once it is corrected for upward bias.
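ODE’s published figures are not detailed enough to make that correction properly, but a crude, purely illustrative bound can be sketched under the strong (hypothetical) assumption that every rating the spot checks judged ‘unreasonable’ was a satisfactory rating that should have been unsatisfactory:

```python
# Purely illustrative lower bound, NOT the actual corrected figure.
# Assumption (hypothetical): every "unreasonable" effectiveness rating was a
# satisfactory rating that should have been unsatisfactory.
reported_satisfactory = 0.83   # managers' self-ratings, 2010
ratings_reasonable = 0.78      # share of ratings the spot checks found reasonable

worst_case = reported_satisfactory - (1 - ratings_reasonable)
print(f"Worst-case corrected satisfactory share: {worst_case:.0%}")
# -> roughly 61%. The true figure sits somewhere between this and 83% (or even
#    above, if some errors run the other way); only ODE's unpublished
#    spot-check detail could pin it down.
```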

This systematic collation of project self-ratings, and the regular use of spot checks, is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the current reporting and analysis of these two rich sources of data barely scratches the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and shift attention away from red herrings such as fraud towards real challenges such as value for money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work”, including “the results of Australian aid activities and our evaluations and research.” The annual release of both the self-ratings and the spot-checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.

Stephen Howes is the Director of the Development Policy Centre.



2 Comments

  • Thanks Stephen for this useful focus on AusAID’s activity rating system. Yes – making as much data available as possible is necessary for contestability and is likely to provide useful information for AusAID at the same time.

    This sort of rating scheme provides an important summary of activity performance but is not sufficient. It is good to see that AusAID’s planned performance framework is going to include many more clear measures of output (e.g. x thousand children immunized) and outcome (e.g. a y% reduction in child mortality). We’ve had too many years of satisfactory scores for activity performance but few clear and concrete measures of poverty reduction.

  • Hi Stephen,
    Great blog – thanks.
    Perhaps you or colleagues have addressed this elsewhere, but who is the ODE? It’s internal to AusAID, right? So it’s not an independent assessment? Is that something you think AusAID needs, or do you think an independent review every 5 years is sufficient? Most aid NGOs have independent evaluations for their projects – should AusAID have the same? Having AusAID staff evaluating AusAID projects for AusAID managers that they might work for in the future sounds a bit tricky…
    Would be interested in your thoughts.
    Joel
