Australian aid effectiveness: progress over two decades

These are edited remarks from a panel discussion on aid effectiveness at the 2023 Australasian AID Conference, providing a historical perspective on performance management and evaluation in the Australian aid program.

There is so much that could be said, so many different perspectives that could be taken. I’m going to talk about a few changes over the last twenty years, some good, one bad.

Let me start with three good changes.

I’ll caveat at the start that the first two changes are actually to do with performance management rather than impact evaluation. Many of the projects Australia funds don’t lend themselves to impact evaluation. For these more diffuse projects, having a contestable rating system, where judgements are made about performance, judgements that will eventually be made public, is particularly important.

My involvement with the Australian aid program began when I was hired as AusAID’s Chief Economist in 2005. When I arrived I found that there was very little centralized performance management of the aid portfolio, certainly less than I was used to at the World Bank, where I had been working. I suggested we target Chris Hoban, another Australian, who had been Operations Manager in the World Bank’s Delhi Office where I had also been based. Chris became AusAID’s Principal Operations Adviser, and I always think he did an amazing job in a short time. With his help, the system of investment monitoring that continues to this day was set up.

Overall, my sense is that this system has stood the test of time. So that’s the first positive.

And in some ways the performance management system has been improved since it was established. There was one big improvement in 2019 when the final ratings for projects (or investments) were taken out of the hands of project managers and made the responsibility of a central unit. This caused a big drop in rated performance, which we have written about, but making the final ratings more independent was a significant step forward. That’s my second positive reform.

Now onto evaluation. Evaluation of the aid program has also been strengthened over the last two decades. I got to have a second look at the aid program in 2011 when I worked on the Independent Review of Aid Effectiveness. We found that only about one-quarter of the evaluations that were meant to have been done over the previous five years had been completed; and that of the completed evaluations, only two-thirds could be found, and only one-fifth had been published.

It took a while for reform in this area to be implemented but, since 2016, DFAT has taken a different approach. Rather than requiring that all projects be evaluated and then falling well short, DFAT is now targeting a smaller number of evaluations, making its evaluation plans public, and then following through. It’s a much more credible process, and one that has worked well.

There’s an important lesson here. It is better to have a more modest and realistic performance and evaluation framework, and implement it, rather than have a really ambitious, unrealistic one on paper, and fall well short.

That’s my third positive change.

I know some people will tend to dismiss these changes as cosmetic and the processes that underlie them as theatre. That is, the existing performance management and program evaluation requirements might be complied with, but aren’t really taken seriously. And that is certainly true to some extent. At the same time, these processes are not just theatre; they are also sometimes useful, and the aid program would be worse without them.

So it is not all doom and gloom. While it is easy to focus on the negative and what needs to be improved, there have been gains in the past.

Nor is the story simply one of AusAID versus DFAT. It’s more complex than that.

At the same time, I don’t want to pretend that everything is positive. I’m sure we can all think of ways in which the system could be improved, or taken more seriously. I will conclude, then, with my one negative “reform”.

As we all know, in 2021 the Office of Development Effectiveness (ODE) and the Independent Evaluation Committee (IEC) that sat above it were abolished. I know some people think ODE had lost its way, and that its and the IEC’s abolition was justified or at least no great loss.

However, I think we have to look at things from an institutional perspective. There are definitely forces that conspire against robust contestability and rigorous evaluation. You just have to think about the public diplomacy role that DFAT is called on to play to understand that. And there are much broader forces within government and indeed within the aid sector that make it difficult for us to be critical, to talk about failure, that tend to make us operate, in Bill Easterly’s famous words, like a cartel of good intentions. You can just look at this conference and the Development Policy Centre’s blog, where sessions or articles documenting program failure are very rare, and those celebrating success are standard fare.

So countervailing institutions are needed to push back on that, supporting good processes of performance management and impact evaluation. Andrew Leigh’s new Australian Centre for Evaluation is one such countervailing institution, government-wide. ODE and the IEC were countervailing institutions within first AusAID and then DFAT with an oversight and championing role in relation to both evaluations and performance management. In fact, two of the three positive reforms I highlighted earlier were a result of ODE or IEC pressure or initiatives. So the abolition of these two institutions was in my view definitely a backwards step.

That said, it’s not an accident that I’ve got, very simplistically, three positives and one negative reform. That’s not a comprehensive listing but it reflects my overall belief that in this specific area of Australian aid evaluation and performance management, we’ve had more progress than regress in the last twenty years.


Stephen Howes

Stephen Howes is Director of the Development Policy Centre and Professor of Economics at the Crawford School of Public Policy at The Australian National University.


  • Thanks Stephen

That’s been a useful summary, drawing both on your professional experience elsewhere and on your pertinent observations since the Office of Development Effectiveness was abolished.

    As a former federal public servant and also deeply interested in practical bureaucratic influence within the senior levels of Canberra, may I offer three comments on the Australian Centre for Evaluation.

(1) It has been placed as a Branch, at the lowest SES level (SES Band 1), in the Macroeconomic Policy and Analysis Division of Treasury, not kept separate and reporting to Parliament (as suggested by Nicholas Gruen years ago).

    (2) It seems that this would mean being subject to the priorities of either (a) the Division Head, or (b) the Deputy Secretary, or (c) the Secretary. It is not a separate branch reporting directly to the Secretary.

(3) I understand that the Branch head will be located in Melbourne. This is likely to reduce the impact of influencing those senior levels located in Canberra. One of the downfalls of the earlier APS “Managing for Results” era was the indifference of Secretaries to acting on the results of the evaluations of those times.

    Dr Mike Keating, Secretary first at Finance then at PM&C, deserves credit for the impetus behind that “Managing for Results” era. Especially as Head of the APS, at PM&C.

  • Vinaka Stephen – one observation to share. One positive in the Australian government’s pursuit of the aid/development effectiveness agenda in the Pacific was its advocacy for the Pacific Islands Forum’s endorsement of the 2009 Cairns Compact for Strengthening Development Coordination (Forum Compact). Several of its deliverables elicited “recipient” country perspectives and action across internationally endorsed measures for development effectiveness. From 2009 to 2017, Pacific country perspectives on what effectiveness changes needed to be made in the delivery, management and programming of aid in the Pacific were put annually to Forum leaders and donor partners (bilateral and multilateral). Perhaps a resurgence of instruments to elicit and place the voices of Pacific countries at the heart of measuring the effectiveness of “aid,” let alone partnerships, might be what’s needed in the face of the growing geopolitical interest in our region.
