The baby and the bathwater: Devpolicy’s submission on performance benchmarks for aid

Devpolicy has submitted its response to the government’s call for submissions and consultation paper on the development of rigorous performance benchmarks for Australia’s aid program.

Our submission contains a historical overview of efforts to strengthen the performance orientation of the Australian aid program and our recommendations as to how the current benchmarking process should proceed.

A one-sentence summary of our recommended approach would be: ‘implement and extend past aid reforms, and focus on benchmarking process’. The aid program already has a results framework and a range of aid management systems. These, as we have frequently argued, can and should be improved—but they should not be thrown out.

The full set of recommendations from our submission is as follows:

  1. In the benchmarking process, the Government should aim to improve the implementation of and extend the substantial aid reforms of the past decade.
  2. The Government should commission a review by the Office of Development Effectiveness of efforts over the last ten years to link aid funding to performance and results.
  3. Performance benchmarks should be used for redistributive purposes, not to determine aggregate aid levels.
  4. Country and other (e.g. multilateral) allocations should certainly be influenced by performance considerations, but should not be tied rigidly to the achievement of policy or governance reforms that are beyond Australia’s control.
  5. Benchmarking the efficiency and effectiveness of aid management processes should be accorded at least as much importance as tracking policy-dependent benchmarks or targets that relate to aid outputs or outcomes.
  6. Handle with caution and preferably avoid input-based benchmarks such as sectoral spending commitments, ensuring that, where used at all, they reflect a bottom-up consideration of what is needed and feasible.
  7. Benchmark the use of key aid management systems by committing that they will be fit for purpose and consistently applied, and indicate how their fitness and use will be monitored.
  8. The data and assumptions underlying the choice of benchmarks, and the data on which assessments of performance against benchmarks are based, should be reported in detail.
  9. Externally verified performance data should be used wherever possible. Where this is not possible, internally generated data should be subject to random audit by the Office of Development Effectiveness or outside parties.
  10. The aid program should conduct regular aid stakeholder surveys and use the resulting information, particularly trend information, as an input into the whole-of-program performance assessment process. Consideration could be given to defining one or more qualitative benchmarks relating to stakeholder perceptions.
  11. Performance benchmarks should be consistently used in all aid performance reporting. They should be defined within the three-tier framework already adopted for the 2012 Comprehensive Aid Policy Framework, and should give due weight to process benchmarks vis-à-vis ‘headline’ policy-related or ‘results’ benchmarks.

If you’d like to see the full submission, you can find it here. Parts of it draw heavily on our recent submission to the Senate inquiry into overseas aid. For a summary of this, see here.

Robin Davies

Robin Davies is an Honorary Professor at the ANU's Crawford School of Public Policy and an editor of the Devpolicy Blog. He headed the Indo-Pacific Centre for Health Security and later the Global Health Division at Australia's Department of Foreign Affairs and Trade (DFAT) from 2017 until early 2023 and worked in senior roles at AusAID until 2012, with postings in Paris and Jakarta. From 2013 to 2017, he was the Associate Director of the Development Policy Centre.

Stephen Howes

Stephen Howes is Director of the Development Policy Centre and Professor of Economics at the Crawford School of Public Policy at The Australian National University.
