Reflections on the new aid paradigm, part 5: what, me hurdle?

22 September 2014

If there was an overriding complaint that Julie Bishop as shadow minister for foreign affairs wanted to lodge about Australia’s aid program, it was that it was not being held to sufficiently rigorous performance standards. As she said on many occasions, ‘I do not accept that stringent performance hurdles – as envisaged by recommendation 39 [of the 2011 Independent Review of Aid Effectiveness] – are in place’.

When the Coalition government’s aid policy framework surfaced in June 2014, Ms Bishop as foreign minister also released a slender aid program performance framework, Making Performance Count. By comparison with the aid policy document, it was an ungainly thing, painful to read. More importantly, it did little to deliver on the promise implied above. A year into the life of the government, the notion of a ‘stringent performance hurdle’ remains elusive.

In the last two instalments of this series of occasional reflections on the ‘new aid paradigm’, the topic will be performance and how it figures in the allocation of Australian aid. The discussion below is about assessing the performance of Australia’s aid administration. The final piece, tomorrow, will be about the use of aid in both stick and carrot mode to improve the performance of Australia’s aid recipient countries and organisations.

The Independent Review of Aid Effectiveness recommended that the intended growth of the aid program to 0.5 per cent of GNI by 2015, under the Labor government, be made ‘subject to the progressive achievement of predetermined hurdles’, with ‘consequences’ if hurdles were not met. The Labor government accepted this in principle. The results framework provided in its ill-fated 2012 Comprehensive Aid Policy Framework (CAPF) was said to reflect ‘the intent of the “hurdles” outlined in the Independent Review of Aid Effectiveness’ while being ‘much more comprehensive’ (Box 3). Annual reviews of aid effectiveness in 2011-12 and 2012-13 declared that most hurdles suggested by the review panel for those years, most notably the production of the CAPF, had been met (Table 4 and Appendix 1, respectively)—though ‘hurdles’ gave way to flabbier-sounding ‘commitments’ in the second review.

The notion that Australia’s aid administration, now vested in the Department of Foreign Affairs and Trade (DFAT), would be subject to performance hurdles necessarily changed in content the moment the incoming Coalition government levelled the aid program at about $5 billion in September 2013. Hurdles could no longer condition increases in aid volume. The talk was, from that point forward, of ‘performance benchmarks’ for a static aid program. Nevertheless, it was generally assumed, including by many of the numerous parties who made submissions on this topic at the invitation of the new government, that the actors whose performance would be benchmarked included DFAT and, to the extent that DFAT does not act autonomously in this field, the government itself.

Making Performance Count duly says that ‘funding at all levels of the aid program will be linked to progress against a rigorous set of targets and performance benchmarks’. It eschews ‘headline’ outcome targets of the kind trumpeted by the CAPF, and also seems uninterested in country-level headlines: ‘Judging the relative performance of programs will require an informed approach that is less mechanistic than simply reporting aggregated results and comparing them between programs’. Instead, it states the government’s broad expectations of the aid program in terms of ten ‘strategic’ targets of the input and process kind: 20 per cent of aid will be ‘aid for trade’ by 2020, 80 per cent of ‘investments’ will have positive impacts for women, all investments will be required to explore the potential for private sector engagement, and so on. It then rattles off a long and eventually mind-bending list of annual and one-off processes, plans, and strategies intended to ensure the quality of Australian aid.

However, while the performance framework is demanding about the performance of projects, implementing agents and recipient governments, it is almost completely silent about the performance of DFAT as the aid program’s administrator. The word denoting the best guarantee of performance on the part of DFAT, namely transparency, does not once intrude. The government’s much-touted performance benchmarks for the aid program make no grand entrance. Instead, they bifurcate into the strategic targets just mentioned, some of which are merely policy commitments that in themselves say nothing about aid quality, and country- or project-specific targets, which will not be defined until mid-2015. Little space is allowed for anything like benchmarks for operational and organisational effectiveness on the part of Australia’s aid administration, as used by the multilateral development banks, and as represented—albeit in a patchy way—in the third tier of the CAPF’s results framework (see the Development Policy Centre’s submission [pdf] on benchmarks, and also here, for more on this point).

One exception to the above observation is a worthy but easily fudged requirement to increase average investment size—easily fudged because one can enfold numerous small aid packages in big, umbrella ones. A second exception, arguably, is the perversely risk-averse decree that any investment falling below a certain value-for-money standard for more than a year must be terminated. However, the latter is not really a discipline on DFAT itself: failure on this front would inevitably be sheeted home to implementing agents or aid recipients.

It is remotely conceivable that, despite their near-absence from the performance framework, indicators of DFAT’s organisational and operational effectiveness will in practice be perceptible in corporate reporting on aid program effectiveness. It helps that Aid program performance reports (which are about individual country programs or aid ‘themes’) are to survive. In the past many of these have been vivid, current and fine-grained accounts of progress made and challenges faced by key components of the aid program. However, we do not know if they will continue to be made public (none relating to 2013-14 has yet appeared on DFAT’s website) or, if so, whether on average they will be as honest and useful as they used to be.

It is less clear whether the Annual Review of Aid Effectiveness (ARAE) will, in effect, survive: it is to be replaced by an annual ‘Performance of Australian Aid’ report, which will review progress against the targets of the new performance framework, summarise progress toward other targets at the level of major country, regional and thematic programs, and give a ‘snapshot’ of results achieved. Whether this change will involve any net gain or loss of information remains to be seen. In fact, nothing will be seen for a very long time given that the targets of most practical importance, relating to country and thematic programs, will only take effect in the 2015-16 financial year, with reporting unlikely to appear before the end of this government’s term.

In a best-case scenario, the new annual performance reviews would, eventually, do a better job than the two ARAEs did of anchoring their assertions in the findings of aid program performance reports and operational reviews and evaluations, and would provide useful fodder for a more strategic and forward-looking Lessons from Australian Aid report, the first attempt at which was published by DFAT’s Office of Development Effectiveness this year. They would not be dominated by the reporting of highly aggregated, unverifiable and often incredible headline results. In an unhappier scenario, these reviews would dwell mainly on the strategic targets of the performance framework, conveying little concrete sense of impact at the level of country and thematic programs, and no sense of how DFAT is running the aid program.

A serious effort at performance benchmarking would have accorded central importance to administrator performance. That is what the government can control, and it can be controlled by putting in place checks and balances, consistently and transparently applied, to ensure programs are relevant, significant, flexible and focused—and therefore best placed to achieve impact. In most circumstances numerical targets, whether for inputs, processes or—as under the previous government—outcomes, are irrelevant or worse from an operational perspective: they create collective action problems or distort behaviour. Nobody outside the government attaches much credence to them, assuming that terms and standards will generally be defined or redefined in order to ensure satisfaction. The best way to increase the probability of achieving good outcomes in complex and sometimes chaotic environments is to have good principles and processes. ‘Stringent performance benchmarks’ should define what these look like, not what percentage of the aid program will be spent on this or that, or how many investment plans and fraud control strategies will exist.

Robin Davies is Associate Director of the Development Policy Centre. This is the fifth in a six-part series of blogs examining the new aid policy, collected here.

Author

Robin Davies

Robin Davies is an Honorary Professor at the ANU's Crawford School of Public Policy and an editor of the Devpolicy Blog. He headed the Indo-Pacific Centre for Health Security and later the Global Health Division at Australia's Department of Foreign Affairs and Trade (DFAT) from 2017 until early 2023 and worked in senior roles at AusAID until 2012, with postings in Paris and Jakarta. From 2013 to 2017, he was the Associate Director of the Development Policy Centre.

Comments

  1. Interesting article Robin. Of interest to DFAT may be the upcoming ICAI review of DFID’s treatment of impact; see the ToR here: http://icai.independent.gov.uk/wp-content/uploads/2014/01/ICAI-Impact-ToRs-FINAL-040314.pdf. You suggest that DFAT should consider organisational and administrative factors more when assessing performance. The imminent ICAI review will be looking at the role that DFID’s processes and tools (e.g. theories of change, country strategies, the reams of documents that guide program implementation) play in the ‘delivery of impact’. A laudable goal, but a tough gig when you don’t know what the impact is in the first place!

