On June 18, 2014, Australia’s then-Foreign Minister Julie Bishop said 24 words that purported to change the face of Australia’s foreign aid program: “Today I am pleased to launch the Government’s new aid policy and performance framework, which I do refer to as the ‘new aid paradigm’.”
It was a month after the Abbott Government, in its 2014 budget, had announced that Australia’s foreign aid spending would be cut by $7.6 billion over the coming five years. While the cuts were headline news, Ms Bishop’s ‘new aid paradigm’ speech also ushered in changes to the way the newly re-branded ‘Australian Aid’ (formerly AusAID) would evaluate its projects.
This re-branding would see the Department of Foreign Affairs and Trade introduce so-called ‘tougher’ standards for the organisations carrying out its projects, which were increasingly in the Asia Pacific region. Australian aid projects would be subject to performance benchmarks and required to deliver value for money; otherwise, Ms Bishop warned, they would be “put on a rigorous path to improvement or be terminated”. Foreign aid spending was thereby pulled into line with public expectations that money spent beyond our borders must be well-justified, given the economic pressures and struggles of people at home. DFAT’s new benchmarking standards played directly to these concerns – even if the benchmarks, in reality, varied somewhat in their specificity and strictness. But is the new evaluation paradigm really likely to deliver the best return on Australia’s investment of taxpayer dollars?
International development projects, to put it mildly, are complicated. They are typically funded in resource-scarce environments, with limited technology and a gamut of complex cultural, societal, socio-economic and political contexts to take into account in the evaluation process. And project ‘success’ is harder to quantify than DFAT’s new cost-effectiveness paradigm allows – where, for example, long-term behavioural and attitudinal change within communities is an aid priority.
Important aid projects are vulnerable under ‘cost-effectiveness’ evaluation. The United Kingdom’s Department for International Development, for example, recently cut funding to a female vocal group in Ethiopia. While the tabloids had a field day over this ‘waste’ of taxpayer funds, the group was intended to empower young women through performance in a region where women’s rights are severely restricted. Indeed, there is limited evidence that performance benchmarking alone is a strong model for evaluating foreign aid projects, particularly those that are community-based.
The disconnect seems obvious between ‘value for money’ benchmarking against strict criteria, and evaluation that promotes ‘on-the-ground program improvement’ that is locally-specific. If value for money evaluation is to succeed as the new paradigm, then the determination of ‘value’, and who determines or interprets it, becomes hugely significant. Should evaluation criteria be generic, prearranged and fiscally-oriented? Or should they be specific to the context of the project, with incentives to effect improvement in local program outcomes?
We can look to the evaluation scholar Michael Quinn Patton for guidance. His concern was that evaluation is all too often a time-consuming, dust-gathering accountability exercise of little relevance, and his research documents the persistent non-use of evaluation reports. He proposed a new method – utilisation-focused evaluation (UFE) – that generates findings to assist the ‘primary intended users’ of the evaluation. In the case of foreign aid, these are generally project staff.
UFE would be a collaborative exercise between the evaluator and those being evaluated, with the process of evaluation being just as helpful for project effectiveness as the actual findings. He didn’t prescribe any particular method of evaluation. Rather, practitioners could do anything from individual interviews to financial analyses – as long as the focus is upon improving the usefulness of evaluation to those who are involved in delivering the project.
So the UFE approach is collaborative, user-centred, and could – we feel – break down some of the barriers that make foreign aid evaluation so complicated. There are physical and cultural distances, for instance, between project funders and project staff; resources are typically scant and therefore stretched; and outcomes are difficult to measure. It is hard to imagine how the ‘rigorous’ project improvement driven by performance benchmarks and strict return-on-investment measures that former Minister Bishop sought will be achieved in the absence of UFE.
However, if UFE is to be fully fit for purpose in the context of Australian Aid’s new performance paradigm, a hybrid foreign aid UFE model would be required. This would incorporate aspects of performance benchmarking alongside collaborative, user-oriented techniques. The evaluation would involve both satisfying generic accounting measures and working closely and in consultation with project staff to identify on-the-ground circumstances and outcomes. Two publicly available evaluation reports would be produced: one written for the funding body to satisfy DFAT’s performance criteria, and the other for project staff, aimed at improving outcomes.
Determining what success looks like for any government project can be a daunting venture, and when the project is taking place beyond our shores things can become even more complicated. But by considering an evaluation model that allows project staff to feel they have a stake in seeing the project grow stronger, DFAT stands a better chance of seeing its new paradigm succeed.
Gen Kennedy and Kate Crowley have recently published ‘Re-framing utilisation focused evaluation: lessons for the Australian Aid Program’ in the Journal of Asian Public Policy.