Results, value for money and the aid budget

16 May 2012

On 8 May the government announced a ‘pause’ in the growth of the aid budget. There has been quite a lot of commentary on this, and on ‘balancing the budget on the backs of the poor’.

There has been less commentary on what the budget and associated documents put out by the government as part of the budget announcement had to say about aid quality. Exceptions include a post by Stephen Howes on this blog and a related post by Dinuk Jayasuriya. Both are relatively optimistic about the reforms to improve aid quality, which include:

  • The Comprehensive Aid Policy Framework: a four-year strategy recommended by the Aid Review to provide the road-map for the scale-up that has been missing so far;
  • A Results Framework: a new statement of targets at the level of AusAID’s contribution to the MDGs and for standards of aid delivery, including ‘a clear strategy’, ‘value for money and consolidation’, ‘risk management and performance oversight’, ‘transparency and results’ and ‘involving the Australian community’;
  • The establishment of an Independent Evaluation Committee: to oversee the Office of Development Effectiveness (ODE), and which reports to the Director-General of AusAID;
  • Some increased funding for research: more money for agricultural research, though nothing on new funding for medical research.

Howes and Jayasuriya both note that these announcements require further analysis and that the devil will be in the detail of how some of these proposals are implemented.

As implementation goes ahead, it is worth noting some of the risks that will need to be managed in this area. A number of observers have raised potential concerns about how the focus on results and value for money will be pursued; see, for example, Owen Barder’s recent post on ‘seven worries about focusing on results, and how to manage them’.

Owen Barder suggests there are a number of possible concerns. First, focusing on results may add to bureaucratic overload. Second, it may make aid less strategic and more short-termist. Third, it may impose the wrong priorities. Fourth, it may ignore equity. Fifth, it may create perverse incentives. Sixth, it may inhibit partnership. Seventh, the results information is all bogus anyway, as claims about results must rely on assumptions about the counterfactual which are usually flawed or incomplete.

He then suggests a number of ways of managing these concerns, notably by:

  • Reducing bureaucracy, including by using reliable results measures to replace, not supplement, existing procedures for tracking how aid money has been used, and by trusting development professionals, giving them more freedom to design and implement programmes to achieve agreed results, including the freedom to adjust them in real time without needing to seek approval.
  • Remaining strategic, including by putting in place a transparent, simple, common framework for taking account of expected future results, so that strategic, long-term and risky investments are properly valued; and, where there are concerns about equity, addressing these transparently by specifying the premium placed on marginalised or under-served groups.
  • Increasing rigour while remaining proportionate, e.g. by doing fewer, better evaluations, pooling resources across aid agencies to do more independent impact evaluations, and putting in place an Institute for Development Effectiveness to examine impact evaluation evidence and provide independent and transparent guidance.

I would argue that, in addition to these ideas, we also need to recognise that it is not only the ability of certain quality processes to produce robust and reliable information that matters. The politics of ‘evidence’ and evaluation is also very much at play in this debate: certain evaluative tools and methods deliver better political products to Ministers and governments than others. This will continue to require ongoing discussion, recognition and vigilance. The Big Push Forward initiative is planning a conference next year on this topic.

As far as other practical solutions go, it is also important to explore how citizens in both the ‘developing’ and the ‘developed’ world might be better informed and better linked as part of a quality agenda. The government notes the importance of involving the Australian community, but could go further.

A number of observers – including Owen – argue that the days of aid agencies and NGOs holding a privileged and relatively monopolistic intermediary position between ‘taxpayers’ or individual ‘donors’ and the intended beneficiaries of aid are disappearing fast. Kiva, Global Giving and other internet-based agencies are providing – or at least seem to provide – more direct connections, which are also arguably more transparent. As Owen suggests, agencies “must become a platform through which citizens can become involved directly in how their money is used”, but equally this needs to ensure an even greater involvement of those whom this enterprise ultimately seeks to benefit.

It is time to take this to another level, creating deeper and broader two-way ‘feedback’ loops (as Owen himself has argued in his post on what agencies can learn from evolution) by:

  1. Supporting local organisations and independent media to tell their own stories about aid and development effectiveness;
  2. Encouraging the incredible innovation that is emerging from the likes of Ushahidi and Twaweza (of which Owen is a board member) to share information in all directions and to help visualise, crowdsource and aggregate this information and these stories, so that the complex patterns and weave of the development tapestry become clearer;
  3. Really taking seriously the challenge of moving from a transactional to a transformational community engagement agenda in donor countries, and helping to build networks and coalitions between these actors and those promoting progressive change elsewhere, so that information on effectiveness and performance is more directly shared and debated.

This is not an alternative to more formal, rigorous and appropriate evaluation, but rather an important complement to such evaluation of the development process. Such approaches provide more real-time feedback, which is important for ongoing learning and adaptation, and they would also start to change the incentive structures faced by governmental and non-governmental aid agencies in important ways.

As Einstein once noted, “not everything that can be counted counts, and not everything that counts can be counted”. This sort of approach complements a necessary focus on numbers with an improved dialogue and deeper engagement with the issues.

This blog is part of a series on 2012 Aid Budgets. For other blogs in the series, see here.

Chris Roche is the Director of Development Effectiveness for Oxfam Australia.

Authors

Chris Roche

Chris Roche is Director of the Institute for Human Security and Social Change and Associate Professor at La Trobe University, and a Senior Research Partner of the Developmental Leadership Program. Chris has worked for international NGOs for nearly 30 years, and has a particular interest in understanding the practice of social change and how it might best be catalysed and supported.

Comments

  1. It has been pointed out to me that the Independent Evaluation Committee does not report to the AusAID Director-General. It is responsible to the Development Effectiveness Standing Committee (DESC). This is true: see the Terms of Reference of the IEC here http://www.ode.ausaid.gov.au/publications/pdf/iec-tor.pdf. However, the DESC is chaired by the Director-General of AusAID: see the DESC Terms of Reference here http://www.ode.ausaid.gov.au/publications/pdf/desc-tor.rtf.

    It is notable that in the United Kingdom the Independent Commission for Aid Impact reports to Parliament through the House of Commons International Development Committee.

