AusAID’s Office of Development Effectiveness has released a Paper on impact evaluation, providing staff with guidance and standards to support its use. With increasing attention now being given to impact evaluation in development, this new Paper is very timely.
This Discussion Paper is excellent. It is sensible, well-grounded in current good practice, succinct, helpfully referenced and practical. The broad approach adopted in the Paper, and its acknowledgement of methodological flexibility, is a breath of fresh air in a field so dominated by constrained and limited discussions of single methods such as ‘Randomised Controlled Trials’ to the exclusion of other methods. The key to the value of this Paper is the statement: “Impact evaluation in AusAID should not be methods-driven”.
The quality of the Paper stands in marked contrast to many other donor documents in the field, for example, those currently available from USAID. These documents are for the most part confusing, narrow, and unhelpful when discussing impact evaluation.
One area that might be addressed to strengthen the Paper is the additional purpose impact evaluations can serve: contributing to our body of knowledge about development. The case for advocating this additional purpose rests on the paucity of quality research and publicly available evidence for many donor-supported interventions.
AusAID has invited comments and observations on the Paper, which is available here.
Robert Cannon is an Associate of the Development Policy Centre and is presently working as an evaluation specialist with the USAID-funded PRIORITAS Project in Indonesian education.
ODE and its technical partners (Professors Patricia Rogers and Howard White) should be commended for a very clear, methodologically sound and informative discussion paper for AusAID practitioners on impact evaluation (IE). AusAID has asked for comments and observations; mine are only minor.
1) Perhaps the team could provide a short note (potentially in a separate document) that highlights the difference between monitoring and evaluation, and how each adds value to AusAID.
2) There is a recommendation that AusAID link with partners based overseas who place an emphasis on experimental and quasi-experimental evaluations. However, there are likely to be academics scattered throughout Australian universities who could serve as implementing partners and sources of technical support, potentially at lower cost.
3) Highlight in the document a ‘one-stop call’ for people within AusAID who want to undertake an evaluation and need to know what design would best suit their program. The evaluation options illustrated in Annex A are likely to require specialist knowledge to understand and implement.