7 Responses

  1. Enrique Mendizabal, October 28, 2012 at 11:04 pm

    Interesting post, Dinuk. I agree on the need to plan evaluations more carefully to find the right approach for each intervention, and that more care needs to go into planning to ensure that existing knowledge is incorporated into the interventions.

    But also important is the development of a system that ensures real independence. Aid is no longer an apolitical issue that voters are unaware of or uninterested in. Efforts to assess the quality of Australian aid will be undermined if this is not done properly.

    Australia should therefore be careful to avoid the situation we find in the UK, where DFID is the main client of the very same consultancies, NGOs and think tanks that are called on to evaluate UK aid. KPMG, for example, manages the aid watchdog but also implements hundreds of millions of pounds' worth of projects.

    This clientelistic approach means that think tanks like ODI are now also involved in projects as implementing agents for iNGOs and consultancies (such as PwC and KPMG), which makes their oversight roles impossible. The same is true of smaller consultancies, from communications to social development, which often find themselves working with organisations that they are also evaluating. 3ie itself, supposedly the enforcer of absolute certainty, is not free from this.

    True independent voices are few and unpopular.

    This situation is not helped by the roles played by new foundations like Gates, and by iNGOs, in using researchers to advocate for their own interests (see, for instance, Gates’ development progress work); this goes as far as funding influential media outlets like the Guardian for the same purpose.

    The consequence is a system with few (if any) lines of accountability; one in which all participants clearly benefit from the status quo and from the conclusion that ‘more aid is good’. The public is beginning to react to this and, unless important changes are made (and many will be big half-baked PR jobs, unfortunately), the baby will be thrown out with the bathwater.

    I think the fault here lies mainly with some large bilateral funders, such as DFID, that have failed to recognise that different organisations play different roles in the aid sector and that their contributions demand a certain degree of specialisation and even protection. A system in which consultancies, research centres, think tanks and NGOs are all expected to both compete and collaborate with each other can only lead to uncomfortable and dangerous conflicts of interest.

    Conflicts that are incompatible with the demand for rigour and transparency in project evaluations.

    Australia would do well to avoid this muddling of roles. It should attempt to strengthen independent research communities with evaluation expertise, separate from those tasked with implementing aid policy. Only this will allow Australia to hold its aid industry to account.

    1. Dinuk Jayasuriya, October 31, 2012 at 8:26 am

      Thanks Enrique for your post.

      It’s a hard balancing act, especially when there is only a small pool of organisations with the skills and resources to both implement and evaluate projects. In the private sector, four major audit firms audit most, if not all, of the large multinational companies. However, there are regulations in place to ensure that the same audit firm cannot provide advice on internal controls while also undertaking the external audit. As I recall, there are also regulations in place to prevent one audit firm from auditing the same company for more than five consecutive years. Perhaps aid donors could consider similar regulations (if they haven’t done so already).

  2. Dinuk Jayasuriya, June 1, 2012 at 3:25 pm

    Thanks Chris for your detailed comments. I agree that ODE forms only part of the evaluation team, and indeed the article is directed at AusAID, not specifically ODE. I also agree that evaluation has to be tailored to methodologies suited to each project's needs (and RCTs are not suitable in many cases).

    That said, where practical and cost-effective, an RCT is generally considered the most rigorous quantitative technique. The main way another quantitative approach could be superior to an RCT is if the available observable data are very highly correlated with the unobservable characteristics (of units in the sample), or if unobservable characteristics are unlikely to influence outcomes or participation; neither of these seems to happen widely in practice.

    Apart from that point, the debate focuses largely on the disadvantages of RCTs (which certainly exist) and less on the disadvantages of RCTs relative to other quantitative techniques. In the large majority of cases where RCTs are practical and the benefits outweigh the costs, they represent the most appropriate quantitative approach. Combine that with the most appropriate qualitative component and you have a strong basis for causal inference – hence the term ‘platinum standard’. Unfortunately, only a small percentage of projects will lend themselves to this type of scrutiny, so other evaluation approaches that provide rigour, quality and validity are more practical.
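    To make the selection-on-unobservables point concrete, here is a toy simulation (an illustrative sketch with invented numbers, not data from any real project): units self-select on an unobservable trait that also drives outcomes, so a naive comparison of participants and non-participants overstates the effect, while random assignment recovers it.

```python
# Toy illustration: self-selection on an unobservable trait biases a naive
# treated-vs-control comparison, while randomisation recovers the true effect.
import random
from statistics import mean

random.seed(42)
TRUE_EFFECT = 2.0
n = 100_000

# Each unit has an unobservable trait (say, motivation) that raises both
# the outcome and, in the observational setting, the chance of participating.
traits = [random.gauss(0, 1) for _ in range(n)]

def outcome(trait, treated):
    return trait + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

# Observational data: units with a high trait self-select into treatment.
obs_treated = [outcome(t, True) for t in traits if t > 0]
obs_control = [outcome(t, False) for t in traits if t <= 0]
naive = mean(obs_treated) - mean(obs_control)  # biased well above 2.0

# RCT: a coin flip breaks the link between the trait and treatment status.
assignment = [random.random() < 0.5 for _ in traits]
rct_treated = [outcome(t, True) for t, a in zip(traits, assignment) if a]
rct_control = [outcome(t, False) for t, a in zip(traits, assignment) if not a]
rct = mean(rct_treated) - mean(rct_control)    # close to 2.0

print(f"naive observational estimate: {naive:.2f}")  # ~3.6
print(f"RCT estimate: {rct:.2f}")                    # ~2.0
```

    Only if the analyst could observe (or proxy very well for) the trait would an observational estimator close this gap, which is exactly the condition described above.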

    I am encouraged that AusAID is looking at an array of ex-ante evaluation approaches, which can only build on the good steps it has already taken to strengthen performance standards. As to your last point, I can only hope that development effectiveness is measuring value for money and feeding into new programs and projects.

  3. Christopher Nelson, May 30, 2012 at 4:35 pm

    Dinuk,

    I am coming to this conversation late, having just returned from the field, but you raise some important points that deserve exploration. First, I am not sure how helpful it is to put ‘platinum’ or even ‘gold’ labels on approaches in evaluation. As a professional evaluator, I have always felt the best approach is to think carefully through what you are trying to achieve with your evaluation and then choose and explore the methodologies that are going to suit those needs. The whole issue of labels tends to open debates around standards rather than focusing on rigour, quality and validity. There were some nice responses on this recently in the Journal of Economic Literature from Ravallion and Rosenzweig, in their reviews of Banerjee and Duflo’s book “Poor Economics”.

    Second, the current performance and evaluation policy in AusAID allows programs significant freedom in choosing their own approaches to performance tracking. Ex-ante evaluation is encouraged and is increasingly being adopted by the more performance-oriented parts of the agency (admittedly, often the better-funded country programs – Indonesia, PNG and the Philippines). There are also evaluators working with a number of AusAID teams, and they regularly have conversations about evaluative activities in the pipeline. It is often overlooked that ODE forms only one part of the agency's quality system; there is a whole team in the Program Effectiveness and Performance Division tasked with driving the evaluation policy. The real challenge for this section is driving policy change alongside adequate, well-supported cultural reform of an agency filled with Development Generalists.

    Which brings me to my third point. The main constraint in taking on this approach has always been expertise rather than cost or willingness. AusAID has few professional evaluators in its ranks, and even fewer individuals capable of getting their heads around an RCT or quasi-experimental evaluation design. The field of evaluation in Australia is heavily qualitatively focused, and drawing on a pool of quantitative evaluation experts who understand the development context is not straightforward. As I understand it, AusAID is having conversations with J-PAL and others about running this type of evaluation, but there are all sorts of constraints and challenges that need thinking through before launching into the misguided policy (see recent USAID proclamations) of believing independent evaluations will be the answer. You suggest universities as a possible partner, and there is value in better utilising these institutions. However, universities have often had their own issues with being involved in these operational undertakings. Contrary to popular belief, my experience has been that AusAID is very open to starting a conversation with groups that could undertake this type of work, and the political barriers are often overcooked.

    Thanks for raising the issue; I think it is an important component of the development effectiveness debate. Even more important, though, is the question of where the whole effectiveness and value-for-money agenda is heading. There are some important aspects of the recent AusAID agency results framework that have huge ramifications for how the sector works, and this deserves further discussion.

    Christopher Nelson is a Monitoring and Evaluation specialist with the World Bank Group and a former M&E advisor at AusAID.

  4. Dinuk Jayasuriya, May 17, 2012 at 5:30 pm

    Thanks David and Paul for your comments. You both point to the fervor surrounding RCTs; it’s unfortunate if institutions take them to be the be-all and end-all. I certainly hope (and indeed believe) that AusAID will not go down the path of funding only projects that are suitable for RCTs (and, by extension, platinum-style evaluations, which also incorporate qualitative tools). For many reasons, including the ones mentioned above, we need an array of evaluations of differing rigor, depending on the cost, benefit, practicalities and technical expertise available.

  5. Paul Holden, May 17, 2012 at 6:47 am

    It is important to note that in some aid agencies the fervor for RCTs has led to an expansion of projects that can be evaluated in this way, at the expense of those for which RCTs are not appropriate but which might have a longer-lasting impact. For example, Andrew Natsios, a former Administrator of USAID, describes in “The Clash of the Counter Bureaucracy and Development” how the demand for the so-called “gold standard” of RCTs (although I see that it is now “platinum” – is this evaluation inflation?) has biased the choice of projects that USAID supports towards those that are suitable for RCT evaluation. He points out that the initiatives with the most transformative impact are those that are difficult to measure and that also carry the greatest risk of failure. No good bureaucrat wants a failed project, so this kind of risk-taking is eschewed. I have long maintained that we can learn as much from failed projects as from successful ones, as long as the reasons for failure are carefully documented and become part of institutional memory and future project design. Alas, the rapid turnover of staff in many aid agencies means that institutional memory is rare or non-existent.

  6. David Carpenter, May 15, 2012 at 4:19 pm

    Dinuk, I agree with your observations regarding the need for mixed-methods approaches and an ex-ante evaluation perspective. I am the research director of a large evaluation of the AusAID-funded, Australian Sports Commission-implemented Australian Sports Outreach Program. This evaluation employs a mixed-methods approach; however, the quantitative component is restricted by the ex-post nature of the design (and therefore no baseline, no possibility of a control, etc.). This is a pity, but there is not much we can do about it (so is this a ‘bronze’ or maybe ‘silver’ standard design?). Due to the fervor surrounding RCTs, there is increasing pressure on evaluation consultants to employ the most rigorous methods possible; however, for the reasons you mention, this is almost always not possible. More needs to be done to ensure that donors understand when such methods are possible and when they are not. I agree with your call for AusAID to review upcoming projects to assess which of them may be suited to platinum standard evaluations. David

    Dr David Carpenter is the Principal Consultant, International Development at Sustineo in Canberra
