USAID: Obsessive measurement disorder?

Is too much oversight a risk to aid effectiveness? A recent paper by Andrew Natsios, visiting fellow at the Center for Global Development and a former Administrator of USAID, outlines the role of the ‘counter-bureaucracy’ – the part of the US government that deals with budgeting, accountability and oversight – in distorting the implementation of the US aid program.

Natsios describes an aid system where rules and reporting requirements created by the counter-bureaucracy crowd out creative work, such as activity identification and design, and create perverse incentives that stifle innovation and lead to a focus on short-term results.

He coins the term ‘obsessive measurement disorder’ to describe the belief that the more an activity can be quantified, the better the policy choices and management of it will be.

For Natsios, the rise of the counter-bureaucracy and obsessive measurement disorder has led to aid funds shifting towards interventions where results are easy to quantify and measurable after a short period. In particular, he notes that there has been an increased focus on health service delivery. As an example, he singles out the President’s Emergency Plan for AIDS Relief (PEPFAR), which uses a similar project design in each country and reports against quantitative indicators such as the numbers of treatments dispensed and people treated. Natsios notes that once someone starts a course of anti-retroviral treatment (ART), they must continue it for life, and questions how, in the absence of institution building, access to ART will be sustained once PEPFAR funds cease. Natsios believes that service delivery, such as the provision of ART, should be subordinate to institution building. (Note: the PEPFAR website indicates it is starting to report on health system strengthening, perhaps a recognition of the limitations of its earlier approach.)

The situation of USAID described by Natsios is one of ‘goal displacement’. First described by Robert Merton in 1957, goal displacement occurs when formalistic goals become more important than the substantive goal of the organisation. In the case of USAID, the goals are largely set by the counter-bureaucracy and do not reflect development goals of the partner country or of the ultimate recipients of US aid. It is this fact that leads Natsios to claim that the counter-bureaucracy is now so dominant that it has become the main customer of US aid.

Natsios believes that aid can transform society; he does not draw a clear distinction between aid and development. He says ‘Development, on the other hand, is at its root an effort to build or strengthen institutions.’ And he regrets the decrease in aid to law and order and other governance sectors, which he believes can be transformative but where results are harder to measure and often do not emerge for years.

Many would dispute Natsios’ claim that aid can be transformative or that it can lead to development. Chief amongst the sceptics is William Easterly, who in 2007 wrote ‘Once freed from the delusion that it can accomplish development, foreign aid could finance piecemeal steps aimed at accomplishing particular tasks for which there is clearly a huge demand.’

Irrespective of your views on the potential of aid, it would be wrong to discount the paper. Natsios’ extensive experience as Administrator of USAID, with World Vision, and in various positions dealing with aid at a political level gives him a practical perspective that is refreshing in the aid effectiveness literature.

In the aftermath of the global financial crisis, with donors’ aid budgets coming under increased pressure, there are calls for greater scrutiny of the effectiveness of aid and for clearer demonstration of value for money. This scrutiny will come from the counter-bureaucracy. Natsios’ recommendations include:

  1. A new measurement system for results that acknowledges that short-term quantitative indicators of results are not appropriate for all programs.
  2. Adoption of different evaluation methods for each of service delivery, institution building and policy reform programs.
  3. More research into the effect of the counter-bureaucracy on aid effectiveness with a view to reducing the compliance and reporting burden.
  4. Overt recognition of foreign policy as an objective of some aid projects and a call to judge these projects against political not development objectives.
  5. An end to disbursement rates being used as a measure of performance for the aid program, and recognition that the time taken for results to emerge will vary from program to program and country to country.
  6. Devolving aid programming and decision making to the lowest possible level, where on-the-ground knowledge is greatest.
  7. Recognition that aid politics demands simple data on outputs, which can be used to defend the aid program, but that this may differ from what development professionals need to make decisions about the aid program.

Natsios’ paper is a timely caution that excess oversight can negatively affect aid effectiveness. His recommendations, although directed at USAID, are therefore relevant to all bilateral donors.

Cate Rogers is a PhD student at the Crawford School, on leave from the Australian Agency for International Development (AusAID).


Cate Rogers

Cate Rogers is a Research Associate with the Development Policy Centre. She is also a PhD candidate at the Crawford School and has over a decade’s experience in international development, including four years evaluating development programs.


  • Another excellent thought-provoking article. It touches upon something quite fundamental in development today – namely that government ministries are fed up with being poked around and micro-examined like bacteria in a petri dish.

    It would be one thing if these donors subjected themselves to such scrutiny, but the real insult comes when one tries to determine the “source” of such information. With a bit of research it quickly becomes evident that many of the key indicators used by the major donors are sourced from out-of-date surveys done by NGOs. As a result, the movement of these indicators can be almost random, yet the implications in terms of funding and development support upstream can be very serious.

    There is probably a good follow up article to be written about the “out-sourcing” of this data collection.

  • Great post Cate. And good comment Scott.

    I think that Natsios’ suggestion to focus on and measure capacity (I would also add the word sustainability) is not so much pie-in-the-sky. The measures have to be program specific. To take the case of malaria programs (focused on prevention): currently PMI measures nets shipped and nets handed out.

    Important program effectiveness measures capturing real results might be: the increase in demand for, or propensity to use, nets; parasitemia and anemia indicators (captured via household surveys); access to nets at community level (which campaign distributions often diminish); and measures of capability of malaria control program staff (numbers, qualifications; stability; adequacy of resources to analyze program data, and to manage the program).

    These things are measurable. Indeed, in many cases the data is already collected (via household surveys) but it is not used to assess program effectiveness.

  • Cate’s summary of Natsios is good and the paper itself is interesting, but let’s not forget that there are also reasons why the push for better results measurement, tougher oversight, and centralized controls came about. Development administrators always argue that nobody is better positioned to regulate them than they themselves. Granted, this has gone too far (Natsios’s main point), to the point where agencies like USAID are splendidly dysfunctional, but the track record of in-house oversight isn’t all that great either. We can be sure that the dialectic between centralized control and decentralized initiative is going to be with us for some time to come.

    More interesting for me are Natsios’s proposals to use different measurement systems for institutional capacity building. Unfortunately, he doesn’t say what those might be. And yet while “capacity” seems to be one of those areas that every agency parks in the “sacred cow” category of agency purpose, it’s one where we don’t seem to be much nearer to true knowledge and understanding than we were thirty years ago. Just what is it that creates capacity? And for development, are there any lessons to learn about what “capacity development” means (beyond endless training courses and strategic planning workshops) in terms of what to do in fragile state environments, with decentralization, with institutionalizing bottom-up accountability, etc? USAID (and AusAID in the Pacific) probably suffer more from their constant list towards capacity displacement than they do from too much measurement and control, and some of the push for them to show results came from the fact that so many countries seemed locked into a low-level equilibrium that sustained a permanent stream of new projects and helpful consultants but never seemed to make any real progress towards building up significant national capacities.

    • Scott, thanks for this, I agree with you. The Natsios article doesn’t dwell on the failures of USAID’s own target setting and measurement of results – one factor behind the rise in influence of the counter-bureaucracy. It does, however, reference the disconnect between the data that politicians need to sell the aid program and that which development professionals need to run it. This is clearly an area where more work needs to be done, particularly in relation to projects that are hard (but not impossible) to evaluate, like institution building. Otherwise there is a risk that pressure will mount to shift aid towards projects with simple measurable outputs – such as those outlined in the article – or to impose artificial and inappropriate targets on institution building projects. Evaluations of such projects tell us that such targets are often met, but that this may come at the expense of the main objective of capacity building.

      Ultimately, I think it boils down to the fact that if we want effective aid, we need decisions about interventions to be driven by strategy rather than ease of measurement – and we need monitoring and evaluation methods that are appropriate to the objective of the intervention and meet the information requirements of both development professionals and politicians.
