A new, headline-grabbing report from Britain’s Independent Commission for Aid Impact (ICAI) has criticised the Department for International Development (DFID) for failing to do enough to tackle corruption in the countries where it gives aid.
In the report [pdf], the performance of DFID’s anticorruption work was rated ‘amber-red’: the second worst grade possible, indicating that the program “performs relatively poorly overall against ICAI’s criteria for effectiveness and value for money”. The report criticised DFID’s strategy, stating that it wasn’t sufficiently focused on the poor, particularly on their everyday experience of corruption, and wasn’t applying lessons learned.
Many of the report’s criticisms were levelled at a lack of evidence for DFID’s anticorruption efforts—for example, the failure to demonstrate a “robust causal link” between supporting Nigeria’s involvement in IATI and flow-on benefits to the country’s poor.
The report made a number of recommendations, such as DFID establishing standalone anticorruption strategies, including more projects targeting “everyday corruption” in its portfolio, and creating an internal embedded centre of excellence on anticorruption.
The report has certainly sparked conversation on corruption and aid, not only in development blogs, but also in newspapers like The Telegraph and Daily Mail—perhaps a less helpful kind of babble.
But the report itself has also been criticised for holding the aid program to unrealistic standards and missing the mark.
In The Guardian, UK development academic Heather Marquette labelled the DFID report “a mess” and “a wasted opportunity to think about how we deliver aid with integrity” that fails to understand the nature of corruption and creates a risk of ineffective “window dressing” approaches. In a related piece for The Conversation, she wrote on the need to communicate a more realistic and nuanced picture of corruption and for organisations such as ICAI to better understand the work that aid agencies do.
While acknowledging the effort made by ICAI to gather systematic evidence, Charles Kenny of CGD criticised the quality of the approach, writing that “ICAI’s attitude to what counts as evidence is so inconsistent between what it asks of DFID and what it accepts for itself”.
Edward Hedger of ODI wrote that ICAI missed opportunities to look more closely at what might work in fragile contexts and that the report lacked a credible narrative on how to balance risk and impact, but he praised its pragmatic approach to budget support.
The critiques of the report remind us of some of our responses to Australia’s ODE aid evaluations. Perhaps getting high-quality evaluations is just as challenging as getting high-quality aid.