Seventeen per cent of development projects at both the World Bank [pdf] (between 2009 and 2011) and the Australian aid program [pdf] (2009-10) failed to ‘reach satisfactory outcomes’. A better understanding of why these projects fail could have profound implications for accountability, for how projects learn, and for aid effectiveness in general.
An article in the latest Journal of Development Economics tries to improve our understanding of this issue. Using a dataset of more than 6,000 World Bank projects assessed between 1983 and 2011, the authors systematically examine the link between country-level and project-level factors and project performance.
The authors use the World Bank’s Country Policy and Institutional Assessment rankings, civil liberty and political rights indices, and real per capita GDP growth as proxies for differences between countries. They find that there is generally a positive and significant link between country-level factors and project performance. Interestingly, however, the authors find that only 20 per cent of the variation in project performance is due to country-level factors. While 20 per cent is not insignificant (still large enough to justify country-level selectivity), this finding highlights the importance of analysing project-specific factors and their connection to outcomes.
The authors examine a large array of project-level ‘micro’ variables in their analysis, including project complexity, project duration, preparation and supervision costs, delays in the project, and whether or not the project had been restructured in its lifetime. They find that:
- Restructured programs perform better than average following their restructuring, underscoring the effectiveness of this particular intervention in turning around underperforming projects.
- There is a negative correlation between preparation and supervision expenditures and project outcomes.
- There is only some evidence to suggest that larger, more complex projects are less likely to be successful.
- Greater dispersion of a project across sectors is significantly associated with better project outcomes.
- Whether a project is new or repeated does not seem to matter much for outcomes.
- The ‘human factor’ (i.e. the quality of the team leader, measured by various proxies such as the performance of projects they have previously managed) is significantly correlated with project outcomes.
Even after including a wide range of micro and macro variables, the authors can explain only between 13 and 16 per cent of the variation in project outcomes.
Moreover, while they find that some of their explanatory variables are partially correlated with project outcomes, their results may be clouded by the fact that these variables might themselves be responding to unobserved project-level factors that matter for project outcomes. The authors themselves conclude that only a small part of the very substantial (80 per cent) project-level variation in outcomes can be attributed to the variables they analysed.
What does this mean? If none of the conventional variables highlighted above has a major impact on project performance, what does? The authors suggest that there are critical but unmeasurable ‘human factors’ that drive project outcomes.
When the external environment a program sits within changes, whether through political shifts, changes in needs or some other factor, there will of course be a requirement to restructure or depart from the original design. Over time, through deeper engagement with stakeholders and partners, learning should emerge that can feed into improvements to a program. For ‘successful’ programs that deliver effectively against an original design and don’t evolve over time, I wonder whether there are missed opportunities to achieve even greater impact and stronger outcomes. Restructuring may be more the symptom or visible cue than the cause of improvements in effectiveness. I would question whether restructuring per se creates better-than-average performing projects, or whether that is more attributable to teams capturing learning from a program throughout implementation, which in some cases is reflected in a program restructuring or complete redesign. Perhaps that is one of the reasons why the authors found the outcomes were only partly attributable to the variables assessed. There is a need to look at things more holistically and contextually.
Indeed, I think this is a rather long-winded way of saying what we already know: good people make for good projects. It would be interesting to see how flexibility and changes to frameworks have evolved over time, and how that may have affected outcomes.
It is not clear what the “new evidence” is. That a restructured program would perform better than one not restructured? Many projects undergo restructuring during implementation precisely to manage performance. Projects that are on track fortunately don’t need restructuring, so suggesting that restructured projects are likely to perform better is like saying that projects on track are more likely to fail. Then there is the revolutionary finding of a negative correlation between the cost of project design and supervision and project outcomes, which suggests that the lower this cost, the higher the outcomes. Not only is this counterintuitive, it also contradicts elementary principles of project design and management.