UN Secretary-General hosts UN Private Sector Forum 2015 (Flickr/UN Photo)

Resisting the formulaic: measuring the impact of aid on entrepreneurship and development

By Simon White
24 February 2016

Considerable innovation is occurring in how donor and development agencies promote entrepreneurship and private sector development (PSD) in developing countries. Where once donors and developing country governments provided finance, training and other services directly to nascent enterprises, they now focus on the role of market and government systems. Programs have become more commercialised and market-driven, and are often embedded in value chain interventions. While investment climate and business environment reforms improve governance and the conditions in which private enterprises start up, operate and expand, inclusive business models seek to open up new opportunities along global value chains. Impact investing is also changing the ways agencies engage with the private sector. These innovations make it more difficult for agencies to track and measure change and assess impact.

In parallel, the architecture for international development was reconfigured with the adoption last year of the Sustainable Development Goals (SDGs). While the SDGs present a “call to action” for governments around the world, they also place demands on programs to demonstrate how they contribute to the 17 goals and 169 targets that member states have signed up to. Entrepreneurship is mentioned only twice in the SDGs (in Targets 4.4 and 8.3), but PSD and entrepreneurship are indirectly connected to many goals and indicators. Furthermore, the private sector is often cited (see for example here) as a critical actor in the implementation and success of the SDGs. The challenge for donor and development agencies is how to measure the contribution that PSD and entrepreneurship programs make to these desired outcomes. Yet there is danger in forcing a global blueprint of development outcomes on all actors without recognising the practical, unique problems experienced on the ground. Agencies should therefore refine their indicators to reflect changes at both micro and macro levels.

The OECD-Eurostat Entrepreneurship Indicators Programme and the Global Entrepreneurship Monitor provide valuable insights into the dynamics of entrepreneurship across selected countries, but they do not provide data for determining the performance and impact of PSD and entrepreneurship promotion programs.

The process of measuring program effects and impacts is closely connected to diagnostics and design. A great deal of effort and time typically goes into understanding how market and government systems shape the behaviour of entrepreneurs. However, program design can be an unwieldy process, made more complicated by detailed design templates, sophisticated log frames and lengthy approval processes that are often disconnected from the realities of program beneficiaries. The top-down goals of development programs, such as those enunciated in the SDGs, are, or at least should be, combined with the bottom-up experiences of entrepreneurs and other system agents.

A well-designed program will sit at the neck of the hourglass where top-down and bottom-up demands meet, strategically targeting its interventions on systemic changes that influence outcomes in both directions. The challenge for program managers is to monitor the program’s effects on market and government systems and to adjust accordingly, while responding to demands from funders for evidence of longer-term impact.

Working within systems requires a level of flexibility and adaptability that funding agencies are not always comfortable with. While funding agencies are interested in results, programs working with systems need to understand how their interventions influence behaviour and how this will produce the outcomes donors and their partners desire. The logic of program design is elaborated through the construction of results chains, which articulate the ways program interventions change behaviour at the micro level in order to ignite macro-level change. This also provides opportunities for programs to test the assumptions behind these links.

To succeed, programs must focus on locally identified problems, informed by local conditions and politics. Programs should be flexible and adaptive, carefully monitoring their effects and adjusting to unexpected changes, whether created by the program or by external forces. New developments in “problem-driven iterative adaptation”, or PDIA, highlight the importance of solving locally defined problems rather than transplanting preconceived, packaged best-practice solutions. These approaches encourage experimentation embedded in tight feedback loops that facilitate rapid experiential learning and engage broad sets of agents to ensure that reforms are viable, legitimate, relevant and supportable.

The Donor Committee for Enterprise Development (DCED) Standard for Results Measurement is one example of a management tool designed to measure and improve program performance. The Standard helps program managers articulate the hypotheses connecting program activities with desired change at the enterprise and economy levels, and provides guidance on systematically setting and monitoring indicators to show whether events are occurring as expected.
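To make the logic of a results chain concrete, the sketch below shows one way the chain of hypotheses and their indicators might be represented and monitored. It is a minimal illustration under invented assumptions, not part of the DCED Standard itself: the steps, indicator names and target values are all hypothetical.

# A minimal, hypothetical sketch of a results chain with monitored indicators.
# The step descriptions, indicator names and target values are invented for
# illustration; the DCED Standard is a set of management practices, not a
# software specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    """One link in a results chain: a hypothesised change and its indicator."""
    description: str                  # the behaviour change being hypothesised
    indicator: str                    # what is measured to test the hypothesis
    expected: float                   # target value set at design time
    observed: Optional[float] = None  # updated as monitoring data arrive

    def on_track(self) -> bool:
        return self.observed is not None and self.observed >= self.expected

# Hypothetical chain: intervention -> micro-level behaviour -> macro-level outcome
chain = [
    Step("Training delivered to input suppliers", "suppliers trained", 40),
    Step("Suppliers market improved seed to farmers", "farmers reached", 1200),
    Step("Farm yields and incomes rise", "average yield change (%)", 15.0),
]

# Record monitoring observations; a step without data simply stays unverified.
chain[0].observed = 42
chain[1].observed = 800

# Flag broken links: a failed step undermines every assumption further up
# the chain, which is where management attention (and adaptation) should go.
for step in chain:
    status = "on track" if step.on_track() else "check assumptions"
    print(f"{step.indicator}: {status}")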

Broad global and national development plans, such as those enunciated in the SDGs, present a challenge to aid programming. While they rightly align development programs with the aspirations of governments and their development partners, they increase the demand for a predetermined, formulaic set of results. Program managers need indicators and assessment mechanisms that allow them to measure and test the relationships between program interventions, monitor changes in market and government systems, and remain aligned with national and global development goals. Understanding how programs affect behaviour, and how these behaviours link to broader macro-level outcomes, is critical for success. This information can generate innovations and larger, more sustainable program impacts, but programs require the space and opportunity for experimentation and constant refinement. Donor effectiveness will improve as these links are more rigorously tested, documented and learnt from.

Simon White is an independent policy advisor who helps national, regional and city governments, business organisations and development agencies formulate and implement policies and strategies for economic growth, business development and job creation. He is a Visiting Fellow at the Sir Walter Murdoch School of Public Policy and International Affairs, Murdoch University.

This post is based on a paper Simon presented at the 2016 Australasian Aid Conference; read the full conference paper here [pdf]. 
