Research in international development: bridging the gap between production and use

(Credit: DFAT/Flickr CC BY 2.0)

The recent awarding of the Nobel Prize in Economics to Abhijit Banerjee, Esther Duflo and Michael Kremer for their work developing experimental methods to assess the impact of development interventions might be seen as a testament to the important role of research in improving development practice. But the perceived value of research in informing policy and practice in fact comes and goes as a fad in international development – waxing and waning with the wider political climate.

In many donor countries, a renewed focus on ‘research and learning’ has been apparent since the global financial crisis, influenced by austerity measures and a sense of aid budgets being under attack. A range of donors and governments invest in research and evidence with the intention of delivering more effective aid programs or policy outcomes – putting in place wonky-sounding policies and units. There’s DFID’s ‘Evidence into Action Team’, USAID’s Monitoring, Evaluation, Research and Learning Innovations (MERLIN) Program, DFAT’s Aid Evaluation Policy and Innovation Strategy, and the ‘Evidence and Knowledge Systems Branch’ of South Africa’s Department of Planning, Monitoring and Evaluation.

This interest in research has also trickled down into aid programs. We increasingly see large aid programs supported by ‘learning or knowledge partners’ that undertake research alongside programming to inform it and capture its lessons – such as the Water for Women Fund, or the Australia Pacific Training Coalition.

Yet despite these good intentions, when the rubber hits the road in programming, it is frequently the ‘research and learning’ components that get trimmed first. In this sense, research and learning is treated as a bit of an add-on – nice to have but not necessary. Why is this the case when we know it should be a valuable investment?

The general consensus on the importance of research masks some differences that, when we start to unpack them, contribute to a disconnect between research production and research use. These differences have important implications for what we research, how we research, what questions and findings are valued, and so on. Without clarifying these differing views, the focus on ‘knowledge’ and ‘research’ is likely to be frustrated, as different actors working in support of this agenda confront the reality that they might actually be interested in quite distinct things. There are many ways in which this plays out, but let’s take just three.

First, what is the purpose of research? Is it to spend taxpayer money more effectively? To provide a platform for citizen voices? To demonstrate impact? To understand complexity and make more informed decisions? None of these answers is wrong, nor are they mutually exclusive. But they speak to different reasons why research might be valued in the first place.

Second, what methods are most useful? Often donors want firm answers or solutions with strong lines of causation between inputs and outputs. This can favour experimental and econometric methods, such as randomised controlled trials, which may be suitable in some instances. In others, qualitative methods, such as case studies or ethnographic methods, may be better placed to open up complexity and context specificity; or participatory action research may be used to give voice to the marginalised. While disciplinary battles are ongoing, the point is less that certain methods are better than others and more that they simply do different things and treat different things as relevant evidence.

Third, what research outputs are most important? For academics, funding bodies such as the Australian Research Council stipulate that academic publications ‘count’ for more than commissioned reports. Yet these are often seen as dense and impractical by the aid programs, donors or NGOs that want succinct, easy-to-comprehend products. Or, if you work with communities directly, you may not want text-based outputs at all, instead valuing other forms of ‘reporting back’.

These disconnects can mean both that the research produced does not always ‘scratch the itch’ for practitioners and policymakers, and that practitioners and policymakers are not always well equipped to integrate research findings.

To address this disconnect, the Research for Development Impact (RDI) Network is funding the ‘Enhancing Research Use in International Development Action Research Project’, supporting research partners to unpack their internal political economy and ways of working to understand what constrains and enables better research use. The project brings together 12 organisations working in different parts of the international development sector – spanning donor agencies and intergovernmental organisations, NGOs, private sector consultancies and universities.

Over the coming six months, small groups of research advocates in each organisation will undertake their own internal research and trial initiatives to improve research use. While these are still being developed, they range from instituting research and learning strategies within NGOs, to demonstrating the value of research to senior government bureaucrats, to carving out time for investment in research within the cost recovery model of consulting firms, to facilitating greater interdisciplinary research and outreach with non-researchers within universities.

Researchers from La Trobe University’s Institute for Human Security and Social Change and Praxis Consultants will accompany the research advocates, documenting their learning about the obstacles and opportunities to better research use across the international development sector in Australia.

We’ll be sure to report back in the coming six months on what is being learned and how different research partners are addressing the challenge. We’d love to hear from others in the sector too on how they are trying to cultivate a smoother relationship between research and practice.


Lisa Denney

Lisa Denney is a senior research fellow and Deputy Director of the Institute for Human Security and Social Change at La Trobe University and a research associate with ODI.

2 Comments

  • The field of program evaluation has a body of literature explaining the different types of evaluation use and the factors affecting the take-up of evaluation findings.

  • Thanks for the post, Lisa. DT Global is pleased to be one of the organisations participating in this RDI Network initiative. It is a topic of great importance to us, practically and intellectually, and we recognise that we need to explore better ways to address the research/policy/practice intersection in what we do. We are looking forward to our own exploration and, in particular, the learning we can take from working with this positively diverse peer group.
