Adaptive programming is de rigueur. Everyone’s into it. It’s been this way for some time. Yet not unlike its own reason for being, the field of adaptive programming is messy. Every mum of toddlers knows: some mess is healthy. But presently it’s difficult to see the forest for the iterative jargon. As ODI has pointed out, the field generally lacks rigour. It contains gaps, is frequently misunderstood, and rarely – at least in my experience – are its implications for a program fully understood by everyone at the wheel of the steering committee. Like all good problems, however, the adaptive programming challenge contains hints of its own solution. If we pull it apart a little – looking back at how we got to where we are, and at the ongoing challenges of doing this better – we may find some pieces of the puzzle we have missed.
Over the past ten or so years, three discernible phases have emerged in the adaptive trend in international development practice. The first phase addressed the ‘Why?’. Learned folk on both sides of the Atlantic produced detailed, evidence-based articulations of why development programs need to adapt more effectively to the contexts in which they operate. The evidence suggested that traditional program approaches were either at risk of having no impact or, worse, of doing harm. These insights occurred in parallel with significant improvements in program evaluation methods. Broadly speaking, the evaluators were in concert: many large development programs simply weren’t achieving the outcomes they set out to achieve. Long-winded, multi-year technical approaches to solving complex, wicked problems of development just weren’t cutting the mustard.
Lots of people got on board with the idea: that programs would be more effective if they were designed and implemented in such a way that teams are empowered and enabled to make informed changes in direction, in order to navigate high levels of uncertainty in dynamic and changeable contexts, to solve locally-specific problems. Graham Teskey called this, rather ominously, the commencement of a ‘Second Orthodoxy’. No concept, design or request for proposal sent forth into the market passed muster if it didn’t require all three – flexible, iterative and adaptive – in spades. As with all good ideas, there was, however, some garbling in the translation, a proliferation of jargon and some pretty justifiable eye-rolling. Iterative risked becoming scattergun or just plain unfinished; flexible was often short term and inconsistent; and adaptive frequently needed to be reined in with a bit of accountability.
Phase two therefore put more emphasis on the ‘How?’, and two interesting shifts occurred. The first was a shift to more purposeful adaptation, better able to explain itself to a range of audiences. Over the past six or seven years, processes and tools (dare I say toolkits) have been devised to facilitate better decision-making, often quite separate from official ‘work plans’. The most thoughtful of these have seen teams navigate together the inherent tension in trying to put structure around flexibility, strategy around iteration and some sense of knowing around largely unknown future paths. New techniques carved out frequent periods of structured reflection, encouraging stronger partnerships and emphasising trust-building. New monitoring techniques combined rubric-based and qualitative methods with more participatory data collection and, recognising the centrality of Learning, MEL became MLE.
The second, somewhat more nascent, shift is to more meaningful adaptation. It is facilitated by attention to the substance needed to inform more frequent decision-making, where decisions over a change of course may have significant consequences and more than just the opinions of those in the room on reflection day are needed. ODI’s RAPID program and others in the research-into-action field were doing this stuff a decade ago, of course. More recently, practical and applied Political Economy Analysis frameworks have nudged out their ivory-tower predecessors, enabling everyone to get their hands dirty on a more day-to-day basis in the practice of thinking and working politically. At The Asia Foundation, we have a range of templates that support the real-time collection of stakeholder engagement and policy dialogue data, providing grist for the reflective mill every quarter on some programs and more frequently on others.
Yet all the while, the jargon has proliferated faster than meaningful approaches can keep up, and much of this remains a niche area within international development writ large. With the accompanying cynicism rising, adaptive programming has taken its own pause to reflect. Why, say some of its biggest advocates, hasn’t the ‘theory’ of adaptive programming translated into more concerted and widespread ‘practice’?
Arguably, however, it’s not the right question. Anyone who has dallied with the poststructuralists at any point knows it’s a false dichotomy. There isn’t theory and then practice. Practice is always already imbued with and driven by theory. It’s the oil in the engine. If so-called ‘practice’ continues to be driven by traditional oil, then the ‘theory’ still needs work. To make programs properly adaptive, all systems need to be geared to facilitate changes of course. To date, though, most of the focus has been on fixing up the systems and processes at the front end of the programming cycle: strategies, designs and monitoring frameworks.
And in fact tons of stuff hasn’t changed at all – in particular the operational nuts and bolts. A large chunk of the system is yet to be encouraged to be more adaptive: all those processes that derive from traditional programming approaches, from annual work plans to risk management matrices; from the structure of the basis of payment in the contract to the due diligence framework; from the budgeting and financial management systems to the reporting requirements that follow from them. Rather than asking why theory hasn’t translated into practice, perhaps the question we should be asking is why the reform of programming hasn’t been accompanied by a reform of operations.
If we took a frank look at all the processes that structure implementation – warts and all – what would they look like if they were altered to get on board, in a purposeful and meaningful way, with local-problem-driven, iterative, politically-savvy adaptation? What troubles me frequently is that we are running parallel processes: adaptive programming approaches alongside rigid, traditional systems-based operations. What if we took a long hard look at all the boring stuff?
How, for instance, can risk management matrices be refigured to allow for small bets to be placed, emphasising the skills of risk mitigation rather than risk reduction? What contract structures allow a better balance between accountability and flexibility in funding streams? Can implementing teams and partners be licensed to interrogate the theory of change at regular intervals – rather than being contractually fixed to the original, preordained design document – so that adaptive management means more than simply altering activities? And how can a program’s ‘efficiency’ be measured so that it is not equated with predictability and driven by the incentive to spend on time and on budget? Can we measure value for money in what we save by not investing in activities that don’t seem to be working? If we took a holistic approach to enabling adaptation across all parts of the program cycle, what would it look like? And would it contribute to more meaningful adaptive programming, in which all parts of the team play a part that makes sense to them? I’d love to know what others think.