Adaptive programming is de rigueur. Everyone’s into it. It’s been this way for some time. Yet not unlike its own reason for being, the field of adaptive programming is messy. Every mum of toddlers knows: some mess is healthy. But presently it’s difficult to see the forest for the iterative jargon. As ODI has pointed out, the field generally lacks rigour. It contains gaps, is frequently misunderstood, and rarely – at least in my experience – are its implications for the program fully understood by all those at the wheel of the steering committee. Like all good problems, however, the adaptive programming challenge contains hints of its own solution. If we pull it apart a little – by taking a look back at how we got to where we are – and the ongoing challenges to do this better, there are some pieces of the puzzle we may have missed.
Over the past ten or so years, there have been three discernible phases to the adaptive trend in international development practice. The first phase addressed the ‘Why?’. Learned folk on both sides of the Atlantic produced detailed, evidence-based articulations of why development programs need to adapt more effectively to the contexts in which they operate. The evidence suggested that traditional program approaches were either at risk of having no impact or, worse, of doing harm. These insights occurred in parallel with significant improvements in program evaluation methods. Broadly speaking, the evaluators were in concert: so many large development programs simply weren’t achieving the outcomes they set out to achieve. Long-winded, multi-year technical approaches to solving complex, wicked problems of development just weren’t cutting the mustard.
Lots of people got on board with the idea: that programs would be more effective if they were designed and implemented in such a way that teams were empowered and enabled to make informed changes in direction, in order to navigate high levels of uncertainty in dynamic and changeable contexts and to solve locally-specific problems. Graham Teskey called this, rather ominously, the commencement of a ‘Second Orthodoxy’. No concept, design or request for proposal sent forth into the market could pass muster if it didn’t require all three – flexible, iterative and adaptive – in spades. As with all good ideas, however, there was some garbling in the translation, a proliferation of jargon and some pretty justifiable eye-rolling. Iterative risked becoming scattergun or just plain unfinished; flexible was often short-term and inconsistent; and adaptive frequently needed to be reined in with a bit of accountability.
Phase two therefore put more emphasis on the ‘How?’, and two interesting shifts occurred. The first was a shift to more purposeful adaptation, better able to explain itself to a range of audiences. Over the past six or seven years, processes and tools (dare I say toolkits) have been devised to facilitate better decision-making, often quite separate from official ‘work plans’. The most thoughtful of these have seen teams navigate together the inherent tension in trying to put structure around flexibility, strategy around iteration, and some sense of knowing around largely unknown future paths. New techniques carved out frequent periods of structured reflection, encouraging stronger partnerships and emphasising trust-building. New monitoring techniques combined rubric-based and qualitative methods with more participatory data collection and, recognising the centrality of learning, MEL became MLE.
The second shift, somewhat more nascent, is to more meaningful adaptation. It is facilitated by consideration of the substance needed to inform more frequent decision-making, where decisions over a change of course may have significant consequences and more than just the opinions of those in the room on reflection day are needed. ODI’s RAPID program and others in the research-into-action field were doing this a decade ago, of course. More recently, practical and applied Political Economy Analysis frameworks have nudged out their ivory-tower predecessors, enabling everyone to get their hands dirty on a more day-to-day basis in the practice of thinking and working politically. At The Asia Foundation, we have a range of templates that support the real-time collection of stakeholder engagement and policy dialogue data, providing grist for the reflective mill every quarter on some programs, and more frequently on others.
Yet all the while, the jargon has proliferated faster than meaningful approaches can keep up, and much of this remains a niche area within international development writ large. With the accompanying cynicism rising, adaptive programming has taken its own pause to reflect. Why, say some of its biggest advocates, hasn’t the ‘theory’ of adaptive programming translated into more concerted and widespread ‘practice’?
Arguably, however, it’s not the right question. Anyone who has dallied with the poststructuralists at any point knows it’s a false dichotomy. There isn’t theory and then practice. Practice is always already imbued with and driven by theory. It’s the oil in the engine. If so-called ‘practice’ continues to be driven by traditional oil, then the ‘theory’ still needs work. To make programs properly adaptive, all systems need to be geared to facilitate changes of course. To date, though, most of the focus has been on fixing up the systems and processes at the front end of the programming cycle: strategies, designs and monitoring frameworks.
And in fact, tons of stuff hasn’t changed at all – in particular, the operational nuts and bolts. A large chunk of the system is yet to be encouraged to be more adaptive: all those processes that derive from traditional programming approaches, from annual work plans to risk management matrices; from the structure of the basis of payment in the contract to the due diligence framework; to the budgeting and financial management systems and the reporting requirements that follow from them. Rather than why theory hasn’t translated into practice, perhaps the question we should be asking is: why hasn’t the reform of programming been accompanied by a reform of operations?
If we took a frank look at all the processes that structure implementation – warts and all – what would they look like if they were altered to get on board, in a purposeful and meaningful way, with local-problem-driven, iterative, politically-savvy adaptation? What troubles me frequently is that we are running parallel processes: adaptive programming approaches and rigid, traditional systems-based operations. What if we took a long hard look at all the boring stuff?
How, for instance, can risk management matrices be reconfigured to allow for small bets to be placed, emphasising the skills of risk mitigation rather than risk reduction? What contract structures allow a better balance between accountability and flexibility in funding streams? Can implementing teams and partners be licensed to interrogate the theory of change at regular intervals, so that adaptive management means more than simply altering activities while remaining contractually fixed to the original, preordained design document? And how can a program’s ‘efficiency’ be measured so that it isn’t equated with predictability and driven by the incentive to spend on time and on budget? Can we measure the value for money of what we save by not investing in activities that don’t seem to be working? If we took a holistic approach to enabling adaptation across all parts of the program cycle, what would it look like? And would it contribute to more meaningful adaptive programming, in which all parts of the team play a part that makes sense to them? I’d love to know what others think.
So much THIS! Thank you for this articulation. I was chewed up and spat out of the UNICEF system through a Developmental Evaluation – ironically, one in which they were supposedly interested in identifying the “deep systems” that were keeping them from a more child-centered approach. Their inability to even contract with me in an adaptive/responsive manner was painful. As a women-owned small business in the field, we trip on these types of systems all the time, and find our strategy and human-centered approaches are engaged as dressing on top of predetermined wholes that just give us latitude to scratch the surface. This is a discussion long overdue. Thank you.
Thanks Nicola for a provocative post. I think many of the problems you identify derive from the professionalization of adaptive management and its codification into a skill, sub-discipline, site of expertise, etc. But with regards to trying to tie adaptive management to real organisational processes, you may find this study on Sida risk management systems and the relationship they strike between accountability and flexibility of interest: https://www.odi.org/publications/11388-fit-fragility-exploration-risk-stakeholders-and-systems-inside-sida
Thanks, Nicola, for your reflections!
There was some discussion on your post at the #AdaptDev Discussion group, which may be interesting for readers (and yourself, Nicola, if you didn’t see it):
https://groups.google.com/forum/#!topic/adaptdev/4K2wiMssRwo
We can add ‘innovation’ to the mix. I have seen a plethora of ‘innovations’ identified based on the experience and learning of people involved in projects/programs which are not able to be acted on due to inflexible operational systems, and adherence to design documents and rigid thinking. Innovation is needed within donors to free up processes and systems, rather than just looking externally through innovation funds.
Thanks Nicola. This is a critical piece of thinking at the moment – as we watch really brilliant and appropriate adaptive programs being dismantled by their inability to ‘fit the processes’ which are often used as a trump card by bureaucracies. Time to make the processes fit the program if we are serious about effective development.
I found this to be so true in indigenous programs, where program intent/design is mistakenly blamed when it is actually operational rigidity, and a focus on process, that are stifling progress. Work needs to be done on impact measurement and accountability to push through old and poor delivery routines, and to give field workers the licence to work on suitable adaptation. Adaptation requires dialogue, and all players need to believe in equality of contribution. Where top-down systems are working well (for some), adaptation will be considered risky.
Well said Nicola, as with every endeavour, you can have a perfect strategy and vision, but if the machinery is not connected and responsive you will not reach the goal. It’s all about the contracting – that, and having the right people who can actually work flexibly.
Nicola, many thanks for your valuable assessment of the state of adaptive programming and what needs to be done to deliver the goods. I immersed myself in this literature when I was on the design team for APTC Stage 3, which became the Australia Pacific Training Coalition to reflect its new approach. However, from my arm’s-length engagement with APTC since then, I fear that little has changed on the ground. The pressures and incentives to continue to rely on ‘rigid, traditional systems-based operations’ are too strong to try more flexible ways of solving the problems we identified.
Thanks Nicola – challenging but very timely. For my part, there are highly relevant tools out there: e.g. results-based budgeting; portfolio management techniques; probabilistic (rather than deterministic) planning approaches; and better-suited performance management frameworks (i.e. better than tools developed for engineering projects). The challenge is to actually apply them (not just the terminology). And rigorous application means quite major changes in both the systems and the thinking of donors and partner governments alike.
On a positive note, I know DFAT is giving serious thought to this. Hopefully, they will find answers to at least some of your questions some time soon.