This is an edited extract of an address to the Asian Development Bank Institute research conference on efforts to increase inclusive economic growth, in honour of Dr Peter McCawley, in Manila, The Philippines on 24 July 2024.
Thank you to the Asian Development Bank Institute, the Asian Development Bank and the Australian National University for hosting this conference on inclusive economic growth.
It’s an honour to be here representing the Australian Government and to have the opportunity to pay tribute to a great Australian and international citizen, Dr Peter McCawley.
Like many of us here today, McCawley had several professional lives bound by a deep desire to understand economic equality.
In a 2019 essay on why he was interested in Southeast Asia, McCawley said: “I was worried by the huge gaps between rich and poor countries. They seemed to me then – as they still seem to me now – a key global issue.”
McCawley understood the value of bringing academic insights into the policymaking process. In common with many modern Australian development economists – Lisa Cameron, Stephen Howes, Lata Gangadharan, Pushkar Maitra and many others – McCawley’s career helped to change lives through research findings and policy advocacy.
Today, I want to focus on an area of policy where this approach is particularly valuable. In July 2023, our government established the Australian Centre for Evaluation (ACE), with a mandate to conduct high-quality randomised trials and other impact evaluations across government, including Australia’s contributions to international organisations.
The Australian Government understands its obligation to ensure that the aid we deliver has the maximum positive impact. That is why the ACE isn’t ideological; it’s practical.
The more we can figure out what works, the better we can make government work for everyone – especially for the most disadvantaged. Because it’s the people who rely on aid services who suffer most when those services do not work.
Those of us who advocate for the use of randomised trials are informally known as “randomistas”. I used this name as the title of my 2018 book on the history, development and evolution of randomised trials.
The focus of my speech today is to make the case in favour of randomised trials in development.
In particular, I want to discuss the benefits and refute some of the criticisms.
Let’s start with the most tangible marker of global poverty – ill health.
One disease that receives significant attention is malaria, which claims the life of a young child almost every minute. Because mosquitoes are most active at night, a simple solution is to sleep under a bednet. The challenge for aid workers was discovering how to increase bednet use.
The answer was eventually settled by randomised experiments, which were conducted in a range of developing nations.
And the results were clear.
People who received a free bednet were just as likely to use it as someone who purchased it via a co-payment. But because they were free, many more people took them up.
That translated into practice and has helped save thousands of lives across Africa and the rest of the developing world.
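The logic of a two-arm trial like the bednet studies can be sketched in a few lines of code. The take-up rates and sample size below are hypothetical, chosen only to illustrate how random assignment lets us compare arms directly; they are not the actual study figures.

```python
import random

random.seed(0)  # fixed seed so the toy simulation is reproducible

# Hypothetical "true" take-up rates for each arm of the trial.
TRUE_TAKE_UP = {"free": 0.90, "co-payment": 0.40}

def run_trial(n_per_arm=1000):
    """Simulate assigning participants to arms and record bednet take-up."""
    results = {}
    for arm, p in TRUE_TAKE_UP.items():
        took_up = sum(random.random() < p for _ in range(n_per_arm))
        results[arm] = took_up / n_per_arm
    return results

rates = run_trial()
effect = rates["free"] - rates["co-payment"]
print(f"free: {rates['free']:.1%}, co-payment: {rates['co-payment']:.1%}")
print(f"estimated effect of free distribution on take-up: {effect:.1%}")
```

Because assignment is random, the simple difference in take-up rates between the two arms is an unbiased estimate of the effect of free distribution; that is the core advantage of the design.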
This is just one example of how randomised trials can make a difference.
Perhaps this is why there has been such a rapid growth in their use in development economics.
In the 1990s there were fewer than 25 randomised experiments from developing countries published globally each year. In 2012 – the last year for which we have good data – there were 274 published.
Today, the Abdul Latif Jameel Poverty Action Lab at MIT, known as J-PAL, just one organisation, averages 140 randomised evaluations a year.
High-quality evaluations can provide policymakers with certainty.
Here in the Philippines, a randomised trial confirmed that a mobile phone-based tutoring program during the COVID-19 pandemic led to a 40% increase in students’ achievement in mathematics.
In Niger, a randomised trial demonstrated that scholarships provided to middle-school students to cover the cost of schooling halved the rates of child marriage for girls.
In Liberia, a randomised trial demonstrated that an eight-week cognitive behavioural therapy program was able to reduce criminality in at-risk men in the short term. This effect stuck: 10 years later, the intervention had halved criminal offending.
There are several criticisms of randomised controlled trials in policy, some of which I’ll examine now.
First is the concern about ethics and fairness, which is often the earliest to arise.
Certainly, if we know for sure that an intervention works, then I agree that it is unethical to put people in the control group. But there’s a flipside to that. If we don’t know whether an intervention works, then it is unethical not to find out. We cannot countenance programs being rolled out without robust evidence to back them up.
Another aspect of the ethical discussion is that conducting randomised trials can help strengthen our democracy.
By using solid evidence to design programs, citizens can see that government is building programs on what works, not on ideology or partisanship.
There’s also a criticism that focusing on policies that can be evaluated using randomised controlled trials may be a distraction from evaluating more important policy programs, where such trials are not feasible.
But there is a plethora of examples where randomised trials can be helpful. As development economist David McKenzie observes, “there are many policy issues where … even after having experienced the policy, countries or individuals may not know if it has worked. … [this] type of policy decision is abundant, and randomised experiments help us to learn … what cannot be simply learnt by doing”.
It is true that different evidence and methods are suited to different types of policies.
However, we have learnt that randomised trials are feasible far more often than critics suggest. And when they are feasible, they often provide compelling evidence that other methods cannot.
Having said that, we must also be willing to draw on other empirical tools and methods where randomised trials are not feasible.
Australia is a staunch supporter of the work of J-PAL in Indonesia.
And we understand that multilateral institutions do their best work when they are driven by evidence rather than ideology.
Nobel Laureate Esther Duflo, the founder of J-PAL, is responsible for hundreds of randomised trials, including the bednets example I mentioned earlier. Professor Duflo emphasises the importance of neutrality when assessing the effectiveness of antipoverty programs. She once said: “One of my great assets is I don’t have many opinions to start with. I have one opinion — one should evaluate things — which is strongly held. I’m never unhappy with the results. I haven’t yet seen a result I didn’t like.”
Professor Duflo admits that policies sometimes fail. But it is because people are complex, not because there is a grand conspiracy against the poor.
Randomistas like Professor Duflo are providing answers that help to reduce poverty in developing nations. These results are usually messier than the grand theories that preceded them, but that’s the reality of the world in which we live.
More rigorous evaluation means we pay more attention to the facts.
Randomistas are less dogmatic, more honest, more open to criticism, less defensive. We are more willing to change our theories when the data prove them wrong. Ethically done, randomised experiments can change our world for the better.
Randomised trials may not be perfect, but the alternative is making policy based on what one pair of experts describe as “opinions, prejudices, anecdotes and weak data”.
As the poet W.H. Auden once put it, “We may not know very much, but we do know something, and while we must always be prepared to change our minds, we must act as best we can in the light of what we do know”.
Thanks, Andrew.
Always good to be reminded of the value of our foreign aid, and of how we establish what works through Treasury’s Australian Centre for Evaluation. The effectiveness of Australia’s aid expenditure should always be assessed and the resulting benefits explained to the Australian people.
Having been associated with the 1990 World Summit for Children and its Candlelight Vigils for Children beforehand, I have retained an interest in the long-term impacts of the Convention on the Rights of the Child and action against childhood diseases around our world.
So a slightly different form of evidence of what works on child immunization and poverty is available. It also identifies an important factor in long-term outcomes from aid expenditure: assessing return on investment over an extended period of 20 years, taking it out of the short-term priorities of governments and funding bureaucrats:
“Return On Investment From Immunization Against 10 Pathogens In 94 Low- And Middle-Income Countries, 2011–30” (Health Affairs, no. 8 (2020): 1343–1353) – https://www.healthaffairs.org/doi/epdf/10.1377/hlthaff.2020.00103
The immunizations included those against measles, rubella, Japanese encephalitis, hepatitis B and yellow fever. A key extract shows this financial impact and complements your example of anti-malaria bednets:
“Using the cost-of-illness approach, return on investment for one dollar invested in immunization against our ten pathogens was 26.1 for the ninety-four countries from 2011 to 2020 and 19.8 from 2021 to 2030. Using the value-of-a-statistical-life approach, return on investment was 51.0 from 2011 to 2020 and 52.2 from 2021 to 2030.”
It concludes:
“Potential users of ROI estimates could use either the cost-of-illness approach or the value-of-a-statistical-life approach according to their policy questions of interest. Insights from the ROI using the cost-of-illness approach will inform decisions that require consideration of the budgetary and macroeconomic aspects of immunization. In contrast, the value-of-a-statistical-life approach captures broader economic benefits of immunization beyond those attributable to wages and averted costs.”
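For readers unfamiliar with the metric, the arithmetic behind an ROI figure like 26.1 is straightforward. A minimal sketch follows, assuming the common definition of ROI as net benefits per dollar of cost; the cost and benefit figures are hypothetical, chosen only to show the calculation, and are not taken from the paper.

```python
def roi(total_benefits, total_costs):
    """Return on investment: net benefits per dollar of cost."""
    return (total_benefits - total_costs) / total_costs

# Hypothetical illustration: a programme costing $1 billion that averts
# $27.1 billion in illness costs yields an ROI of 26.1.
costs = 1.0e9
benefits = 27.1e9
print(f"ROI: {roi(benefits, costs):.1f}")
```

Under the value-of-a-statistical-life approach the benefit figure is larger, because it counts the full economic value of lives saved rather than only wages and averted treatment costs, which is why the paper reports higher ROIs under that method.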
This is a different form of assessing what works – especially in considering what works for ensuring children around our world are alive after five.