Reflecting on the implications of artificial intelligence for international development

20 December 2023

This is an edited extract from the keynote address delivered by Laurel Miller, President and CEO of The Asia Foundation, at the 2023 Australasian AID Conference dinner, 6 December 2023.

Just over a year ago, ChatGPT and other generative artificial intelligence (AI) systems burst into our collective consciousness. Although AI’s analytical and predictive powers are already widely employed, the ability of new generative models to create novel content is altogether different. AI technology is poised to have huge effects in developing countries, and on the development practices in which The Asia Foundation and other organisations are engaged. What can we expect as this new technology meets old problems of inequality, fragility, power concentration, weak governance, and violence?

Those at the forefront of AI development – including those who think most deeply about its positive and negative implications – suggest that AI-driven disruption is coming faster, and with a greater magnitude of benefits and risks, than anything the world has previously experienced in the way of technological change. While the need for caution versus the impulse for speed in advancing generative AI is a live debate, at the moment speed is winning. Some of the sharpest debate is focused on AI’s implications for political stability, economic growth, and security.

As the development community wraps its mind around the pace and scope of coming change, it is helpful to reflect on the rapid rise and impact of social media over the last 15 to 20 years. Facebook, YouTube, X (Twitter), and other social media platforms were swiftly and widely embraced as new channels for human interaction, information sharing, and community organising. This revolution in connectivity offered unforeseen opportunities for trade and commerce, unleashed new or formerly suppressed ideas, and empowered communities that previously had little visibility or influence in public affairs. Social media truly reshaped the fabric of societies across Asia and the Pacific and globally.

At the same time, social media’s unfettered growth produced a darker side, with the same tools that connected people also used as platforms for surveillance, repression, division, the spread of misinformation and hate speech, and the instigation of violent conflict. We have seen how social media has contributed to a decline in public trust, while regulators and guardians of public wellbeing have struggled to keep pace.

Most observers agree that AI’s impact will vastly exceed the effects of social media. AI is already demonstrating its potential to revolutionise development by enhancing the efficiency and effectiveness of practitioners and partners in international organisations, government, the private sector, and civil society. Improved data collection, analysis, and interpretation are already leading to better results in public health, education, agriculture, natural disaster preparedness and response, urban development, and financial inclusion. In the future, AI-enabled tools may inform decision making, optimise resource allocations, guide the distribution of development assistance, and swiftly assess needs, prioritise responses, and anticipate and respond to crises.

But there are strong reasons for worry. Among them, AI solutions are likely to exacerbate existing inequalities. Developing countries are unlikely to share equally in access to new AI tools, leaving some populations behind as a result of technological asymmetry. Biases in the data that AI systems are trained on can lead to discriminatory outcomes that reinforce existing inequalities or disadvantage certain groups, and to solutions that poorly suit local needs.

Adding to the challenges, the quality and volume of existing data tend to be lower in developing countries. The AI systems used in today’s development applications require a volume of data that does not currently exist in many developing countries. The limited, locally generated data that does exist is frequently owned by governments or private companies and is not open to public access, use, or scrutiny.

In addition, important ethical questions related to data privacy and consent for use in AI systems are prompting some regulation in Western countries. Absent effective safeguards, the good that flows from the collection and use of personal data to deliver better public services and development solutions can also involve invasions of privacy. Most developing countries still have weak data protection laws, and few have clear technology regulatory strategies in place. At the same time, the policy choices relevant to data privacy are complex. Measures taken to protect local data will make it harder for developing countries to maximise the utility of AI systems.

AI also has dramatic implications for the future of work. Although the technology has the potential to create new job opportunities in some countries, it also enables greater automation. This poses a threat to the availability of traditional employment, particularly jobs reliant on manual or routine tasks. Many businesses may be motivated to use AI to replace humans, rather than to help them accomplish their work.

Finally, the intentional use of AI technology for malign purposes will inevitably increase through the actions of state actors, political rivals, criminal elements, proponents of violence, and others eager to gain perceived advantage. Over the last decade, the spread of politically motivated disinformation has become a growing problem, affecting public opinion, tilting elections, prompting anti-vaccination and anti-mask campaigns, and undermining public trust in critical institutions. It is now possible to unleash “deep fakes” and other AI-sourced disinformation to sway public opinion, as well as to perpetrate criminal harm.

The creation of institutional and policy safeguards generally lags technological development, with the gap set to be especially wide for AI. In managing the promises and perils of AI, developing countries will be challenged to react fast enough. Globally, even those governments that are looking the hardest at regulatory guardrails have for the most part only reached the stage of thinking about how to think about what to do.

Development professionals should not embrace AI as an obvious solution without examining and avoiding the risks. In moving faster up the learning curve on AI technology, we will need to support the development of new capabilities among partners in government, civil society and the private sector – and, importantly, within our own ranks. We will need to make considered judgments about when AI tools are the best choice, mindful of the potential for biased outcomes and other negative consequences. We will need to support the development of local technical and regulatory capacity, recognising the critical need for people whose understanding of context and culture matches their technical acumen. Our support should include reflection on ethical issues and responsible practices. In the realm of AI governance, we will need to help develop responsible policy and regulatory capacity in the countries we work in.

In addition, we should do more to encourage and support dialogue among developing countries geared to sharing knowledge on the emerging challenges and opportunities associated with AI and its unique implications for developing countries. We will do well to observe the particular effects of the AI revolution in Asia and the Pacific where, before discussion of the negative impacts of social media drew global attention, we were already observing its bellwether effects in the Philippines, India, Myanmar, and other countries. Finally, we should improve communication and collaboration within the development community around all things related to AI.

AI offers immense potential to address some of the most pressing development challenges of our generation. At the same time, ensuring that its benefits are distributed equitably and ethically will be tough. Each time we use ChatGPT or another AI tool, it will be the least capable version of that tool we will ever use. The same cannot be said of our continued encounters with poverty, inequality, climate change, gender inequality, and other development challenges. We should leverage AI to address these challenges, but not without being mindful of the attendant risks and the need for adequate safeguard measures.

Author/s

Laurel Miller

Laurel Miller joined The Asia Foundation as president and chief executive officer in February 2023. She previously served as director of the Asia Program for the International Crisis Group.

Comments

  1. A balanced keynote; the parallel with social media is an interesting one.

    A couple of comments:

    > particularly jobs reliant on manual or routine tasks

    Routine yes, manual no. A graphic designer is more threatened by recent AI advances than, say, a barista. Software engineers might be next, while manual workers have less to worry about, at least in the near future.

    > AI solutions are likely to exacerbate existing inequalities

    Yes, but AI can also level the playing field. For students, it can serve as a virtual tutor to support learning, when private tutors are unaffordable (https://www.mdpi.com/2227-7102/13/4/410). International development has a role to play in fostering positive uses of AI to reduce inequality.
