Submitted by Melanie Eckle-Elze
23 Jan 2025

Demystifying artificial intelligence for anticipatory action

When workshop participants were asked what comes to mind when they hear ‘AI’, one response stood out: POTENTIAL. But what exactly does this potential look like for anticipatory action? How can we harness it in a way that is ethical and aligned with humanitarian values? And who needs to be involved to make this happen?

How AI is currently used for anticipatory action

AI is currently being used in anticipatory action to enhance tasks such as communication and mapping. Indeed, it can play a pivotal role at every stage of the anticipatory action timeline (see Figure 1), although each of these has challenges that need to be monitored and addressed.

Figure 1. How AI can support anticipatory action at different stages

Ethics of AI for anticipatory action

The workshop discussion moved on to digital ethics, covering issues such as digital ‘Do No Harm’, data safeguards and measures to address high-likelihood, high-impact risks. These are critical in humanitarian contexts, and participants shared examples of insecure data-sharing practices and the historical mishandling of sensitive information. These challenges are compounded by increasingly complex agreements with technology providers, which can hinder organizations’ ability to manage data responsibly throughout the data lifecycle.

Questions were also raised about the intersection of AI ethics and humanitarian ethics. For example, could sharing AI outputs (e.g., resource maps) with one community inadvertently trigger tensions with other communities? Collaboration across disciplines will remain essential for developing AI and machine-learning tools responsibly, so that they are transparent and align with humanitarian values – not only for anticipatory action but across the humanitarian system.

Language-inclusive AI

AI is based on data, and large volumes of that data are generated using human language; this might include data you produce through social media interactions, surveys, automated services, or even the search terms you use on online maps. But if ‘your’ language isn’t included, then your voice, and the voice of your community, cannot contribute to the data used to build AI models.

CLEAR Global is developing two tools that are steps towards a more language-inclusive AI. TWB Voice is designed to collect and share voice data and parallel text. This will help tech developers build AI tools that can ‘hear from’ and ‘speak with’ the speakers of marginalized languages, even those who are not literate.

The second tool – the ‘Language Use Data Platform’ – provides digital language-use datasets built to be integrated with maps. These will help users to see who speaks which languages, and where. For organizations looking to develop AI tools for anticipatory action, this represents a way to overlay geographic information with the languages spoken in precise locations. This in turn will allow them to plan a communication strategy specifically for the at-risk population, based on their language preferences, literacy levels and preferred channels. As the circumstances of the disaster evolve and new areas are identified as being at risk, information on the languages of the newly affected communities will automatically be available.
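The overlay described above could, in simplified form, look something like the following sketch. All place names, language shares, literacy rates and channel preferences below are invented for illustration; they are not taken from the Language Use Data Platform itself.

```python
# Hypothetical sketch: combining at-risk areas with language-use data to
# plan communication. All figures and names below are illustrative only.

# Language-use profiles keyed by administrative area (a drastically
# simplified stand-in for a language-use dataset joined to a map).
LANGUAGE_USE = {
    "district_a": {"languages": {"Swahili": 0.7, "Somali": 0.3},
                   "literacy_rate": 0.55,
                   "preferred_channels": ["radio", "voice_message"]},
    "district_b": {"languages": {"Somali": 0.9, "Arabic": 0.1},
                   "literacy_rate": 0.35,
                   "preferred_channels": ["voice_message"]},
}

def plan_communication(at_risk_areas):
    """For each at-risk area, pick the majority language and a channel
    suited to the local literacy rate (voice-first where literacy is low)."""
    plan = {}
    for area in at_risk_areas:
        profile = LANGUAGE_USE.get(area)
        if profile is None:
            continue  # no language data yet for this area
        main_language = max(profile["languages"], key=profile["languages"].get)
        # Prefer non-text channels where literacy is low.
        if profile["literacy_rate"] < 0.5:
            channel = "voice_message"
        else:
            channel = profile["preferred_channels"][0]
        plan[area] = {"language": main_language, "channel": channel}
    return plan

print(plan_communication(["district_a", "district_b", "district_c"]))
```

As new areas are flagged as at risk, they can simply be passed into the same lookup – which is the ‘automatically available’ behaviour the paragraph above describes.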

AI-supported mapping tools

Understanding the conditions of an area, and assessing how these change over time, is critical for anticipatory action, yet large areas of the world remain either unmapped or only sparsely covered. This makes it challenging to monitor the vulnerability and resilience of the population.

AI has potential here, too, and during the workshop, participants heard about MapSwipe, an open-source smartphone app that helps volunteers collect and improve geospatial data by identifying infrastructure, tracking environmental changes and validating maps. There was also a presentation about Sketch Map Tool, an easy-to-use app for offline participatory mapping, digitizing and georeferencing local spatial knowledge.

AI makes both tools even more efficient by using user inputs as training data. This supports data creation and detection, and speeds up analysis. Another advantage is that two common problems with AI – bias, and ‘black boxes’ whose internal workings are hidden or hard to understand – can be avoided, because the input data comes from, and is double-checked by, the users.

The future for AI in anticipatory action

To conclude the workshop, the participants were asked to share their wishes and visions for AI and its use in anticipatory action, looking ahead to 2030. The responses revealed shared aims and goals, and three common themes emerged:

  • Humanitarian, development and climate actors should scale up the use of AI in anticipatory action.
  • Governments, companies and researchers should improve coordination of the use of AI in anticipatory action.
  • AI developers, disaster-risk-management specialists and social scientists should collaborate on risk forecasting for anticipatory action.

Other priorities put forward included the following:

  • Make data collection, management and sharing more efficient, as this is crucial for actionable AI outcomes.
  • Invest in research capacity and infrastructure for AI.
  • Ensure that AI models are inclusive and consider local needs and contexts.
  • Developers should ensure transparency in data use, while all actors should ensure that AI tools are ethical and unbiased.
  • Use AI to make forecasts and early warnings actionable; for example, meteorological centres should test the use of AI models for rainfall forecasts.
  • Secure long-term funding so that AI can be sustainably integrated with anticipatory action.

Achieving these goals by 2030 will require enhanced collaboration, better data management, investment in research, a local focus, ethical AI practices, and the sustainable integration of AI across all the sectors involved in anticipatory action. Priority actions include investing in robust data infrastructure, fostering interdisciplinary collaboration, and putting measures in place to keep ethical AI practices at the forefront.

Ongoing efforts and research by organizations in this field are already helping to realize this vision. This fuelled strong optimism among workshop participants that AI can be used for a variety of critical tasks in anticipatory action, especially processing larger volumes of information and building new analytical capabilities to improve humanitarian outcomes in the years ahead.

The workshop, held during the 12th Global Dialogue Platform, was an opportunity to delve into these questions and explore issues around the ethics and inclusivity of AI. You can watch the full workshop online.

The ‘Demystifying artificial intelligence in anticipatory action’ workshop was organized by HeiGIT (Heidelberg Institute for Geoinformation Technology) and co-hosted by experts in this field, including those from Welthungerhilfe, the Humanitarian Stabilisation Operations Team, the World Food Programme, the Kenya Meteorological Department, the German Red Cross, Deltares, the Max Planck Institute for Biogeochemistry and CLEAR Global. 

These organizations are currently forming a working group on AI in anticipatory action, which will be a forum to explore these questions further, share experiences and foster collaboration. If you’re interested in joining the group, please complete this survey.