Help, Hazard, or Hype? Navigating AI in Humanitarian Response
- elizabeth1207
- 5 days ago
- 5 min read
Updated: 2 days ago
By Links Research & Evaluation

When we were thinking about a catchy title for this blog post, we did the natural thing and asked AI to propose one – what do you think of the result? The image below is also AI-generated (through the website editor). It took a few search terms to obtain something we felt was somewhat suitable, and it does meet our request for an original image. On balance, though, we believe that an image we have taken of our own work might be more engaging and 'real'.

The Humanitarian Leadership Academy has identified that the term ‘Artificial Intelligence’ was first documented in 1956. Shortly before this, in 1950, Alan Turing (British founder of modern computing) posed the question “can computers think?” This laid the foundations for the research and development of AI.
Start of the noughties
Some AI tools utilised in the humanitarian sector have been in place for longer than many realise. Tools widely adopted before 2010 include the Integrated Food Security Phase Classification (IPC), Geographic Information Systems (GIS mapping) and early warning systems such as FEWS NET, which was initiated in 1985 and has continued to evolve, using satellite imagery and climate data to predict food shortages.
Another dynamic development around 2010 was crowdsourced crisis mapping, used during the Haiti earthquake that year, when AI was used to generate real-time crisis maps from SMS and text-based social media data; similar efforts followed Cyclone Pam in Vanuatu in 2015. The 2015 Nepal earthquake also sparked massive crowdsourced mapping efforts, led by organisations like the Humanitarian OpenStreetMap Team (HOT) and Kathmandu Living Labs (KLL), in which volunteers worldwide mapped damaged roads, buildings and essential relief areas on OpenStreetMap (OSM) and platforms like Ushahidi, providing critical real-time data for responders, even before they arrived on the ground.
I got involved in this mapping effort as a volunteer and found it very accessible; I also got family on the other side of the world to spend some time contributing to this huge online effort. Multiple organisations around the world continue to engage people in crowdsourced mapping efforts[5].
The now
Currently, 93% of aid workers use AI, but only 8% report it as widely integrated into their organisations (DFS, 2025). But are we all defining AI in the same way? We all know what social media is – but AI lingo may still be catching up with actual use.
Leading up to 2025, there has been growing adoption of ‘shadow AI’ – taken here to mean AI tools used informally for daily tasks, such as offline AI assistants, chatbots to support community members, meeting and interview note-taking, language translation software, logistics optimisation, content creation, crop health monitoring, search and rescue, and predictive analytics and preparedness (e.g. for population movements).
Platforms such as GANNET AI also support humanitarian data collaboration and effective decision-making. Then there is existing software that is adapting, such as Stata, used for statistical analysis and data visualisation: although Stata is not considered AI itself, it is rapidly incorporating external AI and machine learning capabilities. Not to mention the use of AI in drones supporting humanitarian response – a whole other topic that we will write about soon.
Fellow professional Sarah Weber (Independent Consultant; Global Health & Fundraising) said this about her experiences of AI in the humanitarian sector: “I see the value of AI in a few areas: 1) efficiently creating drafts and rapidly pulling together information so users have a speedy starting place; 2) generating a range of ideas, which can spur the user to think outside the box or consider aspects they might not otherwise; and 3) creating shortcuts or fixes that might be out of the norm but work when a gold-standard option isn't available”.
Key considerations
Our take at Links Research & Evaluation is that, ultimately, AI depends on the information, research and thinking provided by humans – the quality of ‘the input’. Yes, AI can come up with ideas, some of which may be new solutions – but it is not able to conduct original research or create in the same way that a human can.
AI says: “Based on current technology and expert analysis, AI is best understood as a highly advanced, transformative tool rather than a ‘brain’ in the biological, conscious sense. While it is inspired by the structure of the human brain (neural networks), it operates through mathematical, statistical pattern recognition rather than understanding, feeling or self-awareness[1].
Experts emphasise that AI should serve as a complement to human judgment, requiring ‘human-in-the-loop’ models to ensure accuracy and ethical compliance[2].”
Our prediction is that we will soon think about AI more in terms of levels of capability: weak (narrow) AI, which focuses on a single task and cannot perform beyond its limits (common in our daily lives); strong (general) AI, which can understand and learn any intellectual task that a human can; and super AI, which surpasses human intelligence and can perform any task better than a human.
AI is currently a tool – a potentially extremely powerful one – that, like social media, can enhance our lives and work with amazing possibilities, or pose risks and negative outcomes that we have not yet thought of.
The risks that are already widely recognised include: bias and discrimination; data privacy, hacking and security concerns; high implementation costs; erosion of trust; and the potential for misuse and manipulation[3].
The application of AI in emergency relief is an example of how AI can be a force for good, helping organisations work together with partners and communities on evidence-based, informed responses. Ethical frameworks will be increasingly needed: ensuring fairness, data protection, transparency of use and accountability; implementing governance and regulation; fostering global collaboration on standards; normalising monitoring and auditing; and prioritising diverse data to combat bias.
It is reported that although 93% of humanitarian workers use or have tried AI tools, fewer than 25% of organisations have formal AI policies in place[4].
Organisations that have developed and adopted ethical frameworks to guide the responsible use of Artificial Intelligence (AI) and data include: the International Committee of the Red Cross (ICRC), the United Nations (UN) system, NetHope, the Humanitarian Data Science and Ethics Group (DSEG), the SAFE AI project, Mercy Corps, GiveDirectly and more.
It will naturally follow that more and more organisations, of all types, locations and sizes, adopt similar frameworks – especially those that collect information about, and from, people affected by disasters or working to boost their resilience.
Final thought
As a final reflection, AI policies should also set out how we will work effectively with these valuable online and digital tools, without forgetting to emphasise how we will sustain the human skills that we continue to need: fixing an aid delivery truck close to sunset, without the needed parts, relying completely on the skills, experience and resilience of the mechanics.

Image of Medair staff, taken by Elizabeth of Links Research & Evaluation, Province Orientale, north-east D.R. Congo.
--------------------------------------------------------------------------------------------------------------------------
Links Research & Evaluation is a humanitarian consultancy organisation based in Kenya, with a global focus. We support humanitarian organisations and agencies by designing and leading evaluations, needs assessments and studies with vulnerable communities, about the impact and quality of emergency response and resilience programmes.
Specialising in a range of humanitarian sectors, including primary health care, nutrition, protection, WASH, cash assistance, food security and adaptation to climate change, we take a collaborative, inclusive, technical and tailored approach. Links Research & Evaluation also supports organisations with MEAL strategies, systems and dashboards.
Led by an experienced team, we partner with specialists from around the globe, to deliver impactful, quality and evidence-based services.
[1] IBM, ‘What is AI?’
[2] DFS / Humanitarian Leadership Academy, ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential: initial insights report’, August 2025
[3] University Canada West, ‘Advantages and disadvantages of AI in education’
[4] Humanitarian Leadership Academy, ‘The humanitarian sector’s AI paradox: individual adoption outpaces organizational infrastructure’
[5] Humanitarian Leadership Academy, ‘The history of artificial intelligence in humanitarianism’
