By Dr Gbenga Shadare
Introduction
The intersection of artificial intelligence (AI), research ethics, and fragile contexts presents one of the most pressing challenges facing contemporary researchers. As AI tools become increasingly integrated into research methodologies, scholars working in fragile and conflict-affected zones, humanitarian crises, post-disaster environments, and politically unstable regions must navigate an evolving ethical landscape. The promise of AI to enhance data collection, analysis, and research impact is undeniable. Yet its deployment in fragile contexts raises profound questions about consent, privacy, power dynamics, and the potential for harm.
Understanding Fragile Contexts
According to the OECD (2025), fragile contexts (FCs) are environments characterised by instability, vulnerability, and a lack of resilience, often resulting from conflict, political instability, economic collapse, environmental disasters, or other crises. They are marked by weak governance or limited and contested state authority, insecurity, limited access to basic services, communities facing ongoing violence or displacement, or the collapse of essential services and infrastructure (World Bank, 2020; 2021). FCs encompass active conflict zones, refugee camps, areas recovering from natural disasters, and regions experiencing severe political instability. Research in these settings has always required heightened ethical scrutiny because participants are often vulnerable, traumatised, and facing existential threats to their safety and dignity (Shadare, 2021; Shanks & Paulson, 2022).
The traditional principles of research ethics – respect for persons, beneficence, and justice – take on heightened significance in fragile contexts (ACLED, 2023). Participants may have diminished capacity to provide truly voluntary consent when they are dependent on humanitarian assistance or living under authoritarian surveillance (Shanks & Paulson, 2022). The balance between potential research benefits and risks becomes more precarious when communities are already bearing extraordinary burdens. And questions of justice become more acute when researchers from wealthy, stable countries extract data from populations experiencing crisis (Jacobsen & Landau, 2003).
The AI Revolution in Research
Artificial intelligence has transformed research capabilities across disciplines (Pal, 2023). Machine learning algorithms can identify patterns in massive datasets that would be impossible for human researchers to detect (Ghavami, 2019). Natural language processing can analyse thousands of interview transcripts or social media posts in multiple languages (Nijhawan et al., 2022). Computer vision can process satellite imagery to document displacement, infrastructure damage, or environmental change. Predictive models can forecast disease outbreaks, food insecurity, or conflict escalation (Hossain et al., 2023).
For researchers working in fragile contexts, these tools offer tremendous potential. AI can help humanitarian organisations optimise resource allocation during crises. It can enable researchers to work with larger, more representative samples (Lythreatis et al., 2026). It can reduce the need for intrusive data collection by leveraging existing digital traces. It can accelerate the translation of research findings into actionable insights when time is of the essence (Madianou, 2019).
However, the deployment of AI in fragile contexts also introduces new ethical risks that compound existing vulnerabilities (Kilian, 2025). The opacity of many AI systems makes it difficult for participants to understand how their data will be used. The permanence of digital data means that information collected during a crisis could be weaponised against participants years later if political circumstances change. The capacity of AI to generate insights from seemingly innocuous data can expose participants to surveillance or targeting they never anticipated (Muldoon & Wu, 2023). And the concentration of AI capabilities in the hands of researchers and organisations from powerful countries can reproduce colonial patterns of knowledge extraction (Salami, 2024).
Core Ethical Challenges
Meaningful Consent in Digital Environments
Obtaining informed consent has always been challenging in fragile contexts, but AI complicates this further. When researchers deploy AI tools to analyse social media posts, satellite imagery, or mobile phone metadata, traditional consent processes may be impossible or impractical. Yet the absence of direct consent doesn’t eliminate ethical obligations.
Researchers must grapple with whether the public nature of some digital data truly implies consent for research use, especially when users in fragile contexts may have limited digital literacy or may be forced to use monitored platforms to access essential services (McInnis et al., 2024). The concept of “contextual integrity” (Nissenbaum, 2004), the idea that information shared in one context carries expectations about how it will flow, suggests that data posted publicly during a crisis may not carry implicit consent for research purposes, particularly when AI enables analysis at scales and with insights that users couldn’t have anticipated.
Moreover, AI systems often require training data, raising questions about whether individuals whose data trains an algorithm should have a say in that use, even if the data were collected for other purposes (Van Bekkum, 2025). When AI models trained on data from one crisis are applied to another, consent considerations become even more complex.
Privacy and Surveillance Risks
The capacity of AI to re-identify individuals from supposedly anonymised data poses acute risks in fragile contexts. Research participants may face threats from multiple actors – government forces, non-state armed groups, criminal organisations, or hostile community members (Dorizza, 2025). Data that seems innocuous in aggregate might enable the identification of specific individuals when combined with other information sources.
Researchers must consider not just the immediate privacy risks but also the long-term dangers. Data collected during a humanitarian crisis might be relatively safe in the hands of researchers and aid organisations, but what happens if government authorities demand access? What if the data is breached? What if political circumstances change and yesterday’s research participants become tomorrow’s targets? AI’s ability to derive sensitive inferences from seemingly benign data – predicting someone’s location, affiliations, or vulnerabilities – means that even carefully anonymised datasets may not truly protect participants (Hendrycks et al., 2023).
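To make the re-identification risk concrete, the sketch below (illustrative only, with hypothetical column names) computes the k-anonymity of a small dataset – the size of the smallest group of records sharing the same quasi-identifiers. Any group of size one is a person who can be singled out by linking those attributes to an outside data source.

```python
# A minimal k-anonymity check, assuming a pandas DataFrame whose
# quasi-identifier columns ("district", "age_band", "language") are
# hypothetical. Any group smaller than k = 2 marks a record that could
# be singled out by linkage with another data source.
import pandas as pd

def k_anonymity(df, quasi_identifiers):
    """Size of the smallest group sharing the same quasi-identifier
    values; k = 1 means at least one participant is unique."""
    return int(df.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "district": ["A", "A", "B", "B", "B"],
    "age_band": ["18-25", "18-25", "26-35", "26-35", "36-45"],
    "language": ["x", "x", "y", "y", "z"],
})
print(k_anonymity(records, ["district", "age_band", "language"]))
# Prints 1: the final record is unique on these attributes, so this
# "anonymised" table still exposes one person to re-identification.
```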
The use of AI for surveillance by state and non-state actors in fragile contexts makes these concerns more than theoretical. Researchers must consider whether their data collection and AI applications could inadvertently feed into broader surveillance ecosystems or could be repurposed for harmful ends.
Power Asymmetries and Digital Colonialism
The global AI landscape is characterised by profound inequalities. The technical capacity, computational resources, and expertise to deploy sophisticated AI tools are concentrated in wealthy countries and powerful institutions (Ilcic et al., 2025). This creates a risk that AI-enabled research in fragile contexts reproduces colonial dynamics, with researchers from the Global North extracting data from the Global South without meaningful participation from affected communities in research design, analysis, or benefit-sharing, creating a situation that Madianou (2019) describes as “technocolonialism”.
These power asymmetries extend to who controls data and AI systems. When international researchers or organisations deploy AI tools in fragile contexts, local communities typically have no ownership over the algorithms, no access to the insights generated, and no say in how findings are used (Brown et al., 2025). This raises fundamental questions about epistemic justice – who has the right to produce knowledge about communities in crisis, and who benefits from that knowledge?
AI’s “black box” nature (Wischmeyer, 2020) can exacerbate these power dynamics. When researchers cannot explain how their algorithms reach certain conclusions, it becomes impossible for communities to scrutinise or challenge research findings meaningfully. This opacity can undermine trust and reinforce the sense that research is something done to communities rather than with them.
Bias and Representation
AI systems inherit and often amplify biases present in their training data and design processes. When these systems are applied in fragile contexts, the consequences can be severe. An algorithm trained primarily on data from stable, wealthy contexts may perform poorly or generate misleading insights when applied to crisis settings (Chuan et al., 2024). Systems trained on data from one cultural context may fail to account for important variations in other settings.
More insidiously, AI systems can encode harmful stereotypes or perpetuate discrimination. Facial recognition systems that perform poorly on non-white faces, natural language processing that misclassifies certain dialects as threatening, or predictive models that systematically underestimate needs in specific communities – all of these can cause real harm when deployed in research in fragile contexts (Hofmann et al., 2024).
Researchers have an obligation to test their AI systems for bias rigorously and to be transparent about their limitations. This requires diverse teams, inclusive design processes, and genuine engagement with affected communities to understand which errors or biases might cause harm in specific contexts (World Bank, 2025).
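The sketch below illustrates one small piece of what such testing might involve: a per-group error audit, assuming predictions and group labels are already in hand. A genuine audit would compare several metrics and consult affected communities about which errors matter most; the false positive rate is used here purely for illustration.

```python
# A minimal per-group bias audit, assuming predictions and group labels
# are already available as plain lists. The false positive rate is one
# of several metrics a real audit would compare across groups.
from collections import defaultdict

def fpr_by_group(y_true, y_pred, groups):
    """False positive rate per group: share of actual negatives that
    the model wrongly flags as positive."""
    fp, neg = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 0:
            neg[g] += 1
            fp[g] += int(p == 1)
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical outputs of a screening model in two communities.
print(fpr_by_group(
    y_true=[0, 0, 0, 0, 1, 0, 0, 1],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 1],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
))  # {'a': 0.25, 'b': 1.0} – a gap this large demands investigation
```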
Ethical Frameworks for AI-Enabled Research
The Do No Harm Principle
The medical ethics principle of “first, do no harm” must be central to AI-enabled research in fragile contexts. This requires a comprehensive risk assessment that considers not just immediate harms but also downstream and cumulative effects. Researchers should ask: Could this data collection or analysis expose participants to retaliation? Could it reinforce harmful stereotypes? Could it be misused by other actors? Could it damage trust between communities and humanitarian actors?
When harms are possible, researchers must determine whether the potential benefits justify the risks. In many cases, the answer may be no; some research questions, no matter how intellectually interesting or potentially useful, may be too dangerous to pursue in certain contexts. The burden should be on researchers to demonstrate that benefits clearly outweigh risks, not on communities to prove potential harm.
Data Minimisation and Purpose Limitation
Researchers should collect only the data necessary for specific, well-defined purposes and should resist the temptation to gather additional data simply because AI makes it possible. The principle of data minimisation is particularly important in fragile contexts, where each additional piece of information collected could potentially increase risk.
Purpose limitation means being clear about what data will be used for and honouring those boundaries (Mühlhoff & Ruschemeier, 2024). AI’s capacity to repurpose data for secondary analysis is powerful but ethically fraught. Researchers should be cautious about using data collected for one purpose to train algorithms or answer questions beyond the original scope, particularly without returning to participants for additional consent.
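One way to operationalise both principles is to enforce an approved schema at the point of data intake, as in the hypothetical sketch below: fields not declared for the stated purpose are never stored, and so can never be repurposed, breached, or demanded.

```python
# A sketch of enforcing data minimisation and purpose limitation at
# intake: only fields declared in an approved schema are ever stored.
# The field names and the schema itself are hypothetical.
APPROVED_FIELDS = {"household_size", "water_access", "consent_timestamp"}

def minimise(record):
    """Drop every field not pre-approved for the stated purpose."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

raw = {
    "household_size": 5,
    "water_access": "intermittent",
    "consent_timestamp": "2025-03-01T10:00:00Z",
    "gps_coordinates": (13.51, 2.12),  # not needed for the purpose
    "phone_number": "+000000000",      # high-risk identifier
}
print(minimise(raw))  # the two risky fields never reach storage
```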
Participatory and Community-Based Approaches
Ethical AI research in fragile contexts demands meaningful participation from affected communities throughout the research process. This includes involving community members in decisions about whether to use AI tools, what kinds of data to collect, how to interpret findings, and how to share or act on results.
Community-based participatory research approaches can help ensure that AI applications are contextually appropriate and that potential harms are identified early. Local research partners, community advisory boards, and participant feedback mechanisms are essential for ethical oversight (Moreno-Sanchez et al., 2025). These approaches also help address power asymmetries by creating space for communities to shape research agendas rather than simply being subjects of study.
Participation also means building local capacity. Rather than simply deploying AI tools from outside, researchers should invest in training and infrastructure that enables local researchers and organisations to use these technologies themselves (Madianou, 2019). This shifts the power dynamic and ensures that communities can continue to benefit from AI capabilities after external researchers leave.
Transparency and Explainability
To the extent possible, researchers should use explainable AI systems that can provide clear rationales for their outputs. When using more opaque systems, such as deep neural networks, researchers should supplement their work with additional transparency mechanisms, including documenting training data sources, acknowledging limitations, and providing opportunities for community members to understand and question findings (World Bank, 2025).
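As one concrete example of such a mechanism, the sketch below implements model-agnostic permutation importance – a simple, widely used check on what a model actually relies on. The toy model and data are hypothetical placeholders, and this is one transparency aid, not a complete explainability method.

```python
# A model-agnostic sketch of permutation importance: how much accuracy
# drops when each input feature is shuffled.
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Per-feature accuracy drop when that column is randomly permuted."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(float(base - np.mean(predict(Xp) == y)))
    return drops

predict = lambda X: (X[:, 0] > 0.5).astype(int)  # toy threshold model
X = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])
y = np.array([1, 0, 1, 0])
print(permutation_importance(predict, X, y))
# Feature 1's drop is exactly 0 – the model ignores it; feature 0's
# drop is typically positive, exposing what the model relies on.
```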
Transparency also means being honest about uncertainty and potential errors. AI systems are not infallible, and in fragile contexts, overconfidence in algorithmic outputs could lead to harmful decisions. Researchers should communicate the limitations of their tools and resist the temptation to present AI-generated insights as more authoritative than they truly are.
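One modest practice that supports this honesty is attaching an uncertainty estimate to any figure derived from a model before it is shared with decision-makers, as in the illustrative sketch below (the numbers are made up for the example).

```python
# A small illustration of reporting uncertainty: a percentile bootstrap
# confidence interval around a model-derived estimate.
import random

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]

estimates = [0.42, 0.38, 0.55, 0.47, 0.41, 0.60, 0.35, 0.50]
lo, hi = bootstrap_ci(estimates)
mean = sum(estimates) / len(estimates)
print(f"estimated need: {mean:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Reporting the interval, not just the point estimate, makes the
# tool's uncertainty visible to decision-makers.
```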
Practical Strategies for Ethical Implementation
Researchers planning to use AI in fragile contexts should begin with a comprehensive ethical review that goes beyond standard institutional processes (Shadare, 2021). This should include consultation with ethics experts who understand both the specific context and the technical capabilities of AI systems being deployed. It should also include input from community members and local partners who can identify risks that external researchers might miss.
Data security protocols must be robust, including encryption, secure storage, and clear procedures for data access and retention. Researchers should have plans for what to do with data if the security situation deteriorates or they receive legal demands for access. In some cases, this may mean destroying data rather than risking participant safety.
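As one building block of such a protocol, the sketch below shows symmetric encryption of records at rest, assuming the widely used third-party Python cryptography package is installed. Encryption itself is the easy part; keeping the key separate from the data, and deciding who may ever decrypt, is where real protocols succeed or fail.

```python
# A sketch of encrypting records at rest with the Fernet recipe from
# the third-party `cryptography` package (assumed installed via
# `pip install cryptography`). The key must be stored separately from
# the data, ideally off the device that holds the ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep apart from the ciphertext
fernet = Fernet(key)

record = b'{"respondent": "R-017", "notes": "..."}'
ciphertext = fernet.encrypt(record)  # safe to store or transmit
assert fernet.decrypt(ciphertext) == record
```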
Researchers should establish clear governance mechanisms for AI systems, including oversight committees with diverse membership and clear criteria for when to pause or terminate research if risks become apparent. These mechanisms should include community representatives and should have genuine authority to shape research decisions.
Building relationships of trust with communities takes time and cannot be rushed. Researchers should invest in long-term partnerships rather than extractive, short-term data collection. This means sharing findings with communities in accessible formats, acknowledging community contributions, and ensuring that research translates into tangible benefits.
Finally, researchers should contribute to broader policy discussions about AI governance in humanitarian and development contexts. Individual ethical research practices, while essential, are not sufficient to address systemic issues. Researchers have a responsibility to advocate for regulatory frameworks, industry standards, and institutional policies that protect vulnerable populations from AI-related harms.
Conclusion
The integration of AI into research methodologies offers significant opportunities to generate insights that could improve outcomes for communities facing crises. However, these opportunities come with profound ethical responsibilities. Researchers working in fragile contexts must approach AI tools with both enthusiasm for their potential and humility about their limitations and risks.
Ethical AI research in these settings requires more than following established protocols; it demands ongoing reflection, dialogue with affected communities, and a willingness to forgo certain research paths when risks cannot be adequately mitigated. It requires acknowledging power dynamics and actively working to redistribute power rather than concentrating it further. It requires transparency about what AI can and cannot do, and honesty about uncertainty and potential harms.
As AI capabilities continue to evolve, the ethical frameworks governing their use must evolve as well. Researchers, ethicists, technologists, and most importantly, communities affected by crises must work together to ensure that these powerful tools serve humanity’s most vulnerable populations rather than exposing them to new forms of harm. The future of ethical research in fragile contexts depends on our collective willingness to prioritise human dignity and community wellbeing over technological possibility and research expediency.
References
ACLED (2023). ACLED Codebook. Armed Conflict Location and Event Data Project (ACLED). Available at: www.acleddata.com (accessed 2 November 2025).
Brown, V., Larasati, R., Kwarteng, J., & Farrell, T. (2025). Understanding AI and Power: Situated Perspectives from Global North and South Practitioners. AI & Society. DOI: https://doi.org/10.1007/s00146-025-02731-x.
Chuan, C. H., Sun, R., Tian, S., & Tsai, W. H. S. (2024). EXplainable Artificial Intelligence (XAI) for facilitating recognition of algorithmic bias: An experiment from imposed users’ perspectives. Telematics and Informatics, 102135. https://doi.org/10.1016/j.tele.2024.102135
Dorizza, A. (2025). Exploring the impact of AI-based services on humans. A case study: echo chambers on social media.
Ghavami, P. (2019). Big data analytics methods: analytics techniques in data mining, deep learning and natural language processing. Walter de Gruyter GmbH & Co KG.
Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks. arXiv preprint arXiv:2306.12001.
Hofmann, V., Kalluri, P. R., Jurafsky, D., & King, S. (2024). AI generates covertly racist decisions about people based on their dialect. Nature, 633(8028), 147–154. https://doi.org/10.1038/s41586-024-07856-5
Hossain, E., Ashik, A. A. M., Rahman, M. M., Khan, S. I., Rahman, M. S., & Islam, S. (2023). Big data and migration forecasting: Predictive insights into displacement patterns triggered by climate change and armed conflict. Journal of Computer Science and Technology Studies, 5(4), 265-274.
Ilcic, A., Fuentes, M., & Lawler, D. (2025). Artificial intelligence, complexity, and systemic resilience in global governance. Frontiers in Artificial Intelligence, 8, 1562095. https://doi.org/10.3389/frai.2025.1562095
Jacobsen, K., & Landau, L. B. (2003). The dual imperative in refugee research: some methodological and ethical considerations in social science research on forced migration. Disasters, 27(3), 185–206. https://doi.org/10.1111/1467-7717.00228
Kilian, K. A. (2025). Beyond accidents and misuse: Decoding the structural risk dynamics of artificial intelligence. AI & Society, 1-20.
Lythreatis, S., Acikgoz, F., & Yassine, N. (2026). Artificial intelligence in humanitarian aid: A review and future research agenda. Technovation, 151, Article 103415. Advance online publication. https://doi.org/10.1016/j.technovation.2025.103415
Madianou, M. (2019). Technocolonialism: Digital innovation and data practices in the humanitarian response to refugee crises. Social Media + Society, 5(3), 2056305119863146.
McInnis, B. J., Pindus, R., Kareem, D., & Nebeker, C. (2024). Considerations for the Design of Informed Consent in Digital Health Research: Participant Perspectives. Journal of Empirical Research on Human Research Ethics, 19(4-5), 175–185. https://doi.org/10.1177/15562646241290078
Moreno-Sanchez, P. A., Del Ser, J., Van Gils, M., & Hernesniemi, J. (2025). A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption. Information Fusion, 103812.
Muldoon, J., & Wu, B. A. (2023). Artificial intelligence in the colonial matrix of power. Philosophy & Technology, 36(4), 80.
Mühlhoff, R., & Ruschemeier, H. (2024). Updating Purpose Limitation for AI: A normative approach from law and philosophy. Available at SSRN 4711621.
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119.
Nijhawan, T., Attigeri, G., & Ananthakrishna, T. (2022). Stress detection using natural language processing and machine learning over social interactions. Journal of Big Data, 9(1), 33.
OECD (2025). States of Fragility 2025. Paris: OECD Publishing. Available at: https://doi.org/10.1787/81982370-en
Pal, S. (2023). A paradigm shift in research: Exploring the intersection of artificial intelligence and research methodology. International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences, 11(3), 1-7.
Salami, A. O. (2024). Artificial intelligence, digital colonialism, and the implications for Africa’s future development. Data & Policy, 6, e67.
Shadare, G. A. (2021). Managing ethical tensions when conducting research in fragile and conflict-affected contexts. In Qualitative and digital research in times of crisis: Methods, reflexivity, and ethics (pp. 218-234). Policy Press.
Shanks, K., & Paulson, J. (2022). Ethical research landscapes in fragile and conflict-affected contexts: understanding the challenges. Research Ethics, 18(3), 169-192.
Van Bekkum, M. (2025). Using sensitive data to de-bias AI systems: Article 10 (5) of the EU AI Act. Computer Law & Security Review, 56, 106115.
Wischmeyer, T. (2020). Artificial intelligence and transparency: Opening the black box. In Regulating Artificial Intelligence (pp. 75–101). Cham: Springer International Publishing.
World Bank (2025). How the World Bank Supports Adaptive Social Protection in Crisis Response: An Independent Evaluation. Independent Evaluation Group. Washington, DC: World Bank (accessed 10 November 2025).
World Bank (2021). World Development Report 2021: Data for Better Lives. Washington, DC: World Bank.
World Bank (2020). World Bank Group Strategy for Fragility, Conflict, and Violence 2020-2025. Washington, DC: World Bank (accessed 3 November 2025).