Document Type: Research Paper
Author: M.A. in Earthquake Engineering, Istanbul Technical University, Istanbul, Turkey
Abstract
This study examines the intersection of artificial intelligence, global governance, and disaster risk management through a qualitative investigation of 92 Iranian experts across disciplines such as geoinformatics, emergency planning, and environmental engineering. While AI offers significant promise for enhancing early warning systems, damage assessments, and real-time decision-making, its integration into DRM systems remains constrained by fragmented data infrastructures, institutional silos, and geopolitical exclusions. Participants underscored AI’s potential to improve response coordination and risk forecasting, but emphasized the need for robust data governance, algorithmic transparency, and capacity building. The study highlights critical ethical and political concerns—particularly in countries like Iran facing technological marginalization due to sanctions and limited access to global data ecosystems. Drawing on grounded theory and thematic analysis, the research identifies institutional fragmentation, interoperability barriers, and normative governance deficits as primary obstacles to AI-enabled DRM. It argues for a globally coordinated approach grounded in justice, inclusivity, and human-centered design.
Keywords
- Algorithmic Ethics
- Artificial Intelligence
- Data Interoperability
- Disaster Risk Management
- Global Governance
- Institutional Capacity
This is an open access work published under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use (https://creativecommons.org/licenses/by-sa/4.0/).
- Introduction
The intensification of disasters driven by climate change, urbanization, and global socio-political volatility has significantly elevated the urgency of building resilient and adaptive Disaster Risk Management (DRM) systems. While global frameworks such as the Sendai Framework for Disaster Risk Reduction (UNDRR, 2015) emphasize the importance of science and technology in disaster governance, the practical integration of cutting-edge technologies such as AI into DRM systems remains fragmented and unevenly distributed. In recent years, AI has emerged as a potentially transformative force for enhancing the predictive, analytical, and operational dimensions of DRM (Cutter, 2018; Alexander, 2020). AI’s ability to process heterogeneous data sources, model complex scenarios, and support real-time decision-making presents new avenues for strengthening early warning systems, optimizing emergency response, and supporting long-term recovery (Vinuesa et al., 2020; Wamba-Taguimdje et al., 2020). However, despite this promise, its implementation is constrained by significant institutional, infrastructural, ethical, and political challenges—particularly in countries grappling with technological dependency, governance fragmentation, or limited international cooperation (Müller, 2020; Whittlestone et al., 2019). Moreover, as digital technologies reshape not only institutions, but also individual behaviors and cultural norms—as seen in the influence of social media on personal decision-making (Nosrati et al., 2023)—AI adoption in DRM must also contend with evolving societal expectations and value systems that affect how risk and authority are perceived.
Disaster governance is fundamentally a problem of coordination under uncertainty and time pressure (Tierney, 2012). In this context, AI offers unique capabilities that could enhance multi-sectoral integration and responsiveness in disaster settings. For instance, machine learning algorithms can analyze vast and diverse data—from seismic sensors to social media—in real time, potentially outperforming traditional models in early warning and situational assessment. However, as AI systems become embedded in critical infrastructures, they raise a host of concerns, including data integrity, algorithmic opacity, and sociotechnical bias (Eubanks, 2018; Sandvig et al., 2014). These issues are magnified in contexts such as Iran, where data systems are siloed, cross-border information flows are impeded by geopolitical restrictions, and institutional capacity for AI governance remains limited (UNDRR, 2022). Such systemic vulnerabilities underscore the necessity of a holistic, ethically grounded, and globally integrated approach to AI in DRM—one that balances innovation with equity, transparency, and local contextualization (Jobin et al., 2019).
The Iranian context provides a particularly valuable perspective for examining the intersection of AI and disaster governance. As a country highly susceptible to natural hazards such as earthquakes, floods, and droughts, Iran’s DRM system must navigate not only environmental threats, but also structural challenges, including limitations in data infrastructure, inter-agency coordination, and restricted access to international resources and collaboration (Salehipour Milani et al., 2021). Yet, as this study demonstrates, Iranian experts across domains—including geoinformatics, emergency operations, and policy advisory—recognize AI’s potential to fill crucial gaps in disaster forecasting, damage assessment, and crisis coordination. Drawing on 92 in-depth interviews with these professionals, this research reveals both the aspirational trajectories and pragmatic constraints shaping AI’s role in DRM. It highlights themes such as the fragmentation of data governance, the risks of over-reliance on automated systems, the lack of institutional preparedness, and the need for ethical and democratic oversight in AI deployment. These themes align with a growing global literature warning that technical sophistication in AI must be matched by robust governance mechanisms if it is to serve the public interest during crises (Cath, 2018; Leslie, 2019; Nemitz, 2018).
Moreover, this inquiry situates the Iranian experience within broader debates about global governance and technological equity. While AI is often portrayed as a globally available tool, access to its infrastructure, expertise, and data ecosystems remains profoundly unequal (Crawford, 2021; Milan & Treré, 2019). Many of the most powerful AI tools are owned by private firms and governments in the Global North, raising questions about sovereignty, dependency, and geopolitical asymmetry in the management of disaster data and response tools (Taylor, 2017; Latonero, 2018). This corporate dominance extends to shaping academic narratives, with documented cases of tech giants manipulating research to legitimize exploitative data practices (Sarfi et al., 2021)—further skewing the knowledge base available to marginalized regions. For countries under sanctions or geopolitical isolation, such as Iran, these inequalities translate into material vulnerabilities—where the inability to access satellite imagery or interoperable platforms could delay critical decisions in the early hours of a disaster. This condition of "technological marginality" demands not only local capacity-building, but also international frameworks for inclusive AI governance, grounded in principles of justice, solidarity, and reciprocal data sharing (Floridi et al., 2018; United Nations, 2021).
Ethical concerns are not ancillary to these issues, but central to the integration of AI in DRM. From algorithmic bias and exclusion of informal settlements, to the potential misuse of surveillance tools under the guise of crisis management, the ethical landscape of AI deployment is fraught with risks of harm, opacity, and social injustice (Eubanks, 2018; Noble, 2018). In high-stakes scenarios, where decisions made by AI can affect life and death outcomes—such as evacuation prioritization or triage—the lack of explainability and accountability in algorithmic systems is particularly alarming (Weller, 2019; Mittelstadt et al., 2016). Interviewees in this study frequently expressed concern about AI becoming a "black box" that overrides human judgment rather than supporting it—a pattern echoed in international critiques of technocratic governance in humanitarian contexts (Sandvik et al., 2017; Taddeo & Floridi, 2018). Thus, integrating AI into DRM requires more than technical alignment; it necessitates a normative framework that foregrounds human agency, contextual sensitivity, and procedural transparency.
This paper seeks to advance the ongoing discourse on AI and disaster governance by highlighting empirical insights from Iranian experts, and positioning these findings within a wider theoretical and policy context. It advocates for an integrated DRM model, where AI functions not as a replacement for institutional reform, but as a catalyst for cross-sector collaboration, ethical reflection, and enhanced global coordination. Building on qualitative interviews, the study explores how AI technologies can be responsibly embedded in DRM systems that are data-rich yet contextually grounded, technically sophisticated yet ethically constrained, and locally situated yet globally interoperable. Ultimately, this research contends that AI's promise in disaster governance will only be fulfilled if it is accompanied by inclusive data governance, institutional capacity-building, and a multilateral commitment to equity and accountability.
- Method
This study employed a qualitative research design centered on semi-structured interviews with 92 Iranian experts engaged in DRM, encompassing fields such as crisis response, urban planning, remote sensing, and environmental engineering. The research adopted purposive sampling, a non-probability approach commonly used in exploratory research to identify information-rich cases with direct relevance to the phenomenon under investigation (Patton, 2015). Given the study's emphasis on institutional and technological integration, participants were selected based on their professional expertise, organizational roles, and sectoral engagement in DRM-related activities. Notably, the sample was predominantly male (n = 79), reflecting gender imbalances in Iranian technical and governmental institutions—a demographic limitation that may shape the interpretive lens of the findings (Creswell & Poth, 2018).
Ethical protocols were rigorously applied throughout the research process. Participation was voluntary, with informed consent obtained from all interviewees. Many participants requested full anonymity, especially those employed in politically sensitive institutions or constrained by national information governance policies. Others permitted attribution of their functional expertise or institutional affiliation. In accordance with qualitative research ethics, these preferences were honored to balance transparency with participant safety (Tracy, 2010). Interviews were conducted in either Persian or English, based on respondent preference, and were digitally recorded with permission. The interviews followed a semi-structured format that allowed for both standardized questioning and the flexibility to probe emergent themes in depth (Kallio et al., 2016).
The data were analyzed using grounded theory methods to inductively derive themes and conceptual categories from the interview transcripts (Charmaz, 2014). Transcriptions were coded iteratively, using a mix of open and axial coding strategies to identify recurring patterns and relational linkages. This approach allowed the researchers to trace how expert perceptions of AI in DRM coalesced around specific issues such as data interoperability, algorithmic opacity, institutional fragmentation, and geopolitical constraints. NVivo software supported the coding and organization of data, enhancing reliability through systematic management of textual evidence. Validity was reinforced through member-checking and cross-verification of themes among research team members (Nowell et al., 2017).
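To make the coding mechanics concrete for readers unfamiliar with grounded theory workflows, the minimal Python sketch below illustrates how open codes attached to transcript segments can be tallied and rolled up into axial categories. The codes, participant IDs, and category mappings are invented for illustration; the actual analysis was conducted in NVivo.

```python
from collections import Counter

# Invented open codes attached to transcript segments during first-pass (open) coding.
coded_segments = [
    ("P01", "data silos"), ("P01", "algorithmic opacity"),
    ("P02", "data silos"), ("P02", "sanctions"),
    ("P03", "sanctions"), ("P03", "training gaps"),
]

# Axial coding: relate open codes to broader conceptual categories.
axial_map = {
    "data silos": "institutional fragmentation",
    "algorithmic opacity": "normative governance deficits",
    "sanctions": "geopolitical constraints",
    "training gaps": "institutional capacity",
}

open_counts = Counter(code for _, code in coded_segments)
axial_counts = Counter(axial_map[code] for _, code in coded_segments)

print(open_counts.most_common())   # recurring open codes
print(axial_counts.most_common())  # emerging conceptual categories
```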
- Findings
- 1. Technological Potential and Application of AI in DRM
AI is increasingly regarded as a pivotal enabler in the modernization and strengthening of DRM systems. Through its ability to process vast amounts of heterogeneous data in real time, identify complex patterns, and support decision-making under conditions of uncertainty, AI holds the potential to significantly enhance the effectiveness of disaster preparedness, response, and recovery. Within the scope of this study, Iranian experts with professional backgrounds in crisis management, remote sensing, environmental engineering, and urban resilience offered a range of insightful perspectives on the potential applications of AI in supporting both global and national DRM initiatives.
A recurring theme across the interviews was the ability of AI to improve the accuracy and timeliness of early warning systems. Many participants emphasized the potential of AI to integrate meteorological data, seismic readings, satellite imagery, and real-time social media inputs to generate predictive models that outperform traditional forecasting methods. These capabilities align with documented AI strengths in complex data synthesis, though their effectiveness hinges on addressing implementation barriers such as technical preparedness and institutional resistance (Rahmatian & Sharajsharifi, 2022). One expert, a senior researcher in seismology, commented:
In our current system, early warnings are issued based on isolated data streams that often lack precision. What AI can do is to merge these disparate sources—sensor data, weather models, remote sensing feeds—and generate a much more precise risk map, updated continuously. In the case of an impending earthquake or flood, this difference in speed and accuracy could save thousands of lives. But we are not yet institutionally prepared to make this transition.
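To make this data-fusion idea concrete, the following minimal sketch combines several normalized hazard layers into a single, continuously updatable risk surface. It is an illustrative toy under stated assumptions, not an operational system: the layers are randomly generated stand-ins for gridded seismic, meteorological, and remote-sensing products, and the fixed weights are placeholders that a production model would learn from historical events.

```python
import numpy as np

# Hypothetical, already-normalized hazard layers on a common grid (0 = low, 1 = high).
# In practice each layer would come from a separate pipeline (seismic network,
# weather model, satellite product), resampled to one shared grid.
rng = np.random.default_rng(0)
seismic = rng.random((100, 100))
weather = rng.random((100, 100))
remote_sensing = rng.random((100, 100))

# Illustrative fixed weights; a learned model would estimate these from past events.
weights = {"seismic": 0.5, "weather": 0.3, "remote_sensing": 0.2}

risk_map = (weights["seismic"] * seismic
            + weights["weather"] * weather
            + weights["remote_sensing"] * remote_sensing)

# Re-running this fusion as new observations arrive yields the "continuously
# updated risk map" the respondent describes; cells above a threshold raise alerts.
alerts = np.argwhere(risk_map > 0.8)
print(f"{len(alerts)} grid cells above alert threshold")
```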
In addition to predictive capabilities, participants pointed to AI's capacity to support decision-making during crisis response. Particularly in fast-evolving emergencies, AI was seen as a tool for managing complexity and generating rapid situational awareness. Research confirms that these applications are valued across professional sectors, although they are consistently accompanied by concerns about data privacy, algorithmic bias, and implementation challenges (Rahmatian & Sharajsharifi, 2021). Respondents described scenarios in which AI algorithms could simulate multiple disaster response pathways, optimize evacuation routes, and dynamically allocate emergency resources based on population distribution and road conditions. A director of emergency operations described a recent simulation exercise in which AI-assisted modeling was tested:
We used an AI platform to simulate various earthquake response scenarios. The algorithm evaluated not only the fastest routes for ambulances, but also which roads were structurally compromised and which neighborhoods were most at risk due to building age and density. It would have taken us days to manually process that kind of data. With AI, we had a full strategic map within minutes. The potential for decision support in real-world crises is enormous—if we have the infrastructure to support it.
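The routing logic described in this exercise can be sketched as a shortest-path computation over a road network from which structurally compromised segments have been removed. The network, travel times, and blocked roads below are invented for illustration; an operational system would draw them from live GIS layers and damage assessments.

```python
import heapq

# Hypothetical road network: node -> list of (neighbour, travel_time_minutes).
roads = {
    "hospital":   [("junction_a", 4), ("junction_b", 7)],
    "junction_a": [("hospital", 4), ("district_3", 6)],
    "junction_b": [("hospital", 7), ("district_3", 3)],
    "district_3": [("junction_a", 6), ("junction_b", 3)],
}

# Road segments flagged as structurally compromised are excluded from routing.
compromised = {("junction_b", "district_3"), ("district_3", "junction_b")}

def fastest_route(graph, start, goal, blocked):
    """Dijkstra's algorithm over passable road segments only."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes in graph[node]:
            if (node, nxt) not in blocked and nxt not in visited:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None  # no passable route

# The direct 7+3-minute route is blocked, so the detour via junction_a wins.
print(fastest_route(roads, "hospital", "district_3", compromised))
# -> (10, ['hospital', 'junction_a', 'district_3'])
```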
The use of AI in post-disaster recovery and damage assessment was also noted as a key area of interest. Several experts discussed the application of image recognition systems trained on drone and satellite imagery to rapidly assess the extent of physical damage and identify areas in need of immediate intervention. A professor of geoinformatics explained:
After a major disaster, the first 72 hours are critical. Traditional damage assessments rely on manual fieldwork, which is time-consuming and sometimes dangerous. AI can automate much of this process. For example, we’ve seen systems that can classify building damage levels using aerial imagery within hours after an event.
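As a deliberately simplified illustration of automated damage screening, the sketch below scores per-tile change between co-registered pre- and post-event images and flags high-change tiles for field inspection. Deployed systems use classifiers trained on labelled damage imagery; here the images are synthetic and the threshold is an arbitrary assumption.

```python
import numpy as np

# Synthetic co-registered grayscale images standing in for pre/post aerial imagery.
rng = np.random.default_rng(1)
pre = rng.random((512, 512))
post = pre.copy()
post[300:400, 100:250] += 0.6  # simulate a heavily altered (damaged) block

def tile_change_scores(pre_img, post_img, tile=64):
    """Mean absolute pixel change per tile, as a crude proxy for damage."""
    scores = {}
    for r in range(0, pre_img.shape[0], tile):
        for c in range(0, pre_img.shape[1], tile):
            diff = np.abs(post_img[r:r+tile, c:c+tile] - pre_img[r:r+tile, c:c+tile])
            scores[(r, c)] = float(diff.mean())
    return scores

scores = tile_change_scores(pre, post)
flagged = [cell for cell, s in scores.items() if s > 0.2]  # arbitrary threshold
print("tiles flagged for field inspection:", flagged)
```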
However, the experts also identified significant limitations and risks associated with the adoption of AI in DRM. Chief among these was the reliance on high-quality, standardized, and interoperable datasets—a condition that many participants argued is not yet met in Iran or in many other low- and middle-income countries. Moreover, concerns were raised about the potential opacity of AI decision-making processes, particularly in high-stakes environments where accountability is critical. One official offered a cautious view:
AI is only as good as the data and assumptions behind it. If your data is incomplete, outdated, or biased—as it so often is in disaster-prone regions with poor documentation—then your AI outputs will not just be inaccurate; they could be catastrophically misleading. [. . . ] This is not a hypothetical risk—it is a real and present danger. The problem is that once these systems are embedded into operational workflows, there’s a tendency for decision-makers to treat them as authoritative. They stop asking critical questions. They defer to the algorithm, even when its recommendations conflict with local knowledge or common sense. This is dangerous. AI must be used to support human decision-making, not to override it. Disaster management demands ethical judgment, contextual awareness, and accountability—none of which can be fully outsourced to a machine. We cannot afford to treat AI as a black box that makes decisions for us, especially in high-stakes environments where lives and livelihoods are on the line. What we need is a culture of informed skepticism—an institutional norm where AI is interrogated, not idolized.
The perspectives gathered from the participants reflect that while there is considerable optimism about the technological potential of AI to transform DRM, there is also a sober recognition of the practical, institutional, and ethical challenges involved in its implementation. The findings indicate that for AI to be effectively integrated into disaster management systems, especially within a global governance framework, it must be accompanied by investments in data infrastructure, algorithmic transparency, technical training, and cross-border cooperation.
- 2. Interoperability and Data Governance
The effective deployment of AI in disaster risk management is contingent not only on technical capabilities, but also on the availability, accessibility, and governance of data across institutional and national boundaries. Rapid changes in digital communication and economic models are reshaping social behaviors and data flows, often outpacing existing governance frameworks and creating new challenges for data ethics, privacy, and interoperability (Arsalani et al., 1403 [2024 A.D.]). Interoperability—the ability of diverse systems, organizations, and technologies to work together seamlessly—is essential for leveraging AI tools at both local and global scales. Equally important is the establishment of coherent data governance frameworks that ensure the ethical, secure, and standardized use of data for disaster-related decision-making. Research underscores that cultivating trust and cooperation among stakeholders depends on transparent governance structures, clear accountability mechanisms, and comprehensive support systems that empower institutional actors and foster resilience (Kodabakhshi et al., 1399 [2021 A.D.]). Insights from the 92 Iranian experts interviewed in this study reveal a shared recognition of these issues as both structural obstacles and potential leverage points in the integration of AI into DRM systems.
One recurring concern among participants was the fragmentation of data sources and the absence of standardized protocols for data collection, storage, and exchange. Many respondents described the current data landscape in Iran’s DRM institutions as siloed and inconsistent. A senior adviser remarked: “[. . .] even within a single organization, the databases often don’t communicate with each other. For AI to function properly, especially in a crisis, this kind of fragmentation is fatal. We are feeding the machine incomplete or contradictory data”.
Beyond the national context, the issue of cross-border data interoperability emerged as a major theme, particularly with respect to transboundary hazards such as floods, dust storms, and earthquakes that affect the wider region. Participants emphasized that AI applications in global disaster governance must be supported by international agreements that facilitate real-time data sharing, while also respecting national sovereignty and privacy. One participant, a risk analyst, underscored the geopolitical complexities of such cooperation:
One of the greatest challenges we face in building effective AI systems for disaster management in Iran is the lack of access to real-time regional data due to international sanctions. These sanctions don’t just restrict financial transactions—they severely limit our ability to collaborate with neighboring countries and global data platforms. When a flood or dust storm is forming across a shared border, we often cannot receive live satellite feeds or share sensor data with counterparts in the region. This isolation prevents the development of comprehensive early warning systems and undermines the very principle of collective risk governance. AI systems rely on cross-border data flows to make accurate forecasts in transboundary disaster scenarios. Without that interoperability—without access to global data ecosystems—we are essentially trying to operate intelligent systems in an informational vacuum. The political restrictions placed on us through sanctions make that vacuum almost impossible to overcome.
Participants also expressed concern about the quality and reliability of input data, which directly affect the performance and legitimacy of AI systems. These technical challenges are exacerbated by a more fundamental issue: the widespread lack of critical skills to evaluate data sources, assess algorithmic outputs, and identify systemic biases in AI systems (Khodabin et al., 2024). Poor data quality—due to gaps in coverage, outdated information, or sensor inaccuracies—was seen as a significant limitation, particularly in rural or marginalized areas. A disaster information systems engineer noted:
We can’t expect accurate AI forecasts if we’re working with low-resolution or outdated datasets. In some provinces, we have no real-time sensors at all. And when data is available, it often lacks metadata or proper classification. If AI is to become a core component of our DRM strategy, we first need to invest in the basics—data infrastructure, documentation, and interoperability standards.
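The “basics” this engineer points to can be made concrete with a minimal record schema in which every observation carries units, location, timestamp, and provenance metadata and is validated before entering an AI pipeline. The field names, station ID, and agency below are illustrative assumptions, not an existing national standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorRecord:
    """One hypothetical, self-describing observation for DRM data exchange."""
    station_id: str
    hazard_type: str       # e.g. "flood", "seismic", "dust_storm"
    value: float
    unit: str              # declared explicitly, never assumed
    lat: float
    lon: float
    observed_at: datetime  # timezone-aware UTC timestamps only
    source_agency: str     # provenance, for accountability and quality tracing

    def validate(self) -> bool:
        # Reject records lacking the metadata AI pipelines need to merge sources.
        return (self.observed_at.tzinfo is not None
                and -90 <= self.lat <= 90
                and -180 <= self.lon <= 180
                and bool(self.unit))

# Illustrative record (invented station, values, and agency name).
rec = SensorRecord("KHZ-014", "flood", 3.2, "m", 31.32, 48.69,
                   datetime.now(timezone.utc), "example_hydromet_agency")
assert rec.validate()
```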
The governance of data also raised ethical and operational challenges. Participants were acutely aware of the tensions between open data initiatives and national security considerations, particularly in a country like Iran. Several highlighted the importance of establishing legal and institutional frameworks to manage data ownership, access rights, and accountability, as explained by a legal advisor:
AI thrives on open data, but disaster data is often seen as politically sensitive. For example, sharing real-time information about a nuclear facility’s structural vulnerability during an earthquake might be useful for response planning, but it’s also a security risk. We need governance models that balance transparency, security, and public good. Right now, those models are missing—not just in Iran, but globally.
Another area of discussion was the lack of institutional capacity for data stewardship. Several respondents noted that while some government agencies are beginning to digitize their records and adopt geographic information systems (GIS), they lack trained personnel to manage data systems or interpret AI outputs effectively. One urban planning consultant observed:
Data governance isn’t just about software—it’s about people. We need professionals who understand not only how to operate AI tools, but also how to verify and interpret their outputs, how to handle uncertainty, and how to make data-driven decisions in high-stakes situations. Right now, this expertise is concentrated in a few elite institutions, and that creates bottlenecks.
It can thus be inferred from the expert interviews that realizing AI’s full potential in DRM depends on fundamental reforms in data governance and system interoperability. This involves not only technical integration across platforms, but also the establishment of common ethical guidelines, legal structures, and institutional capacities. For global governance frameworks to effectively integrate AI into DRM, they must foster not only innovation, but also coordination and trust across a range of political and organizational settings.
- 3. Ethical and Political Dimensions
The incorporation of AI into DRM presents not only technical challenges but also a range of ethical and political dilemmas that must be addressed to ensure just, accountable, and context-sensitive implementation. Participants in this study repeatedly emphasized that AI, while promising, is not a neutral tool. Its deployment intersects with questions of power, sovereignty, transparency, and justice—both within states and at the global governance level. These concerns become particularly acute in disaster contexts, where decisions made by or with the aid of AI systems can have immediate and life-altering consequences. Moreover, as digital technologies increasingly shape not only institutional processes but also individual psychology and collective behavior, their broader societal effects must be acknowledged. Algorithmically driven environments—from social media to disaster management systems—have demonstrable impacts on mental health, risk perception, and emotional resilience (Zamani et al., 2021), directly influencing public capacity to engage with crisis responses.
One of the most widely discussed ethical concerns among the participants was the opacity of AI decision-making, particularly in high-stakes situations such as evacuations, triage, or infrastructure prioritization. Just as social media algorithms have been shown to systematically exacerbate anxiety and depression through opaque design choices and engineered dependency (Nosraty et al., 2021), AI systems in DRM risk replicating these harms when deployed without transparency or accountability, potentially turning life-saving tools into sources of institutionalized trauma during crises. Many experts were wary of what they referred to as the “black box problem”—the lack of interpretability in AI models, especially in deep learning systems. This concern resonates across high-stakes fields; Iranian healthcare professionals similarly emphasize that AI must preserve human decision-making autonomy in critical scenarios (Tomraee et al., 2024). As one crisis management official explained: “When an AI system tells you to reroute ambulances or allocate emergency aid to one region instead of another, you need to understand why. If that decision leads to harm, who is responsible? The machine? The developer? The operator?”.
Another ethical concern raised by participants was the potential for algorithmic bias, particularly in relation to marginalized populations. Several experts warned that if AI systems are trained on data that underrepresent informal settlements, rural areas, or neglected ethnic groups, those communities may be further excluded from emergency response and recovery efforts. A sociologist working on disaster vulnerability mapping emphasized:
In Iran, like many countries, not all communities are equally visible in official data. If AI systems are trained on what is available—urban infrastructure, census records, registered buildings—they will miss those who are already most vulnerable. AI has the power to reproduce and even amplify existing inequalities unless we are very careful about data inclusion.
Political concerns were equally prominent in the interviews. Participants pointed to the geopolitical asymmetries that currently shape access to AI technologies and data infrastructures. These disparities are further compounded by the digital literacy divide—where vulnerable populations lacking the skills to interpret AI-driven warnings or challenge algorithmic decisions become disproportionately dependent on opaque systems (Arsalani et al., 2022). This dynamic mirrors and intensifies existing power imbalances in disaster governance. Several noted that while multinational corporations and governments in the Global North are developing sophisticated AI tools for DRM, countries under economic sanctions or with limited technological sovereignty—such as Iran—often remain dependent on external platforms and proprietary systems. This creates vulnerabilities in both sovereignty and resilience. One participant, a disaster policy advisor, framed the issue bluntly:
We talk about AI as if it is universally available. It is not. Many of the most advanced platforms are owned by private firms in countries that are not politically neutral. If a government under sanctions needs access to real-time satellite data processed by U.S. firms, what happens in a crisis? Do we wait for permission? Do we get second-rate access? AI without governance becomes a tool of exclusion and control.
Several respondents also raised concerns about surveillance and authoritarian misuse of AI in the name of disaster preparedness. While acknowledging the utility of real-time tracking and predictive analytics, participants emphasized the risk that such tools could be used to monitor populations, suppress dissent, or prioritize the stability of those in power over public safety. An expert offered the following critique:
Disaster management can become a pretext for surveillance. Drones, sensors, facial recognition—they can all be justified as tools for saving lives. But where is the line between protection and control? In the absence of legal safeguards, AI systems used for DRM could easily be redirected toward social monitoring, especially in politically sensitive regions.
The participants’ remarks also reflected a wider skepticism regarding technocratic models that treat AI as a replacement for human judgment, democratic processes, or localized knowledge systems. Although few participants dismissed the utility of AI entirely, many emphasized the need for human-centered governance frameworks capable of contextualizing and critically evaluating AI-generated outputs. As one university lecturer summarized: “AI should support—not replace—human decision-making. We must ask: Whose priorities are embedded in these algorithms? Whose knowledge counts? If disaster governance becomes automated, we risk losing sight of local realities and ethical accountability”.
Taken together, the interviews suggest that ethical and political considerations must be central—not peripheral—to the integration of AI in DRM systems. These systems must be designed with transparency, fairness, and democratic oversight in mind. Moreover, global governance institutions must ensure that AI does not exacerbate geopolitical inequalities or become a tool for soft coercion in the international disaster management landscape. The promise of AI will only be realized if it is matched by robust ethical frameworks and inclusive political mechanisms that safeguard public trust and collective autonomy.
- 4. Institutional Capacity and Governance Integration
The 92 Iranian experts interviewed for this study widely acknowledged that the technological potential of AI in DRM is often constrained by insufficient institutional capacity, fragmented bureaucratic systems, and the absence of integrated governance models. These constraints are not unique to Iran, but reflect broader global challenges in aligning AI innovation with disaster governance mechanisms.
A prevailing theme in the interviews was the lack of institutional coordination and coherence within the national DRM architecture. Experts described a landscape in which multiple agencies possess overlapping mandates, competing priorities, and inconsistent standards for data sharing and operational protocols. Systematic reviews of AI adoption across sectors identify these very patterns—fragmented accountability, infrastructure gaps, and resistance to operational change—as universal barriers to technological integration (Tomraee et al., 2022). As one technician noted:
In Iran, we have separate agencies responsible for urban resilience, rural development, emergency medical services, and infrastructure protection—but they operate in silos. There is no central mechanism for integrating AI tools across these sectors. Each institution develops its own pilot projects, often duplicating efforts or working with incompatible platforms. Without institutional alignment, AI becomes another layer of complexity rather than a solution.
Many participants emphasized that AI implementation is not simply a technical issue, but a governance challenge, requiring clear mandates, cross-sectoral collaboration, and long-term political commitment. In this regard, several experts highlighted the absence of a national strategy for AI in DRM—one that articulates goals, defines institutional roles, and establishes accountability mechanisms. Moreover, research shows that comprehensive support systems, including psychological and organizational support, are critical for successful integration of complex technologies, as they foster trust, resilience, and cooperation among stakeholders (Toosi & Sajjadi, in press). Building this trust, in turn, requires transparent governance structures and sustained political commitment to ensure accountability and clear institutional roles (Maleki Borujeni et al., 1401 [2022 A.D.]). A policy researcher at the Iranian Parliament Research Center commented:
We urgently need a strategic framework—something like a National AI for Resilience Policy—that sets out who does what, what ethical standards apply, and how resources will be allocated. Right now, AI is being introduced in a piecemeal way, often by individual champions within ministries. That is not sustainable. If disaster governance is to become intelligent, it must first become integrated.
The lack of human capital and institutional learning mechanisms was another recurring concern. Participants repeatedly stressed that the majority of DRM institutions in Iran—and indeed in many parts of the world—lack trained personnel who can develop, implement, and critically evaluate AI systems. This skills gap mirrors broader challenges in technological adoption, where, as research shows, effective training is crucial for resisting manipulation and engaging with digital tools responsibly (Sakhaei et al., 2023). These systemic gaps persist when education fails to engage all levels of decision-making, from frontline operators to institutional leadership (Hosseini et al., 2025). The issue is not only technical skill, but also organizational culture: a capacity to adapt, learn, and incorporate innovation into existing workflows. A professor elaborated:
Even when institutions acquire AI software, they often don’t know what to do with it. There is a huge gap between having the tool and having the capacity to use it intelligently. We need not just engineers, but interdisciplinary teams—planners, ethicists, crisis managers—who can work together to make AI a meaningful part of decision-making. That requires serious investment in training and institutional reform.
At the global level, participants raised questions about how AI technologies could be integrated into international DRM frameworks such as the Sendai Framework for Disaster Risk Reduction, the Paris Agreement, or United Nations coordination platforms. Several experts observed that while these frameworks acknowledge the role of science and technology, they remain vague on the governance of AI specifically. A researcher argued:
The Sendai Framework talks about innovation and risk knowledge, but it does not yet provide a roadmap for AI governance. There is no global institution responsible for setting ethical standards, ensuring interoperability, or mediating disputes over AI usage in disaster scenarios. This is a vacuum that needs to be filled. Otherwise, we will see fragmentation and growing inequality in how AI is deployed and who benefits from it.
Some participants proposed the creation of a global AI governance body within the United Nations system or a multilateral platform involving both state and non-state actors. This body would be tasked with developing global norms, facilitating capacity-building in low- and middle-income countries, and ensuring transparency and accountability in AI-assisted disaster response. As one respondent put it:
If AI is to become part of global disaster governance, then global institutions must step up. This is not just about funding or technical advice—it’s about legitimacy, trust, and coordination. We need global rules for AI in disasters just like we have for humanitarian aid or climate action.
The collective insight from these interviews makes clear that institutional capacity and governance integration are foundational to the responsible and effective use of AI in DRM. Without coherent policies, skilled personnel, inter-agency collaboration, and inclusive global frameworks, AI risks becoming a fragmented and underutilized asset. Bridging this gap requires not only technological investment but also deep institutional reform—locally, nationally, and globally.
- 5. Capacity Building
The integration of AI into DRM demands substantial investments in human capital, institutional learning, and organizational adaptability. Throughout the interviews conducted for this study, Iranian experts repeatedly emphasized that the potential of AI in DRM will remain unrealized unless matched by systematic and sustained capacity-building efforts. These efforts must address not only technical training, but also intersectoral knowledge integration, ethical literacy, and adaptive governance.
One of the most frequently cited challenges was the shortage of skilled personnel with the expertise required to operate, interpret, and oversee AI systems in the context of disaster governance. Respondents described a structural gap between the rapid development of AI technologies and the limited availability of professionals who can bridge the domains of data science, crisis management, and public policy. An activist articulated the issue as follows:
We are seeing new AI tools being introduced—systems for flood forecasting, landslide modeling, damage detection. But who is going to manage them? Most of our disaster response agencies do not have data scientists or AI engineers. Even when we outsource to private tech firms, there is little capacity within the public sector to evaluate their outputs. This dependency creates risk—technical risk, but also institutional risk.
Beyond technical skills, participants highlighted the importance of interdisciplinary knowledge that combines technological understanding with social, environmental, and political dimensions of risk. This need mirrors findings from organizational leadership research, where education is proven to be a strategic investment that enhances technological adoption and systemic resilience (Zamani et al., 2024). It also aligns with comprehensive reviews indicating that effective AI engagement requires bridging technical, ethical, and civic competencies (Khodabin et al., 2022). AI, they argued, should not be treated as a neutral tool, but as part of a socio-technical system that must be navigated with critical insight. A university professor stated:
We need more than engineers. We need people who understand how AI interacts with vulnerability, inequality, and governance. For example, an algorithm might optimize evacuation routes based on traffic data, but what if it doesn’t account for informal settlements or people without access to vehicles? These are not technical problems—they are problems of understanding the social context in which technology is used.
Participants also called for institutional learning mechanisms that would enable organizations to adapt and evolve in response to technological change. Rather than relying on one-off training sessions or isolated pilot projects, experts advocated for long-term strategies that embed learning into the organizational culture of DRM institutions. This includes knowledge-sharing platforms, staff exchanges, scenario-based exercises involving AI tools, and partnerships with academic institutions. A senior employee of the Red Crescent Society of Iran (Helal Ahmar) noted:
We don’t just need workshops—we need learning systems. After every flood, every earthquake, we should be asking: What worked? What didn’t? How did our AI tools perform? But that feedback loop is missing. Our institutions need to learn from experience, especially when experimenting with new technologies. Otherwise, we just repeat the same mistakes—with more expensive tools.
International collaboration was seen as a crucial component of capacity building. Several participants emphasized the role of cross-border training initiatives, regional centers of excellence, and partnerships with global organizations such as the United Nations Office for Disaster Risk Reduction (UNDRR), UNESCO, and the World Bank. These collaborations were viewed as vital not only for technical training, but also for shaping normative frameworks and ensuring equitable access to AI knowledge. A participant reflected:
Iran is not isolated when it comes to disasters—we are part of a regional ecosystem. AI capacity building should be approached regionally. We can learn from other countries, share tools, and co-develop training materials. But we also need support from global institutions, especially given the limitations we face due to sanctions and resource constraints.
Participants stressed the importance of public sector leadership in AI capacity development. While private firms and universities play important roles, experts argued that core competencies must reside within government agencies responsible for disaster preparedness and response. Without internal capacity, public institutions risk becoming overly reliant on external consultants, undermining long-term resilience and sovereignty. As one commentator concluded:
You can’t govern what you don’t understand. If the government wants to lead in AI for disaster management, it must build its own capacity—not just contract it out. That means investing in people, building career paths, and creating institutions that can attract and retain talent in this field.
These accounts make clear that capacity building is not a secondary concern, but a foundational requirement for the responsible and effective deployment of AI in disaster contexts. It involves developing not only technical expertise, but also institutional reflexivity, interdisciplinary collaboration, and strategic foresight. Without these elements, AI risks becoming a fragmented, underutilized, or even counterproductive addition to disaster governance systems.
- 6. Global Equity, Inclusion, and Access
While AI has the potential to transform how societies anticipate and respond to hazards, its benefits are unequally distributed across geopolitical, economic, and technological divides. The interviews conducted for this study revealed a shared concern among Iranian experts that AI, if not governed inclusively, may exacerbate existing disparities in disaster vulnerability and resilience—both within and between nations.
A dominant theme across the interviews was the technological asymmetry between high-income and low- to middle-income countries in terms of access to AI tools, infrastructure, and expertise. Participants noted that advanced AI models and real-time data analytics platforms are often proprietary, developed and controlled by corporations or governments in the Global North. This asymmetry limits the ability of countries like Iran to autonomously develop or adapt AI systems for local disaster contexts. One participant explained:
Most of the cutting-edge AI platforms—especially those used in real-time satellite analysis, crisis mapping, or predictive modeling—are either owned by Western tech firms or housed in academic institutions that have no obligation to share them. This creates a dependency. If you don’t have the right geopolitical alignment or financial resources, you’re left out of the loop. In a disaster, that exclusion becomes lethal.
Another recurrent concern was the digital divide within countries, particularly between urban and rural areas, and among populations with differing levels of access to connectivity, literacy, and digital services. Several experts warned that AI-driven DRM systems could unintentionally marginalize vulnerable populations who are not adequately represented in datasets or cannot interact with digital platforms. A regional relief activist observed:
If AI models rely on smartphone data, but millions of people don’t own smartphones, what happens to them during a crisis? They become invisible. AI systems can only be inclusive if the data and interfaces are inclusive. Right now, most are not.
Participants also addressed the global governance vacuum around equitable access to AI resources for DRM. This disparity is exacerbated in sanctioned environments like Iran, where research shows AI adoption faces compounded barriers—from infrastructural deficits to regulatory opacity—that mirror constraints observed in other technologically dependent sectors (Khodabin et al., 2023). While international frameworks such as the Sendai Framework for Disaster Risk Reduction emphasize the need for international cooperation and capacity building, they provide limited guidance on the governance of AI specifically. One expert stated:
There is no global mechanism that guarantees equitable access to AI tools for disaster management. Countries that are politically isolated or economically disadvantaged are systematically excluded from the emerging AI ecosystem. We need something like a Global AI Commons for Disaster Risk—an open-access repository of tools, datasets, and best practices that anyone, regardless of geography or politics, can use.
Participants also highlighted the importance of inclusive knowledge production. Many AI systems are developed using datasets, ontologies, and modeling assumptions rooted in Western epistemologies and institutional norms. This epistemic imbalance can lead to DRM tools that fail to reflect local conditions, traditional knowledge, or culturally specific risk perceptions. Perceptions and understandings are often influenced and shaped by external narratives and dominant frameworks, which can limit the recognition of diverse viewpoints and local realities (Sabbar et al., 2023). A professor explained:
AI isn’t neutral. It reflects the priorities and knowledge systems of its creators. If the models are developed in Silicon Valley or London, they may not capture the way communities in Iran or Central Asia understand risk, respond to hazards, or define resilience. Inclusion must start from the level of design—not just distribution.
Several experts warned that the global push for AI-based DRM must not distract from or displace investment in basic disaster preparedness infrastructure in low-resource settings. While AI can enhance efficiency and foresight, its deployment cannot compensate for the absence of functioning emergency services, resilient infrastructure, or responsive governance. A civil defense officer in Khuzestan expressed this concern plainly:
We don’t need a fancy algorithm to tell us where the flood will hit if we don’t have boats, shelters, or functioning hospitals to respond. Technology should support preparedness—not replace it. And in many places, the basics are still missing.
The perspectives shared by the Iranian experts emphasize that the global AI agenda in DRM must be guided by principles of justice, solidarity, and inclusiveness. This means ensuring not only access to tools, but also participation in the governance of AI systems, and recognition of diverse forms of knowledge. Without such commitments, AI risks reinforcing patterns of exclusion and inequality—transforming disaster management into a site of technological privilege rather than collective resilience.
- Discussion and Conclusion
This study demonstrates that while artificial intelligence (AI) holds considerable potential to enhance disaster risk management (DRM) systems—by enabling more accurate early warning, dynamic crisis coordination, and rapid post-disaster assessment—its implementation remains severely constrained by institutional fragmentation, data governance deficits, and geopolitical exclusion. Drawing on qualitative interviews with 92 Iranian experts across domains such as geoinformatics, emergency planning, and crisis policy, the findings illuminate both the technological aspirations and the systemic impediments to effective AI integration in DRM.
Participants consistently identified the ability of AI to synthesize heterogeneous data streams—ranging from seismic sensors and satellite imagery to social media inputs—as a transformative tool for predictive analytics and real-time decision-making. However, this potential is undermined by siloed data infrastructures, inconsistent classification standards, and a lack of cross-border interoperability. Experts emphasized that without integrated databases and standardized protocols, AI systems risk functioning on incomplete or biased inputs, leading to misleading forecasts or inequitable interventions.
Moreover, the interviews revealed deep concerns about the opacity of algorithmic decision-making in high-stakes contexts such as triage, evacuation, or aid allocation. Many participants warned of a growing institutional tendency to over-rely on AI outputs, while neglecting local knowledge, contextual nuance, and human judgment. This “black box” dynamic was seen not only as a technical flaw, but as an ethical hazard—especially where accountability mechanisms and critical interpretive skills are lacking. Cross-sector evidence reveals that AI adoption consistently amplifies existing inequities when three conditions converge: (1) infrastructure gaps, (2) unregulated private-sector dominance, and (3) absent public literacy frameworks (Toosi et al., 2024).
The geopolitical dimensions of AI deployment in DRM were also sharply articulated. Iranian experts detailed how international sanctions, restricted access to real-time satellite data, and dependency on foreign-owned platforms perpetuate a condition of technological marginalization. This exclusion, they argued, impairs national sovereignty in disaster governance and undermines timely, evidence-based responses in transboundary crises. Such findings challenge dominant narratives that portray AI as a universally accessible tool, exposing the unequal architectures of power and access that shape its real-world application. Crucially, such models must address how societal pressures to prioritize technological expediency over equitable design can exacerbate health disparities and economic inequities (Nosraty et al., 2020), reinforcing the need for context-sensitive frameworks.
Equally pressing were calls for institutional reform and capacity building. Experts pointed to an urgent need for interdisciplinary teams, long-term training initiatives, and adaptive governance structures capable of integrating AI tools meaningfully into operational workflows. Importantly, many respondents stressed that technological deployment must not displace investment in basic emergency infrastructure or inclusive governance practices—especially in underserved rural or informal areas often invisible in official datasets.