Document Type : Research Paper
Author
MBA in Global Management, Thunderbird School of Global Management, Arizona, USA
Abstract
This research explores how Iranian MBA students view artificial intelligence as a driver of change in global power structures. Using a mixed-methods design, it combines survey results from 394 respondents with thematic analysis of open-ended answers. The quantitative data indicate a widespread belief that AI will hasten the decline of established global powers, widen global disparities, and offer emerging economies chances for geopolitical advantage. Qualitative themes include AI as a soft power tool, concerns over technological dependence, entrepreneurial optimism, and regulatory inadequacy. The analysis situates participants’ views within broader theoretical frameworks articulated by Innis, McLuhan, Castells, and Toffler, emphasizing AI’s capacity to redefine sovereignty, governance, and economic competitiveness. Statistical tests highlight how demographic variables, such as employment sector and academic status, are significantly associated with attitudes toward AI’s disruptive potential. These results underline both the optimism and anxiety among future business leaders regarding Iran’s capacity to harness AI’s transformative possibilities amidst structural and regulatory challenges.
Keywords
- Artificial Intelligence
- Geopolitical Disruption
- Global Power Dynamics
- MBA Students
- Technological Sovereignty
- Iran
This is an open access work published under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use (https://creativecommons.org/licenses/by-sa/4.0/).
- Introduction
Traditionally, global power has been closely tied to military might, industrial capacity, and natural resource control. However, in the twenty-first century, the ability to develop, deploy, and regulate AI technologies is increasingly seen as a critical determinant of national influence and sovereignty. AI’s capacity to automate decision-making, optimize economic processes, and amplify soft power through media and surveillance infrastructures is reshaping the foundations of international competition and cooperation. Consequently, understanding how emerging business leaders perceive these shifts is vital for anticipating future dynamics in global governance, technological sovereignty, and economic competitiveness. Critically, benefiting from AI requires parity between technological capabilities and human competencies—where advancements in infrastructure remain inert without equally transformative shifts in workforce skills to operationalize systems effectively (Hosseini et al., 2021).
The literature to date highlights the diverse and often contested interpretations of AI’s geopolitical implications. Scholars such as Zuboff (2019) have warned about the rise of "surveillance capitalism", wherein corporate actors accumulate disproportionate political and economic influence through AI-driven data extraction. Similarly, Bellini et al. (2024) have documented how AI transforms geopolitical equilibria by enabling new forms of cyber conflict and regulatory realignment. Meanwhile, theorists such as Castells (1996) and Toffler (1980) have emphasized the emergence of new forms of global organization, structured around information networks and decentralized knowledge economies. Although much scholarly attention has been paid to the macro-level transformations induced by AI, there remains a relative paucity of research on how these dynamics are perceived at the micro-level—particularly by those poised to shape future business and policy environments. Systematic research on digital literacy demonstrates that structured educational frameworks are prerequisite for transforming technological access into critical engagement capabilities, with policy-supported training being pivotal (Sakhaei et al., 2023).
This study explores how Iranian MBA students and recent graduates perceive AI’s capacity to reshape global power dynamics. Iran serves as a fascinating case due to its distinct geopolitical stance, ambitions for technological independence, and the mix of opportunities and risks tied to AI advancement. By analyzing the views of this key group, the research seeks to illuminate wider regional and global transformations expected in an AI-influenced world order.
- Review of Literature
Dafoe (2018) presented a comprehensive research agenda for AI governance, identifying it as one of the most urgent and consequential global challenges of the 21st century. The report outlined how artificial intelligence, as a potent general-purpose technology, offers transformative opportunities in areas such as medicine, education, and environmental sustainability, while simultaneously posing substantial risks including labor displacement, global inequality, reinforced authoritarianism, strategic instability, and potential AI races that compromise safety. Dafoe (2018) proposed organizing AI governance research into three interconnected clusters: understanding the technical landscape of AI capabilities and their development trajectories; examining AI politics, including the domestic and international political dynamics influenced by AI; and envisioning ideal governance structures to ensure the beneficial deployment of advanced AI. The agenda emphasized the need for proactive global norms, policies, and institutions to navigate the transition to transformative AI systems safely. Through a detailed examination of technical possibilities, political dynamics, and ideal governance models, Dafoe's agenda sought to catalyze scholarly attention and inform policymakers, technologists, and global leaders as they prepare for the profound societal shifts that AI advancements could precipitate.
Horowitz (2018) assessed the implications of artificial intelligence (AI) for international competition and the global balance of power, emphasizing AI's role as an enabling technology, analogous to the combustion engine or electricity, rather than a standalone weapon system. The article examined how narrow AI applications—although still developing—are expected to significantly impact military capabilities and strategic stability. Horowitz (2018) argued that the organizational and institutional choices made during the early phases of AI adoption are critical in shaping its long-term influence. The study highlighted the dual-edged nature of private-sector-led AI innovation: on the one hand, rapid diffusion of military-relevant AI technologies could diminish first-mover advantages; on the other, the complexities involved in translating commercial AI into effective military systems might preserve such advantages for technologically advanced states like the United States and China. The article also critically analyzed the alignment between U.S. military rhetoric and actual investment patterns in AI, suggesting that gaps between discourse and implementation could affect future strategic positioning. Ultimately, Horowitz underscored the uncertainty surrounding AI's evolution and its potential to either reinforce or destabilize current power hierarchies, depending on how states navigate technological integration and strategic adaptation.
Zuboff (2019) critically examined the transformative impact of artificial intelligence within the broader framework of what she termed "surveillance capitalism", emphasizing its profound implications for global power dynamics and democratic governance. In The Age of Surveillance Capitalism, she argued that AI technologies, particularly in service of major technology corporations, enable unprecedented capabilities to monitor, predict, and manipulate human behavior. This asymmetrical accumulation of behavioral data and predictive analytics grants disproportionate power to a small cohort of corporate actors, effectively bypassing traditional democratic institutions and regulatory mechanisms. Zuboff (2019) posited that the unchecked expansion of AI-driven surveillance practices undermines individual autonomy, erodes societal norms of consent, and facilitates new forms of economic and political domination. She warned that these developments could entrench a new global order in which sovereignty shifts from nation-states to transnational tech conglomerates, creating what she described as an "instrumentarian power"—a novel form of governance based on data extraction rather than democratic legitimacy. Consequently, global power equilibria risk being destabilized, with corporate actors potentially wielding greater influence over social, economic, and political processes than sovereign governments. Zuboff (2019) called for urgent regulatory interventions and a reinvigoration of democratic oversight to reclaim citizen rights and prevent the consolidation of power in the hands of surveillance capitalists.
Polcumpally (2022) examined the role of artificial intelligence (AI) in reshaping the global power structure through the lens of Luhmann’s systems theory, utilizing a second-order observation model grounded in the Triple Helix (TH) framework and Shannon’s Information Entropy. The study aimed to analyze the systemic interactions among universities, industries, and governments as sub-systems influenced by the socio-technical diffusion of AI. By applying information entropy measures to data derived from the 2019 Stanford Artificial Intelligence Laboratory report, the research quantified the uncertainty generated through these sub-system interactions. Polcumpally (2022) found that AI's pervasive horizontal influence has the potential to significantly alter the international power hierarchy, particularly by diminishing the global influence of certain European states. The study suggested that this emerging power void could be occupied by rising actors such as India, Canada, South Africa, and Brazil, which demonstrate favorable system dynamics in adapting to AI-driven transformations. This systems-theoretic approach provided a novel conceptual framework for understanding the dynamic redistribution of global influence in the AI era, emphasizing the need for nuanced, interdisciplinary models to interpret evolving power relations.
Kamran (2023) critically examined the ethical and sociopolitical dimensions of artificial intelligence (AI) by interrogating the embedded biases, power structures, and colonial continuities that persist within AI systems. Framing the analysis through a decolonial lens, the study emphasized how AI technologies, although often perceived as neutral or purely technical, are shaped by historically situated data practices and algorithmic designs that reproduce global hierarchies and systemic inequalities. Kamran (2023) highlighted the challenges associated with AI’s capacity for autonomous decision-making, particularly in high-stakes and uncertain environments, where such systems may operate beyond direct human oversight. The paper raised concerns about the looming possibility of a superintelligence and the implications this poses for accountability, especially when AI systems are deployed in contexts marked by geopolitical and racial power imbalances. By focusing on the processes of data collection and algorithmic training, Kamran (2023) illustrated how existing global inequities can be amplified, rather than mitigated by AI technologies. The study argued for a deliberate reconfiguration of AI development practices—one that centers marginalized voices, challenges epistemic hierarchies, and resists the reproduction of colonial logics—to create more equitable and socially responsible AI systems.
Zhang et al. (2023) advanced the understanding of strategic interactions among countries within international security networks by building upon their previously proposed "games-on-signed graphs" framework. Recognizing the need for greater complexity in modeling these interactions, the authors modified existing preference axioms to better reflect countries' pursuit of self-survival, defense of allies, and offensive strategies against adversaries. The study introduced a novel algorithm that not only accounted for these revised strategic behaviors, but also demonstrated the existence of a pure strategy Nash equilibrium within the updated framework. To validate the model, the authors utilized historical data from 1940 to simulate international relations and assess national survivability outcomes. Their contributions enhanced the real-world applicability of the initial framework, offering deeper insights into the dynamics of international relations within a networked security context. Simulations further corroborated the model’s predictive capabilities, emphasizing its potential utility in both academic and policy-making spheres.
Bellini et al. (2024) examined the transformative role of artificial intelligence and data proliferation on political power and geopolitical equilibria. The chapter traced the historical evolution of information management, highlighting its strategic importance from the Cold War to the contemporary era of cyberwarfare. The authors discussed how AI technologies have redefined power structures by enabling new forms of cyber conflict, illustrated through case studies such as Russia's information threats against Italy. In addressing the regulatory landscape, the chapter analyzed frameworks like the EU’s General Data Protection Regulation (GDPR) and the forthcoming AI Act, underscoring the necessity for robust, balanced governance of digital ecosystems. Recommendations were provided to strengthen technological infrastructures, refine regulatory measures, and promote digital literacy as essential components for achieving digital sovereignty and securing national interests. Overall, the chapter offered a comprehensive overview of how AI is reshaping global political dynamics and emphasized the urgency of proactive policy responses. This policy imperative becomes particularly salient when examining how infrastructural limitations and regulatory environments mediate technological implementation across sectors (Khodabin et al., 2023). As emerging technologies outpace traditional legal systems, the absence of clear, unified regulatory policies has been shown to foster uncertainty and weaken institutional responses—highlighting the urgency of preventive, system-wide strategies (Taheri et al., 1401 [2022 A.D.]).
Iqbal et al. (2024) provided a comprehensive analysis of the integration of artificial intelligence (AI) into contemporary security and defense frameworks, highlighting its transformative influence on military strategies, security policies, and global power dynamics. The study examined a range of AI applications in military contexts, including cyberwarfare, autonomous weapons systems, surveillance technologies, and predictive analytics. It emphasized how AI has enhanced decision-making efficiency, tactical agility, and operational precision within defense sectors. Alongside these strategic benefits, the authors critically addressed the ethical and legal challenges posed by AI-enabled warfare, particularly the risks associated with autonomous decision-making and the potential for unintended consequences. The paper also analyzed how different nations are adapting their security doctrines to incorporate AI, raising concerns about accountability, transparency, and the likelihood of an AI-driven arms race. Furthermore, the research explored AI’s role in reshaping global power structures, arguing that disparities in AI capabilities among nations could significantly influence international relations and geopolitical alignments. Through its multidimensional approach, the article offered valuable insights into the evolving landscape of AI in defense, contributing to scholarly and policy-level debates on the ethical, strategic, and geopolitical implications of this technological shift.
Gerlich (2024) investigated the interrelated impacts of artificial intelligence (AI) and geopolitical developments on global systems in the near future, constructing scenario-based projections for the period 2025–2040. Employing a Delphi method and probabilistic modeling, the study developed future societal scenarios that assessed AI’s transformative influence across economic, societal, and security dimensions. Central to the findings was the prediction that AI could lead to widespread employment displacement, with estimated unemployment rates between 40–50%, driven by the rapid pace of technological innovation outstripping current regulatory capacities. The research also identified rising economic inequality and societal fragmentation as probable outcomes, emphasizing that these risks are exacerbated by limited governmental readiness—estimated at only a 10% probability of being adequately prepared. Parallel to technological developments, the study explored geopolitical dynamics, forecasting intensifying nationalism, prolonged conflicts such as the Russia–Ukraine war, and assertive strategic behavior by powers including China and Israel. Gerlich (2024) argued that the convergence of these forces, compounded by short-termism in Western corporate practices, poses significant threats to global stability. The study concluded with a strong call for anticipatory governance, international collaboration, and the development of adaptive regulatory frameworks to mitigate the risks and harness the opportunities of the AI revolution and ongoing geopolitical realignments.
Zirojević (2024) examined the digital transformation of geopolitics, analyzing how emerging technologies have redefined traditional power structures and geopolitical frameworks. The article argued that while classical geopolitics was anchored in territorial and physical characteristics, the rise of digital tools—such as artificial intelligence, big data, and digital infrastructures—has shifted the focus toward more intangible and interconnected domains. Zirojević (2024) highlighted how digital technologies have blurred the lines between domestic and foreign affairs, enabling new actors, including individuals and non-state entities, to participate in global political processes. This democratization of influence, while offering broader engagement, also introduces vulnerabilities, including the potential exploitation of digital platforms for geopolitical manipulation. The study emphasized the dual nature of digital geopolitics: on the one hand, it expands access to geopolitical discourse and reshapes ideological narratives; on the other, it complicates governance and control, as digital tools can both empower and destabilize. Zirojević (2024) concluded that the convergence of traditional and digital paradigms necessitates a reassessment of how power is constructed and contested in the modern era, urging a more nuanced understanding of influence, representation, and rivalry in an increasingly digitalized global order.
Korkmaz (2024) examined how artificial intelligence (AI) has influenced power asymmetries in the international system, framing this development within the broader evolution of global power structures from the Cold War to the present multipolar order. He contextualized the transition from a bipolar world, dominated by military and nuclear capabilities, to a unipolar moment following the dissolution of the USSR, and subsequently to a multipolar configuration increasingly shaped by technological leadership. The chapter emphasized that AI has emerged as a transformative force impacting key dimensions of international relations, including economic growth, military strategy, cybersecurity, and governance. Korkmaz (2024) argued that nations at the forefront of AI innovation are accruing new strategic advantages, thereby reshaping traditional metrics of power and challenging established international norms. This technological shift, he contended, signals more than the enhancement of national capabilities; it necessitates a rethinking of diplomatic frameworks and conflict management strategies at the global level. The analysis underscored the growing importance of AI as a systemic force that redefines the contours of global influence and alters longstanding paradigms of international interaction.
Challoumis (2025) explored the profound and multifaceted long-term implications of AI on global economic equilibrium. The article provided a historical perspective on technological revolutions, tracing parallels between past economic transformations and the current AI revolution. It emphasized AI's potential to disrupt employment patterns, alter wealth distribution, and reshape international trade dynamics. Key concerns included widespread job displacement, increased social inequality, and the redefinition of work and human purpose. The article also discussed the environmental impacts of AI, both as a tool for climate change mitigation and as a contributor to ecological degradation. Further, Challoumis (2025) considered geopolitical ramifications, such as AI’s influence on warfare, international relations, and global governance. The ethical and moral challenges surrounding AI development were examined, particularly the necessity of aligning AI systems with human values. Finally, the article forecasted the emergence of new decentralized and autonomous economic systems enabled by AI, advocating for proactive regulation, equitable economic strategies, and societal investment in education and continuous learning to ensure AI benefits are shared widely and sustainably. These decentralized systems will require governance frameworks that learn from legal systems' hardest-won lesson: even technically sound regulations fail without societal buy-in and mechanisms for iterative adaptation—a principle as critical for AI governance as for criminal justice reform (Aghigh et al., 2022).
Leiashvili (2025) introduced the Symmetric Model of Economic Equilibrium as an innovative analytical framework developed through a dialogue with the artificial intelligence system Grok 3. The model reconceptualized the economy as a self-regulating, operationally closed, and causally open system characterized by cyclical flows and recursive feedback loops. It sought to overcome limitations inherent in traditional economic models such as Walrasian equilibrium and the Arrow-Debreu framework, which rely on unrealistic assumptions like perfect information and a virtual auctioneer. Through detailed mathematical analysis and computational simulations, the Symmetric Model demonstrated the emergence of equilibrium prices and production quantities as “fixed points” generated by internal recursive dynamics, rather than external coordination mechanisms. The dialogue highlighted the model’s ability to explain economic self-regulation, stability boundaries, and the generation of economic cycles based on shifting marginal propensities to save and invest. While idealized, the model provided a deeper theoretical understanding of market processes under conditions of perfect competition, offering a more realistic representation of the dynamic and nonlinear nature of real economies compared to classical models.
Avloniti (2025) examined the relationship between artificial intelligence (AI) and international business through five conceptual lenses: intelligence, interconnectedness, complexity, mindset, and foresight. The chapter began by contrasting human and machine intelligence, emphasizing their respective roles in learning and adaptation within complex business environments. It then explored how globalization and AI co-evolve in a feedback loop, intensifying the interconnectedness among multinational enterprises and accelerating global business transformations. The discussion further distinguished between complicated and complex systems, advocating for different cognitive approaches—inductive reasoning for the former and abductive logic for the latter—alongside relevant AI tools such as machine learning and deep learning. Avloniti (2025) highlighted how multinational firms operate within this intricate networked context, requiring adaptive mindsets to navigate uncertainty. The chapter concluded with a discussion on the ‘race condition’, offering forward-looking insights into AI’s role in shaping international business strategies and emphasizing the need for further scholarly investigation into these evolving dynamics. It should be noted that these governance frameworks risk remaining theoretical without parallel progress in developing standardized AI literacy competencies that bridge technical, ethical, and civic understanding across populations (Khodabin et al., 2022).
Relatedly, some scholars contend that such governance safeguards should extend beyond political and economic protections to encompass psychological wellbeing, as platform architectures that optimize for engagement routinely compromise mental health through engineered comparison and attention fragmentation (Zamani et al., 2021).
- Theoretical Framework: Media as Agent of Change
New media technologies have historically played a pivotal role in transforming global power dynamics by reshaping the means through which information is controlled, disseminated, and consumed. Harold Innis, a foundational figure in communication theory, argued that media technologies inherently favor particular organizational structures and temporal-spatial biases, which in turn influence the rise and fall of civilizations (Innis, 1951). Innis (1951) introduced the concepts of "time-biased" and "space-biased" media, suggesting that societies dominated by different media forms evolve distinctively: time-biased media (e.g., stone inscriptions, manuscripts) preserve continuity and tradition, whereas space-biased media (e.g., paper, print, and now digital platforms) facilitate expansion, administration, and empire-building. In the context of the twenty-first century, the advent of digital media — particularly the internet, social networks, and artificial intelligence systems — represents an intensification of space-biased communication, leading to profound shifts in global power structures.
Innis (1951) contended that control over communication technologies enables control over societies themselves. Today, digital infrastructures have become critical assets in geopolitics, where entities that master data flows, algorithmic governance, and digital platforms wield disproportionate influence. The United States' dominance through Silicon Valley’s tech giants — Google, Amazon, Facebook, Apple, and Microsoft — has extended American soft power globally, embedding its values, norms, and surveillance architectures into the fabric of everyday life (Morozov, 2011). Simultaneously, emerging powers such as China have cultivated their own ecosystems — notably Tencent, Alibaba, and ByteDance — as part of a strategic move to counterbalance Western dominance and assert a "cyber-sovereignty" model (Deibert, 2019). Digital media, thus, not only redistribute information, but they also serve as a new battleground for ideological and economic supremacy.
Marshall McLuhan, expanding upon Innis’s foundational ideas, famously declared that "the medium is the message" (McLuhan, 1964). For McLuhan, the form of a medium — rather than the specific content it carries — profoundly reshapes human cognition, social organization, and ultimately, civilization itself. In McLuhan’s view, each technological innovation in media recalibrates the balance between individual and collective consciousness. The printing press fragmented communal identities by promoting individualism, while electronic media (e.g., radio and television) re-tribalized humanity by reintroducing acoustic, simultaneous communication forms. In the digital era, characterized by instantaneous global connectivity, McLuhan’s notion of a "global village" materializes as digital platforms erode traditional national borders and cultivate planetary-scale communities (McLuhan, 1964).
McLuhan’s insights sharpen the understanding of current transformations: the global diffusion of smartphones, social media, and AI-driven platforms is not merely changing how information is transmitted, but altering the fundamental structures of political authority, cultural production, and economic value creation. Political revolutions — from the Arab Spring to contemporary protest movements like Black Lives Matter — demonstrate how decentralized digital communication can challenge entrenched power hierarchies. However, McLuhan also warned about the destabilizing consequences of such media environments, predicting an intensification of tribalism, polarization, and violence when new media accelerate the collapse of traditional social structures without offering coherent replacements (McLuhan, 1964). This structural collapse manifests in the recomposition of social authority, where digital platforms facilitate new hierarchies of influence—evident in how algorithmic recommendation systems and influencer cultures actively reshape fundamental human behaviors and value assessments (Nosrati et al., 2023).
Building upon these foundational theorists, Castells (1996) provided a comprehensive sociological analysis of how information technologies create new forms of social organization, which he termed the "network society". According to Castells (1996), power increasingly accrues to those who control the "flows" of information across global networks, rather than to those who control physical territory. The network society is marked by a decoupling of sovereignty from geographic borders, as digital platforms enable actors — corporations, activists, states — to operate transnationally. Castells' concept of "timeless time" — the collapse of traditional temporal structures under the pressure of instantaneous digital communication — echoes Innis’s concerns about how new media forms disrupt existing civilizations (Castells, 1996).
This disruption necessitates new forms of digital citizenship, as individuals must now navigate AI-mediated environments, while confronting challenges of algorithmic bias, data privacy, and unequal access to technological understanding (Khodabin et al., 2024). Moreover, Toffler’s (1980) vision of the "Third Wave" civilization forecasted the rise of a knowledge-based economy, driven by information technologies. Toffler argued that new media would upend industrial-age institutions — governments, schools, corporations — by making decentralized, flexible, and adaptive systems more viable. According to Toffler, societies that could rapidly adapt to these technological waves would prosper, while those clinging to industrial-era paradigms would decline. This prediction resonates with contemporary observations that states and corporations adept at integrating AI, big data analytics, and platform-based economies gain competitive advantages on the global stage (Schwab, 2016). These advantages nevertheless introduce new systemic vulnerabilities, as AI's predictive capabilities and algorithmic decision-making create dependencies that transcend sectoral boundaries—a paradox evident in domains ranging from economic policy to crisis response (Sakhaei et al., 2024b).
Importantly, Innis’s framework also helps explain the current contest over technological standards, infrastructure, and platforms. For Innis, empires expanded by monopolizing certain media forms, but also decayed when new media undermined existing monopolies (Innis, 1950).
Today, this monopoly dynamic manifests through platform algorithms that structure visibility and monetization—where fame and digital attention are actively engineered into economic capital, as seen in influencer economies like Instagram (Arsalani et al., 2024). Applying this lens, the contemporary battle over 5G infrastructure, semiconductor supply chains, and AI regulation reflects a struggle for control over the emerging media regime. China's Belt and Road Initiative, for example, includes a "Digital Silk Road" aimed at exporting Chinese standards for data governance, cybersecurity, and smart city technologies across Asia, Africa, and Latin America (Triolo et al., 2020). The United States and its allies have responded with initiatives like the "Partnership for Global Infrastructure and Investment" to promote alternative standards and resist authoritarian technological encroachments. At the same time, the growing power of tech giants like Google reveals another dimension of this struggle, as these companies not only shape technological ecosystems, but also influence academic narratives and public perception to legitimize their dominance—making democratic oversight and regulation even more urgent (Sarfi et al., 2021).
Furthermore, Innis’s concern with monopolies of knowledge is acutely relevant in today's era of platform monopolies. He warned that societies in which communication was monopolized by elites tended toward rigidity, intolerance, and eventual decline (Innis, 1949). In contemporary terms, the concentration of data, algorithmic control, and platform governance in the hands of a few corporations — often referred to as "Big Tech" — raises alarms about democratic erosion, economic inequality, and epistemic capture (Zuboff, 2019). Surveillance capitalism — the commodification of personal data for profit and behavioral prediction — illustrates how new media architectures enable new forms of domination that transcend traditional nation-state sovereignty.
Moreover, new media technologies are altering not only the loci of power, but also its very nature. These transformations include the strategic engineering of perceptions through algorithmic media ecosystems that systematically shape collective understandings of technological and political legitimacy (Kharazmi & Mohammadi, 2020). McChesney (2013) argues that digital capitalism intensifies the commodification of attention and communication, restructuring the global political economy to favor those actors who can aggregate, manipulate, and monetize user data. These processes mirror McLuhan’s insight that new media create environments faster than societies can adapt, leading to profound disorientation and conflict. This adaptive lag becomes particularly hazardous when technological adoption outpaces critical understanding, potentially transforming tools of progress into vectors of harm (Soroori Sarabi et al., 2020). The digital public sphere has thus become both a battleground and a market, where states, corporations, and individuals vie for influence in an increasingly fragmented and volatile communicative landscape. Systematic research confirms that critical thinking is the essential antidote to digital manipulation, enabling individuals and institutions to decode algorithmic biases and resist adversarial narratives in contested information ecosystems (Sakhaei et al., 2023). Complementary findings suggest that holistic media literacy — especially when expanded to include parents and institutional actors — can reinforce this resistance by fostering healthier decision-making frameworks and critical engagement from an early age (Hosseini et al., 2025).
The emergence of "hybrid warfare" — wherein disinformation campaigns, cyberattacks, and information sabotage are routinely employed by state and non-state actors — underscores how media technologies now serve as weapons of geopolitical competition (Pomerantsev, 2019). Russia’s alleged use of social media manipulation during the 2016 U.S. presidential election exemplifies how new media environments facilitate novel forms of asymmetrical conflict, targeting the epistemic foundations of democratic societies. In such a context, sovereignty is increasingly determined not solely by territorial control, but by mastery over informational ecosystems.
Looking forward, the implications of these transformations are profound. Harold Innis’s cyclical theory of media suggests that today's digital empires may face declines if new communication forms — such as decentralized blockchain technologies or emerging quantum networks — disrupt their monopolies. McLuhan’s vision of a constantly evolving "global village" hints at the paradoxical outcomes of intensified connectivity: greater opportunities for global cooperation alongside heightened risks of cultural fragmentation and violence. New studies confirm this paradox, showing how social media's global connectivity has simultaneously increased rates of mental health issues including anxiety, depression, and social isolation (Nosraty et al., 2021). Castells’ "network society" thesis suggests that future geopolitical influence will depend less on industrial capacity and more on the ability to navigate and shape transnational information networks.
- Methodology
This study adopted a mixed-methods research design to comprehensively investigate Iranian MBA students’ perceptions of AI and its transformative potential in altering global power structures. The mixed-methods approach was selected to enable both statistical generalizability and in-depth thematic exploration, providing a richer understanding of the nuanced perspectives held by future business leaders. Specifically, the research integrated quantitative survey data with qualitative analysis of open-ended responses to capture not only measurable attitudes, but also the underlying rationales and contextual interpretations behind those attitudes.
The quantitative component of the study was based on a structured questionnaire distributed electronically between January and March 2025. Participants were recruited through a purposive sampling strategy, targeting current MBA students and recent MBA graduates (within the last five years) across multiple Iranian universities and private business schools. Eligibility criteria included active enrollment or recent graduation from an accredited MBA program in Iran. A total of 394 valid responses were collected, representing a diverse demographic in terms of gender, employment sector, and age group. Given the exploratory nature of the study, the sample size was deemed sufficient to support both descriptive statistics and inferential analyses, enhancing the credibility and generalizability of the findings within the Iranian MBA context.
Data analysis was conducted in two phases. Initially, descriptive statistics were used to outline demographic profiles and key attitudinal patterns concerning AI’s geopolitical and economic effects. Next, inferential statistical methods, such as chi-square tests of independence and Spearman’s rank-order correlation, were applied to investigate relationships between demographic factors and core attitudinal responses. All analyses were performed using IBM SPSS Statistics (Version 26), with a significance threshold of p < .05. Special focus was given to examining how employment sector, academic status, gender, and age related to views on technological dependence, competitive pressures, and entrepreneurial optimism. Cross-tabulations were employed where relevant to illustrate statistically significant associations.
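To make the analytic workflow concrete, the minimal sketch below re-creates the two classes of inferential tests described above in Python (pandas and SciPy) rather than SPSS. It is an illustration only: the sector row totals mirror the reported sample breakdown, but every cell count and Likert response shown here is a hypothetical placeholder, not the study's actual data.

```python
# Minimal sketch of the inferential procedures described above, using
# hypothetical data. SciPy's chi2_contingency and spearmanr parallel the
# SPSS chi-square (Crosstabs) and bivariate Spearman procedures.
import pandas as pd
from scipy import stats

# Hypothetical contingency table: employment sector (rows) by agreement that
# AI will intensify global competition (columns). Row totals follow the
# reported sector shares of the N = 394 sample; the cell splits are invented.
contingency = pd.DataFrame(
    {
        "Agree":    [40, 120, 40, 20],
        "Neutral":  [35, 45, 10, 15],
        "Disagree": [29, 24, 6, 10],
    },
    index=["Public", "Private", "Entrepreneurial", "Unemployed"],
)
# Degrees of freedom = (4 sectors - 1) * (3 response levels - 1) = 6.
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2({dof}, N = {contingency.values.sum()}) = {chi2:.2f}, p = {p:.3f}")

# Hypothetical 5-point Likert scores for two items: perceived regulatory
# inadequacy and entrepreneurial optimism about AI (ten illustrative cases).
regulatory_inadequacy = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
entrepreneurial_optimism = [5, 4, 5, 3, 4, 2, 4, 5, 3, 5]
rho, p_rho = stats.spearmanr(regulatory_inadequacy, entrepreneurial_optimism)
print(f"Spearman's rho = {rho:.3f}, p = {p_rho:.3f}")
```

In SPSS terms, the first block corresponds to the Crosstabs procedure with the chi-square statistic requested, and the second to the Bivariate Correlations procedure with Spearman's coefficient selected, with the same p < .05 threshold applied throughout.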
To complement the quantitative data, a qualitative component was incorporated through open-ended survey questions, asking participants to elaborate on their views about AI’s role in reshaping international hierarchies, national sovereignty, and business opportunities. Qualitative responses were analyzed following Braun and Clarke’s (2006) six-phase thematic analysis procedure, including familiarization with the data, initial coding, theme identification, theme review, theme definition, and final write-up. Coding was conducted manually by the primary researcher and cross-validated by an independent reviewer to enhance the reliability of the thematic categorization. Through this process, four salient themes were identified: AI as a soft power instrument, local vulnerability to technological dependence, entrepreneurial opportunity through AI, and regulatory dissonance and institutional lag. These emergent themes provided crucial interpretive depth to the statistical trends observed.
Although the study benefits from methodological triangulation, it is not without limitations. First, the use of purposive sampling restricts the external validity of the findings, and results may not be directly generalizable beyond Iranian MBA populations. Second, the reliance on self-reported perceptions introduces the possibility of response biases, such as social desirability effects. Third, while the qualitative analysis was rigorously conducted, the open-ended survey format may limit the depth of qualitative insights compared to full interviews or focus groups.
- Findings
This study surveyed 394 Iranian MBA students and recent graduates to assess their perceptions regarding the disruptive potential of artificial intelligence for the global balance of power. Table 1 provides the demographic data of this study, and Table 2 summarizes the key perceptions of AI’s impact.
The sample included a near-equal gender distribution, with 54.1% identifying as male and 45.9% as female. In terms of academic status, 55.1% were currently enrolled MBA students, while the remaining 44.9% had graduated within the last five years. The respondents represented diverse professional sectors: 48.0% were employed in the private sector, 26.4% in public institutions, 14.2% were self-employed or entrepreneurs, and 11.4% reported being unemployed or actively seeking employment. The majority of participants were between the ages of 21 and 40, with 50.3% aged 21–30 and 38.3% aged 31–40.
Table 3 provides a thematic analysis of the responses received in this study. As illustrated in Table 3, main themes include: (1) AI as a Soft Power Tool; (2) Local Vulnerability to Technological Dependence; (3) Entrepreneurial Opportunity through AI; and (4) Regulatory Dissonance and Institutional Lag.
Participants were asked to evaluate how AI might shift the existing global power equilibrium. The responses revealed a prevailing belief that AI would serve as a transformative geopolitical force. A significant majority (68.3%) agreed or strongly agreed that nations failing to integrate AI into their national strategies risk a decline in global influence. Furthermore, 74.6% believed that emerging economies that strategically invest in AI would gain outsized geopolitical leverage, effectively redefining traditional notions of development and power. Another notable proportion (61.2%) expressed concern that AI would exacerbate existing global inequalities, particularly by deepening the technological divide between nations. This view reflects an underlying anxiety among Iranian MBA students that technologically advanced nations may increasingly dominate not only global markets, but also international governance structures through AI-enabled mechanisms such as algorithmic decision-making and digital surveillance.
Regarding the business environment, both globally and within Iran, the participants voiced a nuanced mixture of optimism and apprehension. An overwhelming 82.4% agreed that AI would fundamentally disrupt global value chains by reducing the centrality of geographical proximity in supply chain management. This shift was perceived as both an opportunity and a challenge: while it may reduce barriers to entry for firms in peripheral economies, it also intensifies competition and demands rapid technological adaptation. Approximately 65.7% of respondents indicated that without a significant commitment to AI integration, Iranian firms risk becoming noncompetitive in an increasingly digital global economy. Sectoral studies confirm this implementation gap—while professionals across fields recognize AI's transformative potential, most report their training systems remain misaligned with the strategic competencies needed to operationalize these technologies effectively (Tomraee et al., 2024). Despite these concerns, 58.9% maintained that AI presents an opportunity for local firms to leapfrog traditional industrial development phases—particularly if government policy, infrastructure investment, and educational systems can be aligned to support this transition.
Qualitative data drawn from open-ended responses further enriched the analysis. Four dominant themes emerged. First, many respondents conceptualized AI as a tool of soft power, suggesting that influence in the AI domain may increasingly parallel or even surpass traditional military or economic forms of power. Second, a recurrent concern was Iran’s vulnerability to technological dependency, particularly given the dominance of Western and Chinese AI platforms. This dependency was perceived not only as an economic liability, but also as a potential threat to national sovereignty. Studies demonstrate that transcending technological dependence requires literacy programs evolving beyond technical skills to cultivate system-level critique—particularly for deconstructing opaque algorithmic infrastructures (Hosseini et al., 2025). Third, despite systemic constraints, a number of participants viewed AI as a potential equalizer for Iranian entrepreneurs, allowing them to access global markets through data-driven innovation. Finally, several respondents critiqued the misalignment between Iran’s rigid regulatory environment and the agile, fast-moving nature of AI development, warning that without institutional reform, local businesses would be ill-equipped to harness AI’s benefits. Studies of technology adoption in Iran reveal a recurring pattern, where rapid innovation outpaces safeguards, exacerbating mental health risks and social inequalities when regulatory frameworks fail to adapt (Nosraty et al., 2020). This underscores respondents’ concerns that AI—without proactive governance—could institutionalize similar disparities between economic gains and human costs.
In addition to the descriptive analyses, inferential testing was conducted to explore potential associations between demographic variables and key attitudinal responses related to AI’s geopolitical and economic impacts. These tests aimed to uncover statistically meaningful patterns that could inform both policy and academic understanding of how future business leaders in Iran conceptualize the AI-driven transformation of global and local systems.
One hypothesis examined the relationship between gender and concern about technological dependence on foreign AI platforms. Using a three-point categorical scale (Agree, Neutral, Disagree), responses were cross-tabulated by gender, and analyzed via a chi-square test of independence. The test yielded statistically significant results, χ²(2, N = 394) = 6.59, p = .037, indicating that gender was associated with differing levels of concern. Specifically, male respondents were slightly more likely to agree that Iran’s reliance on externally developed AI systems posed a sovereignty risk, whereas female respondents exhibited a more balanced distribution of views. This finding suggests that gendered experiences or professional exposures may influence how technological dependency is perceived among Iranian MBA populations.
Another hypothesis tested the association between academic status (current student vs. recent graduate) and the belief that AI will disrupt traditional business models. Using the same three-point agreement scale, a chi-square test revealed a statistically significant association, χ²(2, N = 394) = 7.92, p = .019. Current MBA students were more likely to strongly endorse the notion that AI will radically alter existing business paradigms, perhaps reflecting their greater exposure to contemporary discussions of AI through ongoing coursework and media engagement. Graduates, while also largely in agreement, were somewhat more reserved in their assessments, possibly due to real-world exposure to the slower pace of institutional and market adaptation.
In terms of ordinal-level relationships, a correlation analysis was conducted to examine the association between perceptions of regulatory inadequacy and entrepreneurial optimism regarding AI. Participants rated the extent to which they believed Iranian regulatory systems were prepared for AI (on a 5-point Likert scale), and separately, the extent to which they believed AI could empower Iranian startups to bypass traditional developmental stages. Spearman’s rank-order correlation revealed a strong positive relationship, ρ = 0.853, p < .001. This result implies that those who perceive regulatory institutions as inadequate are also the most optimistic about AI’s potential to circumvent these very structures through disruptive innovation. Such a pattern may reflect a broader tension between institutional critique and technological idealism—participants appear to place their faith in technological solutions precisely because of their skepticism toward existing regulatory frameworks.
A chi-square test was applied to assess whether employment sector was associated with the belief that AI will intensify global competition, especially in ways that would challenge domestic firms. Responses to the relevant survey item were categorized on a three-point scale (Agree, Neutral, Disagree) and grouped by sector: public, private, entrepreneurial, and unemployed. The test revealed a statistically significant association, χ²(6, N = 394) = 14.21, p = .027. Participants employed in the private and entrepreneurial sectors were significantly more likely to agree with the statement, suggesting heightened awareness of market pressures among those more directly exposed to competitive business environments. Conversely, respondents in the public sector were more evenly split, reflecting either a perceived buffer from competitive forces or a lag in institutional adaptation.
Another chi-square test examined the relationship between academic status and perceptions of regulatory dissonance—the idea that Iran’s institutional frameworks are poorly aligned with the demands of AI development. The analysis produced a statistically significant result, χ²(2, N = 394) = 6.38, p = .041. Current students were more inclined to view Iran’s regulatory infrastructure as inadequate for the challenges posed by AI. This finding may reflect a generational shift in expectations or a closer engagement with global discourses on AI governance within academic settings. Graduates, while not completely dismissive of regulatory shortcomings, appeared more tempered in their critiques, possibly influenced by their experience navigating real-world policy landscapes.
A final correlation test evaluated the relationship between age and optimism about AI's potential to help Iranian firms leapfrog traditional stages of industrial development. Using age-group identifiers (coded ordinally from 1 = 21–30, to 3 = 41+) and Likert-scale optimism scores, a strong and statistically significant positive correlation was found: ρ = 0.592, p < .001. Interestingly, older respondents were more optimistic about leapfrogging opportunities. This may be interpreted in several ways. It is possible that older participants, having witnessed or experienced Iran’s economic bottlenecks, perceive AI as a long-awaited opportunity to bypass entrenched structural challenges. Alternatively, it may reflect a more strategic or policy-oriented mindset that views AI as a systemic tool for overcoming decades of technological inertia.
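As a complementary sketch, and under the same caveat that the response values are invented for illustration, the snippet below shows how the ordinal age coding described above (1 = 21–30 through 3 = 41+) can be paired with Likert-scale optimism scores before computing Spearman's rho.

```python
# Illustrative recoding of age bands to ordinal values, followed by the
# rank-order correlation with a 5-point Likert optimism item. The handful
# of responses shown is hypothetical, used only to demonstrate the steps.
import pandas as pd
from scipy import stats

age_codes = {"21-30": 1, "31-40": 2, "41+": 3}
responses = pd.DataFrame(
    {
        "age_band": ["21-30", "31-40", "41+", "21-30", "31-40", "41+", "31-40", "41+"],
        "leapfrog_optimism": [2, 3, 5, 3, 4, 5, 4, 4],  # 5-point Likert item
    }
)
responses["age_ordinal"] = responses["age_band"].map(age_codes)
rho, p_value = stats.spearmanr(responses["age_ordinal"], responses["leapfrog_optimism"])
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")
```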
- Conclusion
This study has sought to illuminate how Iranian MBA students and recent graduates perceive the disruptive potential of AI in reshaping global power dynamics. By integrating quantitative and qualitative data, the findings present a nuanced view of a generation poised to influence Iran's economic and strategic trajectory. The results affirm that AI is widely perceived not merely as a technological development, but as a geopolitical force capable of redefining national sovereignty, economic competitiveness, and global hierarchies. Research demonstrates that technological literacy initiatives can mitigate exclusion in digital transitions, offering a framework for ensuring AI's disruptive potential strengthens rather than fragments societal resilience (Sakhaei et al., 2024a). Participants consistently expressed both optimism and anxiety: optimism regarding AI’s capacity to enable emerging economies like Iran to bypass traditional stages of industrial development, and anxiety concerning technological dependence and the risks of widening global inequalities. Such dual perspectives reflect broader professional attitudes toward AI, where recognition of transformative potential coexists with concerns about infrastructural readiness and equitable implementation (Tomraee et al., 2022).
Consistent with broader theoretical frameworks, such as Innis’s (1951) and McLuhan’s (1964) arguments regarding media technologies as agents of civilizational change, participants recognized AI as fundamentally altering the very structures through which power is exerted and maintained. The themes of soft power expansion, vulnerability to foreign technological hegemony, entrepreneurial disruption, and institutional lag parallel the predictions made by theorists who argue that control over information and media determines future dominance. The quantitative findings reinforce these themes: significant majorities agreed that AI will accelerate the decline of traditional global powers that fail to innovate, and that emerging economies investing in AI will gain disproportionate geopolitical leverage. Moreover, the significant statistical associations between employment sector, gender, academic status, and perceptions of AI’s consequences demonstrate that these views are not monolithic, but shaped by socio-economic positioning and professional orientation.
Critically, the findings reveal a tension between technological idealism and institutional skepticism. Participants who perceived Iran’s regulatory infrastructure as inadequate were often the most optimistic about AI's potential to empower entrepreneurial initiatives. This suggests a latent belief that technological innovation could circumvent or compensate for structural deficiencies, an insight that carries profound implications for policy-making. If regulatory inertia persists, Iran risks being trapped in a cycle of dependency on external platforms, undermining both economic competitiveness and national sovereignty. Conversely, targeted reforms fostering AI literacy, encouraging innovation ecosystems, and updating regulatory frameworks could harness this emerging optimism and position Iran more favorably in the global AI race. Research emphasizes that digital literacy must evolve alongside technology, serving as both a shield against digital manipulation and a foundation for ethical engagement with emerging systems (Arsalani et al., 2022).
The study’s findings should be viewed within the wider conversation on AI and global governance. Scholars like Zuboff (2019) have noted that the concentration of AI capabilities in a handful of global corporations threatens to undermine traditional democratic structures and reinforce new forms of economic control. Participants’ concerns about technological reliance align with these warnings, underscoring the need for national and international policies to tackle the uneven distribution of AI power. Additionally, as Castells (1996) and Toffler (1980) have argued, the rise of information-driven societies requires nations and communities to adapt their political, economic, and cultural frameworks in order to remain competitive. In this context, the perspectives gathered in this study act as early signals of the challenges that will influence Iran’s future development trajectory.
While offering valuable insights, the study also highlights areas for further exploration. Findings grounded in Iran’s unique context may not fully transfer to other emerging economies. Future research should include cross-country comparisons to determine whether similar patterns of optimism, vulnerability, and entrepreneurial adaptation appear in varied socio-political environments. Longitudinal studies could also track how perceptions shift as AI becomes more integrated into economic and governmental systems. Additionally, deeper qualitative methods, such as in-depth interviews or focus groups, could provide richer insights into how future business leaders plan to navigate the intricate relationship between technology, sovereignty, and global power.