Document Type: Research Paper
Authors
1 M.A. in English Language Teaching, Khatam University, Tehran, Iran
2 B.A. in Law, Payame Noor University, Bostan Abad, Iran
Abstract
This study investigates the ways in which Iranian sociologists conceptualize artificial intelligence as both an epistemic infrastructure and a geopolitical force within global knowledge production. Drawing on 32 in-depth interviews and grounded theory methodology, the research identifies a central analytic category: contested epistemic futures. This concept encapsulates the tensions between structural epistemic asymmetries and local efforts to reappropriate AI for culturally specific ends. Participants critiqued AI systems as carriers of Eurocentric epistemologies and instruments of digital colonization, but also highlighted strategic opportunities for local innovation, agency, and resistance. The analysis reveals five key thematic axes: epistemic asymmetry, decontextualization versus cultural specificity, technological determinism versus strategic agency, epistemic justice, and sociotechnical governance. These axes describe AI as neither a neutral tool nor an inevitable threat, but as a socially contingent technology shaped by political choices, institutional infrastructures, and cultural values. The study contributes to critical AI and decolonial epistemology by centering non-Western academic voices and proposing a relational, justice-oriented framework for AI governance.
Keywords
- Artificial Intelligence
- Digital Colonialism
- Epistemic Inequality
- Iranian Sociology
- Knowledge Production
This is an open access work published under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use (https://creativecommons.org/licenses/by-sa/4.0/).
- Introduction
Artificial intelligence is becoming increasingly vital in organizing, assessing, and sharing knowledge worldwide. Advocates highlight AI as a revolutionary tool that boosts productivity, improves decision-making, and broadens access to information. However, critical scholars challenge these assertions, arguing that AI systems are not neutral tools, but sociotechnical frameworks intertwined with global power and knowledge hierarchies (Birhane, 2021; Crawford, 2021; Mohamed et al., 2020; Shahghasemi, 2025). As algorithms play a larger role in areas like scientific authorship and educational evaluation, concerns grow about whose knowledge systems are embedded in these technologies and whose knowledge systems are excluded. This literature contends that AI often reinforces Eurocentric modernity, favoring data-driven, Anglophone, and technocratic perspectives, while sidelining local, embodied, and diverse ways of knowing.
Within this critical landscape, the voices and experiences of scholars outside the Global North—particularly from postcolonial and semi-peripheral contexts—remain underexamined. Iranian sociologists, situated within a nation marked by both rich intellectual traditions and structural exclusions from the global knowledge economy, offer a uniquely positioned critique of AI’s epistemic implications. Iran’s history of colonial modernity, its politically charged scientific infrastructure, and its complex stance toward Western technological models offer a rich context for exploring how AI is perceived, understood, and possibly re-envisioned within distinct epistemic perspectives. While prior research has highlighted the colonial contours of algorithmic design (Couldry & Mejias, 2019; D’Ignazio & Klein, 2020), few studies have examined how academics in the Global South engage with AI not only as a technical phenomenon, but also as a field of epistemic negotiation and resistance.
This study explores the ways in which Iranian sociologists interpret the relationship among artificial intelligence, knowledge production, and global hierarchies. Drawing on 32 in-depth interviews and using grounded theory methodology, the research investigates the tensions, strategies, and imaginaries that structure local engagements with AI. Rather than treating AI as a value-neutral innovation, participants conceptualized it as an epistemic apparatus—one that both mirrors and magnifies longstanding asymmetries in who is authorized to produce, circulate, and validate knowledge.
- Review of Literature
The following review synthesizes recent scholarship exploring the complex interplay between artificial intelligence and global knowledge systems, power dynamics, and marginalized communities. This body of literature critically examines AI’s role in reshaping epistemic, social, and environmental landscapes, highlighting both its transformative potential and its risks of perpetuating inequities. By addressing themes such as digital colonization, algorithmic bias, and ethical governance, the studies provide a foundation for understanding how AI influences cultural sovereignty, relational ethics, and equitable access to knowledge in diverse contexts.
Gasparotto (2016) critically examined the intersection of indigenous knowledge systems and digital infrastructures, arguing that contemporary online platforms perpetuate colonial power structures through what she terms “digital colonization”. Focusing particularly on Latin American contexts, she explored how algorithmic bias and inequitable access to internet infrastructure systematically marginalize indigenous voices. Using case studies such as the temporary rejection of the Indigital Storytelling app by Apple, Gasparotto (2016) illustrated how gatekeeping by global tech corporations constrains indigenous self-representation. She critiqued the dominant discourse of the “digital divide”, demonstrating how it compels indigenous communities to frame their needs in terms of educational or economic utility in order to access technological support, thus reinforcing conditional inclusion. The paper further emphasizes the gendered dimensions of digital exclusion, noting the limited access and digital participation afforded to indigenous women. Gasparotto (2016) also discussed how algorithmic structures—such as biased search engine ranking systems and machine learning processes trained on skewed data—systematically de-prioritize indigenous-produced content, particularly in minority languages or multimedia formats. In response, she highlighted several community-led initiatives that resist digital marginalization, including the Local Contexts project, which supports the use of Traditional Knowledge Labels, and indigenous Wikipedia initiatives that contribute to linguistic visibility and algorithmic relevance. Gasparotto concluded that while these efforts cannot fully overcome structural inequities, they represent crucial interventions in reclaiming digital space and redefining access and authorship on indigenous terms.
Whaanga (2020) critically reflected on the implications of AI for Indigenous peoples, framing AI as both a potential catalyst for innovation and a new form of colonization. His position paper, presented as part of the Indigenous Protocol and Artificial Intelligence Workshops, interrogated whether AI represents a revolutionary opportunity or a threat to Indigenous cultural continuity and sovereignty. Whaanga (2020) contextualized AI within broader technological transformations, emphasizing the ways in which it affects knowledge creation, access to knowledge, and knowledge distribution. He underscored the vulnerability of Indigenous languages and knowledge systems to globalizing forces that promote homogenization, warning that AI may exacerbate this process by embedding colonial logics into algorithms and digital platforms. Drawing on discussions with Māori cultural and technological experts, the paper explored themes such as data sovereignty, Indigenous coding practices, algorithmic bias, and cultural protocols for knowledge management. Whaanga (2020) invoked the concept of “mental colonization”, articulated by scholars like Frantz Fanon and Ngũgĩ wa Thiong’o, to argue that AI risks further entrenching colonial dynamics if Indigenous epistemologies are excluded from its development. The paper critiqued existing AI ethics frameworks for largely overlooking Indigenous perspectives and called for inclusive, culturally grounded standards that respect Indigenous intellectual capital and ways of knowing. Ultimately, Whaanga (2020) advocated for active Indigenous engagement in AI discourse and design, emphasizing the need to co-create systems that uphold cultural values, language preservation, and self-determination.
Arora et al. (2023) critically examined the dual risks and benefits of AI development through a relational risk perspective, emphasizing the compounding effects of algorithmic bias, data colonialism, and global marginalization. The authors explored how AI, particularly generative models, has accelerated transformative capabilities, while simultaneously introducing significant ethical and socio-political challenges. They underscored how algorithmic bias—rooted in unequal training data—disproportionately affects marginalized populations, notably in healthcare, where diagnostic systems underperform for underrepresented groups. A notable example included a case study on retinal diagnostics, where AI systems trained predominantly on light-skinned patient data demonstrated reduced diagnostic accuracy for dark-skinned individuals. To address such disparities, the authors highlighted the use of generative adversarial networks (GANs) to produce synthetic data that can rebalance training datasets. However, they cautioned that technical solutions alone are insufficient without addressing underlying structural inequalities. The article further unpacked the notion of “data colonialism”, describing how labor and data from the Global South are exploited by firms in the Global North, often without adequate compensation or ethical oversight. Drawing from high-profile cases, such as labor outsourcing by OpenAI to Kenyan workers, the authors illustrated the material and psychological harms experienced by invisible data workers. The paper ultimately proposed a relational understanding of AI risk as a duality—where benefit and harm coexist—calling for responsive policy frameworks, ethical design practices, and inclusive governance models that center the needs of marginalized populations in shaping the AI future.
Campo Ruiz (2024) critically examined the integration of AI into environmental governance, highlighting both its transformative potential and its risks for reinforcing global inequalities and neo-colonial dynamics. The essay outlined the capacity of AI to support climate mitigation, biodiversity protection, resource management, and environmental crime prevention by processing vast datasets and automating surveillance. Examples such as AI-enabled forest monitoring in the Congo and water optimization systems demonstrated these utilities. However, Campo Ruiz (2024) underscored how the development and deployment of AI are largely concentrated in affluent countries, while its impacts are globally distributed—creating a structural power imbalance that may marginalize voices from lower-income regions. The author further emphasized that AI systems are not value-neutral tools; they are embedded in sociotechnical systems shaped by design cultures, infrastructural dependencies, and biased training datasets. These systems can replicate and amplify existing social, economic, and environmental injustices, especially where data scarcity or representational bias skews outcomes. The essay also warned of AI’s potential to manipulate environmental data and public perception, particularly through misinformation and deepfakes, which can distort climate narratives and hinder democratic processes. While acknowledging AI’s environmental applications, Campo Ruiz (2024) stressed the urgent need for governance frameworks that center ethical, cultural, and local considerations. She advocated for AI development that respects humanistic principles and avoids the reductive techno-centric logic that has historically sidelined ecological and social complexity. The essay concluded by calling for more transparent, inclusive, and globally equitable approaches to integrating AI into environmental stewardship.
Duarte and Clark (2024) examined the ethical, epistemological, and ontological challenges posed by AI through the lens of Indigenous philosophies, proposing a framework that reorients AI development toward relationality and interdependence. Drawing on diverse Indigenous knowledge systems, the authors critiqued the dominant Western paradigm that underpins much of AI design—particularly its emphasis on autonomy, efficiency, and abstraction. They argued that such frameworks often marginalize Indigenous worldviews that prioritize interconnectedness, responsibility, and care. The article explored how Indigenous epistemologies, which view knowledge as contextually embedded and socially accountable, offer vital alternatives to the decontextualized data extraction practices common in AI systems. Through the lens of “relational ethics”, the authors proposed that AI systems should be designed not as detached instruments, but as entities situated within networks of responsibility and kinship. They also warned against the risks of extractivism and epistemic violence when Indigenous knowledge is appropriated or instrumentalized without consent or reciprocity. Highlighting the inadequacy of current AI ethics guidelines in addressing these concerns, the paper called for protocols that foreground Indigenous sovereignty, data governance, and the rights of both human and non-human actors. Duarte and Clark (2024) ultimately advocated for an AI paradigm shift—from domination to relational accountability—emphasizing that Indigenous philosophies can guide the development of technologies that sustain, rather than erode, cultural and ecological lifeways.
Avraamidou (2024) critically reflected on the rapid integration of generative AI technologies in science education, framing this trend as a form of digital colonization that mirrors historical patterns of exploitation, exclusion, and epistemic domination. The article explored how AI tools—marketed as revolutionary instruments to optimize scientific learning—often operate within monocultural and neoliberal logics that prioritize productivity over pluralism, and automation over relationality. Avraamidou (2024) highlighted how AI systems perpetuate racial and gender biases, marginalize diverse forms of knowledge, and risk exacerbating educational inequities. Drawing from a systematic literature review, she noted that most AI applications in school science remain atheoretical and focused on automating existing practices, neglecting socio-emotional, cultural, and ethical dimensions of learning. Furthermore, she critiqued the environmental cost of AI technologies and emphasized the dehumanizing effects of algorithm-driven learning systems, which reduce education to mechanistic input–output processes. To counter this trajectory, Avraamidou (2024) proposed a feminist and human-centered AI framework grounded in relationality, embodiment, and justice. This alternative vision calls for critical AI literacy in science education, emphasizing questions of accountability, representation, and environmental sustainability. She argued that reclaiming science education from AI industrial agendas requires resisting the illusion of AI as a neutral tool and instead fostering pedagogies that prioritize care, cultural sustainability, and epistemic diversity.
Stefanija (2023) critically examined the epistemic and power asymmetries inherent in algorithmic systems, arguing that the growing entrenchment of automated decision-making in social and institutional contexts reflects a dominant “algorithmic epistemology” that obscures understanding, while consolidating authority. The chapter interrogated how algorithmic power extends beyond technical function to become a form of epistemic governance, shaping not only what is known, but who is authorized to know and question algorithmic processes. Stefanija (2023) detailed the barriers to knowledge about algorithms, including the technical opacity of proprietary systems, limited public access to training data and design rationales, and the broader discursive framing that constructs algorithms as neutral and objective. This obscurity, she argued, fosters epistemic imbalances, where individuals and communities are subject to algorithmic authority without reciprocal transparency or accountability. The chapter further situated these imbalances within broader societal and discursive structures, demonstrating how algorithmic authority intersects with and exacerbates existing social hierarchies. In response, Stefanija (2023) called for a critical reframing that challenges algorithmic normativity and opens space for resistance and alternative epistemologies. By emphasizing strategies of refusal, critique, and epistemic disruption, she advocated for the reclamation of interpretive agency and knowledge sovereignty in the face of pervasive algorithmic governance.
Arab and Dominguez-Péry (2022) examined the ethical dimensions of knowledge asymmetry in human-AI collaboration, arguing that unequal knowledge distribution between humans and algorithms generates risks for autonomy, transparency, and informed decision-making. Framed around the central question of how knowledge asymmetry manifests in human-AI teams, the authors introduced a conceptual framework identifying four collaborative scenarios, each illustrating different degrees of human and AI knowledge dominance. Drawing on real-world and speculative cases—from virtual hospital assistants to BlackRock’s Aladdin system—they highlighted how increasing AI capabilities shift the balance of expertise away from human agents, potentially rendering them “ignorant lords” over opaque algorithmic systems. The study emphasized that AI’s growing epistemic authority, especially in high-stakes contexts such as healthcare and finance, risks displacing human judgment and know-how. The authors noted that AI lacks the social institutions and experiential grounding that grant human experts their legitimacy, making its authority more difficult to scrutinize or regulate. They also addressed the ethical implications of this asymmetry, including the erosion of human autonomy, diminished responsibility attribution, and the challenges of ensuring explainability and accountability. To mitigate these concerns, the paper advocated for reframing AI as a collaborative partner rather than a tool, emphasizing transparent knowledge exchange and co-evolution of expertise. The authors concluded by calling for further research into mechanisms for ethical knowledge transfer and collaborative design in hybrid intelligence systems.
Vyshnevskyi (2024) investigated the role of AI in mitigating information asymmetry within the conceptual framework of “platform strategiarchy”, proposing a novel synthesis of economic signaling theory and digital governance structures. The article framed information asymmetry as a universal phenomenon shaping interactions among four key actors: individuals, organizations, nature, and AI. Through this lens, the study mapped ten types of dyadic interactions (e.g., person-AI, organization-nature), categorizing them into peer-level and hierarchical relations based on asymmetries in information volume. Vyshnevskyi (2024) argued that asymmetrical access to information underlies economic inequality, and that AI, when embedded in a digital platform of strategizing, can act as a third-party agent to reduce these disparities. This can occur via two primary mechanisms: (1) enhancing the less informed party’s access to relevant knowledge, or (2) guaranteeing compensatory mechanisms to offset informational disadvantages. Building on signaling theory, the author proposed that AI can moderate strategic communications by verifying public strategies and aligning them with smart contract execution. The platform strategiarchy concept assumes that all actors maintain transparent strategic intentions, which AI systems can process to facilitate equitable and informed decision-making. The paper concluded that AI’s mediating potential offers a scalable model for reducing transaction costs and redistributing informational advantages, contributing to a more symmetric and ethically grounded information economy.
- Method
This research utilized a constructivist grounded theory approach to explore how Iranian sociologists perceive AI within global epistemic hierarchies. Grounded theory was chosen for its capacity to derive theory from data, making it ideal for studying under-researched sociotechnical issues in diverse cultural settings (Tie et al., 2019; Thornberg & Charmaz, 2014). Thirty-two semi-structured, in-depth interviews were conducted in Farsi with sociologists from various Iranian academic institutions, selected via purposive and snowball sampling to capture diverse disciplinary perspectives and familiarity with AI discussions. Interviews were fully transcribed and translated into English, with strict adherence to ethical guidelines, including informed consent, anonymity, and secure data management, consistent with qualitative research standards (Orb et al., 2001; Kaiser, 2009).
The analytic process followed the three-stage grounded theory framework: initial (open) coding, axial coding, and selective coding (Corbin & Strauss, 2015). During the initial coding phase, data were examined line-by-line using NVivo software to produce detailed, inductive codes drawn from participants’ language. Over 75 distinct codes were identified and then clustered into five major code families reflecting recurrent conceptual patterns. Analytic memos were maintained throughout to record interpretive insights, challenges, and emergent links across the dataset (Lempert, 2007). To strengthen trustworthiness and credibility, intercoder reliability was assessed through dual-coding of a sample of transcripts by a second researcher, with discrepancies resolved via dialogic consensus—an approach recommended for qualitative validity (Campbell et al., 2013; O’Connor & Joffe, 2020).
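The study reports that discrepancies between coders were resolved through dialogic consensus rather than a specific agreement statistic; purely as an illustrative sketch of how such a check could be operationalized, the Python snippet below computes percent agreement and Cohen’s kappa over a small, hypothetical dual-coded sample (the code-family labels and segment assignments are invented for illustration).

```python
# Illustrative sketch only (not the study's procedure): percent agreement and
# Cohen's kappa for two coders who each assigned one code family per segment.
from collections import Counter

# Hypothetical labels assigned by each coder to the same six transcript segments.
coder_a = ["epistemic_asymmetry", "cultural_specificity", "strategic_agency",
           "epistemic_asymmetry", "governance", "epistemic_justice"]
coder_b = ["epistemic_asymmetry", "cultural_specificity", "epistemic_asymmetry",
           "epistemic_asymmetry", "governance", "epistemic_justice"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two coders over the same segments."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(a) | set(b))
    return (observed - expected) / (1 - expected)

agreement = sum(x == y for x, y in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")
```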
Axial coding involved the reorganization of initial codes into five thematic axes, each articulating a central analytic tension in participants’ discourse: epistemic asymmetry as structural condition, decontextualization versus cultural specificity, technological determinism versus strategic agency, epistemic justice and the ethics of recognition, and contingency in sociotechnical governance. These themes guided the selective coding process, which identified “contested epistemic futures” as the study’s core category. This integrative concept captured the ongoing struggle over who controls knowledge production and legitimacy in a digitally mediated world (Fricker, 2007; Medina, 2013). The resulting grounded theory contributes to a critical sociological understanding of AI as both a product and producer of global epistemic inequality—one whose outcomes remain contingent on cultural adaptation, local governance, and the recognition of plural knowledge systems.
- Findings
- 1. Initial Coding: Emergent Themes from Interview Data
The initial coding phase of this study was designed to systematically examine the perspectives of 32 Iranian sociologists on the intersection of AI, global knowledge production, and epistemic asymmetry. Drawing from grounded theory methodology (Charmaz, 2006), this phase entailed an inductive, line-by-line coding of interview transcripts to generate preliminary conceptual categories grounded in the data itself. Rather than imposing a predefined theoretical structure, the goal of initial coding was to remain close to participants' language and meanings, while identifying patterns, divergences, and thematic salience within their narratives.
The process began with an immersive reading of all transcripts to develop a holistic understanding of participants’ discursive styles and intellectual orientations. Transcripts were then uploaded to qualitative data analysis software (NVivo), where codes were manually assigned to discrete units of meaning—typically sentences or paragraphs expressing a singular idea or evaluative claim. This stage yielded over 75 distinct initial codes, many of which were overlapping or interrelated. Through iterative comparison, abstraction, and clustering, these were then organized into broader thematic categories, referred to here as code families. This process was documented in analytic memos that captured emerging insights, unresolved tensions, and researcher reflexivity regarding interpretation.
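The bookkeeping behind this clustering step can be pictured with a minimal sketch; the code names and family labels below are hypothetical stand-ins rather than the study’s actual NVivo export, and the tally simply shows how initial-code applications roll up into family frequencies.

```python
# Illustrative sketch (hypothetical code and family names): rolling up
# initial-code applications into code-family frequencies.
from collections import Counter

code_to_family = {
    "western_logic_as_default": "epistemological_centralization",
    "anglophone_data_bias": "epistemological_centralization",
    "epistemic_dependency_syndrome": "decontextualization",
    "creative_adaptation": "strategic_agency",
    "plurality_of_knowledges": "cognitive_justice",
    "policy_interventions": "sociotechnical_governance",
}

# Hypothetical stream of code applications across transcript segments.
coded_segments = ["western_logic_as_default", "creative_adaptation",
                  "anglophone_data_bias", "policy_interventions",
                  "western_logic_as_default", "plurality_of_knowledges"]

family_counts = Counter(code_to_family[code] for code in coded_segments)
for family, count in family_counts.most_common():
    print(f"{family}: {count}")
```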
To enhance transparency and analytical clarity, Table 1 below presents the five major code families and their sub-codes, each representing a cluster of conceptually coherent responses from interviewees.
- 2. Epistemological Centralization and Western Dominance
One of the most consistent themes across the interviews was a critique of the epistemological architecture underlying dominant AI systems. A large majority of participants emphasized that AI technologies are not epistemically neutral, but instead reflect the values, logic systems, and ontological assumptions of the Global North. This concern was often expressed through references to the "Westernization" of knowledge infrastructures and the "standardization" of cognitive processes.
Multiple participants articulated that AI models—especially those trained on massive, uncurated Anglophone data sets—tend to reinforce deductive reasoning styles and scientific rationalism associated with Enlightenment thought. One respondent remarked, “These models know how to think like a university in Boston, not like a seminary in Qom or a village school in Khuzestan”. This sentiment reflects a broader anxiety that epistemic frameworks embedded in AI systematically marginalize local or indigenous knowledge forms. Codes such as “Western logic as default”, “Homogenization of knowledge”, and “AI as epistemic infrastructure” were frequently co-occurring in discussions about the algorithmic exclusion of diverse worldviews.
- 3. Decontextualization and Intellectual Dependency
Closely linked to epistemic centralization was a second theme: the perceived decontextualization of AI systems and their consequences for national and intellectual autonomy. Interviewees warned that adopting AI technologies without cultural and pedagogical adaptation risks undermining local systems of knowledge production. They emphasized that the architecture of many commercial AI tools is rooted in social and political contexts alien to Iran, leading to what several interviewees referred to as “technological imposition”. One participant characterized this process as a form of “epistemic outsourcing”, noting that while AI promises efficiency and automation, it also introduces conceptual dependencies that are difficult to reverse. Another interviewee lamented, “When we use these systems to assess our students or write our papers, we are no longer thinking in our own terms”. The sub-code “Epistemic dependency syndrome” captured a recurring concern that the blind importation of AI systems could weaken Iran’s scholarly sovereignty.
These critiques extended beyond pedagogical issues to encompass broader geopolitical anxieties. Several sociologists situated these dynamics within the context of digital colonialism, suggesting that technological dependency mirrors historical patterns of knowledge extraction and intellectual subordination.
- 4. Strategic Agency and Subaltern Appropriation
Although the critical perspective dominated, a significant minority of participants offered a more cautious or optimistic account. This group cautioned against a deterministic or overly pessimistic stance that portrays AI solely as an agent of epistemic oppression. For these respondents, AI—despite its ideological biases—remains a tool that can be reappropriated, adapted, and recontextualized for local needs.
These participants emphasized the importance of digital literacy, localized innovation, and institutional agency. Several pointed to examples of Iranian engineers modifying open-source AI platforms for Farsi language processing or developing culturally appropriate educational chatbots. Such efforts were coded under “Creative adaptation” and “Emancipatory potential”. Importantly, these interviewees rejected technological fatalism. As one sociologist explained, “Just because the road is built in Washington, it doesn’t mean we can’t drive it to Shiraz”.
This more pragmatic approach centered on the concept of subaltern appropriation—the idea that marginalized actors can strategically engage with hegemonic technologies to pursue their own epistemic goals. While not dismissing the risks of digital asymmetry, these respondents encouraged a forward-looking strategy grounded in local capacity building.
- 5. Cognitive Justice and Plural Epistemologies
Several interviewees, particularly those trained in philosophy of science, employed the language of cognitive justice to argue for a more inclusive, pluralistic design of AI systems. These participants articulated that current AI tools reflect a monological view of knowledge that is antithetical to the intellectual diversity of the world. They called for an explicit recognition of epistemic multiplicity—not merely as a form of representation, but as a structural principle of algorithmic design.
Under codes such as “Plurality of knowledges” and “Decolonizing algorithms”, participants proposed the integration of non-Western logics, languages, and problem-solving traditions into AI development. A few suggested partnerships between local universities and software engineers to develop regionally tailored AI models that respect indigenous cosmologies and ethical frameworks.
One striking statement was: “We must build machines that know how to ask questions the way our grandparents did—not just how to answer them the way Google does”. This orientation toward epistemic dignity challenges the instrumentalist logic of most commercial AI and foregrounds the ethical imperative to diversify epistemic infrastructures.
- 6. Contingency and Sociotechnical Governance
Finally, several respondents advanced a more structural or institutional perspective, arguing that the effects of AI are not preordained, but shaped by how the technology is governed, localized, and interpreted. These participants emphasized the role of state policy, educational institutions, and civil society in mediating the social life of AI. Codes such as “Governability of AI” and “Policy interventions” were applied to reflections on how Iran might exert epistemic sovereignty through strategic regulation and investment.
These perspectives framed AI as a sociotechnical assemblage—one composed not only of code and data, but also of institutions, labor, cultural assumptions, and political will. For this group, the question was less about whether AI is epistemically biased, and more about whether local actors have the tools and agency to shape AI’s development in ways that align with national interests and cultural values.
- 7. Reflexivity and Analytical Validity
Throughout the coding process, care was taken to bracket the researcher’s own assumptions about AI and knowledge hierarchies. Memos were used to track interpretive decisions and to document cases where participants’ meanings were ambiguous or culturally specific. A small subset of transcripts was also double-coded by a research assistant, and discrepancies in interpretation were discussed and resolved collaboratively. This intercoder dialogue served to clarify code boundaries and ensure analytic reliability.
Importantly, participants’ quotes were not simply taken at face value, but contextualized within their broader narratives and intellectual trajectories. For example, the same participant who critiqued epistemic centralization might also express interest in local AI innovation. This complexity was preserved in the coding to avoid flattening the data into binary positions.
- Axial Coding: Organizing Thematic Relationships and Constructing Analytic Dimensions
Following the initial open coding of the interview data, the next analytic step involved axial coding—a process central to grounded theory methodology, in which emergent codes are re-examined and organized into broader conceptual categories based on their relational properties. While initial coding allowed for a granular breakdown of participants’ language and meaning, axial coding sought to reassemble these fragments into structured thematic constellations that illuminate the underlying logic of the data (Strauss & Corbin, 1998). Specifically, this stage aimed to uncover how different categories interrelate causally, contextually, and functionally within the broader discourse on AI, knowledge production, and epistemic power among Iranian sociologists.
Axial coding proceeded in three overlapping phases: (1) re-examination of the most frequent and conceptually significant codes; (2) identification of causal and contextual linkages between codes; and (3) grouping of codes into analytic axes or meta-themes. Throughout this process, coding memos and participant quotes were repeatedly consulted to ensure fidelity to the original meanings expressed in the interviews. Rather than treating categories as static or mutually exclusive, axial coding emphasized the dynamic interplay among themes, particularly the tensions between critique and strategy, between structure and agency, and between epistemic centralization and pluralization.
- 1. Centralizing Axes: From Codes to Meta-Themes
Through this analytical process, five meta-themes were identified, each representing a conceptual axis along which the data could be organized. These axes reflect the dominant tensions and logics in the discourse and are not reducible to the code families used in the initial coding phase. Rather, they operate at a higher level of abstraction, revealing the dialectical structure of participants’ concerns and aspirations. The five axial themes are as follows:
- Epistemic asymmetry as a structural condition
- Decontextualization versus cultural specificity
- Technological determinism versus strategic agency
- Epistemic justice and the ethics of recognition
- Contingency in sociotechnical governance
Each of these axes is elaborated below, along with its constituent sub-themes and representative quotes where relevant.
- 2. Epistemic Asymmetry as a Structural Condition
The most prevalent axial theme centered on the structural nature of epistemic asymmetries encoded in AI systems. Across interviews, participants portrayed AI not simply as a technological artifact, but as an epistemic apparatus that embeds and perpetuates hierarchies of knowledge. This theme was not limited to critiques of data sets or algorithms; rather, it encompassed the entire ecosystem of AI production, including its institutional sponsors, epistemological assumptions, and transnational flows of influence.
Participants repeatedly linked the centralization of AI development in the Global North to broader historical patterns of coloniality and intellectual dependency. The recurring use of terms such as “epistemic colonization”, “cognitive hegemony”, and “algorithmic centralization” signaled that respondents understood AI as participating in a long-standing tradition of Eurocentric knowledge domination. The asymmetry was seen as structural insofar as it was not a product of individual bias, but the outcome of systemic inequalities in research funding, language dominance, and technological infrastructure.
This axis unites multiple codes from the initial coding phase, including “Western logic as default”, “Anglophone data bias”, and “Loss of academic autonomy”. Axial coding revealed that these were not isolated critiques, but interrelated symptoms of a deeper epistemological architecture that privileges dominant worldviews.
- 3. Decontextualization versus Cultural Specificity
A second major axis concerns the tension between the universalizing tendencies of AI systems and the need for cultural and epistemic specificity. Many interviewees criticized the decontextualized deployment of AI technologies in Iranian educational and research settings, arguing that such systems fail to accommodate the linguistic, cultural, and philosophical particularities of local environments.
Axial coding here connected themes of “technological imposition”, “context-erasure”, and “epistemic dependency syndrome” into a broader pattern of critique that emphasized the mismatch between imported AI tools and indigenous contexts of knowledge production. These reflections were often grounded in concrete institutional examples, such as university-level plagiarism detection software that failed to recognize Persian syntax, or citation recommender systems that excluded Iranian scholarly publications.
At the heart of this theme was a normative claim: that knowledge systems cannot be transplanted wholesale across cultural borders without substantial loss or distortion. Participants who emphasized this concern often articulated a desire for AI systems that were co-designed with local epistemic communities and capable of recognizing culturally embedded categories of knowledge.
However, this theme also intersected with the strategic agency axis (discussed below), as some participants viewed cultural specificity not only as a critique, but as a design imperative—an opportunity to re-engineer AI tools from the ground up.
- 4. Technological Determinism Versus Strategic Agency
A third axis captures the tension between deterministic accounts of AI as a tool of domination and more agency-oriented views that emphasize local appropriation, resistance, and innovation. While many participants expressed profound concerns about the epistemological and political consequences of AI, a notable minority resisted framing AI as an inherently oppressive force. Instead, they positioned it as a malleable tool that could be reconfigured for emancipatory or locally grounded purposes.
Axial coding grouped responses expressing anti-determinism, creative adaptation, and AI opportunity into a thematic cluster that emphasized human agency in shaping technological outcomes. These participants highlighted examples of AI localization efforts in Iran, such as the development of Farsi-language natural language processing models or the use of AI in preserving endangered dialects. Their emphasis was not on rejecting AI, but on reclaiming it—infusing it with local logics, ethical commitments, and educational aims.
The juxtaposition of deterministic and agentic narratives revealed an important cleavage in the dataset. While some participants focused on AI’s structural biases and epistemic dangers, others framed those same conditions as challenges to be strategically navigated. This divergence did not necessarily reflect a difference in empirical experience, but rather a difference in epistemological orientation: between critical structuralism and pragmatic instrumentalism.
- 5. Epistemic Justice and the Ethics of Recognition
The fourth axial theme emerged from participants’ calls for a more inclusive and pluralistic epistemic order—one that recognizes and values diverse knowledge systems. This theme consolidated various discourses around “plurality of knowledges”, “decolonizing algorithms”, and “knowledge dignity” into a more coherent analytic axis concerned with epistemic justice.
Participants invoking this theme often framed their arguments in normative terms, emphasizing the ethical and political necessity of creating AI systems that do not simply extract and reformat knowledge from the periphery, but genuinely include and uplift marginalized epistemologies. Some suggested building AI models informed by Islamic philosophy, Persian mysticism, or regional oral traditions. Others proposed institutional partnerships between Global South universities to develop shared repositories of indigenous knowledge for AI training.
This axis is significant not only for its conceptual richness, but also for its aspirational character. While many of the concerns articulated elsewhere were grounded in critique or caution, the epistemic justice axis introduced a vocabulary of hope and normative possibility. It gestured toward a future in which AI could be a site of epistemic repair rather than erasure.
- 6. Contingency in Sociotechnical Governance
The final axial theme addresses the idea that the impacts of AI are not inevitable, but contingent upon governance, design, and institutional engagement. Participants advancing this theme rejected both technological determinism and simplistic optimism, instead emphasizing the role of sociotechnical arrangements in mediating AI’s epistemic consequences.
Axial coding brought together codes such as “policy interventions”, “AI as sociotechnical assemblage”, and “governability of AI” into a thematic narrative that foregrounded the political economy of AI development. These participants often called for national strategies to regulate AI in culturally appropriate ways, including through educational curricula, funding mechanisms, and institutional oversight. They advocated for participatory design processes that involve educators, sociologists, linguists, and ethicists in shaping AI systems.
This axis draws attention to the institutional and policy dimensions of AI governance and the ways in which societal actors can exert influence over technological development. It also reinforces the idea that epistemic outcomes are not embedded in the technology itself, but emerge from how it is situated and operationalized in particular contexts.
- 7. Integrating the Axes: Toward a Relational Understanding
The five axial themes do not operate in isolation, but intersect and overlap in complex ways. For instance, the recognition of epistemic asymmetry often serves as a precondition for the call for epistemic justice, while the critique of decontextualization often leads directly into discussions of sociotechnical governance. The strategic agency axis functions as a bridge between critique and practice, providing a middle ground between fatalism and futurism.
These intersections were mapped visually using concept maps and matrix displays during the coding process. This allowed the research team to identify which themes co-occurred most frequently and how different narratives clustered around particular conceptual tensions. The axial coding process thus served not only to classify the data, but also to animate it—to reveal the relational structures that undergird participants’ varied engagements with AI.
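As an illustration of the kind of matrix display described above (the study does not specify its concrete tooling), the following sketch tallies how often pairs of axial themes are assigned to the same interview; the participant identifiers and theme assignments are hypothetical.

```python
# Illustrative sketch (hypothetical data): counting pairwise co-occurrence of
# axial themes across interviews to support a matrix display of intersections.
from itertools import combinations
from collections import Counter

themes_by_interview = {
    "P01": {"epistemic_asymmetry", "epistemic_justice"},
    "P02": {"epistemic_asymmetry", "cultural_specificity", "governance"},
    "P03": {"strategic_agency", "governance"},
    "P04": {"epistemic_asymmetry", "epistemic_justice", "strategic_agency"},
}

co_occurrence = Counter()
for themes in themes_by_interview.values():
    # Count each unordered pair of themes once per interview.
    for pair in combinations(sorted(themes), 2):
        co_occurrence[pair] += 1

for (theme_1, theme_2), count in co_occurrence.most_common():
    print(f"{theme_1} x {theme_2}: {count}")
```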
- Selective Coding: Constructing a Core Category and Theoretical Integration
The final phase of the grounded theory analytic process—selective coding—involves the identification and integration of a core category that synthesizes the various axial themes into a unified conceptual framework. Whereas initial and axial coding serve to deconstruct and organize the data, selective coding is a reconstructive process: it seeks to articulate the central storyline that gives coherence and explanatory power to the entire dataset (Strauss & Corbin, 1998; Charmaz, 2006). This stage involves selecting the most theoretically saturated and integrative category, systematically relating it to the subsidiary categories, validating these relationships, and refining them into a parsimonious theoretical model.
In this study, which examined how Iranian sociologists interpret the relationship among AI, knowledge production, and power, the central category that emerged from the data was what may be termed contested epistemic futures. This concept captures the underlying dynamic that organized participants’ views across a wide spectrum—from critique to strategy, from caution to imagination. Contested epistemic futures refers to the ongoing struggle over how knowledge will be produced, legitimized, and governed in a world increasingly mediated by AI systems that are themselves shaped by unequal geopolitical, epistemological, and institutional arrangements.
This core category serves as an anchoring concept that links all five axial themes. It frames the participants’ narratives not as isolated commentaries on technology or education, but as engagements with a broader political and philosophical question: whose knowledge will count in the digital future, and under what conditions?
- 1. Core Category: Contested Epistemic Futures
The notion of contested epistemic futures highlights that AI is not simply a tool of knowledge automation but a site of epistemic struggle. For participants in this study, AI technologies were viewed as both a mirror and a mechanism of global hierarchies: they reflect existing power structures while simultaneously reproducing them through data regimes, algorithmic design, and institutional uptake.
This category is "contested" in at least three senses. First, there is a contest over the epistemic legitimacy of AI-generated knowledge—whether it should be seen as objective, culturally neutral, or ideologically laden. Second, there is a contest over access and agency—who gets to shape the design and deployment of AI tools, and for whose benefit. Third, there is a contest over the future—whether AI will entrench global knowledge hierarchies or whether it can be redirected toward more pluralistic, emancipatory outcomes.
By elevating this concept to the level of the core category, the selective coding phase brings together the analytical work of earlier stages and provides a framework for theorizing AI not only as a technological phenomenon but also as an epistemopolitical one.
- 2. Integration of Axial Themes with the Core Category
Each of the five axial themes contributes a distinct dimension to the understanding of contested epistemic futures.
The first theme, epistemic asymmetry as a structural condition, identifies the entrenched global hierarchies that form the backdrop against which AI technologies are developed and deployed. Participants’ repeated emphasis on the dominance of Anglophone data sets, Western logics of categorization, and Eurocentric frameworks of scientific rationality suggests that AI operates within an epistemic economy that systematically privileges some ways of knowing, while marginalizing others. This structural asymmetry is not incidental, but foundational to the contested nature of AI’s epistemic future.
The second theme, decontextualization versus cultural specificity, reinforces the core category by demonstrating the ways in which the universalizing tendencies of AI clash with locally embedded knowledge systems. Participants viewed AI as potentially corrosive to epistemic diversity, especially when imported without adaptation. Yet, this very tension also revealed the possibility of contestation—of reasserting cultural and contextual specificities against algorithmic standardization.
The third axial theme, technological determinism versus strategic agency, contributes a dialectical layer to the core category. It captures the ambivalence with which participants approached AI: as both a threat and a tool. The minority of interviewees who emphasized agency introduced an important analytic pivot point. Their responses suggest that epistemic futures are not completely predetermined by technological structures, but remain open to negotiation, resistance, and redesign.
The fourth theme, epistemic justice and the ethics of recognition, introduces a normative dimension to the core category. Participants who invoked the concept of cognitive justice challenged the exclusionary logics of existing AI systems and called for the active inclusion of marginalized epistemologies. This theme does not simply critique the status quo, but imagines an alternative future—one in which AI becomes a means of epistemic repair rather than consolidation.
Finally, the fifth theme, contingency in sociotechnical governance, grounds the theoretical model in institutional and political realities. It reminds us that epistemic futures are not only contested in the abstract, but are shaped through concrete practices of regulation, education, and policy-making. Participants viewed the governance of AI as a critical site where epistemic futures could be steered in more equitable directions.
- 3. Theoretical Model: Toward an Epistemopolitical Understanding of AI
Drawing on the integration of axial themes under the core category of contested epistemic futures, a theoretical model begins to emerge. In this model, AI is understood not as an autonomous force, but as a sociotechnical assemblage embedded in geopolitical, institutional, and epistemological structures. It is a medium through which power operates, but also a field in which power can be contested.
The model suggests that AI contributes to knowledge production in three interrelated ways:
- As an epistemic infrastructure: AI systems encode assumptions about what counts as knowledge, who is authorized to produce it, and which logics are valid.
- As a mediator of legitimacy: AI tools increasingly shape how academic work is evaluated, disseminated, and cited, thereby influencing standards of epistemic authority.
- As a site of political struggle: Far from being neutral, AI is a terrain on which competing visions of the future are played out—between centralization and pluralism, imposition and adaptation, exclusion and recognition.
Participants’ narratives, when read through this model, suggest that the future of global knowledge production will depend not only on technological advances, but also on the epistemic and political choices made by societies. Whether AI will deepen existing asymmetries or contribute to a more just and inclusive knowledge order remains an open question—one that is actively being negotiated by educators, researchers, and policymakers alike.
- 4. Methodological Reflection and Theoretical Saturation
Throughout the selective coding process, care was taken to ensure that the core category was not artificially imposed, but emerged organically from the data. Memos, analytic diagrams, and iterative discussions were employed to verify the coherence and explanatory scope of contested epistemic futures. The category was found to be saturated in the sense that it was expressed across different types of respondents, applied to a wide range of issues, and offered conceptual leverage for integrating diverse themes.
Moreover, the theoretical model developed here resonates with and extends existing debates in critical algorithm studies, postcolonial theory, and digital epistemology. It contributes to a growing body of scholarship that treats AI not solely as a technical domain, but as a site of epistemic politics—a contested field where struggles over meaning, legitimacy, and authority are increasingly mediated through digital infrastructures.
- Discussion and Conclusion
This study illuminates the ways in which Iranian sociologists navigate the epistemic and geopolitical dimensions of artificial intelligence (AI) within global knowledge production, revealing a dynamic interplay between critique, agency, and aspiration. Through grounded theory analysis of 32 in-depth interviews, the core category of contested epistemic futures emerges as a lens to understand AI as a site of struggle over knowledge legitimacy, cultural sovereignty, and global hierarchies. The five axial themes—epistemic asymmetry, decontextualization versus cultural specificity, technological determinism versus strategic agency, epistemic justice, and sociotechnical governance—highlight the dual nature of AI as both a perpetuator of Eurocentric dominance and a potential platform for local resistance and innovation. These findings challenge binary narratives of AI as either emancipatory or oppressive, instead portraying it as a socially contingent technology shaped by political, institutional, and cultural choices.
By centering non-Western academic voices, this research enriches critical AI and decolonial epistemology scholarship, underscoring the need for AI governance frameworks that prioritize epistemic plurality and justice. Iranian sociologists’ insights—ranging from critiques of digital colonization to calls for culturally adaptive AI systems—offer a roadmap for reimagining AI as a tool for epistemic repair rather than extraction. The study advocates for policies that foster local AI innovation, enhance digital literacy, and regulate sociotechnical systems to align with cultural values and national sovereignty. Future research should extend these findings through comparative studies across Global South contexts and longitudinal analyses of AI’s evolving epistemic impact. Ultimately, this work affirms that the future of AI-driven knowledge production hinges on inclusive, reflexive, and justice-oriented interventions that amplify marginalized epistemologies and contest global inequities.