Article Type: Research Article
Author
Assistant Professor, International Studies and Collaborations in Science and Technology, Technology Studies Institute, Tehran, Iran.
Introduction
Knowledge serves as the foundation of culture, history, and a wide range of human actions, including the formation of moral judgments, choices, and decisions in both individual and social life. In a democratic system, the fundamental assumption is that citizens participate in civil and political affairs based on this knowledge, and democratic institutions are tasked with facilitating the natural realization of collective consciousness and will in societal and political spheres. However, rapid technological advancements, particularly in artificial intelligence (AI), have introduced an intelligent yet non-human agent that profoundly impacts various aspects of human life, including how individuals understand and interpret the world around them. AI is unique in its ability to mimic and perform functions of human intelligence, such as reasoning, problem-solving, discovering meaning, generalizing, learning from past experiences, and making targeted decisions by identifying hidden patterns, rules, and relationships within data. It can also anticipate future trends, making it a powerful tool for influencing human behavior. With its integration into modern social media platforms, AI has disrupted democratic systems by enabling the production and dissemination of false, fake, and biased information. The presence of AI-driven technologies in social media has intensified a critical issue: the political and civic fate of societies is increasingly determined by the will and algorithms of artificial intelligence rather than the deliberate thought and will of individuals. This undermines the democratic ideal, which relies on the public's calculated and reasoned judgment, not their impulsive or manipulated reactions.
Today, the adverse effects of artificial intelligence (AI) on public goods—such as justice, social equality, human rights, freedom of expression, and, more broadly, democracy and civil engagement—have emerged as pressing challenges for experts and sociologists to understand and address. These issues highlight the growing influence of AI in shaping societal structures and individual behaviors.
This research focuses on a central question: How does a technological factor like AI, operating within the context of social networks, influence people's political and social awareness and, ultimately, determine their behavior? In other words, how do intelligent algorithms shape socio-political beliefs, justify certain choices and decisions, and invalidate others? By addressing this question, the study aims to uncover how AI technology, embedded in social media platforms, undermines human "agency" in receiving, evaluating, and interpreting information. Over time, AI itself becomes the dominant agency, steering public opinion and decision-making processes.
Methodology
To address this question, the research adopts a qualitative approach, collecting and analyzing data from a range of sources. Using the library and documentary method, the study compiles the most recent theoretical and empirical findings from the past eight years. Through content analysis, it identifies and evaluates the most significant cognitive effects of artificial intelligence on political-social agency, supported by the available evidence. Finally, the research offers practical suggestions and solutions aimed at restoring human agency in political and social life, thereby contributing to the reinvigoration of democratic order.
Results and Discussion
Artificial intelligence serves as a powerful tool both for malicious government agencies and for foreign actors seeking to infiltrate and disrupt. It targets political-social agency on social media through various strategies, undermining individuals' independence in decision-making and subverting the rational, internal, and deliberate processes that underpin their activism. Below, we discuss the most significant strategies identified in this study:
Gradual Cognitive Damage:
Smart technologies in the digital space function like a "digital Pavlov," conditioning human behavior and agency through free, diverse, and highly engaging services. This process rapidly captures users' attention, shapes their worldview, and earns their trust. Prolonged exposure to these platforms can make individuals more compliant, susceptible to external persuasion, and prone to uncritical acceptance of information.
Cognitive Hacking Through Micro-Profiling:
AI algorithms collect personal data by identifying emotional vulnerabilities, fears, political preferences, and social interests. Using this information, they tailor and deliver political messages aligned with individuals' personality types. This enables AI to target undecided or swing voters, producing and disseminating specific information to sway their decisions in favor of a particular political agenda.
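The targeting logic described above can be sketched in a few lines. This is a deliberately toy illustration, not the authors' method: all trait names, weights, and messages below are invented for illustration, and real micro-targeting systems use far richer behavioral models.

```python
# Hypothetical sketch of micro-targeted message selection.
# All profiles, messages, and weights are invented for illustration.

def score(message_traits, user_profile):
    """Dot-product affinity between a message's trait weights
    and a user's inferred trait intensities."""
    return sum(weight * user_profile.get(trait, 0.0)
               for trait, weight in message_traits.items())

def pick_message(messages, user_profile):
    """Return the message whose traits best match the profile."""
    return max(messages, key=lambda m: score(m["traits"], user_profile))

messages = [
    {"text": "Immigration threatens your job.", "traits": {"fear": 0.9, "economy": 0.6}},
    {"text": "New jobs plan for your region.",  "traits": {"economy": 0.9, "hope": 0.5}},
]

# Traits inferred from behavioral data (likes, dwell time, shares).
undecided_voter = {"fear": 0.8, "economy": 0.4, "hope": 0.1}

chosen = pick_message(messages, undecided_voter)
print(chosen["text"])  # the fear-oriented message wins for this profile
```

The point of the sketch is that once emotional vulnerabilities are encoded as a profile, message selection reduces to a similarity maximization: the system never needs to "understand" the voter, only to match weights.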
Epistemic Bubbles:
Artificial intelligence categorizes users into homogeneous groups through homogenizing algorithms, delivering information that aligns with their existing intellectual and value frameworks. While this may seem like personalized content, it effectively censors or restricts access to diverse perspectives. Users become trapped in "filter bubbles" or information caves, where they only encounter echoes of their own beliefs and approved thoughts, limiting exposure to alternative viewpoints.
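A minimal sketch of this homogenizing dynamic, assuming a feed that only surfaces items sharing a tag with the user's consumption history (all items and tags below are invented for illustration):

```python
# Hypothetical "filter bubble" sketch: a recommender that only serves
# items whose tag matches something the user has already consumed.
# Catalog items and tags are invented for illustration.

catalog = [
    {"id": 1, "tag": "party_A"},
    {"id": 2, "tag": "party_A"},
    {"id": 3, "tag": "party_B"},
    {"id": 4, "tag": "neutral"},
]

def recommend(history, catalog):
    """Return only catalog items matching tags the user has seen."""
    seen_tags = {item["tag"] for item in history}
    return [item for item in catalog if item["tag"] in seen_tags]

history = [{"id": 1, "tag": "party_A"}]
feed = recommend(history, catalog)
print([item["id"] for item in feed])  # party_B and neutral items never appear
```

Even this crude rule reproduces the "information cave": each recommendation reinforces the history that generated it, so the set of reachable viewpoints can only shrink over time.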
Erosion of Individual Knowledge and Trust:
AI can undermine trust in an individual's cognitive capacities, gradually eroding their reasoning and critical thinking skills. Over-reliance on AI-driven information can lead individuals to forget that agency lies in their own knowledge and awareness, with technology serving merely as a tool for efficiency rather than a replacement for independent thought.
Amplification of Emotional Knowledge:
Exposure to a high volume of false, hateful, or fabricated news and information can cause emotional behaviors to spill over into offline actions, fueling real-world conflict. In this environment, knowledge becomes intertwined with exaggerated emotion, undermining the critical rationality and progressive politics essential to a well-informed and balanced society.

Creation of False Epistemic Consensus and Minority Dominance:
Certain AI mechanisms, such as astroturfing, amplify fringe views until they appear mainstream, dictating values and choices to the majority. This pushes the worldview that serves the interests of most people into the background or dismantles it entirely. The result is a disoriented society led by unstable and only superficially competent actors. This cognitive disorder is further exacerbated by social and political bots and by coordinated disinformation campaigns, which manipulate public perception and decision-making.
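The astroturfing mechanism can be illustrated with a toy engagement-ranked feed. All counts below are invented; the sketch only shows the arithmetic by which a small coordinated cohort outranks an organic majority.

```python
# Hypothetical astroturfing sketch: a feed ranked purely by engagement
# lets a small coordinated bot cohort make a fringe claim look like
# the consensus. All counts are invented for illustration.

posts = {
    "fringe_claim":  {"organic_likes": 40},
    "majority_view": {"organic_likes": 300},
}

BOT_COHORT = 500  # coordinated accounts all boosting one post

def ranked_feed(posts, boosted=None, bots=0):
    """Rank post names by total engagement, adding bot likes to one post."""
    totals = {
        name: p["organic_likes"] + (bots if name == boosted else 0)
        for name, p in posts.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

print(ranked_feed(posts))                                        # organic ranking
print(ranked_feed(posts, boosted="fringe_claim", bots=BOT_COHORT))  # astroturfed ranking
```

Because the ranking function sees only aggregate engagement, it cannot distinguish 500 coordinated accounts from 500 citizens, which is precisely the false-consensus effect the text describes.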
Creating Epistemic Distance from Socio-Political Reality:
Epistemic distance arises when information technologies create a gap between reality and cognitive agency. In this process, data passes through multiple intermediaries before reaching individuals, creating a space uniquely suited for constructing and maintaining a fabricated world. AI exacerbates this issue by continuously producing and republishing false content, such as deepfake videos, targeting rival political groups. The situation becomes particularly dire when AI-generated content is so close to the truth that distinguishing it from reality becomes nearly impossible. This blurring of lines between fact and fiction further erodes trust in information and undermines the ability of individuals to engage meaningfully with socio-political realities.
Conclusion
New media technologies, with their complex technical mechanisms, possess the ability to control and direct people's will and choices, shaping their understanding and knowledge of social and political realities—often without their awareness. This study observed that artificial intelligence manipulates human agency through political bots, echo chambers, deepfakes, and targeted misinformation, effectively controlling individuals' thinking, reasoning, and decision-making in political matters using various strategies. To address these challenges, the most critical and effective solution is to strengthen human agency by fostering analytical and critical thinking skills. Equipping individuals with the ability to discern and evaluate information independently can mitigate the influence of AI-driven manipulation.