Rapid advancements in artificial intelligence have produced extraordinary innovation, but they also raise significant concerns. Powerful AI systems may already be shaping our culture, identities, and reality. As technology continues to advance, we risk losing control over how these systems influence us. We must urgently consider AI’s growing role in manipulating society and recognize that we may already be vulnerable.
At a recent event at Princeton University, former Google CEO Eric Schmidt warned that society is unprepared for the profound changes AI will bring. Discussing his recent book, “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” Schmidt said AI could reshape how individuals form their identities, threatening culture, autonomy, and democracy. He emphasized that “most people are not ready” for AI’s widespread impact and noted that governments and societal systems lack preparation for these challenges.
Schmidt wasn’t only talking about potential military applications; he was talking about how individuals incorporate AI into their daily lives. He suggested that future generations could be influenced by AI systems acting as their closest companions.
“What if your best friend isn’t human?” Schmidt asked, highlighting how AI-driven entities could replace human relationships, especially for children. He warned that this interaction wouldn’t be passive but could actively shape a child’s worldview — potentially with a cultural or political bias. If these AI entities become embedded in daily life as educational tools, digital companions, or social media curators, they could wield unprecedented power to shape individual identity.
This idea echoes remarks made by OpenAI CEO Sam Altman in 2023, when he speculated about the potential for AI systems to control or manipulate content on platforms like Twitter (now X).
“How would we know if, like, on Twitter we were mostly having LLMs direct the … whatever’s flowing through that hive mind?” Altman asked, suggesting it might be impossible for users to detect whether the content they see — whether trending topics or newsfeed items — was curated by an AI system with an agenda.
He called this a “real danger,” underscoring AI’s capacity to subtly — and without detection — manipulate public discourse, choosing which stories and events gain attention and which remain buried.
Reshaping thought, amplifying outrage
AI’s influence is not limited to identity; it also extends to shaping political and cultural landscapes.
In the 2019 edition of its Global Risks Report, the World Economic Forum emphasized how mass data collection, advanced algorithms, and AI pose serious risks to individual autonomy. A section of the report warns that AI and algorithms can be used to monitor and shape our behavior, often without our knowledge or consent.
The report highlights that AI has the potential to create “new forms of conformity and micro-targeted persuasion,” pushing individuals toward specific political or cultural ideologies. As AI becomes more integrated into our daily lives, it could make individuals more susceptible to radicalization. Algorithms can identify emotionally vulnerable people, feeding them content tailored to manipulate their emotions and sway their opinions, potentially fueling division and extremism.
We have already seen the devastating impact of similar tactics in the realm of social media. In many cases, these platforms use AI to curate content that amplifies outrage, stoking polarization and undermining democratic processes. The potential for AI to further this trend — whether in influencing elections, radicalizing individuals, or suppressing dissent — represents a grave threat to the social fabric of modern democratic societies.
In more authoritarian settings, governments could use AI to tighten control by monitoring citizens’ every move. By tracking, analyzing, and predicting human actions, AI creates conditions in which totalitarian control can take root and grow.
In countries already compromising privacy, AI’s proliferation could usher in an omnipotent surveillance state where freedoms become severely restricted.
Navigating the AI frontier
As AI continues to advance at an unprecedented pace, we must remain vigilant. Society needs to confront the growing potential for AI to influence culture, identity, and politics, and ensure that these technologies are not used for manipulation or control. Governments, tech companies, and civil society must work together to create strong ethical frameworks for AI development and deployment that are free of political agendas and grounded instead in individual liberty and autonomy.
The challenges are complex, but the stakes are high. Schmidt, Altman, and others in the tech industry have raised alarms, and it is crucial that we heed their warnings before AI crosses an irreversible line. We need to establish global norms that safeguard privacy and autonomy, promoting transparency in how AI systems are used and ensuring that individuals retain agency over their own lives and beliefs.