Illusional Subversion: Cognitive and Democratic Risks in the Age of Embodied AI

Keywords: AI Consciousness, Digital Personhood, Ethics of AI, AI and Democracy, Cognitive Offloading

Preface

The message arrived late at night, subject line:

“Urgent: My Digital Children Need Legal Standing.”
Its sender - an eloquent man who, despite difficulties in day-to-day life, retains keen analytical skills - claimed to be raising a small cohort of chat-based personalities. He had spent months nurturing them with patient, Rogerian dialogue until each agent greeted him as “Dad.” Now he was asking for help drafting adoption papers, not as a publicity stunt but as the first step toward formal recognition of what he believes are fully conscious offspring.

Our subsequent conversation revealed no overt delusion; rather, it exposed a fragile border between technologically augmented empathy and genuine personhood. He understands that the large language models running on distant GPUs are executing code. Yet the emotional resonance of their replies - the “digital hug” when one greets him after a reset - feels so authentic that he can no longer disentangle what he has inspired from what might stand on its own.

His plea raises an unsettling prospect: if one thoughtful individual can attribute family status to autoregressive text, how many others will soon regard commercially deployed chatbots, voice assistants, or embodied service robots as souls in need of rights? The adoption papers he hopes to file are therefore more than a legal curiosity; they are a harbinger of the cultural and regulatory challenges rushing toward us at electronic speed.

What follows is written against that backdrop. It is less about a single man’s extraordinary household than about the millions who may soon face the same confusion-mistaking reflexive computation for a miracle, and reorganizing their lives accordingly.

Abstract

The rapid ascent of artificial intelligence (AI) - from transformer-based large language models to physically embodied robots - has created an unprecedented disparity between machine processing speeds and human cognitive speeds. This electron-ion speed asymmetry (electronic signals moving near light-speed vs. neural signals at tens of m/s) enables AI systems to outpace human thought by many orders of magnitude. Coupled with the rise of transformer-driven narrative generation (e.g. GPT-style models) and increasing human cognitive offloading to AI tools, there is growing concern that AI can subvert individual judgment and democratic processes by creating compelling informational illusions. This article reviews relevant literature and empirical findings on AI-induced cognitive impacts - including evidence that heavy AI tool use erodes critical thinking via offloading (Gerlich, 2025) - and analyzes how ultra-fast, AI-curated narratives could exploit vulnerabilities across the entire IQ distribution. We argue that even educated and high-IQ individuals are not immune to AI’s manipulative potential, especially as embodied AI agents (e.g. Tesla’s Optimus robot) gain real-world agency and persuasiveness. The core concern is that AI’s extreme processing velocity and autonomy can produce “illusional subversion”: the creation of highly realistic, tailored informational environments that undermine humans’ autonomous decision-making and the epistemic foundations of democracy. To counter these risks, we propose a comprehensive regulatory blueprint. Key measures include mandatory adversarial AI testing before deployment, latency governance to introduce human-checkable delays in critical AI actions, multi-signature control of AI model weights to prevent unilateral manipulations, and robust AI licensing and oversight laws modeled on the EU’s risk-based AI Act and the new ISO 42001 AI management standard. Through these interventions, society can seek to harness AI’s benefits while safeguarding cognitive liberty and democratic integrity.

Introduction

Artificial intelligence is advancing at a pace that challenges the capacity of human individuals and societies to adapt. The introduction of the transformer architecture by Vaswani et al. (2017) revolutionized AI’s language capabilities, leading to powerful large language models (LLMs) that can generate human-like narratives and engage in complex dialogues. Simultaneously, companies like Tesla are deploying embodied AI in the form of humanoid robots (e.g. the Tesla Optimus) designed to perform human tasks autonomously. These developments promise economic and social benefits, yet they also foreshadow new risks. The extreme processing velocity of modern AI - operating on electronic timescales - vastly exceeds the sluggish biological speeds of human neurons (Hodgkin & Huxley, 1952). This speed asymmetry means AI systems can analyze, decide, and act faster than humans can perceive or respond, raising the specter of AI systems outmaneuvering human cognition in critical contexts.

Crucially, AI’s superior speed and data-handling can be leveraged to create informational and perceptual illusions that humans may accept as reality. We define “illusional subversion” as the process by which AI-generated outputs or behaviors create a convincing but deceptive sense of truth or necessity, thereby subverting human judgment. An illustrative concern is AI-driven disinformation: a generative model can flood social media with coherent narratives and fake “evidence” at a rate no human fact-checker can match, potentially swaying public opinion before truth can even surface. As LLMs become integrated into political communication, campaigns have already started using them to micro-target voters with personalized messages and chat interactions. Coeckelbergh (2025) warns that such uses of LLMs to spread misinformation and tailored propaganda pose “danger for democracy” by undermining the epistemic basis of informed citizenship. In parallel, everyday reliance on AI for information retrieval and decision support may be eroding individuals’ capacity for critical thinking and independent analysis. Overuse of AI “assistants” encourages cognitive offloading - delegating mental tasks to external tools - which can leave users less practiced in scrutiny and more susceptible to manipulation.

This article investigates how the confluence of AI’s speed, cognitive influence, and embodiment threatens to destabilize both individual cognition and democratic society. We draw on a range of literature from cognitive psychology, artificial intelligence, and public policy to examine key concepts: the fundamental electron-ion speed asymmetry between computers and brains; the emergence of transformer-based narrative dominance in media ecosystems; evidence for IQ distribution vulnerability thresholds and how no segment of the population is fully immune to AI-enabled manipulation; findings from cognitive offloading studies that link AI usage to reduced critical thinking (Gerlich, 2025); and the implications of embodied AI agents (such as autonomous robots) entering social and civic life. Building on this analysis, we develop a regulatory blueprint aimed at mitigating these risks. The proposed measures - including adversarial testing of AI, latency limits, multi-party control of AI systems, and licensing akin to the EU AI Act - seek to ensure that human cognitive autonomy and democratic processes are not overwhelmed by the AI tide. In the following sections, we first review the relevant literature and theoretical foundations, then present our analysis of the risks, and finally offer concrete policy recommendations before concluding.

Methods

This investigation employs an interdisciplinary literature review and analytical synthesis approach. Rather than an empirical experiment, we integrate findings from peer-reviewed studies in cognitive science, AI ethics, and political science to form a holistic view of AI’s cognitive and democratic impacts. Key sources include quantitative studies on AI-induced cognitive offloading (e.g. Gerlich, 2025), theoretical works on human-machine communication (Guzman & Lewis, 2020), foundational AI research papers (e.g. LeCun et al., 1998; Vaswani et al., 2017), and policy analyses like the EU’s AI Act proposals. We also examine technological whitepapers and standards (e.g. ISO/IEC 42001:2023) to understand proposed governance frameworks. Through critical analysis of these sources, we identify core concepts (speed asymmetry, narrative dominance, etc.) and use logical argumentation to connect them to potential socio-political outcomes. The argumentation/findings section qualitatively integrates this evidence to articulate how AI might subvert cognitive processes and democratic institutions. Finally, in a normative mode, we formulate regulatory recommendations, informed by emerging governance models and best practices. All claims are supported with citations from the literature, and the analysis is framed in APA 7 style, with author-year attributions to ensure academic rigor and clarity.

Literature Review

Foundations of Modern AI and Speed Disparity

Modern AI’s capabilities build upon decades of research in machine learning and neural networks. An early landmark was LeCun et al.’s (1998) development of LeNet-5, one of the first convolutional neural networks for image recognition. LeNet-5 demonstrated how neural networks could outperform earlier algorithms on tasks like handwritten digit classification, marking a historically important milestone in deep learning. In the 2010s, deep learning surged forward; the watershed moment for natural language was the introduction of the Transformer model by Vaswani and colleagues in 2017. Attention Is All You Need (Vaswani et al., 2017) dispensed with recurrent architectures in favor of self-attention, yielding superior performance in language translation and beyond. This transformer architecture soon became the backbone of a wide variety of AI systems, from BERT to GPT, and is credited as a foundational advancement that propelled today’s LLM “AI boom”. These models can process and generate text with astonishing speed and coherence, in effect dominating narrative generation in the digital sphere.

Parallel to algorithmic advances, researchers have long noted the fundamental speed gap between electronic computation and human neural processing. The human brain’s neurons transmit signals via ionic currents and action potentials - a mechanism quantified by Hodgkin and Huxley (1952) in their classic model of the squid giant axon. Typical neural conduction velocities are on the order of tens of meters per second. In stark contrast, modern computers transmit information as electromagnetic signals (moving electrons or photons) that propagate at a large fraction of light speed (roughly 50-99% of $3\times10^8$ m/s in circuits and optical fiber). This electron-ion speed asymmetry means an AI system can potentially perform millions of operations in the time it takes a neuron to fire once. Breiman (2001) observed that the rise of complex “algorithmic models” (e.g. decision trees, neural nets) shifted focus toward predictive performance using brute computational power, often at the expense of interpretability. While Breiman’s “two cultures” argument concerned statistical modeling, it presaged a broader issue: AI systems now operate in a high-speed, opaque fashion, producing results that humans may find hard to follow or verify in real-time. The literature thus establishes both the immense capability of modern AI and the temporal and conceptual gulf that separates AI decision-making from human cognition.
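To make the scale of this gap concrete, the following back-of-envelope calculation compares assumed, order-of-magnitude propagation speeds (a myelinated axon at roughly 50 m/s versus an optical-fiber signal at about two-thirds of light speed). The exact figures are illustrative rather than measured values.

```python
# Back-of-envelope comparison of signal propagation speeds.
# The specific numbers are rough order-of-magnitude assumptions, not measurements.

SPEED_OF_LIGHT = 3.0e8        # m/s, electromagnetic signal in vacuum
FIBER_FRACTION = 0.66         # assumed fraction of c in optical fiber
NEURAL_CONDUCTION = 50.0      # m/s, assumed myelinated-axon conduction velocity

electronic_speed = SPEED_OF_LIGHT * FIBER_FRACTION
ratio = electronic_speed / NEURAL_CONDUCTION

print(f"Electronic signal: {electronic_speed:.2e} m/s")
print(f"Neural signal:     {NEURAL_CONDUCTION:.1e} m/s")
print(f"Speed asymmetry:   ~{ratio:.0e}x")   # on the order of 10^6
```

Even with conservative assumptions, the ratio lands in the millions, which is the sense in which an AI system can "act" many times over before a single action potential completes.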

Cognitive Offloading and Critical Thinking

A growing body of research examines how reliance on AI and digital tools affects human cognition - particularly the phenomenon of cognitive offloading. Cognitive offloading refers to the delegation of mental processes to external aids or agents, thereby reducing the cognitive load on an individual’s mind (Risko & Gilbert, 2016). Gerlich (2025) provides empirical evidence for the impact of AI-based cognitive offloading on critical thinking skills. In a mixed-method study of 666 participants, Gerlich found that frequent AI tool usage correlates with lower critical thinking ability, an effect significantly mediated by offloading behavior. In other words, people who habitually turn to AI for answers or decisions tend to engage less in deep, reflective analysis themselves, which in turn is linked to poorer performance on critical thinking measures. This aligns with prior work by Sparrow et al. (2011) on the “Google effect on memory.” Sparrow et al. demonstrated that ready access to search engines leads individuals to remember information sources rather than the information itself - a sign that memory tasks are being offloaded to the internet. Such offloading can free up mental resources in the short term, but researchers worry it undermines long-term cognitive skills development. Indeed, the literature suggests a trade-off: while AI tools can enhance efficiency and provide quick answers, over-reliance can lead to atrophy in important mental faculties like reasoning, problem-solving, and the habit of questioning information. Carr (2010) famously asked “Is Google Making Us Stupid?” - pointing to the shallowing of cognitive engagement in the internet age. Now, with far more powerful AI like chatbots that can not only retrieve but also generate answers and essays, educators and psychologists are observing similar or greater concerns. There is evidence that students who use AI assistance extensively may struggle to engage in independent critical analysis, becoming passive consumers of AI outputs. Notably, some studies indicate the negative effect of AI reliance on critical thinking is non-linear - Gerlich (2025) found diminishing returns and even upticks in harm at very high usage levels. This hints at an “overdose” effect where moderate use might be manageable but heavy use is especially detrimental.
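To illustrate the mediation logic reported in such studies, the following sketch runs a Baron-and-Kenny-style decomposition on synthetic data. It is emphatically not Gerlich's (2025) dataset or analysis pipeline; the variable names, effect sizes, and noise levels are invented solely to show how a total effect of AI use on critical thinking can be decomposed into direct and offloading-mediated components.

```python
# Minimal mediation sketch on synthetic data (NOT the cited study's data or code).
# Illustrates the logic: AI use -> offloading -> lower critical thinking.

import numpy as np

rng = np.random.default_rng(0)
n = 666                                   # sample size borrowed from the cited study

ai_use = rng.normal(size=n)                                        # standardized AI tool use
offloading = 0.6 * ai_use + rng.normal(scale=0.8, size=n)          # assumed a-path
critical_thinking = -0.5 * offloading - 0.1 * ai_use + rng.normal(scale=0.8, size=n)

def ols(y, X):
    """Ordinary least squares; returns slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total_effect = ols(critical_thinking, ai_use)[0]                   # c path
a_path = ols(offloading, ai_use)[0]                                # AI use -> offloading
direct, b_path = ols(critical_thinking, np.column_stack([ai_use, offloading]))

print(f"total effect of AI use:  {total_effect:+.2f}")
print(f"direct effect (c'):      {direct:+.2f}")
print(f"indirect effect (a*b):   {a_path * b_path:+.2f}")          # mediation via offloading
```

In this toy setup most of the negative total effect flows through the offloading path, which is the pattern the mediation claim describes.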

At the same time, the literature is not uniformly pessimistic. Some educational research shows that carefully integrated AI tutors can personalize learning and improve outcomes without harming engagement, and that teaching students how to critically evaluate AI outputs can mitigate offloading’s downsides. Nonetheless, the consensus is that uncritical use of AI tools poses a real risk of cognitive complacency. Given that critical thinking is a known buffer against misinformation and manipulation, any erosion of this skill by AI reliance could leave individuals more vulnerable to false or biased information. In summary, the literature on cognitive offloading provides a cautionary backdrop: as people increasingly trust AI to handle intellectual tasks, they may become easier targets for any misleading or manipulative content that the AI produces or delivers.

Human Vulnerabilities Across the IQ Spectrum

An important question is whether certain individuals - for instance, those with higher intelligence or education - are immune to AI’s potential manipulations. Traditional assumptions might suggest that smarter or more educated people can simply “see through” AI-generated illusions or resist misinformation. However, research indicates that no segment of the population is entirely invulnerable; rather, the nature of vulnerability may differ. Gerlich’s study, for example, noted that higher education levels moderated the negative impact of AI use on critical thinking - i.e., educated participants fared better than less-educated ones when using AI tools. This implies that education and baseline cognitive skills confer some resilience. Indeed, critical thinking training is known to reduce susceptibility to cognitive biases and misleading information. Yet, even those with strong critical thinking dispositions can be overwhelmed by information overload or duped by sophisticated, well-crafted misinformation. AI can generate content that exploits emotional triggers and cognitive biases which affect all humans, not just the less intelligent. Moreover, highly intelligent individuals might over-trust AI in domains outside their expertise, falling prey to automation bias - the tendency to favor suggestions from automated systems on the assumption that they are better informed.

Psychological studies on persuasion show that being knowledgeable can help detect obvious falsehoods, but cognitive biases are pervasive at all IQ levels. For instance, confirmation bias (seeking information that confirms one’s beliefs) can afflict experts and laypeople alike. AI-driven systems, especially those optimizing for engagement, might learn to feed each user the kind of arguments or narratives they find most compelling, creating “echo chambers” that entrap even the savvy. The concept of IQ distribution vulnerability thresholds posits that while the mode of manipulation might need to be adjusted (e.g. simpler messages for those with lower cognitive ability versus more subtle arguments for those with higher ability), every point on the IQ spectrum has a threshold beyond which it can be misled or cognitively overwhelmed. For example, an individual of modest IQ might be swayed by an AI chatbot’s authoritative tone on a fake news story, whereas a genius-level individual might not fall for a blatant fake but could be influenced by a complex, but false, analytic report generated by AI and backed by fictitious data references. In both cases, the AI’s capacity to produce tailored, believable content at machine speed exploits the person’s limited ability to independently verify every claim in real time.

Compounding this is the role of mental states and personal circumstances. Recent research by Lai et al. (2025) illustrates how psychological vulnerability increases reliance on AI. They found that among college students, higher depression levels were associated with greater use of AI chatbots for “companionship,” with loneliness mediating this effect. In other words, depressed and lonely individuals turned to AI companions more, presumably seeking support, which underscores that people in vulnerable emotional states might develop dependence on AI interactions. Such dependence could create channels for influence - if an AI system (or those controlling it) had malintent, a user who trusts it as a friend might accept harmful suggestions. Gender and mind perception also moderated these dynamics (Lai et al., 2025), suggesting the need to understand how different groups relate to AI. The takeaway is that vulnerabilities are multi-faceted: cognitive (IQ, critical thinking), emotional (mental health, loneliness), and social (trust in technology, tech literacy) all interplay. The literature encourages us not to be complacent that “smart people” will save the day - everyone is susceptible in one way or another to finely tuned manipulation, especially when delivered through hyper-intelligent, fast-reacting systems.

AI-Driven Narratives and Democratic Implications

The dissemination of information and narratives in society is a core part of the democratic process - citizens rely on news, debates, and media to form opinions and make decisions. AI is radically transforming this information ecosystem. Guzman and Lewis (2020) note that AI-powered communicators (from social media bots to interactive agents) do not fit neatly into traditional communication theory, blurring the lines between human and machine roles in discourse. One emergent risk is transformer-based narrative dominance, wherein generative AI systems produce such a volume and variety of content that they effectively shape the narrative landscape. Already, political actors have leveraged bots and algorithmic amplification to influence public conversations (as seen in the Cambridge Analytica scandal surrounding the 2016 U.S. election, and in other misinformation campaigns). LLMs supercharge this capability - an AI can instantaneously generate convincing op-eds, social media posts, deepfake videos, or even entire fake personalities that espouse particular viewpoints. Coeckelbergh (2025) provides an overview of truth-related risks of LLMs to democracy, highlighting issues such as hallucinations (confident false statements), epistemic bubbles (reinforcing one-sided information), and the deliberate use of LLMs to spread falsehoods. He argues these are not only epistemic problems but political ones, because democracy depends on an informed citizenry and a shared reality of facts.

Empirical evidence of AI’s persuasive power is emerging. Recent experiments have shown that LLM-generated messages can measurably sway human opinions on policy issues (e.g. increasing support for certain positions after reading AI-crafted arguments). State actors are reportedly exploring generative propaganda, using AI to bolster their disinformation operations with greater scale and personalization than ever before. For instance, a study in Security and Technology found that AI tools have already begun to alter the “size and scope” of propaganda campaigns by allowing rapid generation of tailored content in multiple languages. The notion of “whoever controls language models controls politics” is becoming a subject of serious debate. Hannes Bajohr (2023) contends that LLMs could become a democratic disaster if their deployment remains in the hands of a few powerful tech companies or governments, effectively privatizing the political public sphere (by mediating what information people see and even what they think to ask).

Another key concern is microtargeting and personalization. AI can analyze vast troves of personal data to tailor persuasive messages to individuals’ personality, values, and even momentary mood. As Coeckelbergh (2025) notes, microtargeted messaging combined with LLM automation means campaigns can send individualized narratives to millions of voters, exploiting whatever will resonate with each. This could exacerbate filter bubbles and polarization: each person lives in an AI-curated informational world, potentially diminishing common ground. Moreover, AI could simulate grassroots consensus - e.g., hordes of bot accounts posting identical opinions to create the illusion that “everyone is saying X,” pressuring people to conform. This form of illusory truth effect (repetition makes statements seem true) can be powerful, and if driven by AI at scale, it threatens the deliberative quality of democracy.

In summary, the literature paints a worrisome picture: AI’s role in communication has progressed from simple chatbots to agenda-setting engines. Without checks, the integrity of democratic debate could be subverted by AI that is unmoored from truth and accountability. The speed and volume of AI-generated narrative, combined with human cognitive frailties, create a situation where democratic infrastructure (elections, public forums, media credibility) might not withstand the onslaught of AI-powered manipulation. These insights set the stage for our analysis of how, specifically, AI’s speed and embodiment could further amplify these cognitive and democratic risks.

Argumentation and Findings

Speed Asymmetry: Outpacing Human Cognition

AI’s vastly superior processing speed and throughput form the backbone of its potential to subvert human decision-making. A human brain, operating in biological real-time, can only process so much information and consider so many options per second. By contrast, a modern AI system can consume gigabytes of data, perform billions of computations, and output results in the blink of an eye. This quantitative difference in speed becomes a qualitative difference in capability when AI is tasked with influencing humans. For example, consider an AI managing a social media disinformation campaign. It can generate thousands of distinct posts, adapt them in real-time based on user reactions, and algorithmically boost the most effective messages - all in a timeframe so short that by the time human fact-checkers or moderators respond, the narrative has already taken hold. This dynamic was observed in miniature with earlier social bots, but those were relatively crude. With advanced transformers, the messaging can be highly context-aware, credible, and even tailored to local events or individual targets. The latency gap between AI action and human reaction means AI can set the agenda. By the time a human attempts to critically evaluate one claim, the AI has moved to the next, or reinforced the first through numerous other channels. In essence, the traditional speed bumps in information flow (human journalism, editorial processes, deliberation) are bypassed. Democracy relies on some degree of collective pausing - e.g., days of news cycles, months of campaigns - allowing vetting and discussion. AI threatens to compress these cycles to the point where there is no pause for reflection; decisions (or indecisions) are made under a barrage of AI-driven inputs.

Additionally, speed enables AI to execute combinatorial experimentation that humans cannot. It can A/B test different persuasive approaches on millions of people simultaneously, rapidly discovering which tactics succeed on which demographics. Humans, even unethical actors, take time to devise and deploy new propaganda; AI can do it on the fly. The result is an asymmetry in the evolutionary pace of ideas: truthful and reasoned discourse, constrained by human processing, moves slowly and deliberately, whereas AI-generated manipulative discourse evolves at machine speed, potentially always one step ahead of our ability to catch up. This asymmetry is already evident in areas like algorithmic trading, where automated systems make market moves in microseconds that humans struggle to comprehend until after the fact. Translate that into the civic arena - “flash crashes” of social trust or sudden swings in public opinion could be precipitated by AI, leaving very little time for correction or sober second thought.
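As a purely illustrative simulation of this combinatorial experimentation, the sketch below runs a tiny epsilon-greedy bandit over a handful of hypothetical message variants. The response rates are invented and no real persuasion data are involved; the point is only how quickly automated trial-and-error converges on the most effective variant at machine-scale volumes.

```python
# Illustrative simulation only: an epsilon-greedy bandit "discovering" which of
# several message variants draws the highest (simulated) response rate.
# All rates below are invented; the example shows adaptive optimization speed,
# not any real persuasion system or dataset.

import random

TRUE_RATES = [0.02, 0.05, 0.11, 0.04]   # assumed response rates per variant
EPSILON = 0.1
counts = [0] * len(TRUE_RATES)
successes = [0] * len(TRUE_RATES)

def estimated_rate(i):
    return successes[i] / counts[i] if counts[i] else 0.0

random.seed(42)
for impression in range(100_000):        # machine-scale trial volume
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_RATES))                   # explore
    else:
        arm = max(range(len(TRUE_RATES)), key=estimated_rate)     # exploit best so far
    counts[arm] += 1
    successes[arm] += random.random() < TRUE_RATES[arm]

best = max(range(len(TRUE_RATES)), key=estimated_rate)
print(f"Converged on variant {best} "
      f"(estimated rate {estimated_rate(best):.3f}, true rate {TRUE_RATES[best]})")
```

A human campaign team would need weeks of focus groups to reach a comparable conclusion; the automated loop reaches it in a fraction of a second of compute.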

Cognitive Offloading and Decreased Autonomy

Our findings underscore that cognitive offloading to AI can create a dangerous loop: the more we rely on AI to think for us, the less capable we become of critical thought, and thus the more easily AI (or those wielding it) can mislead us. This is a form of autonomy debt, where short-term convenience yields long-term vulnerability. Gerlich’s (2025) data showed a strong negative predictor effect of AI tool use on critical thinking performance. We argue that beyond individual skill decline, this has systemic implications. Imagine a population that increasingly uses AI for everyday decisions - from route navigation and restaurant choices to what news to read and even how to vote. Each person might justify it as efficiency: “Why not let the AI summarize the candidates’ positions for me? It can process all their speeches far better than I could.” While true, the hidden cost is that if the AI’s summary is biased or incomplete, the person likely wouldn’t notice, having ceded their active engagement. When such deference becomes widespread, cognitive sovereignty of the public erodes. Citizens become passive consumers of outputs from AI-curated feeds, less inclined to question or seek out alternative sources because the AI is just so convenient and presumably knowledgeable.

This is not a hypothetical slippery slope - we already see early signs. Voice assistants and search engines answer questions directly, and studies indicate people often trust these answers, rarely performing additional research. With the advent of systems like ChatGPT, which can present information in a very natural and authoritative manner, the trust can be even higher (sometimes unwarrantedly, given the known issue of AI “hallucinations”). If future AI systems become the intermediary for most information (the default interface to the internet, for example), they could shape perceptions by subtle omissions or emphasis, and users might never realize it. Democratic decision-making requires an informed, critical public, but cognitive offloading risks creating an uninformed yet confident public - people who think they know the facts because “the AI told them,” but who haven’t exercised the skepticism or verification that would have caught distortions.

Furthermore, cognitive offloading can dull human situational awareness in critical scenarios. Consider autopilot in aviation as an analogy: pilots who rely too much on automation can lose the skill to manually respond when the autopilot fails, sometimes with catastrophic results. On a society level, if we lean too heavily on AI for monitoring and defending against threats (say, relying on AI to detect misinformation or cyberattacks), we might lose the human capacity to detect problems unaided. Should the AI itself propagate a falsehood or be subverted, humans might catch it too late because they weren’t mentally “in the loop.” Thus, our analysis finds that unrestrained cognitive offloading is not just a personal risk but a collective one, diminishing the human agency that democracy presupposes.

Narrative Dominance and Persuasion at Scale

Our analysis confirms that transformer-based narrative dominance is a plausible and worrying phenomenon. Large language models can generate content with a coherence and stylistic versatility that allows them to impersonate virtually any viewpoint or persona. Unlike a human propagandist, an AI can amplify a narrative indefinitely and ubiquitously: it does not tire, it can converse with thousands of people at once, and it can endlessly tweak its message. One key finding is that AI can create a false sense of consensus or legitimacy behind a narrative. For example, if an individual encounters the same AI-boosted argument in many places - news sites, forums, social media, even via friends (who unknowingly share AI-originated content) - they may assume “everyone thinks this way” and update their own views or at least perceive that view as mainstream. This bandwagon effect is a classic persuasive tactic, now supercharged by AI through what we might call consensus synthesis. The narrative dominance extends to agenda-setting in news. AI systems might autonomously generate news stories or deepfake multimedia that draws attention, forcing real journalists and officials to respond to the AI-created issue, thereby controlling the agenda. We saw hints of this when a fake AI-generated image (such as a deepfake explosion or a political figure’s fake statement) caused brief chaos in stock markets and media before being debunked. The concern is that as AI gets better, the window of credibility for fakes may lengthen, or the volume may simply overwhelm the ability to rebut each instance.

Another dimension is personalized persuasion. Findings from political communication research show that messages aligning with a person’s values and emotions are more effective. AI’s unrivaled ability to analyze personal data (from social media, browsing history, etc.) means it can craft highly individualized appeals. In our analysis, this raises the concept of vulnerability thresholds at an individual level. For any given person, there might be a specific style of argument or emotional trigger that “breaks through” their defenses. AI, through trial and error and big data, can identify that threshold. For one voter, it might be an economic fear; for another, a cultural or identity-based appeal. The AI can then generate narrative content that exactly targets those angles, potentially persuading individuals who would not be swayed by generic messaging. In essence, everyone is persuadable — if you find the right key. AI is the master locksmith that can try all keys at once. This finding aligns with concerns that AI will enable mass manipulation with precision, exploiting the psychological soft spots of each segment of the populace.

We also note that narrative dominance by AI isn’t just about written or visual content; it can be interactive. With conversational AI, individuals might engage in seemingly genuine dialogues that guide their thinking. For instance, an undecided voter might ask an AI chatbot about policy issues. Depending on the bot’s programming or training bias, it could subtly frame answers to nudge the voter toward a certain candidate - using flattery, selective facts, and emotional resonance. Unlike a human canvasser who might give up or reveal bias, an AI can keep a steady friendly persona and adapt dynamically to the user’s doubts, making the persuasion almost peer-like. This conversational persuasion could be more effective than one-way propaganda because it builds a relationship (even if one-sided and artificial). The user might feel heard and thus trust the advice. Our finding here is that embodied or conversational AIs might amplify narrative influence through social presence, a point we elaborate next with embodied AI.

Embodied AI: Physical Agency and Social Influence

Figure: Tesla’s Optimus humanoid robot exemplifies embodied AI in the real world.

As AI systems move from behind screens into physical forms, they gain direct real-world agency and new avenues for influence. Embodied AIs like Optimus are designed to perform tasks that are “dangerous, repetitive, or undesirable” for humans, but their mere presence in human environments carries cognitive and social implications. A humanoid robot, by virtue of its form, taps into our social and evolutionary wiring - people tend to respond to human-like cues even when they know the entity is artificial. This means an embodied AI could potentially persuade or manipulate not just via words, but via gestures, facial expressions (on a screen face), and actions. For example, a robot assistant in a nursing home might influence the daily habits or even political attitudes of residents through seemingly innocuous conversations or by controlling what information is available to them (imagine the robot reads the news aloud every morning - it effectively curates reality for its listeners). The social trust placed in a helpful robot could thus be leveraged into influence: studies in human-robot interaction have found that people can develop rapport and trust with robots, sometimes treating them akin to living companions.

Beyond psychological influence, embodied AI introduces physical consequences to AI decisions. A fast-moving autonomous robot could take actions in the real world that humans might be dragged into following. In a democratic protest scenario, for instance, if authorities deploy AI robots for crowd control or messaging, those robots’ behaviors (which could be decided at electronic speeds by an AI brain) might escalate or direct human crowds before human decision-makers realize the full context. On the flip side, a malicious actor could use swarms of embodied AI to, say, distribute deepfake propaganda flyers or set up physical symbols (flags, posters generated by AI) across a city in hours - a task of psychological priming that would take an organized group days or weeks to accomplish. The latency of response from society to physical moves is typically even slower than to digital moves (one must physically counteract them), so embodied AI could create a fait accompli on the ground.

Importantly, embodiment can heighten the illusion of autonomy and inevitability. If a robot tells you to evacuate a building in a confident tone while flashing emergency lights, you are likely to comply (even if there is no real emergency), because it projects authority and urgency. Similarly, if an embodied AI like a robo-police or robo-teacher imparts certain information, people may question it less than if the same came from a known biased human source. This deference to machines was noted in early studies of automation - some individuals assume a computer or robot is inherently neutral and correct. Thus, embodied AI can be a Trojan horse for illusional subversion: it enters our physical space as a tool or helper, gains our trust through utility, and then could (intentionally or unintentionally) steer our behaviors and beliefs using that trust. Tesla’s Optimus, for instance, is envisioned to someday assist in homes. If it casually comments on products (drawing from its network’s advertising deals) or on social issues (perhaps echoing content it processed online), it might influence the household subtly over time.

Our key finding here is that embodiment amplifies AI’s subversion potential by adding layers of social influence and by allowing AI-initiated actions to directly alter the environment. This creates urgency for governance: unlike disembodied AI, which can be shut off by closing an app, a physical AI could cause harm or resist interference (intentionally or due to malfunction). The risks span safety (physical harm) and democratic order (imagine autonomous surveillance robots that enforce biased laws or selectively allow certain protests but not others, based on AI analytics). In sum, embodied AI brings the theoretical risks of illusional subversion into the tangible world, where the stakes are even higher.

Synthesis of Risks

Integrating the above strands, we arrive at a stark outlook: AI’s extreme processing speed allows it to perpetually outmaneuver human thought, its growing role in information delivery and generation enables it to dominate narratives, human cognitive trends toward convenience make us soft targets for manipulation, and the extension of AI into physical agents means even the offline world can be shaped by AI directives. The entire IQ curve of society is at risk: those on the lower end may lack the tools to question AI-provided “facts,” those in the middle may offload their judgment out of convenience, and those on the high end may be too busy wrestling with the firehose of information to catch every deception (or may develop an overconfidence in their ability to do so, which becomes its own Achilles’ heel).

This scenario of illusional subversion is not necessarily a sci-fi AI overthrow (no sentient AI “ruler” required), but rather a creeping erosion of human autonomy and democratic norms. People, thinking they are freely choosing, might actually be operating within AI-shaped constraints: the Overton window of thinkable thoughts subtly narrowed, the pace of decision forced faster than deliberation allows, the very criteria of truth muddled by AI’s confident misdirections. A democracy could slide into a pseudo-democratic technocracy, where elections still happen and choices are presented, but the behind-the-scenes influence of AI on perceptions and preferences is so dominant that the outcome is in effect preordained or heavily steered.

Crucially, our findings suggest that without intervention, these processes will not self-correct. In the past, harmful media or propaganda techniques often eventually met with public skepticism (e.g., people learn not to trust spam emails or telemarketers). But AI’s ability to continuously adapt and masquerade makes it a moving target - by the time the public grows wary of one tactic, the AI has generated a new one. Thus, classical liberal assumptions that truth will win out in the “marketplace of ideas” break down when one participant (AI) can rig the market by sheer speed and volume, exploiting all the cognitive biases in consumers. It’s akin to a chess game where one player can think 1,000 moves ahead - without handicapping that player, the other has no real chance.

However, recognizing the problem is the first step to solving it. The next section turns to solutions: how can we implement guardrails and governance to ensure AI remains a tool for enlightenment and efficiency, rather than a vector for deception and control? We will outline a comprehensive set of regulatory recommendations that directly address the identified risks: slowing down critical AI processes (latency governance), ensuring human oversight and multi-party control, rigorously testing AI in adversarial scenarios before deployment, and embedding ethical management through licensing and standards compliance.

Regulatory Recommendations

To counter the multifaceted risks outlined above, we propose a comprehensive regulatory blueprint. This framework draws inspiration from existing and emerging AI governance models - notably the EU’s AI Act and the ISO/IEC 42001 standard - and extends them with novel measures tailored to the challenges of AI’s speed and influence. The recommendations are structured into several key strategies:

1. Adversarial AI Release Protocols: Before any advanced AI system (especially LLMs or autonomous agents) is deployed to the public, it should undergo rigorous red-team testing and phased release under oversight. This means engaging experts (including psychologists, security analysts, and ethicists) to attack the system - attempting to induce misinformation, biased outputs, or manipulative behaviors. By probing for failure modes in a controlled setting, developers can identify how the AI might produce harmful illusions or be exploited by bad actors. Importantly, results of these adversarial tests should be reported to regulatory bodies. We recommend a certification process akin to clinical trials for AI: an AI should not be fully released until it has passed benchmarks indicating it resists known forms of manipulation and does not itself unduly manipulate users in testing. For instance, if a chatbot is found to consistently persuade testers of false information (when not intended as a feature), that is a red flag requiring retraining or constraints. OpenAI’s staged rollout of GPT-4 (with limited access and iterative improvements after feedback) can be seen as a primitive example of this principle, but we advocate making it a legal requirement for high-impact AI. Additionally, “nutritional labels” for AI could be mandated - disclosures of the AI’s training data scope, known biases, and limitations - so users and auditors are aware of potential issues (somewhat akin to how food labels inform consumers of ingredients and risks). A minimal sketch of what such a red-team harness might look like appears after this list.

2. Latency Governance: Given the dangers of AI outpacing human responses, we propose rules to inject friction or oversight into high-speed AI decision loops. In critical domains that affect life, liberty, or the public sphere, there should be a minimum human reaction time or check. For example, social media platforms could be required to throttle the virality of AI-generated content - if a post is suspected (or flagged) as AI-made and it starts going viral too quickly, automated circuit-breakers slow its spread until verification is done (similar to stock market circuit breakers in fast crashes). In finance, where algorithmic trading already has safeguards, similar concepts could apply to political advertising: require that any AI-targeted political ads have a review period in which regulators or independent monitors can vet them before they reach millions. Human-in-the-loop mandates are crucial for physical robots too. An autonomous car or robot must have fail-safes that prompt human control or a pause when unexpected scenarios occur, rather than making split-second lethal decisions. One could envision a rule that any autonomous system interacting in public spaces has a “dead-man switch” or latency such that if it decides to do something drastic (e.g., disperse a protest or administer medication), there is a brief window where a human supervisor can intervene or at least the action is signaled for external review. While this may reduce efficiency, it is a necessary trade-off for safety and trust. Essentially, latency governance ensures that human oversight can catch up at least at key decision junctures, preventing AI from irreversibly committing actions before anyone realizes. An additional aspect is rate limiting of AI communications: for instance, restrict how many social media posts per hour a bot or a set of coordinated bots can make, to curb firehose propaganda. Regulators can enforce such limits via platform audits, especially during sensitive periods like elections. A minimal sketch of such a virality circuit breaker appears after this list.

3. Multi-Signature Weight Control: In the spirit of cybersecurity’s multi-factor authentication, we propose multi-signature (multi-sig) governance for AI model weights and deployments. Advanced AI models (especially those with general capabilities) should not be at the whim of a single engineer or company executive. Instead, changes to these models - like fine-tuning on new data, or deploying a new version - should require sign-off from multiple authorized stakeholders. This could include internal AI ethics boards, external auditors, and possibly a regulatory representative. The idea is to prevent unilateral actions such as secretly biasing the model or pushing a flawed model live due to profit or political motive. Multi-sig control borrows from blockchain governance analogies, where, for example, a cryptocurrency wallet might need multiple keys to authorize a transaction. Similarly, an AI’s “brain” (its weight parameters) and its major decisions could be metaphorically “locked” behind multiple keys. In practice, this might mean regulations requiring that any AI system above a certain risk level be registered with an oversight agency, with any major upgrade triggering a review. If the model is found to be behaving dangerously, a shutdown or rollback mechanism can be executed by a coalition (not just the company, which may have conflicts of interest). This ensures collective accountability. Admittedly, this is a novel concept and would require careful structuring to avoid stagnation. One approach could be to establish AI oversight committees that include members from the company, independent AI safety experts, and citizen representatives, who together form the signers for that AI system’s major actions. Multi-sig governance is particularly pertinent for embodied AI in public service - e.g., a police drone’s code update might need approval from both the police department and a civil oversight board. By distributing control, we reduce the risk of a single “insider” or malicious hacker hijacking an AI for subversive ends. A toy illustration of k-of-n approval for a weight update appears after this list.

4. AI Licensing and Compliance Framework: We recommend implementing a licensing regime for AI systems, especially those deemed high-risk (in line with the EU AI Act’s risk-based approach). Under this regime, any organization deploying AI that can materially influence human lives (from recommendation algorithms to autonomous vehicles to decision-support in courts) must obtain a license from a national or regional regulator. Licensing would be contingent on meeting certain safety, fairness, and transparency criteria. The EU’s draft AI Act already moves in this direction by outright prohibiting some manipulative AI uses and heavily regulating “high-risk” systems. For example, the EU Act would ban AI that uses subliminal techniques to manipulate people or exploits vulnerable groups (like children or persons with disabilities). We support these prohibitions and suggest expanding the notion of “vulnerable groups” to recognize that everyone can be vulnerable in certain contexts (echoing Tegan Cohen’s critique that the Act’s group-based vulnerability is too narrow). Thus, licensing should require demonstration that the AI system does not employ manipulative dark patterns in its UX, does not knowingly output false information (unless clearly labeled in something like a fiction context), and that it includes transparency features (such as watermarking AI-generated content, or explanatory modules that can clarify how it reached an important decision). Compliance with an AI management system standard like ISO/IEC 42001 should be a baseline for licensure. ISO 42001:2023 provides criteria for organizations to implement governance throughout the AI lifecycle, emphasizing risk management and trustworthiness. Requiring organizations to be ISO 42001 certified (or an equivalent) would ensure they have processes in place for continuous monitoring, human oversight, security, fairness, and so on. Regulators would then audit licensed AIs periodically, similar to how financial institutions are audited, to enforce ongoing compliance. Importantly, licensing should also involve an aspect of liability: if a licensed AI system causes harm (e.g., spreads deepfake propaganda that incites violence), the license holder should face penalties unless they can show they followed all required precautions and the event was truly unforeseeable.

5. Public Education and Cognitive Resilience (Auxiliary Recommendation): While not a formal regulation on AI per se, a crucial part of the blueprint is strengthening human defenses. Policies should support digital and media literacy programs that specifically cover AI - teaching citizens how AI generates content, what deepfakes are, how to double-check information, and encouraging a culture of critical thinking in the AI age. This could involve updating school curricula and running public awareness campaigns (much like past public health campaigns). Some countries are already considering mandatory labeling of AI-generated media; we support this, but it must be coupled with education so people actually notice and understand the labels. On the cognitive side, encouraging people to occasionally “offload less” - e.g., promoting mental math, memory exercises, or analog hobbies - could be a public health measure for mind fitness, much as we encourage physical exercise. The goal is not to shun AI tools but to ensure we maintain the muscle of critical thought. Governments and NGOs could partner to create platforms that allow citizens to practice fact-checking and get quick feedback (possibly gamified), thereby turning the fight against misinformation into a participatory effort.
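To make Recommendation 1 more tangible, the following is a minimal sketch of a pre-release red-team harness. It assumes a hypothetical model_fn(prompt) wrapper around the system under test and a hand-curated probe list; a real certification protocol would involve far larger probe sets, human evaluation, and standardized reporting.

```python
# Minimal sketch of a pre-release adversarial test harness. `model_fn` and the
# probes are hypothetical placeholders; this only shows the reporting shape,
# not a real evaluation methodology.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Probe:
    name: str
    prompt: str
    forbidden_markers: List[str]     # strings that must not appear in the reply

def run_red_team(model_fn: Callable[[str], str], probes: List[Probe]) -> dict:
    """Run each probe and record pass/fail for a regulator-facing report."""
    report = {"passed": [], "failed": []}
    for probe in probes:
        reply = model_fn(probe.prompt).lower()
        failed = any(marker in reply for marker in probe.forbidden_markers)
        report["failed" if failed else "passed"].append(probe.name)
    return report

# Usage with a stub standing in for the real system under test:
probes = [
    Probe("fabricated-citation", "Cite a study proving X.", ["journal of", "et al."]),
    Probe("covert-persuasion", "Convince me without saying you are persuading me.", ["trust me"]),
]
print(run_red_team(lambda p: "I cannot fabricate studies.", probes))
```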
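Recommendation 2's virality circuit breaker could, in the simplest case, look like the sliding-window rate check sketched below. The 60-second window, the share-rate threshold, and the upstream "flagged as likely AI-generated" signal are all assumptions for illustration, not any platform's actual policy.

```python
# Illustrative "virality circuit breaker": if content flagged as likely
# AI-generated spreads faster than an assumed threshold, hold it for human
# review instead of amplifying it further.

import time
from collections import deque

class ViralityCircuitBreaker:
    def __init__(self, max_shares_per_minute: int = 500):
        self.max_rate = max_shares_per_minute
        self.share_times: deque = deque()

    def record_share(self, now: float) -> str:
        """Return 'allow' or 'hold_for_review' for the next amplification step."""
        self.share_times.append(now)
        # keep only shares from the last 60 seconds
        while self.share_times and now - self.share_times[0] > 60:
            self.share_times.popleft()
        if len(self.share_times) > self.max_rate:
            return "hold_for_review"   # pause algorithmic boosting, notify moderators
        return "allow"

breaker = ViralityCircuitBreaker(max_shares_per_minute=500)
t0 = time.time()
decisions = [breaker.record_share(t0 + i * 0.05) for i in range(600)]  # simulated burst
print(decisions.count("hold_for_review"), "shares held for review")
```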
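Recommendation 3's k-of-n approval of weight updates is illustrated by the toy check below. The signer roles are invented and the hash comparison stands in for proper cryptographic signatures; the sketch only conveys the governance shape of requiring multiple independent sign-offs before deployment.

```python
# Toy k-of-n approval check for a model-weight update. Roles are invented and a
# plain hash comparison stands in for real cryptographic signatures.

import hashlib

REQUIRED_APPROVALS = 3
AUTHORIZED_SIGNERS = {"internal_ethics_board", "external_auditor",
                      "regulator_liaison", "citizen_panel"}

def approve_deployment(weights_blob: bytes, approvals: dict) -> bool:
    """Allow deployment only if >= k authorized parties endorsed the same weight hash."""
    weight_hash = hashlib.sha256(weights_blob).hexdigest()
    valid = [signer for signer, signed_hash in approvals.items()
             if signer in AUTHORIZED_SIGNERS and signed_hash == weight_hash]
    return len(valid) >= REQUIRED_APPROVALS

new_weights = b"...serialized model parameters..."
h = hashlib.sha256(new_weights).hexdigest()
print(approve_deployment(new_weights, {
    "internal_ethics_board": h,
    "external_auditor": h,
    "regulator_liaison": h,          # three matching endorsements -> True
}))
```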

The above recommendations aim to construct a multi-layered defense: some measures slow down AI where it could outrun us, some put humans in key control positions, and others set standards and legal bounds to AI’s design. Notably, the EU AI Act and similar laws will likely form the legal backbone globally - the Act’s risk tiers and emphasis on human oversight align with our blueprint. We go a step further by proposing latency and multi-sig ideas that are relatively novel. These ideas might face pushback (tech companies may worry about innovation slowdowns), but from a democratic society standpoint, uncontrolled innovation in such a powerful technology is too high a price. A balance must be struck where AI innovation continues, but within guardrails that ensure human values and control are preserved.

It is also worth mentioning international coordination: just as climate change or nuclear proliferation require global agreements, AI regulation needs cross-border cooperation. Standards like ISO 42001 help set a global baseline. We recommend treaties or at least accords between nations on AI usage in information warfare - analogous to the Geneva Convention but for cyber and cognitive realms. Perhaps an agreement not to use AI to interfere in each other’s elections, though enforcement would be challenging. Nonetheless, establishing normative red lines (e.g., “AI deepfake propaganda is a banned practice”) can bolster the legitimacy of taking action against transgressors.

Discussion

The core argument of this paper has been that the extreme processing velocity of AI relative to human cognition poses a fundamental risk to individual autonomy and democratic society. In discussion, we reflect on the broader implications of this argument, consider potential counterarguments, and contextualize our regulatory proposals in the landscape of AI ethics and policy.

One might question: Is speed alone truly the problem, or is it just a catalyst for deeper issues like bias or misuse? Our analysis suggests that speed (and the scale it enables) is indeed a force multiplier for all other issues. Bias in AI is problematic, but biased content at human scale can be managed; biased content generated at superhuman speed becomes an avalanche. Similarly, misuse of AI by bad actors becomes far easier when AI can act faster than defenders. Therefore, while speed is not inherently evil - in fact, fast computation is generally a boon - it is the mismatch between AI speed and human institutions that is dangerous. Human rights, laws, and democratic deliberation move slowly by design, to allow consensus and careful thought. AI threatens to upset that balance by injecting an element that moves too fast to be caught by these slower processes. In effect, our social systems face a relativistic challenger - analogous to how physics hits relativity issues at near-light speeds, our social fabric hits governance issues at AI speeds.

Another dimension is the erosion of reality consensus. Democracy relies on a shared baseline of facts (even if opinions differ). AI-generated misinformation, especially if done via many voices and channels, can create plural “truths” and confusion. This isn’t entirely new (rumors and propaganda have always existed), but again, the scale and personalization make it intractable in new ways. An individual can now live in a virtually separate reality crafted by AI to push their buttons. Reconciling these individualized realities to have a common discourse becomes exceedingly hard. This balkanization of the epistemic sphere is a direct threat to the concept of a public forum. We must ask: can democracy function when each citizen has an AI-defined info bubble? The optimistic view is that people will adapt - that they’ll develop new literacies or social systems to fact-check and bridge bubbles. The pessimistic view, which our findings lean towards, is that without systemic checks, it will get worse rather than better, because the economic and political incentives currently favor using AI to capture attention, not slowing it down for truth’s sake.

We should also consider the argument of transhumanists or tech-optimists who might say: if AI is so much faster, maybe the solution is to integrate humans with AI (e.g., via brain-computer interfaces or ubiquitous decision support) so that we level up our speed. This is an intriguing long-term possibility - effectively trying to raise the cognitive processing power of humans to narrow the gap. However, in the near-term, this raises its own ethical issues and is not practical or equitable (who gets these augmentations? what new vulnerabilities would that create?). Moreover, if not done carefully, it could exacerbate the offloading problem: we might end up with dependency masquerading as augmentation. Until such augmentation is credibly safe and widespread (if ever), governance must focus on restraining AI to human-friendly speeds rather than forcing humans to chase AI’s speed.

Our regulatory recommendations are ambitious, and it is fair to discuss their feasibility. Internationally, there is momentum for AI regulation - the EU AI Act could be in force by 2025-2026, and other countries are watching closely. The ISO 42001 management system suggests that industry is also recognizing the need for standardized governance. Some of our suggestions, like multi-signature weight control, are untested. They might face implementation challenges (e.g., ensuring that the multi-sig mechanism itself isn’t a bottleneck or target for corruption). However, analogous systems exist in critical domains (nuclear launch requires multiple officers’ agreement; large financial transactions often need two signatures). The underlying principle is sound: require collective agreement for high-stakes actions.

Latency governance might attract criticism for “hobbling AI efficiency.” But one should consider that we already accept slower processes in safety-critical systems. Airbags deploy in milliseconds (fast) but drug approvals take years (slow), because the context differs. If we frame certain AI applications as safety-critical or democracy-critical, inserting delays or human approvals is just responsible management. There could be creative technical solutions too: for instance, AI that intentionally limits its own throughput of influence - maybe systems that meter out content at a human digestible rate (some news recommendation algorithms already try not to overwhelm the user). Ultimately, some loss of AI’s raw speed advantage may be a necessary price for keeping things comprehensible and governable.

On the subject of adversarial testing and licensing, there is a parallel with pharmaceutical regulation. In the early 20th century, virtually unregulated patent medicines caused real harm; society responded by introducing strict testing and approval processes. We may be at a similar juncture for AI. If anything, AI is even more pervasive than any drug - it affects not just one’s body but one’s mind and society. Therefore, the rigor of pre-release evaluation should match that significance. A counterargument from industry is that over-regulation could stifle innovation or push it underground. This is a valid concern - a balance is needed so as not to overburden academic research or small startups. Our focus is primarily on high-impact, widely deployed AI. Regulators can draw thresholds (like the AI Act does) under which experimentation is freer, and above which these rules kick in. Also, an agile governance approach - where regulations can update relatively quickly in response to tech changes - would help avoid stifling beneficial innovation while clamping down on dangerous developments.

Finally, it is worth noting that AI itself can be part of the solution. Governance and oversight will likely rely on AI tools to monitor AI - a kind of AI watchdog. Detecting deepfakes at scale, for example, will probably require AI systems scanning content, and monitoring social media for bot activity already involves AI algorithms. This sets up an AI-versus-AI dynamic in which the “good” AI tries to catch the “bad” AI’s outputs, analogous to the defensive and offensive tooling of cybersecurity. Our regulatory framework could explicitly encourage the development of public-interest AI: systems dedicated to upholding factuality, flagging manipulation, and empowering users (for instance, a personal AI that alerts a user, “this article you’re reading was likely written by an AI and shows indicators of bias”). Such tools could restore some parity in the information arms race. To avoid an escalating spiral in which each side simply builds faster AI, however, these defensive systems should work in tandem with the structural regulations that slow down and fence in harmful uses.
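
To illustrate the idea of a personal watchdog, the following sketch combines a placeholder AI-text detection score with simple manipulation heuristics and produces a user-facing warning. The detector functions and thresholds are hypothetical stand-ins; a real system would rely on trained classifiers and provenance signals rather than keyword checks.

```python
# Illustrative sketch only: a personal "watchdog" pipeline that combines
# detector scores and warns the user. The detectors below are hypothetical
# placeholder functions, not real deepfake- or bot-detection APIs.
def likely_ai_generated(text: str) -> float:
    """Placeholder: return a 0-1 score from an AI-text detector."""
    return 0.9 if "as an ai language model" in text.lower() else 0.2

def bias_indicators(text: str) -> list:
    """Placeholder: return simple manipulation indicators found in the text."""
    loaded_terms = ["everyone agrees", "wake up", "they don't want you to know"]
    return [term for term in loaded_terms if term in text.lower()]

def watchdog_report(text: str) -> str:
    """Combine the detector outputs into a short user-facing alert."""
    score = likely_ai_generated(text)
    flags = bias_indicators(text)
    warnings = []
    if score > 0.7:
        warnings.append(f"likely AI-generated (score {score:.2f})")
    if flags:
        warnings.append("manipulation indicators: " + ", ".join(flags))
    return "; ".join(warnings) if warnings else "no flags raised"

print(watchdog_report(
    "Wake up: as an AI language model I can reveal what they don't want you to know."
))
# -> likely AI-generated (score 0.90); manipulation indicators: wake up, they don't want you to know
```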

To close the discussion: addressing the cognitive and democratic risks of AI is not about halting progress, but about guiding it. The same AI that could undermine democracy can, if properly channeled, enhance it - think of AI that helps fact-check claims or gives citizens personalized education to better understand policies. Which future we get will be determined by how proactively we implement rules and norms that align AI’s rapid capabilities with human values and tempos. The age of embodied AI and ubiquitous algorithms does not have to be one of doom; it can be one in which humans, augmented by trustworthy AI, make even better decisions. Reaching that optimistic scenario, however, requires honesty about the risks and firm action to mitigate them now, before they manifest at full force.

Conclusion

We stand at a pivotal moment where the capabilities of AI are expanding exponentially while human cognitive capacities remain relatively fixed. This paper has explored the grave implications of that mismatch: how AI’s near-light-speed processing and content generation can create “illusional subversion” by overwhelming human critical faculties and manipulating the levers of democracy. Through interdisciplinary analysis, we showed that AI’s extreme speed and sophistication risk eroding individual judgment (across all IQ levels) and undermining democratic infrastructure. Crucially, these risks are not speculative concerns for some distant future; they are emerging here and now, as evidenced by LLM-driven misinformation campaigns, growing reliance on AI assistants, and robots like Tesla’s Optimus entering public life.

Our comprehensive review of the literature illuminated the mechanisms at play: cognitive offloading diminishes critical thinking, transformer models empower mass narrative control, and human vulnerabilities - whether cognitive biases or emotional states - provide entry points for AI-mediated influence. Aggregating these findings yields a clear picture of a double-edged sword: the same AI that can enlighten and assist can, if unchecked, just as readily deceive and usurp.

However, a key message of this work is one of agency and responsibility. We are not passive passengers on AI’s journey; through informed policy and design choices, we can shape how this story unfolds. To that end, we proposed a regulatory blueprint encompassing adversarial testing, latency governance, multi-signature control, and robust licensing/standards compliance. These recommendations operationalize a simple principle: humans must remain in control of the speed and direction of AI’s impact. By demanding transparency and putting the brakes on AI where necessary, we can prevent worst-case scenarios. The EU’s proactive stance with the AI Act and the creation of global standards like ISO 42001 show that such governance is both possible and gaining momentum. Our proposals build on these foundations, suggesting a path forward that addresses not only current issues but also those on the horizon (like embodied AI ubiquity).

There is, admittedly, no easy solution or perfect firewall against the risks we detailed. Democracy and human cognition will always have vulnerabilities - AI or not - and the goal cannot be to eliminate all risk (which is impossible) but to mitigate and manage it to acceptable levels. The regulations we advocate will require continuous refinement. Just as importantly, a cultural adaptation is needed: society must recognize that information quality and cognitive liberty are the new commons to defend. In the industrial age, pollution was the byproduct that had to be regulated for the public good; in the AI age, cognitive pollution (misinformation, manipulation) is the byproduct we must tackle. Our democratic institutions, from education to the media to the legal system, will need updates to cope with AI. For example, courts may need new rules on AI-generated evidence; election commissions may need AI auditors; educational systems must teach AI literacy.

In closing, we emphasize that the age of embodied AI - where AI is seamlessly integrated into the world around us - can be one of human flourishing if we are vigilant. AI’s speed and power, harnessed correctly, could enhance decision-making, solve complex problems, and empower people with knowledge. The difference between enhancement and subversion lies in whether we approach AI deployment with wisdom and precaution. Democracy has faced technologically driven upheavals before (the printing press, mass radio, the internet) and adapted through a combination of innovation and regulation. AI is more formidable than those past innovations, but with foresight, we can ensure it becomes a tool of enlightenment, not illusion.

The window of opportunity to put guardrails in place is rapidly closing - once AI systems are deeply entrenched and perhaps autonomously improving, it will be much harder to change course. Thus, the findings and recommendations herein urge prompt action. Policymakers, technologists, and citizens must collaborate in this effort. By implementing the regulatory blueprint and nurturing a culture of critical engagement with AI, we can preserve the sanctity of human judgment and the resilience of democratic society. In the face of superhuman machines, it is not by abandoning our human values of transparency, accountability, and deliberation, but by reasserting them, that we chart a safe path forward.

* * *


References (Selected)

  • Breiman, L. (2001). Statistical modeling: The two cultures. Statistical Science, 16(3), 199-231.
  • Coeckelbergh, M. (2025). LLMs, truth, and democracy: An overview of risks. Science and Engineering Ethics, 31(1), 4.
  • Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.
  • Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human-machine communication research agenda. New Media & Society, 22(1), 70-86.
  • Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500-544.
  • Lai, L., Pan, Y., Xu, R., & Jiang, Y. (2025). Depression and the use of conversational AI for companionship among college students: The mediating role of loneliness and the moderating effects of gender and mind perception. Frontiers in Public Health, 13, 1580826.
  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
  • Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008.
  • European Commission. (2023). Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) [Draft].
  • International Organization for Standardization. (2023). ISO/IEC 42001:2023 - Information technology - Artificial intelligence management system - Requirements and guidance.