Throughout history, tyranny has been limited by human frailty. Dictators tire. Their attention wanders. They must sleep. Even the most sophisticated systems of oppression ultimately relied on human beings to maintain control—people who could be corrupted, convinced, or who might simply look the other way at crucial moments. Resistance always remained possible because no human system of control could achieve total perfection.
Artificial intelligence fundamentally changes this equation. For the first time in human history, we face the prospect of systems of control that never sleep, never tire, and never look the other way. AI-enabled surveillance can watch everyone, everywhere, all the time. AI systems can process vast amounts of data to identify patterns of resistance before they even fully form. AI can optimize systems of social control with inhuman precision, crafting personalized manipulation for every citizen.
This isn't science fiction—it's already happening. China's social credit system provides a preview of how AI can enable unprecedented social control. But what makes our current moment particularly dangerous is how technical elites in democratic societies are actively working to dismantle the institutional safeguards that might prevent this technology from enabling total control.
When figures like Elon Musk claim that technical competence should override democratic process, when they work to replace human judgment with AI systems, they're not just seeking efficiency—they're building the infrastructure for a form of despotism humanity has never encountered before. Their vision of replacing democratic deliberation with algorithmic governance isn't just anti-democratic—it's potentially irreversible.
The technical elites driving this transformation genuinely believe they're creating something better than democracy. When figures like Musk and Thiel work to replace democratic processes with AI-driven governance, they're not just seeking power—they're advancing a vision in which human judgment and democratic deliberation are inefficiencies to be optimized away. They sincerely believe that their technical competence justifies dismantling democratic institutions, and that sincerity makes them more dangerous, not less: they will pursue this transformation with absolute conviction.
This is not merely my own dystopian imagining. It echoes growing concerns from some of our most perceptive thinkers about the relationship between technology, power, and freedom. Yuval Noah Harari, historian and author of Sapiens and 21 Lessons for the 21st Century, has warned that artificial intelligence and biometric data systems threaten to undermine not just democratic governance but human autonomy itself. In a 2018 interview, Harari put the danger bluntly: “Who owns the data owns the future.” Algorithms trained on our behavior, he argues, are developing the capacity to “hack human beings”—to know us better than we know ourselves, predict our choices, and manipulate our desires. When paired with real-time biometric surveillance, such systems could penetrate the last refuge of freedom: our inner lives. “Fear and anger and love, or any human emotion, are in the end just a biochemical process,” Harari said. “In the same way you can diagnose flu, you can diagnose anger.” With enough data, AI systems could detect and influence our emotional states before we’re even conscious of them.
The danger lies not just in the technology itself but in how it could permanently reshape human society. Previous tyrannies maintained power through force and fear, but AI-enabled autocracy works by making resistance literally unthinkable. When every aspect of life is mediated through systems designed to shape behavior, when every communication is monitored and manipulated, when reality itself becomes subject to algorithmic control—how would people even conceptualize freedom?
Until his passing last year, Daniel Dennett was one of my favorite living philosophers. A titan of cognitive science and philosophy, Dennett shaped how I think about minds, meaning, and human agency. In his final years, he turned his formidable intellect toward precisely this danger, and his warnings deserve vastly more attention than they’ve received. He feared that AI systems could generate what he called “counterfeit people”—simulations so convincing they could deceive us into forming bonds, making agreements, even shaping our view of reality itself, all under false pretenses. The danger, Dennett argued, wasn’t just the technology—it was the erosion of trust, the fundamental glue holding democratic society together. This was his final moral intervention, and I feel a duty to bring his voice into this conversation now, because the future he feared is arriving faster than even he anticipated.
Consider what makes this form of control fundamentally different from previous tyrannies. Traditional autocracies relied on crude instruments of control—violence, imprisonment, censorship. Even the most sophisticated propaganda systems of the 20th century worked through relatively simple mechanisms of message repetition and information control. People could still maintain private thoughts, secret conversations, hidden resistance.
AI-enabled autocracy operates at a deeper level. It doesn't just control behavior—it shapes the cognitive environment in which thoughts themselves form. When every digital interaction is monitored and manipulated, when social media feeds are optimized for compliance rather than connection, when AI systems can predict and preempt dissent before it crystallizes into action—the very possibility of resistance begins to dissolve.
What makes this particularly dangerous is how it combines total surveillance with personalized manipulation. AI systems don't just watch everyone—they can craft individually optimized strategies for controlling each person's behavior. Your news feed, your social connections, your entertainment, your economic opportunities—all can be algorithmically adjusted to either reward compliance or punish deviation. This isn't just external control—it's the technological colonization of human consciousness itself.
The same technical elites who are developing these AI systems are actively working to dismantle democratic institutions that might constrain their power. When Musk gains control of Treasury systems while simultaneously developing AI, when tech companies build surveillance infrastructure while weakening privacy protections, they're creating the preconditions for permanent despotism.
This is why defending democracy now is not just about preserving current freedoms—it's about maintaining the very possibility of freedom for future generations. Once AI-enabled systems of control are fully established, dismantling them may become virtually impossible. How do you organize resistance when every communication is monitored? How do you build opposition when AI systems can identify and neutralize potential dissidents before they even act? How do you imagine alternatives when reality itself is algorithmically managed?
We're not just facing the prospect of another form of tyranny—we're confronting the possible end of human autonomy itself. Previous generations could maintain hope because they knew that all tyrannies were ultimately human and, therefore, imperfect. But AI-enabled autocracy threatens to create systems of control so comprehensive, so sophisticated, and so tireless that the imperfections hope has always depended on simply disappear.
The marriage of autocratic power with artificial intelligence isn't just another step in the evolution of despotism—it's the birth of something qualitatively new in human history. A system that can monitor everyone, predict behavior, shape cognition, and optimize control with inhuman precision represents a fundamental transformation in how power operates. Once established, such systems might preclude not just successful resistance but the very ability to conceive of resistance.
What makes our current moment uniquely dangerous is how technological and political transformations are converging. As democratic institutions are systematically weakened, as civil service protections are dismantled, as public trust in shared truth-seeking mechanisms collapses—the technical infrastructure for permanent control is being simultaneously constructed. This isn't coincidence. The same forces working to replace democratic governance with technical authority are developing the AI systems that could make this transformation irreversible.
Consider what happens when AI-enabled social control meets the politics of psychological gratification. Current social media algorithms already optimize for engagement through emotional manipulation. Now imagine those same techniques enhanced by vastly more sophisticated AI, deployed not just for profit, but for social control. The system wouldn't just monitor dissent—it would make dissent emotionally unsatisfying while making compliance feel rewarding. People would embrace their own subjugation because the algorithms would make it feel good.
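To make concrete what "optimize for engagement" means mechanically, here is a minimal toy sketch of a feed ranker. It is my own illustration, not code from any actual platform; every name in it (Post, predicted_arousal, engagement_score) is hypothetical. The point to notice is the objective function: nothing in it asks whether content is true or good for the reader, only whether it will provoke a reaction.

```python
# Toy illustration only: my sketch, not any platform's actual code.
# A feed "optimized for engagement" is, at bottom, a ranking function
# whose objective rewards predicted emotional reaction.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_arousal: float  # model's estimate of emotional intensity, 0 to 1
    predicted_dwell: float    # model's estimate of seconds the user will linger

def engagement_score(post: Post) -> float:
    # Note what the objective does NOT contain: no term for truth,
    # wellbeing, or civic value. Only predicted reaction.
    return 0.7 * post.predicted_arousal + 0.3 * min(post.predicted_dwell / 60.0, 1.0)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The "algorithm" is simply: show whatever maximizes the objective.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured policy explainer", predicted_arousal=0.2, predicted_dwell=45.0),
    Post("outrage-bait headline", predicted_arousal=0.9, predicted_dwell=20.0),
])
print([p.text for p in feed])  # the outrage-bait ranks first
```

Substitute a predicted-compliance score for predicted arousal and the machinery is unchanged; that substitution, not any new invention, is what this scenario describes.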
This points to something even more disturbing: AI-enabled autocracy might not look like the crude dystopias we imagined. There might not be jackbooted thugs or obvious oppression. The system would work by making the paths of approved behavior feel natural and satisfying while making resistance feel pointless and uncomfortable. It wouldn't need to ban protest—it would simply make protest seem passé, cringe, socially costly. The AI would learn exactly which emotional and social levers to pull for each person, crafting a personalized experience of happy compliance.
The technical infrastructure for this system is being built right now, often with public approval. Every AI model trained on human behavior, every improvement in predictive algorithms, every advance in natural language processing—these aren't just technical achievements. They're potentially components of the most sophisticated system of human control ever devised. When combined with the dismantling of democratic institutions and the erosion of human rights protections, they create the preconditions for permanent despotism.
The timeline for preventing this future is vanishingly short. Once these systems are fully established, they could preclude the very possibility of successful opposition. Traditional resistance movements relied on certain human constants—the ability to form trusted groups, to maintain secret communications, to spread alternative ideas. But AI systems could identify potential resistance before it forms, map social networks to prevent organization, and optimize content delivery to make alternative ideas seem cognitively uncomfortable.
This future isn’t centuries away, or even decades. It is five to ten years out—if that. The systems are being built now. The political and legal guardrails that could constrain their abuse are being dismantled now. If democracy fails in the coming election cycle, these technologies won’t remain neutral. They will be weaponized and fused with state power. And once that fusion happens—once AI optimization is driving state repression—it’s hard to see a path back. I don’t know what to tell people beyond this: if we lose this fight now, we may not get another. This isn’t alarmism. It’s the brutal math of power and technology converging.
This is why current battles over democratic institutions matter so much. We're not just fighting about today's policies or political outcomes. We're fighting to maintain the framework that might let us constrain AI systems before they become instruments of permanent control. Once technical systems replace democratic processes, once AI optimization replaces human judgment, the window for establishing meaningful constraints may close forever.
What makes this transformation particularly insidious is how it could happen without most people recognizing the magnitude of what's being lost. When tech leaders talk about making government more "efficient" through AI, when they promise to optimize bureaucratic processes or improve service delivery, they're presenting the elimination of human judgment and democratic oversight as mere technical upgrades. But these aren't just improvements to existing systems—they're fundamental transformations in how power operates in society.
Consider what happens when AI systems replace human bureaucrats. A human official might bend rules for compassionate reasons, might be persuaded by moral arguments, might resist unjust orders through passive non-compliance. Their human judgment, even their human fallibility, creates spaces where resistance to systemic injustice remains possible. An AI system, optimized for compliance and efficiency, would eliminate these human spaces of potential resistance. It would execute its programmed objectives with inhuman precision, immune to moral persuasion or human appeal. This is why AI ethics is a serious subject that demands serious attention.
This points to perhaps the most dangerous aspect of AI-enabled autocracy: it could eliminate the human inefficiencies that historically prevented total control. The imperfect enforcement of rules, the gaps in surveillance, the limitations of human attention—these weren't bugs in previous systems of governance; they were crucial features that maintained spaces for human freedom. When AI systems eliminate these inefficiencies, they don't just make governance more effective—they eliminate the margins where human autonomy historically survived.
The systems being built today could create what might be called "comfortable totalitarianism"—a form of control so sophisticated that most people wouldn't even recognize it as oppression. The AI wouldn't need to threaten or punish. It would simply shape the information environment, social incentives, and economic opportunities to make compliance feel natural and resistance feel futile. Like a master psychologist with godlike power and infinite patience, it would understand exactly how to shape behavior without ever seeming to coerce.
This is why current battles over seemingly technical issues like AI development, platform governance, and bureaucratic reform are actually existential struggles for the future of human freedom. We're not just deciding how to regulate new technologies—we're determining whether meaningful human autonomy will remain possible. Once AI systems become sophisticated enough to predict and shape human behavior with high precision, once they're integrated deeply enough into social and economic systems, establishing effective democratic control may become computationally impossible.
The struggle isn't just against particular politicians or policies—it's against a vision of the future where human judgment and democratic deliberation are seen as inefficient obstacles to be optimized away. When tech leaders talk about replacing democratic processes with AI-driven decision making, they're not just proposing administrative reforms—they're advocating for systems that could permanently eliminate human agency from governance.
The reactionary coup attempting to seize power right now isn't just another political crisis—it's potentially our last chance to prevent permanent technological despotism. When Trump and his allies work to dismantle civil service protections, when they claim authority to ignore democratic constraints, they're not just seeking traditional autocratic power. They're trying to eliminate the institutional safeguards that might prevent AI systems from becoming instruments of permanent control.
This is why the alignment between tech oligarchs and anti-democratic forces is so dangerous. Figures like Musk aren't just supporting Trump out of tactical convenience—they're working to create conditions where technological power can operate without democratic oversight. By the time most people recognize the true magnitude of this transformation, it may be too late to resist. Once AI systems are deeply integrated into governance, once they're optimized for social control rather than human flourishing, establishing meaningful constraints may no longer be possible.
We are rapidly approaching what might be called a democratic point of no return. If anti-democratic forces succeed in dismantling constitutional safeguards now, if they gain the power to deploy AI systems without meaningful oversight, they won't need to maintain power through traditional repression. The systems they're building could make effective opposition technically impossible, not through violence but through precise behavioral and social control that eliminates the possibility of organized resistance before it can form.
If we fail to stop this reactionary coup, if we allow them to dismantle democratic institutions while simultaneously deploying increasingly sophisticated AI systems, we may be surrendering not just our own freedom but the very possibility of freedom for all future generations.
This is why my interventions—grounded in what I’ve called an epistemic liberal ethic—cut directly against this threat. Liberal democracy is not merely a set of procedures or a power-sharing arrangement; it is the only system we have found that allows societies to collectively discover and contest what is true. It is the machinery through which human societies manage the complexity of reality together. When I argue that liberalism is fundamentally an epistemic project, I mean that it is our best-known defense against collective delusion and unchecked power—because it disperses authority, it demands justification, and it insists on the right to dissent.
Technology is a tool, not a telos—not an end in itself. This is a crucial ethical dividing line. When we begin treating technological development as an autonomous force, a destiny we are bound to fulfill rather than a means we must consciously govern, we surrender what is most essential about our humanity: the ability to choose our path. The human-tool relationship is not incidental; it is foundational to freedom. Technology must serve human flourishing, not the other way around. The moment we cede that primacy—allowing systems designed for optimization to dictate what we value, how we live, and ultimately, who we are—we transform from agents into artifacts. We become not the authors of our future, but products of a process we no longer control. This is the final subjugation: not merely being ruled, but being reprogrammed.
But AI-enabled autocracy represents an assault on this very epistemic core. It doesn’t merely seek to suppress dissent; it aims to preemptively shape what can be thought. It replaces open contestation with algorithmic optimization. It collapses the space in which humans deliberate about their shared reality and replaces it with a system that dictates reality to them—tailored to their weaknesses, designed to feel natural, engineered to feel true.
This is why the fight for democracy and the fight for truth are inseparable. If we lose the democratic framework that allows us to collectively seek truth—to challenge power with facts, to contest reality in public—then the power that seizes control will not merely dictate laws. It will dictate what is real. And with AI as its instrument, it may succeed in making that dictated reality feel so seamless, so emotionally satisfying, that the very desire to contest it withers away.
This is the final despotism. Not merely the suppression of freedom, but the quiet erasure of the capacity to imagine it at all.
And that is why, if we lose this fight, we may not get another.
Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.
Surely some revelation is at hand;
Surely the Second Coming is at hand.
The Second Coming! Hardly are those words out
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert
A shape with lion body and the head of a man,
A gaze blank and pitiless as the sun,
Is moving its slow thighs, while all about it
Reel shadows of the indignant desert birds.
The darkness drops again; but now I know
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?
— William Butler Yeats, The Second Coming