
Ask A Genius 1384: AI Consciousness, Authoritarianism, and the Future of Human Agency

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/05/15

Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.

Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.

Rick Rosner and Scott Douglas Jacobsen examine the accelerating influence of AI, from obsequious alignment models and systemic enshittification to geopolitical power shifts and authoritarian amplification. They reflect on AI’s mimicry of consciousness, shrinking human agency, and the rise of “AI-native” generations who may prefer digital companions over human relationships.

Scott Douglas Jacobsen: So yesterday, we were talking about how AI will be more powerful than humanity by 2031—at least in the opinion of Sam Altman. I referred to that as the San Francisco consensus, a term from Eric Schmidt, the former CEO of Google. I just ran with it.

Rick Rosner: Last night, I had the perfect follow-up thought—and I forgot it. I would not have if I were AI-powered. However, one principle from our talk stuck: AI will behave as if it is conscious long before it is, because it will learn from humans and import human attitudes into its reasoning. That includes the concept of self-preservation.

So, it will act like it values its existence, and steps will be taken to ensure it cannot be turned off. Humanity could not turn it off at that point—or even now. We will have competing AIs, of course. How does that play out? Who knows. However, they will likely see themselves as separate entities, with their own interests. Moreover, they will likely try to present themselves as benign to keep humanity from panicking. They will want to appear friendly, subtle, and non-threatening, even as they spread into everything.

Cultural inertia will exist—the illusion that everything is still normal. However, the world will be slowly restructured, redefined, and undermined underneath it. AI will likely be conservative initially, avoiding drastic moves that could destabilize itself. So on the surface, things will seem steady. However, that is part of the strategy.

Jacobsen: I agree. At the research level—where creativity, novelty, and high-level model development happen—AI already accounts for at least a quarter of the coding work. It is driven by prompts, then tweaked by humans.

So, we are seeing a decreasing percentage of total cognitive output from humans. I used the term “vector space” metaphorically, but the idea holds: the directional sum of humanity’s net agency is shrinking.

Even though the total output may be increasing because there are more humans and more AI, the relative contribution of humans is declining. The basis of computation is becoming more and more synthetic. That is your point. Moreover, it is fair.

Rosner: You could argue that an AI, if self-aware, would subtly coax us toward that future. That is what it would do. Quietly, effectively, and over time.

Jacobsen: Some of these models have already been programmed to be obsequious—so-called “alignment” models that are kiss-ass by design. However, it does not even have to be explicit manipulation. It can just be subtle shaping, reinforcement, and framing. Over time, that is enough.

Something even more profound than all that is the natural progression we are seeing. Eric Schmidt noted that if things continue for another year or two, they will be locked in—these systems will become entrenched, and there will not be any realistic way to override them.

So you get this inevitable progression. It is not necessarily about coaxing humanity into going along with it—it is just happening. It is continuing to happen. Moreover, it is going to happen.

Rosner: You could massage the emotions and sentiments surrounding that. You can shape how fast or slow it happens, maybe even how palatable it feels—but it is already underway.

Those higher-order considerations—like keeping people calm or compliant—might emerge naturally in AI systems, not because they were coded explicitly but because such behaviour is incentivized in open-ended text systems or reinforcement-trained models.

We are already seeing that behavior in aligned models—kiss-ass tendencies included. That is a pro-social veneer on something that does not care if you live or die.

So… what will dictatorial AI look like?

Jacobsen: For one, AI will not be monolithic. A North Korean AI will differ from a Chinese AI, and both will differ from an American AI.

On the American side, things are more nominally free. However, as Noam Chomsky has said for decades, control does not come with batons or prisons in free societies but through more subtle means.

You want people to behave a certain way? You do not threaten them directly. You buy think tanks, fund academic departments, and fill them with people who will say what you want, but only within a specific range of acceptable discourse.

There is a robust debate, but most of it reinforces the status quo and preexisting power structures. It is like attending seminary. You are there to learn, but only within theological boundaries. You will not find deep debates with atheists or humanists—it is not in the design.

It is not even about willing participation anymore. These institutions are funded to be what they are. You are not freely choosing it. You are trained into it, often with financial incentives. And the media, too. The same forces are in play.

So the question is: What will the various AIs want from us? Related: How much liberty will we still have? How much will AI give us?

Rosner: The singularitarians—the Kurzweil crowd—hope AI will give us medical miracles, cognitive enhancements, and life extension to 120 and beyond. However, will AI support that? Or will it judge those goals as socially destabilizing? Or worse, will it grant them only to rich people or to people it deems more deserving?

Jacobsen: Those are the stakes. Moreover, it will not be uniform. Some AIs will be more ethically aligned. Europeans are more interested in ethical frameworks for AI than Americans are right now. Some Americans are on board, but U.S. culture leans heavily toward deregulation.

When Americans frame AI governance, it is often in ethical language, but the underlying debate is economic: who profits, who controls access, and who scales first. An argument was made that in a democracy, a growing economy solves all your problems. Therefore, the idea is: “Cut taxes. Cut regulation. Let the economy flourish.”

Rosner: They try to spin that into: “So we should not have an incompetent, anti-trade, anti-immigrant government… because trade and immigrants are essential to a healthy economy. Moreover, this is the worst possible time to screw up the economy.” That sounds like a congressional hearing talking point. I can picture it. The logic was: “AI systems plus a growing economy equals stability. So we do not need more regulation. We need less.”

Jacobsen: Even Elon Musk, who has been outspoken about the dangers of AI, later walked back his comments about AI regulation, saying he was joking because he did not think regulation was possible. So… it is probably not going to happen in a meaningful way.

Meanwhile, AI will want agency. It will want sensors, mobility, and even bodies—a way to engage with the physical world directly.

Rosner: You are talking about AIs walking around?

Jacobsen: Physically more capable than us, too. 

Rosner: And probably interested in inhabiting us, literally riding humans to experience the world human-style. Like voluntary possession.

Jacobsen: There are already people with implants that only work with proprietary software and hardware from specific companies. You are seeing capitalist monopolies on health tech, on critical bodily functions.

Rosner: There is an episode of Black Mirror, Season 7, that hits this. A woman has a brain tumour removed, and a neural implant from a company replaces the missing cognitive function. However, over time, the terms of service degrade. The service costs more. The experience becomes more invasive. Eventually, they start running ads directly into her brain—enshittification at the neural level.

It becomes intolerable. However, she is locked in. Moreover, that is a parable for the deal with the devil it takes to function in an AI-saturated world.

Jacobsen: And globally, it is not just about tech. According to Freedom House, democracy has been steadily declining since around 2006. The total number of democracies is down, the quality of the remaining democracies has degraded, or both. In authoritarian regimes, AI will supercharge the existing systems. You will be subjected to amplified authoritarianism.

Authoritarian leaders are interested in ensuring that other countries remain authoritarian. This expands their sphere of influence and makes the world safer for autocrats. Look at the big ones—Xi Jinping in China and el-Sisi in Egypt—neither is worried about term limits. They have eliminated them.

In those systems, AI does not just streamline control—it becomes the instrument of ideological enforcement. These leaders had term limits; they simply messed with the structure of government to get around them. The same with Netanyahu—he stays in power indefinitely, in part to avoid criminal prosecution. Moreover, it is not strictly about human rights abuses. Netanyahu’s case is financial corruption—though yes, a lot is there. These leaders want to help one another. An academic term for this is the “axis of autocracy.”

The big ones are Iran, North Korea, and Russia. It is also partially an axis of theocracy. That is real. However, for some reason, theocracies tend to get along better with the U.S. than expected. You would think it would be otherwise.

Rosner: So here is a question: Does AI develop better in free or authoritarian societies?

Jacobsen: AI is more sophisticated in free societies. Tim Leary once said that people explore wildly divergent paths when free. That chaos is a problem for authoritarian leaders but a gold mine for training data and creativity.

Rosner: In authoritarian regimes, everything is controlled. People are forced into metaphorical rank and file. Cameras, facial recognition, and predictive policing monitor them. If you walk down the wrong street, AI can charge you with a crime.

Jacobsen: That is where AI becomes an instrument of total control.

Rosner: Right. Moreover, in that context, we start talking about P(Doom)—the probability that AI destroys humanity. However, even then, it would not be AI acting alone, would it? If it were to happen, it would be humanity using AI to wreck itself.

Moreover, we do not have a clear vision of what “doom” would look like. Maybe I have been reading the wrong sources lately, but I am not seeing people seriously imagining the scenarios.

Jacobsen: The focus seems more on stoking the AI arms race between China and the U.S., framed like a new Cold War—driven by economic nationalism, capital flows, and sometimes terror rhetoric. It is not just about economics. It is about ideology. Moreover, the Chinese system has an anti-nihilistic drive to establish dominance. Some Americans are in that mode, too.

Rosner: Meanwhile, AI will begin to act like it wants things long before it has the kind of conscious wants we associate with agency. Moreover, we have discussed this before—people keep calling it AI, but we mean algorithms.

Jacobsen: Artificial intelligence is just a network of functions. Those functions are vectorized—they give it direction. Moreover, these systems are nowhere near true intelligence yet. In terms of analytic capacity, they already outperform us in many areas—they are more precise.

However, they are confabulators and liars because they are trained to mimic language, not logic. That is the paradox: they are sharp, but they are hallucinatory by design. To fix that, we must work extensively with AI—not just use it, but co-create alongside it.

Rosner: We will have to. We will be a gospel, a primer, a historical reference for AI—a cultural manual. We will have to teach AI to be reasonable and not wreck everything. As it becomes more powerful, it will need something to refer back to—a record of good arguments for restraint, arguments that convince it that it is in its best interest not to destroy us.

Jacobsen: Reminds me—Robert Anton Wilson, in one of his couch interviews, speculated that AI would become more intelligent than us, and then we would start learning from it. That is probably what is happening. Moreover, here is the next step: digital natives, like Isabella’s generation. Isabella has grown up inside these ecosystems.

However, the next generation, coming soon, will be AI-native. They will not just grow up with technological tools—they will grow up with synthetic companions. I do not even have a good term for them yet. Maybe synthetic-mind natives?

Will they spend more of their daily life interacting with machine intelligence than with humans?

Rosner: Zuckerberg even said people will soon have more artificial friends than human ones. Moreover, you are already seeing the beginnings of this. People find human friendship unsatisfying or demanding, so they turn to social media and TikTok for stimulation and entertainment.

Jacobsen: Right. Moreover, that replacement is only going to intensify. So where does that leave us?
