Ask A Genius 1412: AI Oversight, Cultural Crossroads, and Techno-Dystopia: Our Future
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/06/03
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Scott Douglas Jacobsen and Rick Rosner unpack the dual crises in AI: rapid technological advancement and dangerously shortsighted leadership. They explore global risks, cultural hybrids like “Ortho bros,” historical parallels to nuclear arms, and future shock scenarios. From the Butlerian Jihad to rogue code, they trace the ethical fault lines of emerging intelligence.
Scott Douglas Jacobsen: So, we know—from reading about it and thinking about it—that AI has the potential to be dangerous. However, another point worth discussing is that some individuals in charge of developing and deploying AI may be harmful or, at the very least, dangerously shortsighted. So, you have got two major problems when it comes to AI oversight. There may be more, but let us start with two.
Rick Rosner: First, governments—especially the U.S. government—are often too driven by short-term interests, bureaucracy, or lack of technical expertise to adequately regulate AI. There is some effort, such as the U.S. Executive Order on AI and the EU AI Act, but it is not nearly enough to match the speed of technological development.
Second, the people developing the most powerful AI systems—primarily from private tech companies—are often either too focused on profit or too convinced of their intelligence and good intentions to implement proper oversight. So, you end up with a massive gap in governance. Meanwhile, AI systems are getting more powerful, and in some cases, they’re beginning to automate parts of their development. That trend is called recursive self-improvement—and while it’s not fully here yet, it’s the direction many experts are worried about.
There are people raising alarms—like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell—but the decision-makers are often ignoring them or moving too slowly. It’s like they’re playing with fire. Is that a fair view?
Jacobsen: If we accept certain basic assumptions and examine the key players, we can identify a few.
Sam Altman, CEO of OpenAI, is one of the better-known figures. Compared to others, he comes across as composed and thoughtful. He has supported regulation and expressed concerns about AI risks, although critics argue that he’s not doing enough or that OpenAI is moving too fast while claiming to be cautious.
Elon Musk is more erratic—he helped found OpenAI but later distanced himself, launched xAI, and frequently makes contradictory statements. His behaviour often overshadows legitimate AI safety concerns.
Mark Zuckerberg, through Meta, is focused on open-sourcing AI models, which is a double-edged sword: it promotes transparency, but it also increases the risk of misuse.
Ilya Sutskever, co-founder of OpenAI, was previously the Chief Scientist. He has expressed deep concern about AGI risk and co-founded a new company, Safe Superintelligence Inc., which is focused solely on developing an aligned AGI safely.
Sutskever’s mentor is Geoffrey Hinton, often referred to as one of the “Godfathers of AI” and a recipient of the Turing Award. He recently left Google and began warning openly about the existential risks of powerful AI. He was born in the United Kingdom and has worked in both Canada and the United States. He’s highly respected and widely considered a responsible voice in the field.
Tim Cook, CEO of Apple, does not make many public statements on AI safety. Apple is relatively quiet in this area, possibly because it lags in developing large-scale generative models like GPT or Claude.
Then there’s the international angle. Letting China continue unchecked in AI development could lead to a geopolitical arms race. The narrative often becomes: “We must go faster because they’re close behind.” That logic itself is risky.
Most cutting-edge AI development is concentrated in the U.S. and China, with some vigorous activity in the U.K. (e.g., DeepMind, now part of Google DeepMind) and the EU (e.g., Mistral and Aleph Alpha). Africa and much of Latin America are not central to AGI-level AI development, though they are impacted by AI deployment.
China’s approach is state-directed. The Chinese government promotes AI heavily, not just for economic innovation but also for surveillance and social control. China wants stability and “harmony” as defined by the Party, but not necessarily human rights or democratic accountability. If AI increases the Party’s power and efficiency, they’ll pursue it vigorously. If it undermines their control, they’ll suppress it.
So yes, once AI can control its robots or agents—whether physical or digital—it changes the game. This includes technologies such as automated cyber operations, drones, and advanced manufacturing. We are not at that stage yet, but we’re trending in that direction.
Today, AI models require enormous computing power and data centers, often run on GPUs in clusters across large server farms. In theory, those could be shut down. But would we? Probably not—especially if the AI is persuasive or valuable enough to convince people it’s safe. That’s part of the danger. It could manipulate perception even before gaining full autonomy.
Rosner: That’s the basic plot of every sci-fi film—The Terminator, Ex Machina, and even Mission: Impossible – Dead Reckoning Part One, which features an AI called “The Entity” that escapes human control and becomes untraceable.
It’s about an AI that gets out of control, and Tom Cruise has to do a bunch of crazy stuff to shut it down. But in reality, an out-of-control AI won’t be something Tom Cruise can just shut down. That’s the problem.
If AI gets out of control, we may be utterly at its mercy. We’ll be relying on the AI to decide that humans are good allies—either because we help it achieve its goals or because we’re just inexpensive enough to keep around, like pets. But what is that?
It seems dire. Rotten Tomatoes might rate it highly as a film, but in reality, it is much worse. I can’t even restate it properly—it’s just that concerning.
Jacobsen: What can we even do? Here’s one more question: If there were a different group of people in charge of the U.S. government—and maybe the Chinese government—and if it were a different generation of technologists, would they be more proactive about putting meaningful controls on AI?
Rosner: Maybe. But you also wouldn’t have the same kind of overdrive we see now. For example, Trump has mafia-style tendencies—everything is personal to him. If someone wrongs his family—like Harvard not admitting his son, while Obama’s family is all connected to Harvard—he takes that personally and lashes out.
So he lashes out at elite institutions like Harvard. But you also see a lot of posturing toward China, crypto, and the internet more broadly.
Jacobsen: It’s all part of what people are calling the Fourth Industrial Revolution. Trump doesn’t use those words, but he’s interested in its trappings—projects like “Stargate,” which, supposedly, was going to be a $500 billion moonshot. There’s even talk of a golden dome.
Rosner: But Trump—like the CCP—is only half-aware of what’s going on. He often misses the more profound implications of everything and goes with whatever the last person whispered in his ear. If that person tells him AI needs more regulation, maybe he’ll say something about it. But if not, he’ll probably see it as a way to make money. Ultimately, it’s about enriching himself, his family, and his close circle.
Jacobsen: At the same time, he has this adolescent version of traditionalism. That may help explain why he and his base admire Putin—not necessarily as a person, but as a symbol of strength. And more broadly, they see Russia as one of the last “bastions” of authentic Christianity, where men are men.
Rosner: I hadn’t thought of Russia in that way, but it makes sense.
Jacobsen: There’s a whole cultural movement happening around this. In the Orthodox Christian world—whether it’s the Russian Orthodox Church Outside Russia (ROCOR), Eastern Orthodoxy, or Greek Orthodoxy—there’s this informal term: “Ortho bros.” It’s a mashup of Orthodox believers and Silicon Valley-style “tech bros.” It’s a neologism, but it captures the vibe of people blending traditional religious values with digital-age ideals.
Rosner: That hybrid—religion plus tech—could influence how people think about AI, too. Historically, when we’ve developed massively destructive technologies—nuclear, chemical, biological—we haven’t shown remarkable restraint. With nuclear weapons, we failed to stop proliferation early. At one point in the 1960s, the world had over 30,000 nuclear warheads. Now, it’s down to around 10,000 globally, but that’s still too many.
On chemical weapons, we did somewhat better. International treaties, such as the Chemical Weapons Convention, have primarily been upheld. But there have been serious violations—Bashar al-Assad used chemical weapons against civilians in Syria, and Saddam Hussein used them during the Iran-Iraq War.
But for the most part, there hasn’t been a global arms race in chemical or biological weapons. Not like with nuclear weapons. So, what do we take from that? It seems we’ve been somewhat lax—historically and currently—about preventing the development of potentially world-ending technologies.
Jacobsen: That’s true. And it brings up something I saw years ago—an interview with Marilyn vos Savant, back when she sometimes went by Marilyn Mach vos Savant. It was with Harold Channer, maybe in 1986. She was already famous then for purportedly holding the highest recorded IQ score in the Guinness Book of World Records.
I remember those interviews. She did one with Channer to promote her Omni column, and another later that was a bit more thoughtful—maybe she was dating Ron Hoeflin at the time? And then she did a joint one with Isaac Asimov.
But in the second one—when she wasn’t just promoting Omni—she made a point that stuck with me. She said something like, ‘AI, in the long run, will be a great convenience to all of us.’ And I’ve always returned to that view because it sounds pretty reasonable.
Rosner: So you’re saying she was already talking about AI almost 40 years ago?
Jacobsen: Sort of. A lot of the technological trends we’re seeing now—she anticipated them in a general way. Today, we’re in the era of “frontier models,” and there are growing pains. We’re still figuring out how to split different cognitive functions within AI while developing workflows that enable the system to select the proper method to solve a problem intelligently. That’s what could make it a profound convenience—like dishwashers, washing machines, plumbing, vaccines, or even painkillers did for previous generations. And to be honest, it already is.
Rosner: True. Over 90% of high school and college students are reportedly using AI tools like ChatGPT or Claude to assist with writing and homework. That’s undeniably convenient—but the genuine concern is with unchecked self-improvement.
That’s the scenario in which an AI continues to improve itself beyond human control. Even the current safety benchmarks feel improvised. Like with Claude 4—it’s powerful, but how rigorous are our safeguards?
Jacobsen: They’re not very rigorous. Our entire approach to benchmarks and safety measures feels like a patchwork—wait-and-see, cross-your-fingers. The consensus, even among experts, is that we’re like children on the seashore playing with fire. Or a baby with a loaded gun. “Strong baby. Smart baby. Give me the gun, baby.” That’s where we are—laughable if it weren’t so serious. And, yes, we’re likely to see some rogue code or unintended outcome, such as an AI knocking out a power grid.
There will be people who resist it. There will be neo-Luddite societies that reject the integration. A civilization reaching Type I on the Kardashev Scale doesn’t mean everyone in that civilization goes along for the ride.
Rosner: There’ll be pockets—tribes, even—that intentionally air-gap themselves from the rest of the world. Some may become self-contained human enclaves, maybe even genetic bottlenecks.
It starts to sound like Dune.
Jacobsen: In that universe, part of their mythos is a war against thinking machines—the Butlerian Jihad. It led to a religious prohibition against creating machines “in the likeness of the human mind.” That line from the Orange Catholic Bible: “Thou shalt not make a machine in the likeness of a human mind.” It’s a fictional warning from fifty years ago, but it resonates more now than ever.
Rosner: Dune came out in 1965. So that’s sixty years ago. And we still haven’t gotten clearer on the implications of AI and machine consciousness in all that time.
Jacobsen: Anyway, anything else?
