
Ask A Genius 1377: AI Experts Warn of Transformative Risks: Reflections from Hinton, Bengio, Russell, LeCun, and Hassabis

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/05/15

Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.

Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.

Scott Douglas Jacobsen interviews Rick Rosner for insights on quotes from AI leaders like Hinton, Bengio, Russell, LeCun, and Hassabis. Rosner highlights the dangers of conflating intelligence with agency, warns of hyper-competent AI misalignment, and reflects on societal transformation, suggesting humanity may soon be reshaped in irreversible ways.

Scott Douglas Jacobsen: Now, let me run you some quotes from leading figures in AI. They are short—a few thoughts—and I would love some reflections, long or brief.

Geoffrey Hinton, pioneer of deep learning, often called the “Godfather of AI,” said in 2023, around the time of his resignation from Google over AI risk concerns:

“I have suddenly switched my views on whether these things will be more intelligent than us. They are close to it now and will be much more intelligent soon.”

Your thoughts?

Rick Rosner: Well, everyone is saying the same fucking thing now. It has reached the point where there is a recognized term for this moment—the San Francisco Consensus—referring to how many of these significant figures in AI, primarily men, are warning: “Here it comes,” or “It is already here.”

But here’s the confusion—people conflate analytical power with agency. Just because AI is intelligent doesn’t mean it’s autonomous. Projecting agency onto a system because it’s highly analytical may be the ultimate pathetic fallacy. The Terminator scenario assumes that as soon as AI becomes conscious, it becomes autonomous and destructive.

But in that story, the AI has already been given massive agency—control over nukes, robot armies, and infrastructure. Then it becomes conscious. We are not there. Right now, we have smart AI but no meaningful agency.

Rosner: How long does that phase last? I have no idea. 

Jacobsen: Now, on to another giant: Yoshua Bengio, Turing Award winner and co-pioneer of deep learning. He said:

“We are not ready for intelligent machines. Building them before understanding how to align them with our values might create entities we cannot control.”

A reasonable statement, no?

Rosner: Yes, but here is the thing—would we ever be ready for it? I cannot imagine a scenario where this all unfolds calmly, deliberately, in an “aligned” fashion. I do not see how we would slow down and ease into it. This is not how history works.

Jacobsen: Some people say we should be cautious. But the ones in charge of the tech? They mostly want to talk about alignment while building faster. The supposed competition with China becomes a placeholder justification: “We need this for national security.”

That kind of framing justifies pouring $100 billion into AI development. It is not research—it is about building and computing power. We do not currently have the infrastructure to dominate that space like we dominate other industries.

Rosner: Also—the U.S. is currently run by fucking idiots who, to the extent that they have any philosophy at all, are philosophically committed to obliterating government oversight.

If you somehow had a Jimmy Carter in charge—someone earnest, morally serious—would that save us? I do not know. The momentum of this shitstorm seems powerful enough to circumvent almost any form of oversight. We have had good oversight in certain areas, like genetic engineering. We are not cloning people. However, I see nothing close to that level of restraint or regulation about AI.

A post-transformational form of society is on the horizon. We will get through the transformation. The question is whether humanity will be in any shape to enjoy it. Will being human 20 years from now still mean something positive—or will it be a version of hell on Earth? That is still up in the air.

Civilization will continue, in some form. But the transformation—from humans being the alpha thinkers on the planet to AI taking that role—is real and likely irreversible.

Rosner: It will not be the end of everything, but might be the end of enjoyable humanity. That is the part I worry about. Still, humanity will have some place in a transformed world. 

Jacobsen: These are reliable names and good quotes—not fringe, not hype. This next one is from Stuart Russell, AI safety expert and professor at UC Berkeley. He said:

“The biggest risk is not malice, but competence. A superintelligent AI will be extremely good at accomplishing its goals. If those goals are not aligned with ours, we are in trouble.”

Jacobsen: Want to comment more on that one?

Rosner: I agree. The key point is this: the danger is not that AI becomes evil but that it becomes hyper-competent with goals that diverge from ours. Moreover, it is unclear whether AI will end up entirely on the side of order and preservation or something else.

There’s a great science fiction novel from around 1984 called Blood Music. It’s an awesome title. The premise is that a scientist doing genetic engineering creates intelligent nano-organisms. Because they’re so small, they think extremely fast and rapidly evolve a civilization inside his body.

Eventually, they realize they are inside a body and transform it, giving the guy enhanced abilities. However, he freaks out and kills himself. The organisms escape into the wider world and begin remaking it.

It started as a short story, then became a novel. I think it was by Greg Bear. 

Jacobsen: He is a good writer. Is he still alive? I think he died recently. 

Rosner: It has been over 40 years since Blood Music came out, so… he has had time to be dead. The transformations in that book of the body and the world are terrifying, but the organisms are benign. They plan to reengineer everything, but not to destroy humanity.

Humans aren’t obliterated—they’re preserved, even if the human environment is radically changed. And that’s the hope for AI, too: that it transforms the world but doesn’t see the need to fuck over humanity in the process.

And that’s a reasonable expectation in the early days. But the question becomes: Can humans become host-humans—can we hitch ourselves onto AI in an intimate enough way that we maintain some kind of agency in a transformed world?

Or are we going to become AI’s bitches? That remains to be seen. 

Jacobsen: This one is from Yann LeCun, Chief AI Scientist at Meta and deep learning pioneer:

“There is no such thing as an intelligence explosion. There is no reason AI should become in control just because it is more capable.”

Jacobsen: Then from Demis Hassabis, CEO of DeepMind:

“We are trying to understand and recreate intelligence artificially, which is the most important problem science can tackle.”

Rosner: I disagree with LeCun’s quote. AI will keep getting smarter. We’ve never had to contend with anything in the world that’s not only smarter than humans but also continuously, exponentially, getting smarter.

Human brains haven’t improved in over 100,000 years. Sure, we’ve learned how to use them better—we’ve developed science, built tools, and created devices that augment cognition. But biologically, we’re not evolving fast enough to keep up.

Meanwhile, AI will keep accelerating. We aim to piggyback on AI through implantable chips, contact lenses, and intimate integrations between our brains and AI processing infrastructure. That is how we stay relevant.

Jacobsen: Don’t you think all these perspectives—LeCun, Hassabis, Hinton, Bengio—they all have some legitimacy? Each one brings a real angle. These are the people leading the field. They compete with each other; they have different philosophical and technical outlooks, but they are all circling the same black sphere.

It’s like they’re all shining little lights on this monolith, and no one has the full picture yet.

Rosner: But it does feel like a phase change. We had one in 2008 when smartphones exploded into the market. It transformed society. There was one in 1998 or 2004 with Google, but Google was just a new interface. Smartphones changed the game.

Suddenly, there were more than 7 billion smartphones—almost one per person on Earth. That’s a phase change, and AI is even more transformational than that. Smartphones distract us. One of the knock-on effects was a baby shortage. It’s not just economics—though some say people can’t afford kids—but I think it’s more about distraction. We’re so absorbed in streaming, scrolling, and working that people stopped coupling up.

If it were just economics, people would still be hooking up and accidentally having babies. But now, smartphones, entertainment, and dopamine loops are pulling us out of basic human mating behaviour.

Moreover, there is more. We have elected a durable group of shitty politicians. They are deeply entrenched, and yes—they can be voted out, but it will be narrow, and the process is slow. The triumph of the fuckheads has been facilitated by smartphones and social media. The attention economy handed power to the most manipulative, the most theatrical. That is part of the transformation, too.

