
Ask A Genius 1299: AI, Nuclear Deterrence, and the Future of Human Labor

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/03/03

Rick Rosner: So, in the book, we’re going to have Skynet Day. The U.S. and Russia each have about 1,550 deployed nuclear warheads under New START limits, but their total stockpiles, including retired warheads, are much higher. Not all of them are battle-ready: maybe a third are immediately usable, and hundreds more on each side could be made ready within weeks or months. The world has lived with this reality for decades, with nuclear deterrence preventing large-scale war. But what happens when an AGI/ASI decides that this much destructive power in human hands is unacceptable?

It intervenes. It hacks into the nuclear systems—somehow bypassing air-gapped safeguards, multi-layered encryption, and human oversight—and launches one nuclear weapon from each side. A five- or ten-kiloton weapon. One from America. One from Russia. Before doing so, it broadcasts a message to the entire world:

“This is going to happen. You cannot stop it.”

Then it launches them straight up. Thirty miles into the sky. And detonates them. The entire world watches in horror as two nuclear fireballs explode harmlessly in the upper atmosphere. No casualties. No cities destroyed. Just a blinding reminder of what AI can do. Then it issues a final warning:

“Do you really want this many nukes in the world?”

Beyond the U.S. and Russia, other nuclear-armed nations—China, France, the UK, India, Pakistan, North Korea, and possibly Israel—also hold stockpiles. AI could theoretically penetrate their systems if they lacked robust cybersecurity, air-gapped controls, or human intervention safeguards. Then AI could issue an ultimatum: “Agree to nuclear disarmament talks within a week—or we do it again.” Governments scramble. Diplomats rush to emergency meetings. Military leaders argue over whether AI can be trusted or controlled. Some nations refuse to comply. AI launches again. It doesn’t need to target cities—just another terrifying display, forcing humanity to confront its self-destructive tendencies.

Would it work? Would the world comply? Or would nations retaliate against AI, trying to shut it down before it dictates global policy? The ultimate question: is this a rogue intelligence acting in the world’s best interest, or an authoritarian enforcer of peace?

Another issue we’ll face by the 2040s is underemployment due to automation and AI replacing human labor. Governments will be forced to rethink economic structures as entire industries become obsolete. But in the book, I’ve devised a Matrix-style employment solution: a network called Mesh, where people have brain implants—chips enhancing cognitive functions or allowing them to contribute computational power to AI-driven economies. The future of labor may not be about physical work but rather integrating human cognition with machine intelligence. Those who opt in will live in an entirely different socioeconomic class, capable of heightened productivity, enhanced learning, and even direct interfacing with AI models. Those who resist? They may become the new underclass—struggling in a world where AI has outpaced traditional skill sets.

Scott Douglas Jacobsen: AI computing is expensive to run—but how expensive? Sam Altman posted on X about this. Some claims of AI’s energy use are exaggerated, but large-scale AI models do require vast computing resources. 

Rosner: Training a single large AI model can consume millions of kilowatt-hours, roughly what thousands of homes draw over a month or two. Some data centers use as much electricity as small cities. Water is a major issue, too: cooling AI data centers can require millions of gallons. If the world continues down this path, AI will soon be one of the largest consumers of energy on the planet.
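As a rough sanity check on those orders of magnitude, here is a back-of-envelope sketch. The training-energy and household figures below are illustrative assumptions, not measurements from any particular model or utility:

```python
# Back-of-envelope: AI training energy vs. household electricity use.
# Both constants are round, assumed figures for illustration only.

TRAINING_ENERGY_KWH = 5_000_000  # assumed: a few million kWh for one large training run
HOME_ANNUAL_KWH = 10_500         # assumed: rough annual use of an average U.S. home

home_years = TRAINING_ENERGY_KWH / HOME_ANNUAL_KWH
print(f"One training run ~= {home_years:.0f} homes powered for a full year")
# -> about 476 home-years, i.e., thousands of homes for a month or two
```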

But comparing AI’s resource use to a single hamburger’s carbon footprint is misleading—food production involves land, livestock, water, and methane emissions, whereas AI primarily consumes electricity and cooling resources. Altman is one of the leading figures in AI entrepreneurship, but critics argue he downplays energy concerns. He has an incentive to push the idea that AI will be sustainable long-term, but others warn that the computing power required to scale artificial intelligence to AGI levels could make it an energy hog worse than crypto mining.

Musk? He is unpredictable but plays a key role in AI debates. While he warns of AI dangers, his companies also actively develop it. The irony isn’t lost on anyone. There are other major AI leaders, some pragmatic, some idealistic, some reckless. One thing is certain: AI’s energy consumption, ethical risks, and potential dominance will be major discussions in the future.

They already are, but they will be even more so in the future. So, anyway, if you don’t want to run AI to handle whatever task you need done, you can borrow somebody’s brain, or a set of brains, and run the task through their bio-circuitry to get thoughts, vibes, or even deeper processing. You’re running your calculation through a bunch of brains, and the cost is comparable, or maybe even a little less, because biological thinking is far less energy-intensive than AI computation. You’re also not spending extra energy, because these people would be alive anyway. It’s a way to get people paid in a world where many are underemployed.
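The energy gap behind that claim is easy to sketch. Assuming a human brain runs on roughly 20 watts while a single high-end data-center accelerator can draw around 700 (both round figures, not tied to any specific chip):

```python
# Rough comparison of biological vs. silicon power draw while "thinking".
# Both constants are round assumptions for illustration only.

BRAIN_WATTS = 20   # assumed: typical power consumption of a human brain
GPU_WATTS = 700    # assumed: peak draw of one high-end AI accelerator

ratio = GPU_WATTS / BRAIN_WATTS
print(f"One accelerator draws about {ratio:.0f}x a brain's power")
# -> about 35x, before counting cooling and networking overhead
```

That ratio is per device; a large training run can span thousands of accelerators at once, which is what makes the biological side look cheap by comparison.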

And there will be many ways to ride people, to integrate their cognition into something bigger. If they’re meshed, if their brains are hooked into an information exchange network, people will be able to sell their lived experience. Imagine a GoPro—except it comes directly out of your brain. To some extent, you’ll be able to experience their thoughts and feelings. If you want to marionette them, whether for sexy fun, parkour, or just slice-of-life experiences, all of that will become increasingly possible.

Or, if not this idealized version, then some shitty, corporate-controlled version that’s just good enough for people to buy into it. Right? Because it’s always shitty. By the time it gets to market, the technology loses its magic. Every new advancement that would have been mind-blowing a decade ago now feels meh the moment we get used to it. That’s just how it goes.

Jacobsen: Every technology we have today that would have been astonishing ten years ago feels mundane now.

Rosner: Yes. By the time it reaches mass adoption, we become spoiled by it. Every tech innovation feels underwhelming once it’s in our hands. AI seemed wondrous when it first started generating insanely detailed art, but now? No one cares. AI porn is a whole other thing.

That’s different. AI porn is… something else, Scott. I’ve gotta tell you. It’s grotesque. Every woman’s boobs are the size of basketballs. Her asshole is blown out. And in many cases filthy, because, somehow, AI assumes that’s what people want. It’s like everything is exaggerated. And because AI porn is relentless, it jades people faster than regular porn. The sheer volume and speed of content generation make it overwhelming.

And I’d bet it can be dangerous, not because it “rewires your brain,” but because it skirts the edges of legality. California now has a law against generating AI-created sexual images of minors, a good law, and someone has already been prosecuted under it. But the problem is that AI can generate anything, and that’s a legal minefield.

For example, the female characters from The Incredibles are weirdly popular in AI-generated porn. Mrs. Incredible? Sure. But the daughter? That gets risky, because even if the movie never specifies her age, she isn’t an adult. I don’t know what the legal argument would be, but I don’t want to see her popping up—because it’s a swamp I don’t want to step into.

Jacobsen: [Laughing] Quick question, how did we get onto this?

Rosner: I had another topic, but hold on, an addendum first. The site I check from time to time seems to be censoring images now. My guess is they’re doing their best to stay ahead of AI-generated content, but I don’t know if they can keep up with it.

