Ask A Genius 1296: Existential Risks, AI Alignment, and Global Stability
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/03/03
Rick Rosner: So, we can think of three major existential threats to humanity:
- AI run amok
- Climate change
- Nuclear war
And then there’s the more manageable threat of an asteroid impact. Can you think of any other major risks?
Scott Douglas Jacobsen: Supervolcanic eruption. Biological weapons. Solar flares. Gamma-ray bursts, global economic and societal collapse, uncontrolled genetic engineering, alien contact, and unstable particle physics experiments.
Or—if this is a simulation—the operator presses the off button.
Follow-up: In many ways, AI is much more analytically alert than people. It’s disembodied, but artificial intelligence is artificial in the sense that it’s synthetic—its architecture, hardware, and structure are all man-made. But the intelligence is fully real because it’s based on information processing, algorithms, and pattern recognition—just like us.
This is a subtle argument for substrate independence: the idea that intelligence isn’t limited to biological brains. AI’s cognition may be artificial, but the way it thinks is real.
So, follow-up question—of these risks, which do you think are the most dangerous and most probable?
Rosner: Ask the AI.
Jacobsen: Immediate risks with extinction potential—
- AGI misalignment
- Nuclear war
- Engineered pandemics
- Uncontrolled climate change
- Global societal collapse
In the long term—the next 100 years—
- Climate change
- Nuclear war
- Pandemics
- AI disruption
- Global economic collapse
Lower probability, but still catastrophic:
- Asteroid impact
- Supervolcanic eruption
- Gamma-ray burst
- Solar flare
Jacobsen: Final assessment?
Most dangerous and probable:
- AGI misalignment
- Nuclear war
- Climate change
- Engineered pandemics
Jacobsen: Most probable in the next nine years:
- Climate change
- Nuclear war
- Catastrophic pandemics
- Global societal collapse
Low probability but catastrophic:
- Asteroids
- Supervolcanoes
- Gamma-ray bursts
Rosner: Wow. One more follow-up: How likely is it that AI will actually mitigate these risks to humanity and the planet?
Jacobsen: The key frame here is that we have to solve the alignment problem first.
But even beyond that, AI will need to handle:
- Energy efficiency—optimizing energy use
- Predicting and mitigating extreme weather events
- Cybersecurity—preventing AI hacks
- Carbon capture and climate intervention
- Advancing agricultural productivity
AI can help with all of this, if it is properly aligned. But AI-driven solutions tied to rapid economic growth, disruptive technological advances, massive energy consumption, and geoengineering may have unintended consequences. If a super-fracking technique were developed, for example, it could destabilize the Earth’s subsurface structure. That is a significant concern.
I have a list of these issues.
Rosner: On a somewhat unrelated note, I have a question for the AI. The current political leadership in the United States seems particularly dysfunctional. Is this just an accidental blip in history, or is it the result of broader historical trends—particularly technological disruptions and the weaponization of social media as propaganda?
Some argue that this dysfunction is not coincidental but rather a symptom of deeper structural shifts in technology, media, and governance. The claim is that our current political crisis is not accidental; rather, it is an inevitable outcome of these changes.
A follow-up to that: Can human governments recover from their current dysfunction, or will AI be necessary to guide us out of political disorder?
The answer depends on whether our institutions can reform quickly enough. If they cannot, AI-driven systems may need to compensate for human weaknesses.
One more unrelated question: Will most humans soon become the second most intelligent beings on the planet, given the benefits AI will bring?
The answer depends on how AI is integrated into society. Some people will thrive, while others will struggle with a perceived loss of status, purpose, and control. It will be highly individual.
Final question: Will the emergence of artificial consciousness diminish the perceived value of all consciousness?
If artificial consciousness arrives, it will alter how we perceive our own uniqueness, intelligence, and moral worth. Some people will feel less special, while others will remain unfazed. More than anything, AI will expand our understanding of consciousness—if it truly arises.
Non-human cognition is distinct because it could possess speeds beyond human capability, unlimited memory, perfect self-reflection, and emotions far beyond human experience.
Those are all fair points.
Regarding AI and climate change: AI and related technologies might help mitigate climate change, but not before significant damage occurs. However, one emerging trend is that people, preoccupied with technological advancements, are reproducing less. Since climate change is strongly driven by human population growth (more people, greater environmental impact), could this shift in reproductive rates help mitigate climate change sooner than expected?
That said, this discussion does not touch on nuclear weapons. It would be in our best interest to reduce the number of nukes in circulation before AI-related governmental instability exacerbates global tensions. However, given the current leadership in the U.S. and Russia, that seems like a distant hope in the near future.
Jacobsen: I’ll see you tomorrow at the same time.
Rosner: Yes. Talk to you then. Thank you.
Jacobsen: Bye.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
