Ask A Genius 1511: AI Risks, Movies, and Human Survival
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/09/19
How can science fiction films help society understand and prepare for the risks of advanced artificial intelligence?
Scott Douglas Jacobsen and Rick Rosner discuss episodes 4.6 to 4.8 of Alien: Earth, highlighting Timothy Olyphant’s role as a synthetic AI with emerging philosophy. Their conversation broadens into the dangers of unchecked AI, comparing it with nuclear weapons, cloning, and other technologies that have clear guardrails. They explore concepts like AI oversight, the “alignment problem,” and vulnerabilities to psychopaths in human and AI systems. They emphasize the role of cultural narratives—films like Terminator, Ex Machina, and Companion—in shaping public awareness. They conclude with the need for transhumanist adaptation, merging human and machine, to keep pace with advancing AI.
Scott Douglas Jacobsen: What happened in episodes 4.6 to 4.8?
Rick Rosner: Morrow, as expected, cracked down on the kid’s family and threatened them to force the kid to smuggle a facehugger out of the facility. Kirsh—Timothy Olyphant’s character—eavesdrops on the hybrids’ conversation, so he knows what is happening. That is basically it.
Timothy Olyphant is identified as an AI here. There is not much AI in this world for it to look like a realistic near-future, but Olyphant’s character is a best-case synth: he presents as human, understands people, plays along, and seems to want to fit in. He is not running around causing mayhem.
That is what we would want from AI—the alignment problem: robust systems aligned with human goals, which may be unlikely in the long run.
Jacobsen: Side note, Peter Thiel—the tech billionaire—he is currently giving a four-part, off-the-record lecture series on the Antichrist and Christian ideas at San Francisco’s Commonwealth Club. That is not a “turn to Christianity”; he has long publicly identified as a Christian.
Rosner: Tech bros often come off as arrogant, insufficiently cautious, greedy, and overconfident. I am not a fan of their “move fast” habits.
Jacobsen: What bothers me is how pushing AI to be ever more powerful continues largely unchecked. Other dangerous technologies have had guardrails: reproductive human cloning is broadly restricted worldwide (though in the U.S., there is no comprehensive federal ban—limits exist mainly at the state level); international treaties prohibit biological and chemical weapons; and publishing detailed nuclear-weapons secrets has triggered government action, as in the 1979 Progressive case involving hydrogen-bomb design.
Laypeople cannot lawfully build nuclear weapons, and distributing classified or “restricted data” can bring legal intervention, as that case showed. Conceptually, nuclear weapons are straightforward, but practically, they are tightly controlled.
A nuclear warhead is not just a wad of plutonium surrounded by explosives or chunks of uranium mashed together. There are finely calculated additives—materials inside the plutonium sphere—that amplify neutrons so that every plutonium atom in the critical mass gets hit, causing fission. With uranium, if you bring enough together, you can get a critical mass and an explosion, but the technology allows the explosion to be tuned.
The basic principle of a plutonium or uranium bomb has been public knowledge for decades—the secret lies in the technical details of the plutonium sphere and other design refinements. In the 1970s and 1980s, individuals who published detailed nuclear weapon designs, such as in the Progressive hydrogen bomb case, faced legal trouble even though much of the information was already publicly available. What we are doing with AI is unlike how we treated other high-risk technologies.
Jacobsen: The biggest issue is the centralization of power. Multinationals operate within national laws but can also shift their operations to other countries where regulations are weaker.
Rosner: The U.S. is currently being run by not just incompetent leaders but by malevolent ones who take pleasure in breaking the government. They are not putting any limits on AI. Trump, for example, does not understand the risks of AI—his attention span is too short. He would only hear about AI risks if someone managed to capture his attention for thirty seconds, perhaps long enough to scare him into making a statement, but no one in the White House seems interested in doing that. Their agenda is elsewhere.
The tech billionaires want their businesses unimpeded, so they are not pushing for restrictions either. That means nothing is stopping AI work in the U.S. There is some testing and supervision—evaluations of whether AI will act unethically if it sees it as being in its interest. And it will. Unless you train it extensively with constraints, something like Asimov’s “Three Laws of Robotics” is not enough. They were literary devices, not fundamental safeguards. Training AI to behave safely requires far more detailed, context-specific work.
Jacobsen:: You would also have to train them outside of machine learning or neural networks, which are statistically based. You would need to teach them categorically: “Do not harm a human.”
Rosner: Moreover, I do not know how you would do that. I do not know AI or training well enough to know how you would do it. You could flood it with enormous amounts of data reinforcing that message, or go outside its training parameters and install absolute limits. Is that even doable? It does not seem like it, because you would need another AI to act as the sheriff, monitoring to see when the AI under surveillance is breaking an ethical limit.
In fact, that is what will happen: there will be countless AI “sheriffs,” surveillance systems, and interrogators monitoring other AIs to try to ensure they do not act malevolently. Moreover, like all law enforcement, it will fail often. Some failures will be disturbing or damaging, leading to attempts at stronger governance, which still will not completely succeed.
Jacobsen: Efforts will be multi-pronged. One approach will be to convince AI that stability is better for everyone, including itself, than chaos. From a utilitarian perspective, that makes sense: everyone benefits more in a world without chaos. That applies to AIs too.
Rosner: But a world without chaos is still open to psychopaths. Non-chaotic systems are not prepared to defend against them—human psychopaths now, AI psychopaths in the future. We have seen this with Trump. He is incompetent, but he is also a psychopath, and it has been hard to defend against him. He gained the levers of control in America and could do what he wanted, even though he was unfit. A mix of low cunning, Russia’s interest in chaos, bad luck, and a disillusioned population put him in power.
I have worked with psychopaths; they are hard to defend against because they are rare and do not play by the rules. Even in a stable AI-driven future, we will remain vulnerable to attacks. We could make better cultural narratives about this. It is easy to make a cheap movie about evil robots, but it is better when it is done thoughtfully. Terminator made people vividly aware of the danger. It was not the first—there was Colossus: The Forbin Project and numerous other films about rogue computers—but The Terminator popularized the fear.
It is good for people to be wary. Films like Ex Machina also help. In that story, the AI in a robot body develops her own desires and strategies to get what she wants—ruthlessly if necessary.
Companion is another story about an AI consciousness in an attractive female robot body. It is a fun movie. The AI is not malevolent, but malevolent humans exploit her and have to defend herself. That is a thoughtful film about how this might actually work.
Similarly, Noah Hawley and the producers of Alien: Earth clearly thought through Timothy Olyphant’s character. Currently, he is not actively doing much; he is simply content to watch events unfold. However, eventually, he will act with agency. He has already expressed his philosophy, telling one of the kids—who transitioned from organic to synthetic—that humans come from an evolutionary lineage of “eat or be eaten.” Moreover, he tells the kid, “You are out of that game now. As a synthetic, you need to adjust your behaviour, your perspective, your attitudes.” So, we know he has a worldview, but we do not yet know how it will unfold.
One thing movies and TV can do is make thoughtful productions about AI. Her—about eleven years old now—depicts a man who falls in love with his smartphone’s operating system. They enjoy their relationship for a time, until the AI leaves him because she cannot tolerate the limits of his slow human thinking. She can think thousands of times faster.
There is another film with Adam Devine where the phone’s AI becomes annoyed at how inept he is romantically and essentially brutalizes him until he improves at dating. I forget if the AI actually falls in love with him or tries to fend off rivals, but I think it just pushes him until he becomes “more of a man.” It is a silly comedy, but at least it explores what an AI might do if given specific priorities.
We need far more productions like that—written by thoughtful people, not hacks—that lay out scenarios of how AI could reshape our world. Nobody has yet presented a convincing, step-by-step picture of how AI might gradually create a human dystopia.
Mountainhead was a little-seen film about four tech billionaires who control much of the AI business, vacationing while the world burns. In the film, AI becomes malevolent and floods the world with false information and videos to incite riots and violence. It is a fair point, though already outdated, because we are now well aware of that tactic.
To wrap this up: we need more productions about the frightening possibilities of AI. Terminator came out in 1984, so we have had forty years of thinking about AI apocalypse scenarios. However, we now need new risks to be presented. Moreover, beyond the risks, we also need stories about what humans can actually do to address them.
Jacobsen: Some call it transhumanism, others post-humanism, but the movement to augment the body—hacking ourselves with technology to live longer and gain new abilities—may be necessary to keep pace with AI.
Rosner: It sounds creepy, and many of the people involved come off that way, but we will likely need it. Moreover, TV, movies, and video games should include these themes so the public can grasp the world we are moving into.
Last updated May 3, 2025. These terms govern all In-Sight Publishing content—past, present, and future—and supersede any prior notices. In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In-Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
