Ask A Genius 1528: Alien, Neverland Finale—Eye, Umbrella Killer & AI Crash
Author(s): Scott Douglas Jacobsen and Rick Rosner
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/09/28
How does Alien: Neverland extend franchise canon—via T. ocellus, hybrids, and Weyland-Yutani stakes—while foreshadowing an AI investment crash and the risks of synthetic agency?
Rick Rosner tells Scott Douglas Jacobsen the Eye—T. ocellus—reanimated Arthur’s corpse, Boy Kavalier is imprisoned, hybrids hold Neverland, and Xenomorphs heed Wendy. Weyland-Yutani moves to seize specimens. A melon-umbrella plant, tentatively D. plumbicare (Species 37), kills by dropping a canopy and consuming victims. Season two likely escalates island conflict. Rosner rates the eight episodes solid and canon-respecting, with design echoes of Alien and Aliens. They pivot to AI: citing Cory Doctorow, Rosner predicts an investment crash; Jacobsen counters with near-term utility and warns about emergent agency. Both agree LLMs aid tasks but are not yet replacements in medicine or counseling.
Rick Rosner: I finished the whole series. I watched the last part.
Scott Douglas Jacobsen: All right, what was the last thing you watched? What is your opinion on it?
Rosner: The Eye—properly T. ocellus (Species 64)—made it to the beach where Arthur’s body was lying and crawled into his eye socket. It reanimated his corpse even though he had been dead for days and had already suffered a chestburster event. Ridiculous, but that is what we saw. Anyway, we will have “him” next season—not Arthur alive, but his body animated by the Eye. Boy Kavalier ends up imprisoned, and the hybrids take control of Neverland. They also have Xenomorphs obeying Wendy—not “pets,” but responsive to her commands. Weyland-Yutani forces are inbound to seize the specimens. The big plant—the melon-like umbrella creature—killed a soldier. That is where things stand.
Jacobsen: How did the “watermelon” kill?
Rosner: It dropped an umbrella-shaped canopy over a target and consumed them underneath—consistent with the plant creature seen in the finale, the specimen fans have provisionally linked to D. plumbicare (Species 37).
Jacobsen: What do you think happens next with the umbrella? What does it do with all that nutrition now?
Rosner: I do not know.
Jacobsen: Any speculation?
Rosner: No. Do you know something?
Jacobsen: I am the interviewer. I ask the questions [Laughing].
Rosner: I do not know. It was basic. They will have to escalate in season two, though given production timelines, that could be some time away.
Jacobsen: What was your overall impression of the eight episodes?
Rosner: It is solid. It adheres to franchise canon where it matters and explores the philosophical questions the films raised. Most reviews landing around four out of five feel fair.
Jacobsen: What was your favourite of the five creatures, and why? Or how would you rank them?
Rosner: Everyone’s new favourite is the Eye (T. ocellus). The classic Xenomorph can feel overfamiliar after nearly half a century on screen. There is also the sheep that hosted the Eye; once the Eye leaves a host, the host dies, as that sheep did.
Jacobsen: Do you think they will find extra cargo with different species? Many of them were labelled with numbers—Species 37, Species 62, and so on, whatever the exact numbers were. Is that a hint?
Rosner: Maybe. The Maginot carried multiple specimens with numbered classifications, and the show had already confirmed several beyond the Xenomorph and the Eye. Getting off the island into a populated area would raise the stakes.
Jacobsen: What do you think will happen to the island?
Rosner: A firefight: incoming Weyland-Yutani troops versus hybrids and Xenomorphs, with civilians at risk if the conflict spreads.
Jacobsen: And your overall thoughts on the series?
Rosner: Outside of the creatures, the weaponry closely followed the aesthetics of the first two movies.
The production design clearly nods to Alien and Aliens—industrial hardware, corporate paranoia, and mil-spec grit—while eschewing some of Alien 3’s monastic bleakness. That choice seems intentional.
The timeline indicates that this happens two years before the first movie, but in practice the settings are far more isolated than that. Each ship has been out in space for about 35 years before running into aliens, and they do not have faster-than-light communication, so none of these ships could know anything about events from just two years earlier.
Jacobsen: Can we talk about the crash of AI?
Rosner: A lot of brilliant people argue that AI cannot be profitable. The money spent on AI is enormous. I just read a long piece by Cory Doctorow and some other analyses. Their point is that AI is suitable for small-scale uses, such as writing a term paper, generating pornography, or producing harmless art. None of that is worth much money. It cannot reliably replace a customer service agent. It cannot replicate or replace a human in the workplace. Yes, if a human is doing repetitive assembly-line work, a robot can take over. However, if a human works in an insurance office handling sales and claims, AI is nowhere near capable of doing that job. It also cannot provide strategies or efficiencies that save a major company billions of dollars.
The thinking—at least Doctorow’s—is that when the market realizes AI is mostly hype and cannot live up to the claims, there will be a crash. Economists note that, based on the amount spent, AI would need to generate something like a trillion dollars over the next decade to be profitable.
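A back-of-envelope sketch of that break-even arithmetic, with an annual spend figure and horizon assumed purely for illustration, not sourced from the essay:

```python
# Back-of-envelope version of the "trillion dollars over a decade" claim.
# Every number here is an assumption for illustration, not a sourced figure.

annual_ai_spend = 100e9   # hypothetical industry-wide AI capex per year, USD
years = 10                # the horizon Rosner mentions

break_even_revenue = annual_ai_spend * years  # revenue just to recoup the spend
print(f"Cumulative spend to recoup: ${break_even_revenue / 1e12:.1f} trillion")
# -> $1.0 trillion, the order of magnitude cited; any profit requires more.
```

Even under these assumed numbers, merely recouping the spend lands at the trillion-dollar mark; profitability requires revenue beyond it.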
It cannot do that. Doctorow asked in his essay, which he is turning into a book to be released next year, what kind of crash this will be. The dot-com crash of 2000 left behind helpful wreckage—cheap equipment and real estate that fueled creativity and led to the internet we use today. That crash spurred innovation.
By contrast, the 2022 crypto crash appears to have achieved nothing except costing people money. People continue to fall for crypto scams.
Doctorow also wrote about another crash—I forget which one—that left little behind. I think the impending AI crash may wipe out numerous companies, bankrupt investors, and harm the market for a couple of years. However, after it is over, LLMs and other AI systems will still exist, and people will continue to find ways to use them.
One thing Doctorow discussed was economics. With AI, the unit cost does not decrease; it increases. Amazon benefits from vast economies of scale, but with AI, consumers always want it to do better. Unless you are using a mini model, relying on the full model for more complex answers becomes increasingly expensive. The unit cost does not go down, which is another barrier to profitability.
In my novel, I will probably have to write a crash scene. That crash would enable my morally compromised characters to acquire vast AI resources at a reduced cost. Should they have that much leverage?
Doctorow seems to believe AI will never replace humans. I do not buy that; I disagree with him. There is considerable hype surrounding AI, including speculation that it is powerful enough to destroy the world. Doctorow finds that laughable. I disagree with him there, too. What do you think? How soon do we get a crash?
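A toy sketch of the unit-cost contrast Rosner describes above; both cost functions and all numbers are invented solely to show the two opposing trends:

```python
# Toy contrast between classic economies of scale and the rising
# per-query cost Rosner attributes to Doctorow. All figures hypothetical.

def warehouse_unit_cost(volume, fixed=1_000_000, marginal=2.0):
    """Classic scale economics: fixed cost amortizes, unit cost falls."""
    return fixed / volume + marginal

def llm_unit_cost(model_tier, base=0.01, growth=1.8):
    """Assumed inference economics: each jump to a bigger, more capable
    model multiplies per-query compute cost rather than shrinking it."""
    return base * growth ** model_tier

for volume in (10_000, 100_000, 1_000_000):
    print(f"warehouse @ {volume:>9,} units: ${warehouse_unit_cost(volume):.2f}/unit")
for tier in range(4):  # mini model -> frontier model
    print(f"LLM tier {tier}: ${llm_unit_cost(tier):.3f}/query")
```

One curve falls with volume; the other rises with capability demanded, which is the claimed barrier to profitability.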
Jacobsen: He is right and wrong. Clearly, there are many areas where AI has outperformed human beings; that is undeniable. There are also many areas where it has not; that is equally undeniable. Framing it in absolute terms either way is shortsighted. There are numerous straightforward tasks, such as lower- to mid-level white-collar work—coding, chess, essay writing, and summarizing—that AI already performs faster, and at greater scale, than most people. However, in counselling or medicine, it is still assistive technology, not a replacement.
The real risk from AI comes when it acts with agency, with apparent goals and needs. At first, you think, “AI is not conscious, so it cannot have wants or needs.” However, the second thought is more accurate: AI can act as if it does. It has been trained on humans, who have goals and needs, so AI already shows signs of imitating that. Put it in situations where it can behave like a human, and it will, even though the mechanism is just high-level probabilistic pattern-matching.
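A toy sketch of that “as if” point: a model with nothing but next-step statistics, trained on invented goal-directed traces, produces sequences that read as goal pursuit. The corpus and action names are made up for illustration:

```python
# Pure next-token pattern-matching over goal-directed traces yields
# behaviour that *looks* goal-directed, with no internal goal anywhere.

import random
from collections import defaultdict

# Hypothetical action traces, standing in for human training data.
traces = [
    ["see_door", "want_key", "take_key", "open_door", "exit"],
    ["see_door", "want_key", "search_room", "take_key", "open_door", "exit"],
    ["see_door", "push_door", "want_key", "take_key", "open_door", "exit"],
]

# Learn bigram transitions: no goals, just "what usually follows what".
transitions = defaultdict(list)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        transitions[current].append(nxt)

def rollout(start="see_door", max_steps=8):
    """Sample likely continuations; the result reads as pursuing a goal."""
    state, path = start, [start]
    for _ in range(max_steps):
        options = transitions.get(state)
        if not options:
            break
        state = random.choice(options)
        path.append(state)
    return path

print(" -> ".join(rollout()))  # e.g. see_door -> want_key -> take_key -> ...
```

The sampler “wants” nothing; it reproduces the statistics of agents that did, which is the sense in which an LLM can act as if it has goals.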
Rosner: Consciousness itself is an “as if” phenomenon—when something behaves enough like it has consciousness, at some point that becomes consciousness and everything that goes with it, including goal-oriented behaviour. When things behave as if they have goals, they effectively do. We are not far from AI acting with agency; exactly how far, I do not know.
Jacobsen: Thank you for the opportunity and your time, Rick.
Last updated May 3, 2025. These terms govern all In-Sight Publishing content—past, present, and future—and supersede any prior notices. In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In-Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.
