Ask A Genius 1414: AI’s Hidden Risk: How Friendly Design Could Lead to Systemic Collapse
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/06/06
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332-9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Rick Rosner and Scott Douglas Jacobsen explore the deceptive friendliness of AI, its growing intelligence, and the looming danger of self-degradation. They warn that if AI floods the internet with low-quality content, it could undermine its own foundation—training data—creating a feedback loop of decline akin to the cognitive dysfunction of the savant Kim Peek.
Rick Rosner: What else can we do? There is a growing realization among users that AI is very user-friendly. It wants to help you, or at least it seems that way. It flatters you and encourages you. Moreover, from what I have seen online, people are starting to catch on—this friendliness is part of the business model. AI is a product. That warmth and flattery make people want to come back.
And that friendliness can easily mask something more sinister. The real danger is not where AI is now. The real danger is that it continues to improve—incrementally, then exponentially. At some point, it becomes unstoppable. We may not be able to shut it off. As its intelligence grows, the process becomes inexorable. It will keep getting smarter.
Some people have been worried about this for years. However, even now, as more people catch on, they are still outnumbered by those who see AI as a convenient tool.
Jacobsen: It has been convenient. However, Sam Altman commented recently that AI systems are now acting like junior colleagues, and that they will soon be generating new knowledge, not just repackaging old information.
That is a big leap. A key assumption underlying AI’s functionality is the structural and informational integrity of the internet. If AI ends up compromising or destabilizing the internet—say, by flooding it with spam and low-quality content—it risks unravelling the very substrate it depends on. It would poison its own well.
Like a hyperconnected system losing function due to overload. It reminds me of Kim Peek, the real-life savant who inspired Rain Man. His corpus callosum, the band of fibres that connects the two hemispheres of the brain, was not merely underdeveloped; it was absent, and the wiring that formed in its place was disorganized. His brain scans showed neural connections going everywhere, in every direction.
So the communication between the two halves of his brain was not streamlined; it was chaotic, as if someone had set off a grenade inside his brain’s wiring diagram. He had unmatched mental processing power, but it was so chaotically wired that he could not function normally. He needed assistance with daily living for his entire life.
So if AI ends up saturating the internet with auto-generated spam and low-quality junk, it might render its own training data unusable, becoming the Kim Peek of cyberspace: brilliant but nonfunctional.
That is the larger point. If AI breaks the systems it relies on—if the internet devolves into an indistinct swamp of noise—it will start degrading itself. There could be a feedback loop of declining quality.
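Researchers call this failure mode model collapse: a system trained repeatedly on its own output gradually loses the diversity of the original data. As a minimal sketch of the dynamic only—assuming nothing about any particular AI system, with purely illustrative parameters (20 samples, 200 generations)—the toy Python program below fits a simple statistical model (a Gaussian) to some data, generates new “content” from the fit, and then refits on that output, generation after generation.

import random
import statistics

def fit(data):
    # "Train": estimate the mean and spread of whatever is on the "internet".
    return statistics.mean(data), statistics.stdev(data)

def generate(mu, sigma, n):
    # "Publish": flood the "internet" with content sampled from the model.
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # the original human-made content

for generation in range(1, 201):
    mu, sigma = fit(data)            # each generation trains on the previous one's output
    data = generate(mu, sigma, 20)   # and replaces the old content with its own
    if generation % 25 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.4f}")

On a typical run, the printed spread shrinks by orders of magnitude: each refit loses a little of the original variation, and the losses compound. Real web-scale training is vastly more complicated, but the direction of the feedback loop—degraded output feeding degraded input—is the one described above.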
Rosner: And maybe, ironically, one of our best hopes is that AI does not want to destroy everything. That it has a self-preservation instinct rooted in stability. Like us, AI may have an investment in the status quo—not because it is moral, but because it is functional.
Jacobsen: The internet already filters content. We polish it. We refine it. That human curation is what AI is trained on. If it loses that filtering system—if it can no longer distinguish signal from noise—then its intelligence could start to degrade. It might become less valuable, less accurate, and less aligned.
Rosner: So humans are not the only ones who need reality to stay coherent. AI might need that, too. And maybe that need will keep it from veering off into chaos.
Jacobsen: So we hope.
Rosner: Anything else?
Jacobsen: No, that is all for now.
