
Ask A Genius 1366: How AI Language Models Are Changing Education, Cognition, and Access to Knowledge

2025-06-13

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/05/09

Rick Rosner explores the growing role of large language models (LLMs) like ChatGPT, Claude, and Gemini in simulating high-level abstract reasoning. While these tools mimic intelligence through vast data training, Rosner questions whether the difference between simulation and real understanding matters for practical tasks like writing university essays. They discuss the democratization of cognitive labor, structural inequality in access and use, and how individuals with slight cognitive advantages or technical fluency may best leverage AI. They also touch on changing cultural behaviors, the decline of traditional skills, and the shifting moral and cognitive landscape shaped by AI.

Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.

Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.

Scott Douglas Jacobsen: You have mentioned many times in our collaborations how high-level abstract reasoning may eventually be replicated, at least in form, by synthetic systems, right? Computers and large-scale algorithms can simulate specific patterns of thought. You mentioned, in a recent session, that systems like ChatGPT, Claude (by Anthropic), Gemini (formerly Google Bard), Grok (from xAI), and other cloud-based LLMs can give the appearance of reasoning that might match the output of an average college graduate—or someone slightly above average—depending on the context.

And that capacity is improving steadily. It gives the impression of intelligence because it is trained on enormous corpora of human-generated data, incorporating linguistic patterns, reasoning chains, stylistic signals, sentiment, and structural features derived from human cognition: tone, implied intent, behavioural patterns, and inferred interests. There is, of course, a fundamental distinction between the simulation of high-level thinking and the genuine experience or understanding of it.

But does that difference matter in practical terms for tasks like writing a university term paper with ChatGPT?

Rick Rosner: For many users, the answer is no. Moreover, that is a fascinating point.

Now I want to build a framework around this idea. So, you can follow that path of reasoning further. I do not know where it ends. My grandmother had chronic respiratory issues late in life, and sometimes when she spoke, it came with a gurgling sound from mucus in her throat. You would have to speak on her behalf sometimes.

My grandfather had what may have been Zenker’s diverticulum—a pharyngeal pouch near the esophagus—so phlegm would build up, and he would frequently clear his throat or spit into the sink or toilet. As a child, I would go to the bathroom and see what he left behind. That was not fun.

Jacobsen: So, refining some of the ideas we have previously discussed, primarily your ideas, we now have the added variable of character in AI-generated responses. The older concept of rising cognitive augmentation is that these tools allow more people to access complex reasoning outputs. You might call it the “democratization of abstract thought.” What exactly qualifies as “higher-level thought”?

Rosner: We’ve explored that already. The era of IQ as a strict proxy for potential may be fading. Anyone with access to a sophisticated LLM—and the skill to use it well—can perform at a much higher level than their unaided cognition might allow. This already mirrors how calculators or search engines extended human cognitive range in many ways.

A recent report said Alphabet Inc.’s stock temporarily dipped after analysts noted a year-over-year decline in traditional Google Search queries. One reason cited was that more users were turning to generative AI models like ChatGPT to find information.

Rosner: Sam Altman, the CEO of OpenAI, may—if OpenAI’s trajectory holds—become one of the wealthiest individuals in the tech sector. Mark Zuckerberg became the youngest self-made billionaire of his era, reaching that status in his early 20s.

Elon Musk is in his early 50s. According to Walter Isaacson, Musk has at times struggled with health habits—frequent Diet Coke consumption, erratic sleep, and indulgent eating patterns. So the image of a high-functioning but unsustainable lifestyle fits with that portrayal.

I am getting at this: it is wrong to assume that, because we have tools capable of simulating high-level cognition, we all share equal access to their benefits. That assumption is both technically and sociopolitically inaccurate. I see this unfolding on at least two levels: one of interface fluency, and one of structural access and control.

One, socioeconomics, geography, and technical access—whether AI is actually available across a society. That is the first factor. Two, individuals who already possess slightly sharper innate cognitive capacities. Again, the range of human sharpness is not as extreme as it may seem, especially compared to what is coming.

When you look at adult human height, for example, it ranges from just under four feet to about seven and a half feet—a spread that’s less than a multiple of two. The same general principle applies to IQ. Some individuals at the very low end require institutional support, but the vast majority fall within a relatively narrow functional range. So, I would argue that our world is a relatively flat surface compared to the environments in which we evolved.

And to be clear—that is not an argument for a flat Earth, in case anyone misreads this.

Rosner: The cognitive complexity built into our societies and systems has a lower threshold that most people surpass. Most people are functionally capable in many domains. It would be odd if that were not the case.

Jacobsen: Just as most people have functioning hearts, it would be strange if a significant portion of the population were incapacitated simply by being unable to think. As the psychologist Donald Winnicott might put it, nature has been a “good enough” parent through evolution. Winnicott emphasized that you need not be a perfect parent—just a good enough one. Similarly, evolution has produced good-enough cognitive structures for survival and function.

Evolution optimizes cognitive development in ways that we may not yet fully understand, but the point is that the average person is cognitively functional. AI is going to change that landscape. In ways that we do not yet fully understand or even know how to measure, at least not quantitatively.

Outside of deep qualitative analysis—working with individuals—evaluating who will benefit most is hard. However, those people who already function at a modestly higher level across many human domains will likely use these AI thought assistants more effectively. They’ll catch on faster.

We are now in this transitional phase where we haven’t been biologically enhanced—no neural implants, no brain-machine interfaces—but we are beginning to use these external cognitive tools. So, it would be naïve to believe that this is a form of universal access. Again, there are two key levels to consider.

First, access. For example, there is significant smartphone penetration in India, and ChatGPT has made major inroads there. But in general, poorer or less developed societies—those that are not technically advanced or are culturally resistant to technological adoption (Luddite communities, for instance)—will be less connected to these systems and will not be in a position to take full advantage.

Second, individual differences. Knowledgeable and technically trained people will be well-positioned to benefit from AI. But so might someone without formal training who decides to dive in and learn—someone who bets on AI as the future.

Rosner: Or even someone in a relationship with a tech-savvy partner who insists they teach it, or a person raised by so-called tiger parents. The early adopters will not just be the “very smartest” people. They will include some of the smartest, sure—but also people who, by taste, chance, or circumstance, position themselves in the right orbit of AI.

Some random person with good instincts and curiosity might wield as much AI power as someone with a PhD in computer science. That is the democratizing and destabilizing part of this moment in history. Let’s come back to that later, maybe even expand on it during a panel. That topic’s already spinning off in a few directions.

Jacobsen: Does that come up for you?

Rosner: We already talked about it a bit. Twitter went nuts over that New York Magazine article a couple of days ago. Everyone started piling on with the argument that we’re raising a generation of people who can’t do things that used to be basic, like trigonometry or writing an essay.

I hate writing essays, by the way. The five-paragraph essay is one of the most boring and formulaic writing formats. And now, students can just have ChatGPT write them. There are people—people in their early 20s, about to graduate from college—who may never have actually written a five-paragraph essay themselves, though they’ve turned in plenty of them, courtesy of ChatGPT.

You could argue, “Well, this person will have workplace problems because they never learned how to structure an essay.” But when do you ever need to write a five-paragraph essay in the workplace? Rarely—unless you land a job writing op-eds for a newspaper, and even then, those traditional platforms barely exist anymore.

So, the argument is that we’re raising a generation of simpletons. The counterargument is that we’re raising a generation with a different skill set, adapted to various tools and conditions.

Jacobsen: Do you think there is more to explore there?

Rosner: Maybe. In many ways, people today are more obnoxious than they used to be, thanks in large part to social media. But in other ways, people might be less harmful than before. For example, I’d argue people are, on average, less rapey than in previous decades, because both men and women are more informed about boundaries, consent, and what is or isn’t acceptable.

Thanks to social media and more open cultural conversation, many girls and women are better equipped to identify and avoid dangerous situations. Moreover, many guys are better educated, at least marginally, about what is and is not acceptable behaviour.

Also, the sheer ubiquity of pornography means fewer people feel the need to go out and manipulate others for sex. Many people stay home and masturbate. I do not know. Maybe that helps. So yes—the overall “dickishness” landscape has changed. We’re more toxic in some ways, and I hope less so in others.

In the U.S., though, we’re more overtly dickish—racism is back out of the closet and even encouraged in some political circles. That is worrying. But I hope there are equally strong countercurrents pushing in the opposite direction.

Comments?

Jacobsen: No.

Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0 licence; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing, or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children under 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & a hyperlink; limited quoting under fair dealing & fair use. All content is informational; no liability for errors or omissions. Feedback is welcome, and verified errors are corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability is limited, and users indemnify us; moral, performers’ & database sui generis rights are reserved.
