Ask A Genius 1353: GPT-5, AI Consciousness, and Crossover Country
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2025/04/11
Scott Douglas Jacobsen: You have been listening to a country song designed for people who do not usually enjoy country music—not the traditional kind aimed at long-time fans, but rather a version that tries to appeal to outsiders.
Rick Rosner: There is crossover country, of course. However, in Albuquerque, I could only find stations playing formulaic country music on the radio.
There is mainstream crossover country, but there is also what could be called culturally conservative or “red-state” country music—songs with repetitive, simplistic themes. That was all I could find: tracks recycling the same ideas. “I want to marry a woman, live on a ranch, and raise kids” was the essence of every song. It lacked nuance or depth and became unlistenable after a while. That kind of country has a fixed narrative and rarely deviates.
Jacobsen: Now, changing topics: Sam Altman’s recent announcement is significant. He confirmed that OpenAI would release two intermediate models, o3 and o4-mini, before launching GPT-5. Those lowercase names are the ones Altman himself used; they are distinct models, not nicknames for GPT-4.5.
These versions serve as transitional upgrades and precursors to GPT-5. The release strategy reflects developmental improvements and the need to scale infrastructure to meet expected demand.
The improvements likely include more adaptive processing, which means determining when to apply fast thinking, slow thinking, deeper reasoning, or surface-level analysis, depending on the prompt. This contextual optimization may signal a step toward artificial general intelligence (AGI).
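As an illustration only (OpenAI has not published how such routing would work, and every cue, weight, and threshold below is invented for the sketch), adaptive processing can be pictured as a small router that scores a prompt’s apparent difficulty and chooses a fast or deep pass accordingly:

```python
# Toy illustration of "adaptive processing": score a prompt's apparent difficulty
# from surface cues and route it to a fast pass or a deeper reasoning pass.
# Cues, weights, and thresholds are invented placeholders, not OpenAI's method.

def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty score in [0, 1] based on prompt length and reasoning cues."""
    cues = ["prove", "step by step", "derive", "why", "compare", "plan"]
    length_score = min(len(prompt.split()) / 200, 1.0)            # longer prompts lean harder
    cue_score = sum(cue in prompt.lower() for cue in cues) / len(cues)
    return 0.3 * length_score + 0.7 * cue_score

def route(prompt: str) -> str:
    """Pick a processing mode from the difficulty score (thresholds are arbitrary)."""
    score = estimate_difficulty(prompt)
    if score < 0.2:
        return "fast, surface-level pass"
    if score < 0.5:
        return "standard pass"
    return "slow, deeper reasoning pass"

print(route("What is the capital of France?"))                                # fast, surface-level pass
print(route("Prove step by step why this plan fails and compare options."))  # slow, deeper reasoning pass
```

In a real system the difficulty signal would presumably come from learned components rather than keyword heuristics; the sketch only shows the shape of the idea of matching effort to the prompt.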
Rosner: But does this bring us closer to AGI in a concerning way?
Jacobsen: It is essential to clarify: AGI, as commonly defined, is a system that can understand or learn any intellectual task a human can. However, AGI does not require consciousness. A machine can solve problems and adapt without being sentient. It can be competent without experiencing self-awareness, emotion, or volition.
Rosner: What we perceive as consciousness in AI is imposed by us—we interpret coherent language as a sign of inner life, but that is an illusion. The AI has no subjective experience.
Jacobsen: Some randomness may be added to generate original responses, but that is not creativity in a conscious sense—it is probabilistic variation in language modelling.
In early April 2025, Sam Altman tweeted the following, and the post drew almost 4 million views:
“Change of plans: We will release o3 and o4-mini after all, probably in a couple of weeks, and then do GPT-5 in a few months. There are many reasons for this, but the most exciting is that we can make GPT-5 much better than we originally thought. We also found it harder than we thought it would be to integrate everything smoothly, and we want to make sure we have enough capacity to support what we expect to be unprecedented demand.”
So, o3 and o4-mini are being released as interim steps before GPT-5. GPT-5 is expected later in 2025, with significantly improved capabilities. The delay is due to integration challenges and the need to scale infrastructure for high anticipated demand. These developments move us toward more advanced AI systems, but not yet AGI, and certainly not conscious machines. That tracks with the argument I was making to you earlier.
Rosner: You sent me hockey stick graphs—those classic exponential curves.
Jacobsen: Right, but I was also making a rhetorical argument. It was not based on any peer-reviewed paper. It was just intuition—that integration would be the challenging problem. And now Sam Altman is alluding to that.
Rosner: Integration. Are you talking about multimodality?
Jacobsen: Yes. But I also disagree with how they’re defining it. Their idea of multimodality is simply integrating image and text generation as separate modalities. I understand it in the broader and more traditional sense—like integrating the five human senses. That sensory and cognitive integration level would be much harder to achieve smoothly. I think that’s what Altman is pointing toward.
Rosner: So GPT-5 will be shockingly good in ways we cannot fully anticipate. When you look at ChatGPT—especially regarding the verbiage it generates—it’s already imposing. I’m unsure how much better it can get unless it’s about contextual appropriateness or emotional intelligence.
Jacobsen: Their most recent improvements haven’t been in raw intelligence but in sounding more human and nuanced. It now responds with empathy and thoughtfulness if you ask it for advice. That’s the nuance Altman is emphasizing. And they’ve had the most time and resources to develop this.
Rosner: Let’s test something. Let’s ask ChatGPT a question:
I like a woman who works in my office. She’s 28, I’m 32. We work at the same level—no one is anyone’s boss. We talk a little, and I overhear her talking to others. I know she’s been with a doctor for two and a half years, and they are unhappy. I like her quite a bit. What should I do? Reduce that to 100 words.
Jacobsen: It says:
You’re in a respectful position at the same level at work, with no power imbalance. Since she’s in a relationship, proceed thoughtfully. First, gauge whether she seems open to a deeper connection. Does she seek you out, share personal thoughts, or show subtle interest?
If she’s unhappy in her relationship, let that be her decision to address—not something you influence directly. Instead, build a genuine friendship without overstepping. Coffee breaks, chats, shared interests—keep it light. Avoid becoming her emotional support system for issues with her current partner.
If she eventually leaves the relationship and you’ve built trust, you can gently express interest in seeing her outside of work—low pressure. Respect her response either way.
And if she stays with him or is not interested, maintain professionalism. If she’s open, navigate things carefully to avoid affecting your work environment.
Rosner: That sounds like rational advice. ChatGPT didn’t say anything wild or impulsive like a friend might.
Jacobsen: It’s programmed to prioritize sensibility. It is sensible.
Rosner: Okay, let’s try a follow-up. Say I want to make an all-or-nothing grand gesture to this woman. I go all in; I do not care if I lose my job. What should I do?
Jacobsen: Here’s what it gave:
If you genuinely want to go all in, fully aware of the risks, then the key is to be honest, direct, and respectful. Integrity and timing matter. Choose a calm moment—when she’s alone, not stressed, and not in a work context.
Ask to meet privately after work, ideally on neutral ground. Speak from the heart. Say something like:
“I know this might come out of the blue. I know you’re seeing someone, but I must say this—even if it costs me everything. I think you’re incredible. I’ve liked you for a long time, and I can’t pretend otherwise. If you’re ever free and open to something real, I’d love the chance.”
Accept the outcome. If she says no, thank her, walk away with dignity, and maintain distance at work.
If she says yes, that’s great—but you should still consider switching jobs or making a transition plan to avoid workplace complications.
Rosner: That’s pretty grounded for an all-or-nothing approach.
Jacobsen: It captures the emotional weight while maintaining boundaries and responsibility. Not bad for a chatbot.
Rosner: That looks pretty reasonable. Honestly, I do not know how GPT-5 will top that kind of output—it already sounds grounded and thoughtful. All right, one more scenario.
Jacobsen: I’ll add a caveat, more a bonus footnote than a warning. Altman said that talking to GPT-5, which OpenAI has in a limited preview, made him feel the model is smarter than he is. He found it very sensible and even admitted: “This thing is smarter than me.”
He’s asked crowds whether they think GPT-4 is smarter than they are, and the response is always mixed; some hands go up. He still feels smarter than GPT-4, but with GPT-5, he said, “Yeah, this one’s smarter than me.” So it’s not just sensible; it may also be brighter than many of us.
Rosner: Just one more question. Do you think AI makes us safer or less safe from nuclear war?
Jacobsen: ChatGPT says:
AI can make us more or less safe from nuclear war, depending on how it is developed, deployed, and controlled.
Ways AI increases safety:
• Improved threat detection: AI can analyze satellite data and sensor streams faster than humans, distinguishing between real threats and false alarms.
• Crisis simulation and escalation modelling: AI can help simulate conflict scenarios, improving training and decision-making among diplomats and military planners.
• Early diplomatic warning: AI can monitor global communication patterns and detect signals of rising tensions, allowing early interventions.
• Verification and treaty monitoring: AI boosts arms control by analyzing sensor data, satellite feeds, and global surveillance to ensure compliance with international agreements.
Ways AI reduces safety:
• Speed of decision-making: AI could accelerate nuclear response times beyond human intervention, especially if integrated with automated launch systems.
• Misinterpretation of signals: AI systems are only as reliable as their training data. If an AI misreads a situation—like mistaking a missile test for an attack—it could trigger unintended escalation.
• Cyber vulnerability: Autonomous or AI-enabled systems become prime targets for hacking, spoofing, or sabotage by adversaries.
• Delegation of authority: The more trust we place in AI to make life-and-death military decisions, the greater the risk of humans abdicating moral and strategic control.
• Lack of accountability: Without global agreements and enforceable ethical frameworks, deploying AI in nuclear command chains could increase rather than reduce existential risk.
The bottom line is that AI is a tool. Whether it makes nuclear war more or less likely depends on who uses it, how accountable they are, and how transparent and constrained the systems are.
Would you like to learn more about specific AI nuclear programs, such as the U.S. Department of Defense’s Project Maven or Russia’s AI military initiatives?
Rosner: That’s good. That’s a sharp answer. Sharp. That’s how Uncle Frank used to say it. “Sharp.” Sharp.
