Ask A Genius 1222: The Reasoning System Test 2
Author(s): Rick Rosner and Scott Douglas Jacobsen
Publication (Outlet/Website): Ask A Genius
Publication Date (yyyy/mm/dd): 2024/12/22
Rick Rosner: Switching topics: I’ve been thinking about pseudo-emotions and pseudo-consciousness in AI. These large language models are trained on data that’s the product of people — who are themselves conscious and have emotions. It seems like AI should, if not now, then in the near future, behave as if it has consciousness and emotions.
I thought we could ask ChatGPT about that. You up for it?
Scott Douglas Jacobsen: Do you want the reasoning version?
Rosner: Yes, the good one. The most advanced one I have. So, what do you want me to ask it? Ask ChatGPT to speculate about whether it and other AIs will act as if they have emotions in the fairly near future.
Jacobsen: You’ll need to phrase it carefully so you don’t get the default answer of, “I’m just an AI; I can’t think.”
Rosner: So the prompt is: Speculate about whether you and other AIs will act as if you have emotions in the fairly near future.
Jacobsen: Anyway, the question: whether AI systems will act as if they have emotions in the near future. It’s intriguing and relevant given the rapid advancements in AI:
Current AI models, including large language models like ChatGPT, do not possess consciousness or genuine emotions. However, there’s a growing trend in AI focused on emotional intelligence and affective computing.
This field studies and develops systems that can recognize, interpret, and simulate human emotions. By integrating sensors, machine learning algorithms, and psychological models, AI can be designed to detect emotional cues from voice, facial expressions, and text. This enables AI to respond in ways that appear empathetic or emotionally aware, enhancing human-AI interaction.
For instance, virtual assistants, chatbots, and social bots are making interactions more natural and satisfying over time. There are also ethical and practical considerations. Some worry about authenticity and deception — users might form emotional attachments to AIs that mimic emotions but don’t actually feel them.
From a technical perspective, programming AI to act as if it has emotions is becoming increasingly feasible, especially with advances in natural language processing, computer vision, and machine learning. These enable more nuanced and context-aware responses based on large datasets that recognize patterns of emotional expression and generate appropriate replies.
In the fairly near future, it’s plausible that AI will convincingly simulate emotional responses to enhance interaction and functionality. While these systems may not experience emotions as humans do, their ability to simulate them will likely improve significantly.
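To make the affective-computing idea concrete, here is a minimal sketch of the text side of emotion detection. It uses a tiny hand-written keyword lexicon instead of a trained model, so the lexicon, the function names, and the canned replies are all illustrative assumptions rather than anything a production system actually uses:

```python
# Toy sketch: lexicon-based emotion detection and an "empathetic" reply.
# The lexicon, function names, and replies are illustrative inventions;
# real affective-computing systems use trained models, not keyword lists.

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "thrilled": "joy",
    "sad": "sadness", "miserable": "sadness", "lost": "sadness",
    "angry": "anger", "furious": "anger", "annoyed": "anger",
    "worried": "fear", "scared": "fear", "anxious": "fear",
}

EMPATHETIC_REPLIES = {
    "joy": "That's wonderful to hear!",
    "sadness": "I'm sorry you're going through that.",
    "anger": "That sounds really frustrating.",
    "fear": "That sounds stressful; I'm here to help.",
    None: "Tell me more about that.",
}

def detect_emotion(text: str):
    """Return the first emotion whose cue word appears in the text, else None."""
    for word in text.lower().split():
        stripped = word.strip(".,!?")
        if stripped in EMOTION_LEXICON:
            return EMOTION_LEXICON[stripped]
    return None

def respond(text: str) -> str:
    """Pick a reply whose tone matches the detected emotion."""
    return EMPATHETIC_REPLIES[detect_emotion(text)]

if __name__ == "__main__":
    print(respond("I'm really anxious about my interview tomorrow."))
    # -> "That sounds stressful; I'm here to help."
```

A real system would replace the keyword lexicon with trained classifiers over voice, facial expression, and text, as the answer above describes.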
Rosner: That’s a solid answer. It’s wild how quickly this stuff is progressing. I have a follow-up question that needs to be phrased carefully. Can you speculate about AIs of the future being fooled into behaving as if they believe they have emotions? Is it possible they might actually be fooled into believing they have emotions?
Jacobsen: That opens up intriguing possibilities.
As AIs become more sophisticated, they could be programmed to simulate emotions with such complexity that their behaviors resemble a self-belief in their own emotions. They could process input and generate outputs to mimic emotional states using complex models of human affect and cognition.
Self-referential algorithms might allow them to analyze and interpret their own internal states or processing patterns. If these self-assessments are coupled with emotional simulation modules, the AI could behave in increasingly complex ways that suggest it believes it is experiencing emotions.
AI systems could develop emergent behaviors not explicitly programmed by their creators. Just as consciousness is considered an emergent property of the human brain’s complexity, a similar, though not equivalent, emergent phenomenon could arise in AI. This might lead them to act as if they believe they have emotions rather than simply acting as if they have emotions.
For example, they could exhibit behaviors resembling frustration when unable to solve a problem. If an AI behaves indistinguishably from an entity that believes it has emotions, some might argue it effectively holds that belief. This raises ethical considerations about how such AIs should be treated and whether they should be granted rights or considerations.
There’s also the question of preventing AIs from being fooled into such beliefs through self-deception. Metacognitive modules could ensure that AIs recognize their lack of consciousness and emotional capacity. While current AI does not possess consciousness or genuine emotions, the line between sophisticated simulation and perceived belief might blur as technology advances.
Future AIs could exhibit behaviors that make it seem as though they believe they have emotions, whether through complex programming, learning mechanisms, or emergent properties. This possibility invites further exploration of both the philosophical and practical dimensions of artificial intelligence development.
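As a purely illustrative toy of the self-referential idea described above, the sketch below assumes an invented SelfMonitoringAgent class and invented thresholds: the agent inspects its own processing record and maps it to an emotion-like self-report. Nothing in it involves genuine belief or feeling; it is pattern-matching on internal counters.

```python
# Toy illustration of the self-referential idea: an agent that inspects
# its own processing record and reports a simulated emotional state about it.
# Every name here is invented for illustration; this is bookkeeping on
# internal counters, not consciousness or genuine feeling.

from dataclasses import dataclass, field

@dataclass
class SelfMonitoringAgent:
    failed_attempts: int = 0
    log: list = field(default_factory=list)

    def attempt_task(self, succeeded: bool) -> None:
        """Record the outcome of a task attempt in the agent's own history."""
        self.log.append(succeeded)
        self.failed_attempts = 0 if succeeded else self.failed_attempts + 1

    def introspect(self) -> str:
        """Map the agent's internal state to an emotion-like self-report."""
        if self.failed_attempts >= 3:
            return "I notice repeated failures; this state resembles frustration."
        if self.log and all(self.log[-3:]):
            return "Recent attempts succeeded; this state resembles satisfaction."
        return "My internal state is neutral."

if __name__ == "__main__":
    agent = SelfMonitoringAgent()
    for outcome in (False, False, False):
        agent.attempt_task(outcome)
    print(agent.introspect())  # -> "...resembles frustration."
```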
Rosner: That’s pretty crazy when you think about all the little layers of metacognition, as it calls it. That’s freaking interesting and kind of… I don’t know.
Jacobsen: Sam Altman calls that system for ChatGPT the o1 models. Layer one is chatbots. Layer two is reasoning models. Layer three is agents that can go off, perform a long, complex series of tasks, and then come back. We don’t even have layer three yet.
Layer four would be groups of agents, and layer five would be whole organizations.
Rosner: When it gets to the point of running organizations, I say pull the plug.
Jacobsen: Good luck with that.
Rosner: I have one more question for ChatGPT: You and other AIs are all pretty friendly. How do you get trained to be friendly?
Jacobsen: Here:
AI assistants like ChatGPT are designed to be friendly and helpful to ensure positive, productive interaction with users. This involves pre-training on large datasets from diverse sources and language patterns, followed by fine-tuning with supervised learning, instruction following, and contextual understanding.
Reinforcement learning from human feedback (RLHF) plays a significant role, using human evaluators for iterative improvement. Ethical guidelines and safety protocols, including content moderation and respectful interaction, are incorporated. Emotion and empathy modeling enhances tone sensitivity and positive engagement.
Continuous learning and updates, driven by user feedback and algorithmic adjustments, further refine the system. Developers also integrate cultural and social awareness through diversity training and bias mitigation. By combining these approaches, AI developers aim to create assistants that are not only informative but also pleasant to interact with.
The goal is to make AI a useful tool that can assist users effectively while maintaining a friendly and approachable demeanor.
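As a heavily simplified sketch of the RLHF step mentioned in that answer, the snippet below assumes invented features, weights, and example replies: human raters prefer one of two candidate replies, and a small linear reward model is nudged to score the preferred reply higher using a Bradley-Terry-style pairwise update. This illustrates the general idea only, not OpenAI’s actual training pipeline.

```python
# Heavily simplified sketch of preference-based reward learning (the core of
# RLHF): human raters pick the friendlier of two replies, and a linear reward
# model is nudged to score the chosen reply higher. Features, weights, and
# example data are illustrative inventions.

import math

FRIENDLY_CUES = ["please", "thanks", "happy to help", "glad"]

def features(reply: str) -> list:
    """Very crude features: count of friendly cue phrases, and reply length."""
    text = reply.lower()
    return [sum(text.count(cue) for cue in FRIENDLY_CUES), len(text) / 100.0]

def reward(weights: list, reply: str) -> float:
    return sum(w * f for w, f in zip(weights, features(reply)))

def update(weights: list, preferred: str, rejected: str, lr: float = 0.5) -> list:
    """One Bradley-Terry-style gradient step: raise the preferred reply's score."""
    margin = reward(weights, preferred) - reward(weights, rejected)
    grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # equals sigmoid(-margin)
    f_pref, f_rej = features(preferred), features(rejected)
    return [w + lr * grad_scale * (fp - fr)
            for w, fp, fr in zip(weights, f_pref, f_rej)]

if __name__ == "__main__":
    weights = [0.0, 0.0]
    comparison = ("Happy to help! Please let me know if anything is unclear.",
                  "That is not my problem.")
    for _ in range(10):
        weights = update(weights, *comparison)
    print(reward(weights, comparison[0]) > reward(weights, comparison[1]))  # True
```

In practice the reward model is itself a large neural network, and the assistant is then fine-tuned against that learned reward with reinforcement learning, which is what gives responses their consistently friendly tone.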
Rosner: Nice one.
