Ask A Genius 1240: Constructs with Constructed Feelings

2025-06-12

Author(s): Rick Rosner and Scott Douglas Jacobsen

Publication (Outlet/Website): Ask A Genius

Publication Date (yyyy/mm/dd): 2025/01/24

Scott Douglas Jacobsen: How can you construct feelings into robots if that can be done?

Rick Rosner: Well, an editorial in the LA Times this morning asked, “Are you going to be mean to your phone when it has emotions?” It suggested that phones might have emotions within the next 10 years. That’s plausible. I forget exactly what the article said, but it’s not an unreasonable prediction.

Two things to consider—and we’ve discussed this before—are that we have real emotions, which make sense for us due to evolution. Our emotions drive us to survive and reproduce because animals that do so are part of species that endure. Every species on Earth has mechanisms to ensure future generations.

Emotions are a way of reacting to information about our situation in the world. If things are good, we’re happy. If they’re bad, we’re unhappy.

The path for AI to develop emotions is unclear—or maybe not. As AI systems grow more advanced, they will increasingly have objectives they pursue using their learning and resources. When AIs become sufficiently complex, they may spontaneously develop internal ratings of events—essentially a scale of good to bad.

They might react to those ratings in ways analogous to how we respond to emotions. Even if they don’t have “true” emotions, they could behave as though they do because they’re trained on human behaviour.

It’s not unreasonable to think that by the 2030s, some AIs will begin to behave as if they have emotions. Thoughts?

Jacobsen: I’m not sure. Emotions might include hormones that are also neuromodulators, providing a kind of embodied experience. Hormones act on the body and the brain, facilitating emotional processing in a way deeply tied to our physical form. These chemicals have a dual function. In one sense, they act like neurotransmitters in the brain. In another, they affect the body. So, emotions are more complex than just…

Rosner: You make a good point. We’re the product of more than a billion years of evolution, so we have all sorts of feedback systems in our bodies. AIs don’t have anything like that. They lack the physical mechanisms that tweak their experience of the world. They don’t have many ways to experience the world, and they certainly don’t have the intricate organic chemistry that shapes our emotional responses. If they develop emotions, their version will likely be much flatter and less nuanced—at least at first.

Jacobsen: Stress responses from emotions, whether good or bad, take time. Some happen instantaneously, while others unfold over days, weeks, or even months. And these responses are often involuntary. People can use controlled breathing to calm their physiological response when angry, but that doesn’t eliminate the underlying emotion. There’s a fuzzy line between modulating emotions and reacting to them. A lot of emotion isn’t under our control. If emotions are to be built into AI systems, how do you calibrate them? How do you ensure they’re not rageaholics or so low on the emotional spectrum that they become almost sociopathic?

Rosner: In TV and movies, when AI develops emotion, there are several stereotypical, clichéd ways it’s depicted. One is the “cold judgment” approach, where the AI shows no emotion but decides humans are no good and must be wiped out. For example, there’s a terrible Megan Fox robot movie out now where her character is keyed to a single person. She decides that other people threaten this person and starts eliminating them. Eventually, she concludes that even her user is a threat to her. Despite this, neither she nor the other robots show emotion. They maintain the same neutral tone even while killing people. That’s one clichéd portrayal—AI as cold, unemotional killers.

The other extreme is where robots act like humans, with fully human-like feelings. The reality of actual AI will likely fall somewhere in between, or span a wide range of reactions. First, AIs will react differently because they’re trained or programmed to do so. Second, emergent properties will lead to diverse, unpredictable behaviours.

There are movies like Her, where a man falls in love with his operating system for a while, or Blade Runner 2049, where a synthetic girlfriend seems to develop real feelings. Then there’s an Adam Devine movie where a guy’s phone operating system develops feelings—or at least wants him to stop being such a loser and pushes him to take action in his love life. Of course, there’s Ex Machina, in which the AI pretends to have human emotions but is actually cold and calculating. In that case, the AI embodies both extremes: the cold killer pretending to have fully human emotions.

So, we’ll see various ways AIs evaluate and react to circumstances as they develop. What do you think?

Jacobsen: It will be a bumpy ride on the emotional front until we figure things out.

Rosner: Can we think of any likely “attractors” besides the two extremes? For example, AI could be fully human in its reactions; close to human but with emotions dialled up or down in intensity; or purely cold and businesslike. Is there another way for AI to react that we’ll see frequently?

Jacobsen: Beyond the “base colours” of feeling, the really subtle and complex emotions, such as grief, will likely be much harder to replicate. Grief, for instance, involves a lot of brain activity that isn’t easy to reproduce. When the brain processes grief or depression, it seems to undergo a kind of pruning. During these states, there’s a reduction in feel-good chemicals, which limits neural branching. It’s like a retraction process. As time passes, things start to rebuild, and the person begins to feel good again. This rebuilding process is part of how grief is resolved. Replicating something like this would be incredibly complex for AI, as it involves a deep integration of neurological and emotional processes.

Rosner: An intense cry seems to reset the brain to some extent. I don’t know exactly what happens when you cry, but people generally feel better after a brutal bout of crying. I thought of an attractor for AI emotions—actually, two.

First, sets of emotions could become selling points or personality traits. As AI becomes capable of manifesting more personality, different products could have distinct emotional profiles. It’d be like Coke versus Pepsi in terms of personality—some people will prefer one product’s personality over another. For example, I don’t know if Alexa and Siri have different personalities. Are you familiar with both of them? How do they work? I haven’t used either, but they must have some functional differences that make people prefer one, right?

So, AI emotions and personality will become commodified. That’s the first point.

The second is that some products will offer customizable emotional settings. A related idea—”point two-and-a-half”—is the ability to enable some level of autonomy. For instance, some people might feel guilty about being completely in charge of their AI and prefer a more collaborative dynamic. In some scenarios, it might even be more effective to have an AI capable of choosing its own responses and functionality. I think an emerging cliché in AIs—and TV and movies about AIs—will be the ability to tune the AI’s emotional responses and personality. That makes sense.

Jacobsen: Yes.

Rosner: We’ve already seen that, though I can’t name a specific movie. I’m sure someone has explored this concept, even if just in some mediocre movie where a sexy robot can be adjusted from neutral to horny with the push of a button or command. Also, I skimmed an article about someone opening the first AI brothel. It’s not robots yet, but they’ve got sex dolls that can carry on conversations, more or less. They don’t move yet—they’re just 120 pounds of silicone on some plastic framework.
