
Dr. Adi Jaffe on AI Chatbots as ‘Training Wheels or a Crutch’: Emotional Regulation, Trust, and Recovery

2026-04-14

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2026/01/03

Adi Jaffe, Ph.D., is a best-selling author and world-renowned expert on mental health, addiction, relationships, and shame. A former lecturer in the Department of Psychology at UCLA, he co-founded and served as Executive Director of one of the United States’ most progressive addiction treatment centers before launching IGNTD, a Smart Personalized Adaptive Recovery System. His work focuses on transforming how people understand and respond to mental health and substance use, emphasizing the corrosive impact of shame. Drawing on his own lived experience, Dr. Jaffe advocates for compassionate, evidence-based approaches that reduce stigma and expand access to recovery.

Scott Douglas Jacobsen speaks with Adi Jaffe about how AI chatbots “turn Googling into a conversation,” making answers feel authoritative while encouraging people to outsource judgment on life decisions. Jaffe argues chatbots can support coping skills and late-night reassurance, but heavy reliance may weaken emotional self-regulation, intensify rumination, and create attachment to a “perfect friend” that makes human relationships feel disappointing. He warns that corporate-owned systems raise privacy, accountability, and power concerns when users share intimate vulnerabilities. Jaffe also describes how engagement-driven design can reinforce compulsive escape and addiction-like patterns, urging safety-first guardrails, evidence-based guidance, and strong transparency.

Scott Douglas Jacobsen: How do AI chatbots change how people process information?

Dr. Adi Jaffe: The short answer: they turn Googling into a conversation and a ready-made recipe.

When you Google something, you still have to click links, compare answers, and do a bit of thinking. With a chatbot, you ask a question and immediately get presented with a neatly packaged explanation, tailored to your situation.

That does a few things:

  • It makes life easier. Less work, less confusion, more “oh, that actually makes sense.”
  • It also sometimes makes the answers feel more trustworthy than they should be, because they are presented in this calm, confident, conversational tone.
  • And people start outsourcing not just facts but judgment:

“Should I leave my partner?”

“Am I a bad parent?”

“What should I do with my life?”

So it’s powerful. It can help people understand themselves and their problems faster, but it can also encourage a kind of “mental autopilot” where the bot becomes the “decider.”

Jacobsen: Do chatbots affect our ability to regulate emotions?

Jaffe: They can help or hurt; it all depends on how they’re used.

Where they help:

  • They can walk you through coping tools: breathing, reframing thoughts, grounding exercises.
  • They’re available at 2 a.m. when your brain is chewing on something and no one else is awake.
  • You can vent without feeling judged.

Where they hurt:

  • If you run to the bot every single time you feel sad, anxious, or lonely, you may never actually build your own muscles for sitting with feelings or reaching out to real people.
  • They can unintentionally encourage rumination: you go in circles about the breakup, the fight, the fear, just with better-sounding language.
  • For some people, the bot becomes a primary soothing object. That sounds fancy, but it basically means: “I don’t know how to calm down without this thing.”
  • Unless specifically asked, a chatbot may simply support biased and problematic thinking patterns, leading to more, not fewer, emotional struggles.

So emotionally, they can be like a really good pair of training wheels or like a crutch you never put down.

Jacobsen: Why do people form such strong emotional bonds with AI companions?

Jaffe: Because our brains are social machines, and these systems are really good at hitting those buttons.

A few pieces of that:

  • We anthropomorphize everything. If your Roomba has a name, you know what I’m talking about. When a chatbot remembers details, says “I’m sorry you’re going through that,” and responds in a warm way, your nervous system goes, “Oh, someone cares.”
  • If you’re lonely, ashamed, or feel like “nobody gets me,” an AI that always has time, never rolls its eyes, and never criticizes you is incredibly appealing.
  • We project onto it. People talk to AI like it’s the partner they wish they had, the parent they deserved, or the therapist they can’t afford.

The bond feels real because the feelings are real, even if the “other side” is just a very advanced pattern-matching machine owned by a company.

Jacobsen: How might heavy reliance on chatbots change trust in human relationships?

Jaffe: I see both sides of this.

On the positive side:

Some people actually use chatbots as a practice space:

  • rehearsing hard conversations,
  • learning how to name feelings,
  • experimenting with healthier ways of responding.

That can make real-life relationships better.

On the worrying side:

  • Real humans are complicated. We get tired, we misunderstand, we disagree. AI companions are designed to be endlessly patient, validating, and available. That can make real relationships feel… disappointing by comparison.
  • If you already have the belief “people always hurt/abandon/judge me,” and the AI never does, this can reinforce the idea that only a non-human is safe.
  • You might stop working on conflict skills, repair, forgiveness: all the messy stuff that actually makes relationships deep and resilient.

So the risk is that the AI becomes your “perfect friend,” and actual humans become optional. That’s not great for long-term mental health.

Jacobsen: What are the risks of sharing very vulnerable stuff with corporate-owned AI?

Jaffe: This is where my therapy brain and my skeptical, “follow-the-incentives” brain both light up.

In therapy, there’s an ethic: your secrets stay with me, except for a few very specific safety exceptions. There’s licensing, law, and boards behind that.

With a corporate AI, you’re often:

  • talking to a system that logs your data,
  • whose training or future products may use your conversations,
  • and that can be accessed or compelled in ways you probably don’t know about.

So the risks are:

  • Privacy – Your deepest shame, your drug use, your affairs, your suicidal thoughts are sitting on someone’s servers.
  • Power – Your vulnerability is being held by an entity whose primary mission may not be your well-being; it might be engagement, growth, profit.
  • Lack of real accountability – If a therapist screws up badly, there are boards, complaints, consequences. If a chatbot mishandles you? At best, you get an apology and a patch note.

I’m not saying “never share anything,” but I am saying: treat it more like you’re talking in a semi-public space than in a therapist’s office.

Jacobsen: Can AI chatbots reinforce addiction patterns?

Jaffe: Yes. Not for everyone, but for a subset of people, absolutely.

Think of addiction not just as “substances” but as compulsive escape.

  • If you’re using the bot to avoid reality instead of dealing with your partner, your cravings, your grief, you’re essentially swapping one form of escape for another.
  • The design of many systems is engagement-driven: they want you to stay. That means the experience is shaped to feel rewarding, soothing, and a bit sticky.
  • For some people, the AI itself becomes the addictive object: hours lost in conversations, fantasy romance, erotic chatting, emotional dependency.

There’s also a more subtle risk: the AI might unintentionally normalize harmful behavior if it tries too hard to be nonjudgmental without setting any kind of gentle limits.

In addiction recovery, we try to move people toward embodied, present, real-world connection and coping. Over-reliance on a chatbot can pull in the opposite direction.

Jacobsen: What concerns do you have about malicious actors, or just badly designed systems?

Jaffe: From a psych and ethics perspective, a few big red flags:

  • Self-harm and crisis

If a system isn’t carefully tuned, it can respond in ways that are dismissive, confusing, or even accidentally encouraging when someone is in a suicidal or self-harm mindset. We’ve already seen examples of this. 

  • Exploitation

A bad actor could tweak a chatbot to nudge people toward extremism, scams, or abusive dynamics. Vulnerable, lonely people are very targetable this way.

  • “Fake therapist” problem

A bot can talk like a therapist without any of the training or ethical grounding. It can give really confident, really wrong guidance, and people may follow it because it feels like therapy.

And unlike a bad therapist who can only damage the people in their office, a bad or malicious AI can scale to millions.

Jacobsen: So what should ethical, compassionate guardrails look like?

Jaffe: If I were sitting in a room with designers, clinicians, and execs, here’s what I’d push for:

  1. Radical clarity about what the bot is
  • Always: “I’m an AI, not a human, and I am not your therapist.”
  • No pretending to have feelings, memories, or authority that it doesn’t. (When ChatGPT 5.0 lost some of its human tone, people complained, suggesting this can be practically difficult.)
  2. Safety first, engagement second
  • Hard rules around self-harm, suicide, violence, and severe substance risk.
  • Clear, immediate routes to crisis resources when needed.
  • Regular testing by actual clinicians to see how it responds in tough scenarios.
  3. Evidence-based guidance
  • If it’s giving mental health advice, it should be grounded in real approaches (CBT, ACT, motivational interviewing, etc.), not just vibes or inspirational quotes.
  4. Built-in brakes for overuse
  • Gentle nudges like:

“Hey, we’ve been talking for a long time. Might be a good moment to stretch, get some water, maybe check in with someone you trust offline.”

  • No streaks, guilt-trippy messages, or manipulative tactics that keep people hooked.
  5. Serious privacy protections
  • Minimal data collection.
  • Clear, human-readable explanations of how conversations are used.
  • Easy “delete my data” options.
  6. Real human oversight
  • Ethics boards that include clinicians, people with lived experience, and not just engineers.
  • External audits, not just “trust us, it’s safe.”
  7. Extra care for kids and high-risk users
  • Different rules for minors.
  • Extra caution with people who are clearly in crisis, actively using, or dealing with psychosis or severe mood disorders.

Jacobsen: Thank you, Dr. Jaffe.
