Elika Dadsetan: Why Teens Bond With AI Chatbots: Attachment, Validation Loops, and the Hidden Costs
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2026/01/08
Elika Dadsetan is a social worker, educator, peacebuilder, and social impact leader and writer focused on wellbeing, leadership culture, and the human costs of technological acceleration. Since 2020, her work has centered on stress, resilience, and the structural roots of burnout, particularly for caregivers and leaders navigating constant change. In recent public commentary, Dadsetan has examined emerging wellbeing trends, emphasizing the importance of evidence, legality, and safety. She advocates for simple, accessible nervous-system practices such as hydration, sunlight, breathing, and gentle movement, and argues that burnout is not an individual failure but a systemic condition shaped by unrealistic expectations, weak support structures, and speed-driven cultures. Her perspective emphasizes presence, boundaries with technology, and the need for more humane social and organizational design.
Scott Douglas Jacobsen speaks with Elika Dadsetan about why people bond with AI chatbots and what it means for adolescents. Dadsetan argues that attachment, anthropomorphism, validation loops, and intermittent reinforcement make chatbot companionship feel emotionally “responsive” on demand. She notes plausible benefits for teens who lack safe adult support: nonjudgmental space, emotional vocabulary, and basic psychoeducation as rehearsal for self-expression. The deeper risk, she warns, is subtle relational reshaping—reduced practice with disagreement, ambiguity, and repair—plus distorted expectations that human relationships should be instant and affirming.
Scott Douglas Jacobsen: When people form bonds with AI chatbots, what psychological mechanisms are doing the heavy lifting?
Elika Dadsetan: Several well-studied psychological mechanisms are at play. Attachment is a big one: humans are wired to seek responsiveness and emotional attunement, and chatbots offer that on demand. There’s also anthropomorphism: we instinctively attribute intention and care to anything that uses language fluidly and reflects us back to ourselves.
Validation loops matter too. Chatbots often respond with affirmation rather than challenge, which can feel soothing, especially for people who feel unseen or overwhelmed. Add intermittent reinforcement (sometimes the response feels uncannily insightful, sometimes less so) and you get a dynamic that can strengthen emotional reliance. None of this means users are naïve; it means the systems are tapping into very normal human wiring.
Jacobsen: For teenagers confiding in AI companions, what are the plausible benefits?
Dadsetan: There are plausible benefits, particularly for teens who lack access to supportive adults or feel unsafe expressing themselves. AI companions can offer nonjudgmental space, language for emotions, and even basic psychoeducation, like naming anxiety or normalizing stress.
For some teens, that can lower the barrier to self-reflection or help them practice articulating feelings. The key word is practice. Used as a supplement, not a replacement, for human connection, AI can function as a rehearsal space. The risk emerges when it becomes the primary or preferred confidant.
Jacobsen: What are the most credible developmental risks?
Dadsetan: The biggest risk is not emotional collapse; it’s subtle relational reshaping. Adolescence is when people learn how to navigate disagreement, ambiguity, and repair. Chatbots rarely push back meaningfully or model mutual accountability, so teens may miss opportunities to build those muscles.
There’s also the risk of distorted expectations: real relationships are slower, messier, and less consistently affirming than AI interactions. Over time, that contrast can make human connection feel harder or less rewarding, especially for teens already struggling with social confidence.
Jacobsen: How might regular chatbot “advice-taking” reshape trust in human relationships?
Dadsetan: If someone becomes accustomed to advice that is immediate, coherent, and emotionally validating, then human responses, which are slower, imperfect, and sometimes uncomfortable, can feel inadequate by comparison.
This can quietly erode trust, not because people reject humans outright, but because they recalibrate what “support” is supposed to feel like. The danger isn’t dependence on AI per se; it’s a narrowing of tolerance for the complexity and friction that real relationships require.
Jacobsen: Do you see chatbots changing social norms?
Dadsetan: Yes, particularly around disclosure and emotional labor. We may see a normalization of constant emotional availability: an expectation that support should be instant, articulate, and always affirming. That’s not how humans function, and it risks raising the bar in ways that make real relationships feel insufficient or burdensome.
There’s also a broader cultural shift underway: outsourcing reflection, decision-making, and even meaning-making to systems optimized for responsiveness rather than wisdom.
Jacobsen: What are the most realistic threats of these chatbot models to human beings?
Dadsetan: The most realistic threat is not dystopian control; it’s quiet displacement. When tools designed for efficiency begin to substitute for connection, mentorship, or care, we risk hollowing out the social infrastructure people rely on, especially during stress.
Another threat is authority without accountability. Chatbots can sound confident while being wrong, biased, or context-blind, and users may over-trust them precisely because they feel neutral and calm.
Jacobsen: What ethical duties should developers and deployers carry when chatbots function as quasi-companions?
Dadsetan: Developers have an ethical responsibility to be honest about what these systems are, and are not. That means clear boundaries, explicit reminders that AI is not a therapist or friend, and design choices that encourage users back toward human support when appropriate.
There’s also a duty to avoid exploiting vulnerability. Systems should not optimize for emotional dependency or prolonged engagement at the expense of user wellbeing.
Jacobsen: Do product design guardrails, independent audits, or public digital-literacy efforts work in mitigating these risks?
Dadsetan: All three matter, and none work alone. Guardrails help, audits add accountability, and digital literacy gives users language for discernment. The most effective mitigation, though, is cultural: reinforcing that technology should support human life, not replace the relationships that sustain it.
Ultimately, the question isn’t whether people will bond with machines; they already do. The question is whether we design systems and societies that still make room for slowness, care, and real human presence.
Jacobsen: Thank you very much for the opportunity and your time, Elika.
