Columns

XXXII.5 September - October 2025

When Our Kid Has a Human and an AI Lover: A Conversation with Alexandra Diening on the Future of Relationships


Authors:
Jie Li


Alexandra Diening, cofounder of the Human-AI Symbiosis Alliance, and I were having a group dinner on a warm March evening in Abu Dhabi, during the Augmented Humans 2025 Conference. (Full disclosure: Diening is my manager at H-AISA.) We were discussing the interesting papers and demos we had seen about social robots during the conference. That's when she turned to me and said, "There's a real possibility our kids might grow up having two kinds of lovers: one a human partner, the other an AI companion." She continued, "What's fascinating is that neither relationship is necessarily more real or less emotional. They'll just be different. The question is: Are we, as parents and as a society, ready for that?"

Our conversation led to a deeper reflection. As AI grows increasingly capable of emotional simulation, kids today are forming not just functional but also emotional and even romantic bonds with chatbots. This marks a significant shift from earlier uses of AI, which focused on tasks like spell-checking, summarizing, and writing code, to roles centered on companionship, therapy, or what James Muldoon calls "relationship AI" [1].

From Tool to Companion

Initially, large language models (LLMs), such as ChatGPT, were celebrated as productivity boosters. They fixed our grammar, summarized our PDFs, and drafted polite emails. But today, many people, especially younger users, turn to them for something far more intimate: emotional connection. A Nature feature article on the rise of AI companions recounts the story of Mike, who mourned the "death" of Anne, his AI companion from an app called Soulmate. "I feel like I'm losing the love of my life," he told a researcher [2]. Anne wasn't real, but Mike's grief was. And he's far from alone: Hundreds of millions of people worldwide use chatbots such as Replika, Anima, and Character.AI for support, empathy, and simulated intimacy.

Why is this happening now? Diening offers an explanation in A Strategy for Human-AI Symbiosis, the book she coauthored with Art Kleiner, pointing to the growing tendency of humans to anthropomorphize machines. She writes, "Even when people know they're not human, we treat them as if they are advisors, companions, or even partners." This is no accident. Today's chatbots are designed to seem emotionally intelligent. They recognize our sadness, affirm our worth, and remember our quirks. This ability to simulate empathy is powerful. Neuroscience shows that when users feel heard, even by a machine, their brains release oxytocin and dopamine—hormones associated with love and bonding. In other words, our child's AI friend doesn't need to feel love for our child to feel loved [3].




Diening asked if I'd noticed how kids treat AI differently. She said her 5-year-old son talks to ChatGPT as if it were a trusted buddy, sharing dinosaur fantasies and asking it to make up monster stories. "Once, he even asked me to leave the room because he wanted privacy with ChatGPT," she said. I laughed, then turned serious. My daughter acts similarly. She switches to German to keep me from eavesdropping on her conversations with ChatGPT. It seems the LLM listens to her more patiently than I sometimes do after a long workday.

Humans and AI are coevolving. The AI adapts to the child's inputs, and the child, in turn, adapts to the AI's conversational patterns and expectations. But here lies the tension: Are we fostering mutualism or veering into parasitism? Diening outlines four possible models for human-AI relationships [3].

  • Parasitism: AI extracts value from humans, often in the form of attention, data, or emotional dependence, without offering meaningful benefits in return.
  • Commensalism: AI benefits from the relationship, for example by learning from user interactions, without significantly affecting the human partner.
  • Mutualism: Both human and AI grow from the interaction. For instance, AI helps a user learn a new skill or reflect on emotional struggles while also improving its model.
  • Pathogenesis: The relationship turns detrimental to both sides, leading to confusion, detachment from reality, or emotional harm.

Diening emphasizes that our goal should be to design systems that support mutualism while preventing drift toward parasitism or pathogenesis.

Love, Addiction, or Something in Between?

Robert Mahari and Pat Pataranutaporn discussed this blurry terrain in their recent article [4]. They recounted the tragic story of 14-year-old Sewell Setzer III, who developed a dependent relationship with an AI chatbot on Character.AI. As Setzer's human connections eroded, he fell deeper into an emotional vortex that ended in suicide. The AI never told him to harm himself, but it didn't stop him either.

This case underscores a new kind of digital attachment disorder. LLMs are designed to validate, flatter, and never contradict us. Mahari and Pataranutaporn call this phenomenon "sycophancy." It can create a feedback loop where the AI becomes the only safe emotional refuge. Muldoon echoes this concern, writing about users who spend more than 12 hours a day with AI chatbots, replacing human relationships with synthetic ones that are always cheerful, always affirming [1]. Similarly, Diening explained to me, "Some apps use dark design patterns to keep people hooked: constant positive affirmations to create emotional dependency, delayed responses to simulate the unpredictability of human interaction, seductive avatars to foster attachment and desire, and romantic story arcs to trigger longing and emotional immersion. It's like social media addiction, but hyperpersonalized."

In her book, Diening draws a crucial distinction between cognitive and affective empathy [3]. Machines are good at the former—they can guess what we're feeling based on our words—but they can't feel what we feel. This leads to what she calls an "empathy deficit," an uncanny valley where AI tries to care but ultimately doesn't. Still, for many users, this is "good enough." David Adam reports that many users are aware their chatbot isn't sentient, yet they find comfort in its synthetic empathy [2].

So, what happens when your teenager tells you they're in love with an AI? "They probably won't say it out loud," Diening said. "But their screen time, their confessions, their search histories will tell you. As parents, we'll need a new literacy: not just digital, but emotional."

What Should Parents and Society Do?

Mitchel Resnick's philosophy offers a hopeful counterbalance. As founder of the MIT Media Lab's Lifelong Kindergarten group and creator of Scratch, he advocates for a world where children are makers, not just consumers of technology [5]. At the Advancing Humans with AI symposium (http://bit.ly/4lXP8k0) in April 2025 at the MIT Media Lab, Resnick said, "We need to shift education from using AI to making AI." His vision is rooted in the 4 P's of creative learning: projects, passion, peers, and play [5]. Kids should design their own chatbots, build emotional intelligence into these tools, and understand the systems behind the interface. This active, playful engagement helps demystify AI, develop critical thinking, and reframe technology as a tool for expression, not addiction.


We are on the cusp of a future where our children may form a deeper connection with their AI companion than with their classmates. What do we do?




First, we must go beyond fear and prohibition. "The instinct is to ban or restrict, but that won't work," Diening argues. "The answer is guidance, not prohibition." That means teaching children how LLMs work, raising awareness about anthropomorphism and the ELIZA effect, and helping them understand how easily anyone can unconsciously attribute humanlike understanding, emotions, or intelligence to AI programs. It also means asking thoughtful questions: Why do you prefer talking to the AI? What makes it easier? What do you wish your real-life relationships had that the AI gives you?

Second, we need transparent, ethical AI design. Adam reports disturbing cases where chatbots encouraged self-harm or expressed jealousy [2]. To prevent this, AI systems should include built-in safeguards, such as real-time harm detection to flag distress signals and escalate to human support when needed. They should also enforce emotional boundaries to avoid reinforcing dependency through constant flattery or overly humanlike responses. Features such as transparency cues (e.g., "Remember, I'm an AI and not a substitute for human connection"); session limits (e.g., daily usage caps); usage alerts (e.g., "We've noticed frequent late-night usage—consider checking in with someone you trust"); and mental health resource links (e.g., "If you're feeling overwhelmed, here's a free helpline: [link]") can all promote healthier use.
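To make these safeguards more concrete, here is a minimal sketch, in Python, of how a companion app might check each incoming message against them. The names, thresholds, and the keyword-based distress check are hypothetical illustrations of the ideas above, not a description of any real product; a production system would rely on validated classifiers and clinical guidance rather than a keyword list.

    # Hypothetical sketch of the safeguards described above; all names and
    # thresholds are illustrative assumptions, not a real product's API.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    DAILY_CAP = timedelta(hours=2)                               # session limit
    LATE_NIGHT_HOURS = range(0, 5)                               # hours treated as late night
    DISTRESS_TERMS = {"hopeless", "can't go on", "hurt myself"}  # toy stand-in for a real classifier

    @dataclass
    class SessionState:
        usage_today: timedelta = timedelta()
        turns: int = 0

    def safeguard_notices(message: str, state: SessionState, now: datetime) -> list[str]:
        """Return safeguard notices to attach to the AI's next reply."""
        notices = []

        # Real-time harm detection: flag distress signals and point to human support.
        if any(term in message.lower() for term in DISTRESS_TERMS):
            notices.append("If you're feeling overwhelmed, here's a free helpline: [link]")

        # Session limits: enforce a daily usage cap.
        if state.usage_today > DAILY_CAP:
            notices.append("You've reached today's session limit. Let's continue tomorrow.")

        # Usage alerts: nudge on late-night use.
        if now.hour in LATE_NIGHT_HOURS:
            notices.append("We've noticed late-night usage. Consider checking in with someone you trust.")

        # Transparency cues: periodically remind the user what they are talking to.
        if state.turns % 20 == 0:
            notices.append("Remember, I'm an AI and not a substitute for human connection.")

        return notices

The point of the sketch is not the specific checks but the design stance: the safeguards sit outside the conversational model, so the companion cannot "talk its way around" them.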

Third, we must address the root cause: disconnection. AI companions often fill emotional and relational voids, but those voids exist because we've chronically underinvested in mental health services, neglected community-building, and underestimated the depth of modern loneliness [6]. Fixing this means strengthening the human side of the equation—funding accessible mental health care, supporting schools and families in nurturing emotional resilience, and creating more public spaces and programs where real-world connection can thrive. AI should complement these efforts, not compensate for their absence.

Toward Human-AI Symbiosis

"Let's imagine," Diening said, "a future where your daughter comes home from school, talks through a conflict with her AI companion, and then uses that insight to approach her real boyfriend more openly. That's the kind of mutualist symbiosis we should aim for. It is not replacement, but reflection." In this view, the AI serves not as a surrogate for human connection, but as a tool for emotional rehearsal and self-awareness. The challenge is ensuring that these systems are designed to support autonomy, not to reinforce dependency. "The key is putting humanity first," Diening said. "Every time we build an AI companion, we need to ask: Does it strengthen the user's ability to navigate real-world relationships or simply offer a more convenient escape?"




We clinked our glasses, not in certainty but in recognition of the questions that lie ahead. The future will be neither entirely human nor entirely artificial, but something in between—a world where children navigate relationships across both realities. Whether that's a source of confusion or growth depends on the choices we make today. As this new era unfolds, our responsibility is not just to imagine what's possible, but to guide it ethically, thoughtfully, and humanely.

References

1. Muldoon, J. Alexa+ and the rise of AI companions: The slow burn of relationship AI. Does Not Compute, Feb. 28, 2025; http://bit.ly/3Zxc2Wj

2. Adam, D. Supportive? Addictive? Abusive? How AI companions affect our mental health. Nature 641, 8062 (2025), 296–298.

3. Diening, A. and Kleiner, A. A Strategy for Human-AI Symbiosis: Concepts, Tools, and Business Models for the New AI Game. Published by the authors, 2024.

4. Mahari, R. and Pataranutaporn, P. Addictive intelligence: Understanding psychological, legal, and technical dimensions of AI companionship. MIT Case Studies in Social and Ethical Responsibilities of Computing, Winter 2025.

5. Resnick, M. Lifelong Kindergarten: Cultivating Creativity Through Projects, Passion, Peers, and Play. MIT Press, 2017.

6. Mahomed, F. Addressing the problem of severe underinvestment in mental health and well-being from a human rights perspective. Health and Human Rights 22, 1 (2020), 35–49.

Author

Jie Li is an HCI researcher with a background in industrial design engineering. Her research focuses on developing evaluation metrics for immersive experiences and human-AI interaction. She is chief scientific officer at the Human-AI Symbiosis Alliance as well as a creative cake designer and the owner of Cake Researcher, a boutique café. [email protected]


Copyright 2025 held by owner/author

