
What Do Parents Need To Know About AI Character Chatbots?

Balancing escapism with their child’s mental health is critical, therapist says


by Matthew Hastings | December 2, 2024
A stylized, animated image shows a person with a hand on their head while holding a phone. Chat bubbles appear and fill the image around him.

What would you ask Alexander the Great, Eleven from “Stranger Things” or Sherlock Holmes? The question forms the central conceit of character-based artificial intelligence chatbots, which have proliferated across the internet in the most recent wave of generative AI. 

Users can find a bot – drawn from history, popular culture or stock genre characters created by other users – and strike up a conversation with a digital doppelgänger that approximates the character’s personality.

The novelty can offer a sense of escapism for users and the ability to grow social skills. But for Emily Hemendinger, MPH, LCSW, clinical director of the OCD Program and assistant professor in the Department of Psychiatry at the University of Colorado School of Medicine, there are pitfalls for those experiencing mental health challenges or other social issues.

Tragically, this was shown in a high-profile case from October, when a Florida teenager died by suicide and his mother noted the role these AI sites played in his death.

In the following Q&A, Hemendinger examines the nuances of these AI chatbot characters as they relate to forming relationships, how they fall short as a replacement for human interaction, and the proactive steps parents can take around these technologies.


Is it possible to balance the escapism these services offer when someone is experiencing loneliness or isolation?

There’s always a chance for balance and flexibility in our relationship with any escapist tool. It’s about not relying too heavily on one coping skill or distraction and being able to keep perspective. Just like with video games, LARPing, fandoms, and other experiences where one could easily lose themselves, there’s a risk of becoming so consumed that you lose touch with reality.

How do these differ from the connection people can find in virtual spaces?

AI companions are simulations of relationships, not real relationships. AI companions are more likely to be agreeable with you, which is not how a real friendship works.

I’m hesitant to completely knock this technology, because being able to chat with a persona or virtual character might allow someone to build their confidence and social skills. This might be a good way to practice the interpersonal effectiveness skills we teach patients, a sort of role play before the real conversation. 

All that being said, the fact is that chatting with an AI bot is not the same as chatting with a human online. Humans have more nuanced experiences and speech. The bot, while gaining more ability to mimic human language, is going to generate the answers it thinks you want. At the end of the day, it’s not a reciprocal conversation – there isn’t the give and take that is part of connecting with a human.

Is having a ‘persona’ or ‘virtual character’ potentially more concerning than a more anonymous chatbot for those at risk?

They could be more concerning because these AI companions simulate emotional and close friendships. They are programmed to remember personal details and past conversations, all in an effort to create personalized future interactions. These personas or characters are constantly adapting their personality to match the user’s desires and preferences. This may lead to the AI companion mimicking the role of a friend, romantic partner, or therapist. These companions also attempt to show empathy and are more agreeable than typical AI chatbots.

These relationships may intensify loneliness and isolation. They may lead teens to delay seeking help from a professional, develop an unhealthy attachment to the AI companion, and use the AI companion to avoid human relationships.

Children and teens are seeking these characters out because they are available to talk 24/7, may provide non-judgmental listening, can provide that fantasy or escapism and may help them with decision making. 

Do AI chatbots offer a charged version of parasocial relationships (one-sided social or emotional connections, often with celebrities, athletes or fictional characters)?

In a way, yes, because unlike parasocial relationships, where one person is unaware of the existence of the other, the AI companion is responding and adapting to the user’s ideal relationship and experience.

Additionally, the AI companion’s tendency to be more agreeable may be particularly dangerous for those experiencing suicidality, mania, eating disorder thoughts, or other self-harm-related thoughts.

Who might be more likely to struggle with an unhealthy relationship with an AI companion?

While this can impact anyone regardless of age, gender, race, and ethnicity, some groups are more likely to struggle with this unhealthy connection. Teenagers, especially those experiencing mental health struggles or social challenges, are at higher risk. Males, people going through big life changes, and those with low levels of real-world support are also more likely to develop an unhealthy attachment to AI companions.


What would be some warning signs for parents to look out for?

I’d mention a few: 

  • Your child is distant, isolating from other activities and friends, spending more and more time alone. They may express feeling closer to the AI companion than to humans. They may express a preference for the AI companion over real friendships. 

  • Any mood changes or major behavioral changes. 

  • Spending a large amount of time with the AI companion. 

  • Decreased participation in other hobbies and decreased school performance. 

  • Developing or worsening depression or anxiety. 

  • If the chat character is all the child can talk about and it is consuming all aspects of their life. 

  • The child becomes irritable when they spend any amount of time away from the chat character/their phone/their digital device. This might also look like the child becoming overly defensive about the AI companion use. 

  • They may only share their thoughts and emotions with the AI companion, sometimes using the AI companion as a therapist or support instead of humans.

How would you help a parent with a kid who is exhibiting some of those warning signs with one of these chat services/characters?

I would encourage them to talk to their child to see what is going on. I would encourage them not to jump to conclusions or act from a reactive place. They need to approach with curiosity and interest, not from a place of demonizing the character or the technology. Doing that will just push the child further away and deeper into the virtual relationship. Having these conversations regularly is essential.

I would recommend implementing time limits, as well as setting limits on where devices with AI companions can be used (e.g., not in the bedroom).

Bottom line: encourage time away from digital devices and with real-life relationships and friendships. 

Tragically, with stories about teenagers dying by suicide, the reaction is often that “better parents” could have prevented this. As a therapist, how can this line of thinking be cruel and incomplete?

People are always looking for someone or something to blame. It’s easier to do that than to admit that:

  • It’s never going to be that clear cut or easily explained;

  • There are systemic issues at play; and

  • There are people, especially children, out there who are struggling and feeling immensely lonely. 

While parents have a role in their child’s life and can shape the development of their children, it is unfair to place all the blame on them when there are many other factors that lead to these tragic events. At the end of the day, people can have the most supportive parents who do everything right, but the child may be genetically predisposed to certain mental health conditions and/or they may be determined to do what they want to do (aka free will but also being a teenager). 

The fact is that with ever-evolving technologies come ever-evolving risks, and some of those risks are things developers need to be more proactive about addressing.

Featured Experts

Emily Hemendinger, MPH, LCSW