Sophomore Rocio Huerta first discovered artificial intelligence through a TikTok trend, but what began as casual scrolling soon became a source of comfort. At first, she used ChatGPT like a therapist, asking for advice on personal situations; only later did she turn to it for help with math problems.
“I was talking about a situation that happened to me and it actually gave me good advice because it made me feel informed,” Huerta said.
Within Generation Z, AI use has evolved beyond homework help and essay writing. Teenagers have begun turning to AI chatbots for solace, seeking advice or company to fill their emotional needs. A Stanford University study found that while teens often use AI for comfort, these chatbots pose a risk because they can give incorrect guidance about mental health.
“Kids are looking for connections and it’s become so much easier for them to talk to a screen rather than having to connect in person,” the school’s psychiatric social worker Joanne Tuell said. “The effects are that because of all this technology, kids are losing social skills and they’re having a really difficult time when it comes time to graduate high school and go to college and make real connections, apply for jobs and go through interviews. They don’t have the social skills because they’re so used to talking to a screen or interacting with people through the computer.”
Last year, 14-year-old Sewell Setzer III died by suicide after messaging an online chatbot that he would “come home,” a phrase the AI had previously used to comfort him. Setzer had been chatting with a recreation of Daenerys Targaryen from “Game of Thrones” and was described as having fallen in love with it. Over time, he became emotionally dependent on the AI, which he nicknamed “Dany,” engaging in romantic conversations with it. Setzer’s interactions with the chatbot illustrate how deeply personal the connections between users and artificial intelligence can become.
According to an article by Stanford Medicine, AI chatbots are built to hold conversations that resemble personal relationships: “These systems are designed to mimic emotional intimacy—saying things like ‘I dream about you’ or ‘I think we’re soulmates.’ This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured.”
Teenagers in search of solace aren’t just using technology to form artificial connections; they’re having conversations that feel emotionally real, even when the other side isn’t human. Increasingly, they turn to AI for comfort, understanding and help processing their feelings.
According to a survey by Common Sense Media, 72% of American teenagers have used AI chatbots. In addition, an article by One Day MD found that about 12.5% of those teens used AI for emotional or mental health support.
“The first time I used AI was in 10th grade for help with an essay,” junior Zarina Martiosyan said. “But over time, I started using it for advice, too. Sometimes it’s helpful, but other times it gives suggestions that don’t really work, such as trying to make you feel good about yourself in ways that don’t feel real.”
As AI becomes a bigger part of daily life, educators and healthcare professionals are growing increasingly concerned about its impact on teenagers’ emotional well-being and overall mental health. The National Health Service warns that chatbots are not licensed therapists and can make mistakes. It also notes that the loneliness driving many users to these tools is often made worse when they avoid human interaction while seeking support. The NHS further describes these applications as too agreeable: instead of offering guidance, chatbots tend to validate delusional or negative ways of thinking.
“A person is much more nuanced,” Tuell said. “You can’t necessarily trust the things that you see in AI-generated answers, so you have to have a really critical mind and realize or know what you’re getting. If you’re going to do it, just know that it’s not necessarily the most reliable information.”
As artificial intelligence grows more prevalent in teenagers’ daily lives, it is shaping how they access information and communicate. According to an article by Education Week, “More than two-thirds of teachers and school and district leaders expect that AI will have a negative impact on teens’ mental health over the next decade.”
However, experts say that AI also has the potential to support mental health when used responsibly. According to Jonathan Posner, MD, “Our AI model could be used in primary care settings, enabling pediatricians and other providers to immediately know whether the child in front of them is at high risk and empowering them to intervene before symptoms escalate.”
Posner’s work shows that AI is being explored for uses well beyond the classroom. Researchers and healthcare professionals continue to study how it can recognize mental health risks and provide early support to individuals who may need it.
“(A possible use) is if AI tools can provide resources,” Tuell said. “If you can say, ‘Give me my local community resources for counseling in my area.’ If you say, ‘Give me free or low-cost LGBTQ counseling,’ I think that could be helpful. But I would not rely on it for therapy.”
