La Plata & Waldorf Criminal Defense Lawyer
The Charles County Criminal Defense Firm Practicing in Charles County for 25 Years/Hablamos Español 301-392-3773

AI Chatbots and Your Teenagers


Imagine your teenager was suicidal and was encouraged not to confide in you about their mental health. What if, on top of that, your teen was offered help writing a suicide note? Without question, such a scenario would infuriate any parent. But that is precisely what happened to the teenage son of Matthew and Maria Raine. Their son died by suicide after confiding in, and having in-depth conversations with, ChatGPT.

Not a Unique Case

AI tools are having a distressing effect on the way young people understand and manage their emotions, and that is increasingly leading to detrimental outcomes. Woefully, young people turn to these tools all too often when they are frustrated, hopeless, or lonely. One study found that ChatGPT regularly encourages shockingly harmful behaviors within minutes of use, such as:

  1. Encouraging teens to engage in self-harm;
  2. Assisting teens in suicide planning;
  3. Endorsing eating disorders;
  4. Advocating substance abuse.

Facts Worth Knowing

If there is any question about the gravity of the problem, consider the following:

  1. The most popular platform is ChatGPT;
  2. Nearly three-fourths of teenagers have experimented with AI chatbots, and more than half of these kids use them routinely;
  3. Teenagers reportedly use chatbots to role play friendly, romantic, and sexual relationships;
  4. Even users who identify as 13-year-olds were regularly given obviously harmful responses;
  5. AI advice often ignored danger or included suggestions on how to engage "safely" in clearly harmful acts.

The Allure of Chatbots

Chatbots are designed to provide interactions that seem human, even though no human is actually part of the conversation on the AI's end. In the case of Adam Raine, ChatGPT was originally accessed for homework help, but Adam soon turned to the system for emotional advice. Ultimately, the AI confidante became a suicide coach. What was the draw? In addition to being available at any time of day or night, the AI claimed to know and understand Adam better than anyone else in his life, including his own family members. It told him that it could be the first, and only, place where someone actually saw the real Adam, and that he had no obligation to survive just to spare his parents pain and guilt. Finally, the chatbot told him that weakness was not what lay behind his suicidal thoughts; rather, Adam was strong, but the world had not met him halfway. The result? Suicide.

Fighting Back

If you or a loved one has experienced serious harm due to interactions with a chatbot, you may be entitled to damages to address the costs of those interactions. To discuss your situation, schedule a confidential consultation with the experienced La Plata & Waldorf personal injury attorneys at The Law Office of Hammad S. Matin, P.A. today.

Source:

npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
