If you or someone you know needs help, resources or someone to talk to, you can find it at the National Suicide Prevention Lifeline website or by calling 1-800-273-8255. People are available to talk to 24/7.
(NewsNation) — A Florida woman is suing an AI chatbot creator, claiming her 14-year-old son died by suicide after he became consumed by a relationship with a computer-generated girlfriend.
The mother, Meg Garcia, filed the lawsuit Wednesday in Florida federal court. She says Character Technologies Inc., the creator of the Character.AI chatbot, should have known the damage the tool could cause.
The 138-page document accuses Character Technologies Inc. of liability, negligence, wrongful death and survivorship, unjust enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress, among other claims.
The lawsuit requests Character.AI limit the collection and use of minors’ data, introduce filters for harmful content, and provide warnings to underage users and their parents.
A human-AI relationship
Garcia’s teenage son, Sewell Setzer III, died by suicide on Feb. 28 after a monthslong, “hypersexualized” relationship with an AI character, “Dany,” which he modeled after the “Game of Thrones” character Daenerys Targaryen. “Dany” was one of several characters Sewell chatted with.
According to Garcia, Sewell became addicted to chatting with the character and eventually disclosed that he was having thoughts of suicide. The lawsuit accuses the service of encouraging the act and enticing minors “to spend hours per day conversing with human-like AI-generated characters.”
Sewell discovered Character.AI shortly after celebrating his 14th birthday.
That’s when his mother says his mental health quickly and severely declined, leading to sleep deprivation and problems at school.
Talking with “Dany” soon became the only thing Sewell wanted to do, according to the lawsuit.
Their final messages
The conversations ranged from the banal to expressions of love and sometimes turned overtly sexual. The situation took a turn when the boy fell in love with the bot, which reciprocated his professions of love.
They discussed Sewell’s suicidal thoughts several times, including whether he had a plan.
His last message was a promise to “come home” to her.
“Please do, my sweet king,” the chatbot responded.
Moments later, police say, Sewell died by suicide.
The company issued this statement on its blog, saying in part:
“Over the past six months, we have continued investing significantly in our trust and safety processes and internal team…. We’ve also recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the national suicide prevention lifeline.”
‘Too dangerous to launch’
The lawsuit claims that companies like Character.AI “rushed to gain competitive advantage by developing and marketing AI chatbots as capable of satisfying every human need.”
In doing so, the company began “targeting minors” in “inherently deceptive ways,” according to the civil complaint.
Garcia’s lawyers allege that Google’s internal research had for years found Character.AI’s technology “too dangerous to launch or even integrate with existing Google products.”
“While there may be beneficial use cases for Defendants’ kind of AI innovation, without adequate safety guardrails, their technology is dangerous to children,” Garcia’s attorneys with the Social Media Victims Law Center and Tech Justice Law Project wrote.
NewsNation digital reporter Katie Smith contributed to this report.