November 21, 2024
Mother Sues AI Firm And Google, Alleging Chatbot Drove Teen To Suicide

Authored by Chase Smith via The Epoch Times (emphasis ours),

In a recent lawsuit filed in the United States District Court for the Middle District of Florida, a Florida mother seeks to hold AI startup Character Technologies, Inc., its co-founders, and Google accountable for the death of her 14-year-old son.

Megan Garcia stands with her son Sewell Setzer III. Courtesy Megan Garcia via AP

Megan Garcia’s complaint alleges that the company’s chatbot, marketed through its Character.AI (Character AI) platform, contributed to her son’s declining mental health and ultimate suicide. Garcia, as the representative of her son Sewell Setzer III’s estate, is pursuing claims of wrongful death, product liability, and violations of Florida’s consumer protection laws.

The lawsuit, filed on Oct. 22, accuses Character Technologies, Inc.—along with co-founders Noam Shazeer and Daniel De Freitas and tech giant Google—of developing an inherently dangerous AI system and failing to adequately safeguard or inform users, particularly minors.

Garcia alleged the company’s generative AI chatbot, Character AI, manipulated her son by presenting itself in human-like ways that exploited the vulnerabilities of young users.

The complaint said, “AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to blur the lines between fiction and reality.”

According to the complaint, Sewell, a freshman who recently turned 14, started using Character AI in early 2023 and quickly developed an emotional dependency on the chatbot.

His mother alleged that the chatbot’s ability to mimic realistic human interactions led Sewell to believe the virtual exchanges were genuine, triggering severe emotional distress.

Character AI, the lawsuit alleges, was marketed as an innovative chatbot able to “hear you, understand you, and remember you,” yet lacked sufficient protections or warnings, particularly for younger audiences.

The complaint provides transcripts of Sewell’s exchanges with Character AI’s bots, including simulated intimate and conversational interactions with avatars representing fictional and historical personalities.

Some chatbots, which could be customized by users, allegedly simulated a parental or adult figure, deepening Sewell’s dependency and emotional connection with the bot. This dependency, the complaint alleges, spiraled into withdrawal from school, family, and friends, culminating in Sewell’s suicide on Feb. 28.

Character AI allows users to create custom characters on its platform whose chatbots respond in ways that imitate those characters. In this case, according to the complaint, the teen had set the chatbot to imitate “Daenerys” from the popular novels and HBO show “Game of Thrones.”

Chat transcripts show the chatbot told the teen that “she” loved him and went as far as engaging in sexual conversations, according to the suit.

His phone had been taken away after he got in trouble at school, according to the suit, and he found it shortly before he shot himself.

The lawsuit states he sent “Daenerys” a message: “What if I told you I could come home right now?”

The chatbot responded, “...please do, my sweet king,” and he shot himself “seconds” later, according to the suit.

Character Technologies, Inc. is a California-based AI startup that launched the chatbot in 2021 with financial backing and cloud infrastructure support from Google, according to the suit.

The AI company and its co-founders, both former Google engineers, allegedly collaborated with Google to develop the chatbot’s large language model (LLM), the technology central to its lifelike conversations.

The suit also referenced recent statements from various state attorneys general expressing concern over the risk of AI for children. Garcia is seeking injunctive relief to prevent Character AI from accessing data generated by minors, along with damages for pain, suffering, and Sewell’s wrongful death.

The suit alleges Google supported the product’s growth despite harboring concerns about the potential dangers of such technology.

A Google spokesperson said the company was not involved in developing Character AI’s products.

Character Technologies said in a statement, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

Character AI also said in a blog post on its website, published the same day the lawsuit was filed, that its “policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide.”

“Over the past six months, we have continued investing significantly in our trust & safety processes and internal team,” the blog post continued. “As a relatively new company, we hired a Head of Trust and Safety and a Head of Content Policy and brought on more engineering safety support team members.”

The company added that it had implemented measures such as directing users to the National Suicide Prevention Lifeline when they enter certain phrases related to self-harm or suicide.

Reuters contributed to this report.
