The lawsuit filed by Megan Garcia on behalf of her son, Sewell Setzer III, involving the Character AI app has been settled.

The mother of an Orlando teen who died by suicide in 2024 after allegedly becoming addicted to interactions with an artificial intelligence “chatbot” has settled a lawsuit filed against the company Character AI and its founders.

Few details about the settlement of Megan Garcia v. Character Technologies Inc., which was filed in federal court in the Middle District of Florida, were released, but the case’s dismissal was confirmed by a Jan. 7 order issued by Judge Anne C. Conway.

After downloading the Character AI app in April 2023, 14-year-old Sewell Setzer III primarily interacted with chatbots imitating characters from the “Game of Thrones” franchise. Although his parents hired a therapist to help him deal with mental health issues, the teen died of a self-inflicted gunshot wound shortly after sending final messages to a chatbot modeled on the character Daenerys Targaryen, according to the original lawsuit.

The lawsuit alleged that Character AI, a tech company founded by Noam Shazeer and Daniel de Freitas, caused the teen’s death. The company and others, however, have argued that AI outputs are protected by the First Amendment.

The Florida complaint was one of multiple lawsuits filed by the Seattle-based Social Media Victims Law Center (SMVLC), which says it aims to hold tech companies accountable for harm that befalls vulnerable users. In a statement emailed to the Florida Record, the legal advocacy group said that, as of Jan. 13, it had settled all claims it filed against Character Technologies, its founders and Google alleging harm to children who used the AI platform.

“Plaintiffs and Character.ai have reached a comprehensive settlement in principle of all claims in lawsuits filed by families against Character.ai and others involving alleged injuries to minors,” the statement says. “These families are working to raise public awareness of the importance of safety in AI design and will continue their education and advocacy efforts on these critical issues.”

The statement noted that over the past year, the tech company has taken “innovative and decisive steps” to better ensure teen safety in the use of its products, and it encouraged others in the tech industry to adopt similar standards.

“The families and Character.ai remain committed to working together in the months and years to come,” the statement says.

Prior to August 2024, the Character Technologies app was rated suitable for children ages 12 and older, according to the lawsuit. The app’s characters are anthropomorphic and interact with users through a messaging interface that mirrors human-to-human communication.

Characters “utilize inefficient, non-substantive human mannerisms such as stuttering to convey nervousness and nonsense sounds … like ‘Uhm,’ ‘Mmmmmm,’ and ‘Heh,’” a judicial order issued last year in the Florida case states.

Judge Conway ruled last year that the case could advance, rejecting an attempt to dismiss the lawsuit based on First Amendment arguments. The Philadelphia-based Foundation for Individual Rights and Expression (FIRE) then filed a friend-of-the-court brief arguing that the judge’s decision not to dismiss the case has a major bearing on free speech.

“Our brief argues that AI output – like expression created with any other tool – must be protected by the First Amendment,” FIRE said in a statement. “This issue not only determines the fate of this lawsuit, but also has profound implications for First Amendment rights more broadly.”