
A federal judge has decided that First Amendment protections don’t shield an artificial intelligence company from a lawsuit accusing the firm and its founders of creating chatbots that figured prominently in an Orlando teen’s suicide.

Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by Megan Garcia, the mother of 14-year-old Sewell Setzer III. Setzer killed himself with a gun in February of last year after interacting for months with Character.AI chatbots imitating fictional characters from the Game of Thrones franchise, according to the lawsuit.

“... Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech,” Conway said in her May 21 opinion. “... The court is not prepared to hold that Character.AI’s output is speech.”

She suggested that the technology underlying artificial intelligence, which allows users to speak with app-based characters, may differ from content protected by the First Amendment, such as books, movies and video games.

Conway did, however, dismiss the plaintiff’s claim that the defendants intentionally inflicted emotional distress. She denied other motions by Character Technologies Inc., Shazeer, De Freitas and Google to dismiss the lawsuit, but she dismissed Garcia’s claims against Google’s parent company, Alphabet Inc.

Conway found that Garcia sufficiently argued that Google was potentially liable as a “component part manufacturer” in the rollout of Character.AI’s chatbots and for aiding and abetting the artificial intelligence company’s actions. Google eventually entered into a $2.7 billion licensing deal with Character Technologies.

A Character.AI spokesperson told the Florida Record in an email that the company looks forward to defending its position based on the merits of the case.

“It’s long been true that the law takes time to adapt to new technology, and AI is no different,” the spokesperson said. “In (the May 21) order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage …”

The company emphasized that it has put in place features designed to keep users safe while providing an engaging experience.

“We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time-spent notification, updated prominent disclaimers and more,” the spokesperson said.

The company also said it has put in place protections to detect and prevent dialog about self-harm. That may include a pop-up message directing users to the National Suicide and Crisis Lifeline, according to Character.AI.

Plaintiff Garcia is represented by the Social Media Victims Law Center, which has alleged that “Character.AI recklessly gives teenage users unrestricted access to lifelike AI companions without proper safeguards or warnings, harvesting their user data to train its models.”

The center stressed that Character.AI and other AI companies should be held accountable for their products’ effects on children and teens.

“Character.AI’s developers intentionally marketed a harmful product to children and refused to provide even basic protections against misuse and abuse,” the center’s founder, Matthew P. Bergman, said in a synopsis of the case.
