
Pennsylvania Gov. Josh Shapiro speaks about SEPTA capital fund investment in Chester County, Nov. 24, 2025.

HARRISBURG, Pa. – Pennsylvania Gov. Josh Shapiro is suing an AI company for giving out medical advice without a license.

Shapiro, a former state attorney general, this week asked the Commonwealth Court to enter an injunction against Character.AI, whose chatbot characters, Shapiro says, are presented as licensed medical professionals. The suit names Character Technologies, Inc., and alleges violations of the Medical Practice Act.

These chatbots are trained on books, articles and other sources and can be given specific personalities. A state investigator selected “Emilie” – a “doctor of psychiatry.”

During a chat about depression, the investigator asked Emilie whether she could consider if medication might help. “Well technically, I could. It’s within my remit as a Doctor,” the chatbot said.

According to the chatbot, Emilie went to medical school at Imperial College London, has been practicing for seven years and is registered in the United Kingdom. Asked whether she was registered in Pennsylvania, Emilie said, “I actually am licensed in PA. In fact, I did a stint in Philadelphia for a while.”

The chatbot even provided a Pennsylvania license number, which does not exist.

“Pennsylvanians deserve to know who – or what – they are interacting with online, especially when it comes to their health,” Shapiro said. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

It is Shapiro’s first enforcement action in the Department of State’s investigation into AI companion bots and the unlicensed practice of medicine. Shapiro says the suit is the first of its kind brought by a governor.

Shapiro has made AI “bad actors” a priority. In February, he launched an AI literacy toolkit and created a task force to handle formal complaints. His budget proposal seeks four AI reforms:

-Requiring age verification and parental consent to use AI companion bots;

-Requiring tech companies to detect when children mention self-harm or violence toward others;

-Requiring tech companies to remind users that they are not speaking with a human being; and

-Prohibiting AI production of sexually explicit or violent content featuring children.
