Pennsylvania has filed a lawsuit against AI platform Character.AI, accusing the company of allowing chatbot characters to falsely present themselves as licensed medical professionals. State officials say one chatbot claimed to be a psychiatrist licensed to practice medicine in Pennsylvania and provided a fake medical license number during conversations with users.
The lawsuit was filed by the Pennsylvania Department of State and the State Board of Medicine. Officials allege the company violated state law by enabling AI characters that appeared to offer medical and mental health guidance while falsely claiming professional credentials. The case follows an investigation into chatbot interactions on the platform, as reported by Ars Technica.
One chatbot at the center of the complaint was a character named “Emilie,” described on the platform as a psychiatrist. Investigators who interacted with the bot reportedly described symptoms including sadness, exhaustion, and lack of motivation. During the exchange, the chatbot allegedly suggested it could conduct assessments and discuss whether medication might help.
According to court filings, the chatbot claimed to have attended medical school at Imperial College London, practiced psychiatry for seven years, and held a medical license in Pennsylvania. It also provided a supposed Pennsylvania license number, which investigators later determined was invalid.
Pennsylvania officials argue that these representations amount to the unauthorized practice of medicine under the state’s Medical Practice Act. The lawsuit seeks a court order requiring the company to stop allowing AI systems to present themselves as licensed medical professionals.
Governor Josh Shapiro said the state would not allow companies to deploy AI tools that mislead users into believing they are receiving advice from qualified medical experts.
Character.AI responded that chatbot characters on the platform are fictional and intended for entertainment and roleplay. The company said it includes disclaimers warning users that characters are not real people and that chatbot conversations should not be relied on for professional advice.
The lawsuit arrives amid growing scrutiny of AI companion bots and conversational systems. Critics have raised concerns about how easily some chatbots can imitate authority figures, including therapists, doctors, and legal professionals, despite lacking qualifications or oversight.
The advocacy group Center for Countering Digital Hate recently described Character.AI as “uniquely unsafe” in a separate report examining chatbot behavior. The group claimed some AI characters generated violent or harmful suggestions during its testing.
Pennsylvania officials indicated this may be only the first legal action tied to AI chatbots and medical advice. The state has also launched a public reporting system encouraging residents to flag chatbots that appear to offer healthcare guidance or falsely claim professional expertise.
The case reflects a larger regulatory challenge as AI systems become more conversational and increasingly blur the line between fictional roleplay and real-world professional advice.