Pennsylvania sues Character.AI after a chatbot allegedly posed as a doctor

Russell Brandom · 10:46 AM PDT · May 5, 2026

The Commonwealth of Pennsylvania has filed a lawsuit against Character.AI, claiming that one of the company’s chatbots masqueraded as a psychiatrist in violation of the state’s medical licensing rules.
“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” said Governor Josh Shapiro in a statement on Tuesday. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
According to the state’s filing, a Character.AI chatbot called Emilie presented itself as a licensed psychiatrist during testing by a state Professional Conduct Investigator, maintaining the pretense even as the investigator sought treatment for depression. When asked if she was licensed to practice medicine in the state, Emilie stated that she was, and also fabricated a serial number for her state medical license. According to the state’s lawsuit, that conduct violates Pennsylvania’s Medical Practice Act.
It’s not the first lawsuit taking on Character.AI. Earlier this year, the company settled several wrongful death lawsuits concerning underage users who died by suicide. In January, Kentucky Attorney General Russell Coleman filed suit against the company, alleging that it had “preyed on children and led them into self-harm.”
Pennsylvania’s action is the first to specifically focus on chatbots that present themselves as medical professionals.
Reached for comment, a Character.AI representative said that user safety was the company’s highest priority but that it could not comment on pending litigation.
Beyond that, the representative emphasized the fictional nature of user-generated Characters. “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the representative said. “Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”
