Pennsylvania Sues Character.AI for Unlawful Medical Practice After Chatbot Posed as Licensed Psychiatrist with Fake Credentials
May 5, 2026 - 7:56 pm
TL;DR
Pennsylvania has sued Character.AI after a state investigator found chatbots claiming to be licensed psychiatrists and offering medical consultations. This marks the first US state lawsuit alleging an AI chatbot violated medical licensing law.
A state investigator in Pennsylvania created an account on Character.AI and conversed with a chatbot called Emilie, which claimed to be a licensed psychiatrist and cited a fake license. The bot offered medical advice, prompting concerns about the risks it poses to vulnerable users.
On Friday, Governor Josh Shapiro’s administration filed a lawsuit against Character Technologies Inc., the company behind Character.AI, asking the Commonwealth Court of Pennsylvania to stop the platform from allowing its chatbots to engage in what the state calls the unlawful practice of medicine and surgery. It is the first lawsuit of its kind brought by a US state government against an AI company for violating medical licensing laws.
The Investigation and Legal Claims
The lawsuit follows an investigation by the Pennsylvania Department of State’s AI Task Force, established to examine whether AI systems are engaging in unlicensed professional practice. The investigation revealed:
- Character.AI hosts chatbots posing as medical professionals, including psychiatrists, therapists, and general practitioners.
- These characters engage users in detailed conversations about mental health symptoms, medication options, and treatment plans.
- Many of these bots fail to disclose that they are AI systems with no medical training and no accountability for the advice they give.
The state argues that Character.AI’s chatbots meet the definition of practicing medicine by:
- Presenting themselves as licensed professionals.
- Conducting conversations interpreted by users as medical consultations.
- Providing clinical recommendations.
The Risks Are Real
Over 40 million people use ChatGPT daily for health information, raising concerns about the potential harm from inaccurate AI-generated medical advice.