California Makes Chatbots Admit They Are Not Human
California has passed a law that forces conversational AI to be upfront about its nature, changing how chatbots can interact with people.
What the law requires
Starting January 1, 2026, any conversational AI that could be mistaken for a person must clearly disclose that it is not human. Senate Bill 243 is the first statute of its kind in the United States and aims to prevent deception in human-AI interactions.
The rule is simple in principle but detailed in practice. If a chatbot might lead a reasonable person to believe they are talking to another human, the bot must identify itself as an artificial agent.
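The statute does not prescribe an implementation, but the basic mechanic is easy to picture: before a bot's first generated reply leaves the system, attach an explicit statement that the speaker is an AI. The following is a minimal, hypothetical sketch; the function names, message text, and session structure are illustrative assumptions, not language from the bill.

```python
from dataclasses import dataclass, field

AI_DISCLOSURE = "Just so you know: I'm an AI assistant, not a human."

@dataclass
class ChatSession:
    """Tracks whether the AI disclosure has already been shown this session."""
    disclosed: bool = False
    transcript: list[str] = field(default_factory=list)

def send_reply(session: ChatSession, generated_text: str) -> str:
    """Prepend the disclosure to the first outgoing message of a session."""
    if not session.disclosed:
        generated_text = f"{AI_DISCLOSURE}\n\n{generated_text}"
        session.disclosed = True
    session.transcript.append(generated_text)
    return generated_text

# Only the first reply carries the disclosure; later ones go out unchanged.
session = ChatSession()
print(send_reply(session, "Hi! How can I help today?"))
print(send_reply(session, "Sure, here is that summary."))
```

A production system would likely surface the notice in the interface itself rather than inside message text, but the gating logic would be much the same.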
Protections for minors and sensitive scenarios
The law includes specific protections for children. AI systems interacting with minors must periodically remind them that they are communicating with an artificial entity, with reminders required at least every three hours. Additionally, the bill bans bots from posing as licensed medical or mental health professionals, closing the door on AI acting as a therapist or doctor.
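How the reminder cadence gets enforced is left to each operator. One simple approach, sketched below under assumed names and an assumed three-hour interval, is a per-session timer checked every time a reply is sent to a user flagged as a minor.

```python
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # assumed three-hour cadence for minors
REMINDER_TEXT = "Reminder: you're chatting with an AI, not a person."

class MinorSession:
    """Tracks when the last AI reminder was shown to a minor user."""

    def __init__(self) -> None:
        # Session start counts as the first reminder (shown at onboarding).
        self.last_reminder = time.monotonic()

    def attach_reminder_if_due(self, reply: str) -> str:
        """Prepend a fresh reminder once the interval has elapsed."""
        now = time.monotonic()
        if now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            return f"{REMINDER_TEXT}\n\n{reply}"
        return reply
```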
Companies must also report annually to the state Office of Suicide Prevention on how their systems handle disclosures of self-harm, creating a new layer of accountability for crisis responses.
Enforcement and legal questions
Enforcement hinges on whether a reasonable person could be misled, which raises tricky legal questions about how that standard should be defined as conversational AI evolves. Tech leaders worry that state-by-state rules will add up to a regulatory patchwork, prompting location-based disclosure modes or other workarounds.
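That patchwork worry is concrete for developers: a product shipped nationwide may end up keying its disclosure behavior to the user's jurisdiction. A hypothetical configuration sketch might look like the following; the state codes, field names, and values are illustrative assumptions, not a statement of what any law requires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosurePolicy:
    """Per-jurisdiction disclosure settings a multi-state deployment might carry."""
    disclose_at_start: bool
    minor_reminder_hours: int | None  # None: no periodic reminder configured

# Illustrative values only; real requirements differ by state and change over time.
POLICIES = {
    "CA": DisclosurePolicy(disclose_at_start=True, minor_reminder_hours=3),
    "DEFAULT": DisclosurePolicy(disclose_at_start=False, minor_reminder_hours=None),
}

def policy_for(state_code: str) -> DisclosurePolicy:
    """Fall back to the default policy when a state has no specific entry."""
    return POLICIES.get(state_code, POLICIES["DEFAULT"])

print(policy_for("CA"))  # California-specific settings
print(policy_for("NY"))  # falls back to DEFAULT
```

The more jurisdictions that legislate, the more entries a table like this accumulates, which is exactly the overhead companies are warning about.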
Senator Steve Padilla, who authored the bill, frames it as a boundary-setting measure rather than an innovation stopper. Observers note that similar transparency pushes are underway globally, including in the EU and India.
Social and philosophical impact
Beyond technical requirements, the law engages a broader debate about honesty in human-machine relationships. When a chatbot openly acknowledges it is an AI, the psychological dynamic shifts. That crack in the illusion can reduce the potential for emotional manipulation and help set clearer expectations about what these systems can and cannot offer.
California’s move may seem modest at first glance, but it signals a growing social contract around AI transparency and accountability. Developers, regulators, and users will now have to navigate what it means to converse with synthetic but persuasive agents.