California Senate Bill 243, the Regulation of AI Companion Chatbots to Protect Youth Mental Health, passed the State Senate on June 3, 2025, in response to concerns that AI chatbots used for companionship could be harming the mental health of users—particularly children. Legislators have begun exploring the role of guardrails for AI, a technology still in its infancy, while providing some latitude to avoid stifling its innovative potential.
The legislation would require certain disclosures on covered platforms. Under the bill's provisions, chatbot operators would be required to warn users that chatbots may not be appropriate for some minors. Operators would also have to clearly disclose that the chatbots are not human at the beginning of each interaction and every three hours during prolonged use. Additional engagement-design restrictions would prohibit a platform from encouraging compulsive use.
Mental health safety risks associated with chatbots' simulated humanity inspired the bill's safety protocols, which would also prohibit chatbots from interacting with users unless the platform has implemented a process for identifying and responding to suicidal ideation or self-harm. Given the concerns for students' mental health addressed by this bill, school districts should consider the relational effects of student use of chatbots for educational purposes.
The bill now awaits consideration in the State Assembly. For additional analysis, or to assess compliance implications for your organization, please do not hesitate to contact our office.