
California Governor Gavin Newsom has signed new legislation establishing protections for minors who use AI chatbots on social media platforms. In an announcement on Monday, the governor’s office said the laws require age verification features and protocols for responding to issues such as suicide and self-harm. The legislation, SB 243, was introduced earlier this year by California State Senators Steve Padilla and Josh Becker.
Source: Governor Gavin Newsom
Padilla pointed to instances of minors interacting with AI chatbots that allegedly encouraged harmful behavior. The law requires platforms to notify minors that they are engaging with AI and that such chatbots may not be suitable for some users.
“This technology can provide significant educational advantages, but without proper regulations, the tech industry focuses on capturing the attention of youth at the cost of their real-life connections,” Padilla stated in September.
The new rules could affect a range of social media companies and platforms serving California residents, whether or not they are decentralized or incorporate gaming features. The legislation also restricts companies from avoiding liability by claiming that their AI technology acted autonomously.
SB 243 is slated to take effect in January 2026.
In related news, similar legislative measures have been enacted by Utah Governor Spencer Cox, reflecting a growing concern over the impact of AI chatbots on youth mental health.
Federal initiatives as AI technology evolves
In June, Wyoming Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which would shield AI developers from certain civil lawsuits arising from the use of their tools by professionals in sectors such as healthcare and finance. The bill has drawn mixed reactions and remains under committee review.