Last month, I wrote about an age-verification law that made its way to California Governor Newsom. Today, that bill – along with several other landmark children's online safety bills – has been signed into law. You can find more information about all of these new laws here.
Below, I have broken down what I think are some of the most impactful changes made by these new laws and when we can expect each of them to take effect.
AI Chatbot Safeguards (SB 243) - Effective January 1, 2026
- Link to Bill Text.
- Scope: This applies to “AI Chatbots” that provide users “adaptive, human-like responses to user inputs and [are] capable of meeting a user’s social needs[.]”
- Excluded: This does not apply to customer service chatbots, video game features (provided they cannot discuss “mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game”), or standalone devices like voice-activated virtual assistants.
- Requirements: Broadly, companies must:
- Give “clear and conspicuous” notice that the chatbot is not a person;
- Refrain from allowing the chatbot to engage with users unless protocols are in place to address suicidal ideation and other self-harm content;
- Implement special protections for users under 18 years old (U18), including disclosures, periodic reminders, and prohibiting sexually explicit content or suggestions to engage in sexual conduct; and
- Document and report key metrics, such as protocols limiting the chatbot and number of crisis service provider referrals issued in the prior year.
- Penalties: Consequences for violations include civil claims for injunctive relief, actual damages or $1,000 per violation (whichever is greater), and recovery of attorney's fees.
Age Verifications (AB 1043) - Effective January 1, 2027
- Link to Bill Text.
- Verification & Categories: App stores and operating system providers must build and maintain account systems where users indicate their ages, separated into the following categories:
- 12 years old and under
- 13-15 years old
- 16-17 years old
- 18 years old and over
- Signals: App developers will receive a “signal” from app stores and operating systems. Developers receiving this signal are then “deemed to have actual knowledge” of their users’ age range.
- Data Sharing: App developers are prohibited from sharing age data with any third parties for purposes unrelated to legal compliance.
- Penalties: The California Attorney General has the power to enforce the law, with penalties including injunctive relief and fines of up to $2,500 per affected child for negligent violations and $7,500 per affected child for intentional violations.
Social Media Warning Labels (AB 56) - Effective January 1, 2027
- Link to Bill Text.
- Scope: Social media platforms are broadly defined under both CA Health & Safety Code § 27000.5(b)(1) ("Addictive internet-based service or application") and CA Business & Professions Code § 22675(f) ("Social media platform").
- Excluded: This does not apply to services whose primary function is: (1) the sale of goods/services, (2) cloud storage, (3) email, (4) direct messaging (without public interaction/access), (5) internal communications, or (6) private organizational collaboration services.
- Requirements: These “social media platforms” must show U18 users “black box” warning labels that appear upon initial access and reappear after certain thresholds of continuous use.
- Initial Warning: Must be shown upon initial access for at least 10 seconds (unless the user affirmatively closes the warning), occupying at least 25% of the screen.
- Periodic Warning: Must be shown after 3 hours of cumulative use, and then at least once every hour thereafter – and duration must be at least 30 seconds, occupying at least 75% of the screen.
- Exact Wording: The warning label must display the following language in black text on a white background:
- “The Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users.”
In signing these laws, Governor Newsom commented: “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”
The message is clear: children's online safety, especially with evolving AI, is top of mind for California. This is a meaningful step in the broader online safety trend, particularly in state-level legislation, and we are seeing a complex – and ever-growing – patchwork of laws emerge across the United States. Trite as it sounds, legal guidance is more essential than ever for tech companies trying to navigate this landscape.