
Children’s Advertising Review Unit (CARU) Publishes New Risk Matrix on Generative AI & Kids

Artificial intelligence is more than the usual topic du jour; it is a rapidly improving technology that has seeped into nearly every facet of everyday life. In particular, AI has drawn interest from companies hoping to improve efficiency as well as from regulators concerned with children's online safety. This intersection has posed, and will continue to pose, real risks for companies implementing artificial intelligence, including in advertisements targeting young audiences.

The Children’s Advertising Review Unit ("CARU"), the self-regulatory body under BBB National Programs that monitors child-directed advertising and privacy practices, has certainly taken notice. After issuing a compliance warning in May 2024 on the use of generative AI in the children's space, CARU has now released Generative AI & Kids: A Risk Matrix for Brands & Policymakers – a framework to help companies identify and mitigate risks specific to children. 

CARU's new Matrix lays out eight categories of potential harm to children. Each section offers examples, potential harms, and practical steps companies can take to align their AI practices with CARU's existing advertising and privacy standards. Below, we've distilled the conduct CARU is concerned about and the mitigation techniques brands can employ to reduce their risk.

#1: Misleading or deceptive advertising
  • When does this matter?
    • Using AI to develop advertisements directed to children
       
  • What should you do?
    • Do not mislead about product details, function, or performance
    • Do not blur distinction between real and imaginary
    • Build effective advertising governance and compliance policies
    • Review contracts with third parties (especially data and ad placement agreements) for compliant AI advertising and privacy practices
#2: Deceptive influencer and endorser practices
  • When does this matter?
    • Using child-directed social media, virtual influencers, digital avatars, or chatbots
       
  • What should you do?
    • Thoroughly review AI outputs and substantiate any AI-generated claims (including visuals, chatbots, avatars, etc.)
    • Create robust influencer review process with focus on AI-generated content
    • Use clear and meaningful disclosures for any interactions with AI chatbots
    • Build guardrails so children cannot share personal data with virtual influencers, avatars, and chatbots
    • Do not promote questionable influencers or content
    • Be careful of ad placements on live streams by or featuring child influencers
    • Review procurement processes
    • Test technology often
#3: Privacy invasions and data protection risks 
  • When does this matter?
    • Using or creating AI-powered apps, AI toys, smart devices, voice assistants, learning tools, or educational apps
       
  • What should you do?
    • Implement “privacy-by-design” (e.g., proactive privacy protections)
    • Limit data collection
    • Provide notice to parents and obtain verifiable parental consent prior to collecting a child's data
    • Ensure compliance with COPPA and align with trusted COPPA Safe Harbors
    • Ensure secure storage and processing and end-to-end encryption
    • Enable high privacy settings by default
    • *Best Practice: Do not permit children’s data to be collected, used, or disclosed by an AI model, including for training purposes
    • Communicate policies to employees/vendors through mandatory training
#4: Bias and discrimination (Safe & responsible uses of AI)
  • When does this matter?
    • Creating or using AI products directed to children, including chatbots, social companion apps, avatars, and toys
       
  • What should you do?
    • Ensure human oversight
    • Know the origins of and diversify training data
    • Conduct regular bias impact assessments
    • Carefully vet any third-party vendors
    • Ensure ads do not encourage children to interact with strangers or target them with products that could isolate them or increase bullying
    • Consider spearheading a committee for AI products
#5: Harms to mental health and development
  • When does this matter?
    • Developing or using chatbots, social companions, virtual influencers, recommendation engines in social or gaming platforms, social media algorithms, or endless video feeds
       
  • What should you do?
    • Avoid addictive UX design and patterns
    • Use effective human and AI moderation tools
    • Implement digital wellness features
    • Monitor emotional impacts
    • Promote healthy screen use
    • Design and test regularly for well-being
    • Avoid designing chatbots to mimic human interaction
#6: Manipulation and over-commercialization
  • When does this matter?
    • Using AI for personalized ads, gamified purchases, AI influencers, or targeted ads in games, on YouTube, or in apps, or otherwise selling or targeting ads to children
       
  • What should you do?
    • Restrict behavioral targeting for children
    • Provide transparent and clear disclosures of advertising
    • Disable nudging and notification techniques
    • Implement ethical design practices
#7: Exposure to harmful content
  • When does this matter?
    • Using AI-generated content or videos, user-generated platforms, AI chatbots or apps, or game forums
       
  • What should you do?
    • Use age-tiered filters (e.g., U13, 13-15, 16-17, 18+)
    • Consider device-level age verification tools on shared family devices to signal to apps when a child is U13
    • Strengthen and frequently audit your AI and human moderation systems
    • Enable reporting and flagging systems
    • Ensure you are using verified content sources
    • Create AI transparency tools
#8: Lack of transparency
  • When does this matter?
    • Using algorithms
       
  • What should you do?
    • Implement “explainability tools” (i.e., explain how AI models make decisions and are trained)
    • Provide regular reminders that a user is interacting with an AI
    • Offer child and consumer-friendly privacy policies, direct notice, and other disclosures for families
    • Ensure clear opt-in/opt-out consent mechanisms and consumer rights to delete/appeal 

 

What does this mean?

We've answered the key questions of when these categories may apply and what CARU recommends companies do to mitigate these risks. The last question is simple: What does this mean moving forward?

For brands using AI in child-directed spaces, CARU’s message is clear: existing children’s advertising and privacy standards still apply, and arguably demand an even greater degree of care. CARU's new matrix should serve as both a warning and a roadmap for companies that want to incorporate AI into their marketing and data practices. Any brands or advertisers using artificial intelligence should review these guidelines carefully and evaluate where their existing practices meet these standards and where they can improve.

Looking down the road (and maybe into a crystal ball), we will very likely continue to see regulations regarding artificial intelligence, particularly as it relates to our most vulnerable consumers – children. I recently wrote about a new California law that will require AI chatbot providers to give certain warnings and implement protective protocols for U18 users. Companies should be proactively reviewing their AI practices now to ensure they are ready to comply with these emerging standards and obligations.

Tags

children, advertising law, childrens privacy, caru, artificial intelligence, genai, technology law updates, technology law, advertising law updates