Child Safety Takes Center Stage

By Genny Ngai
Genny Ngai is a partner at Morrison Cohen LLP in New York and a former federal prosecutor with over ten years of criminal and civil litigation experience. As a federal prosecutor, she handled a wide range of white-collar cases, including crimes involving the misuse of artificial intelligence. Now in private practice, she advises and defends companies and individuals in government investigations and prosecutions, regulatory inquiries, and civil disputes. In particular, she advises clients in the digital assets and innovative technology space, helping them navigate legal concerns and mitigate civil and criminal risk.
Artificial intelligence companies in the United States are facing heightened scrutiny over the potential risks their systems pose to minors and other vulnerable users. Over the past year, AI developers have been navigating legal challenges on multiple fronts, from private litigation to federal and state oversight. These stakeholders are pursuing increasingly assertive strategies to hold AI companies accountable for alleged harms arising from AI chatbots and companion systems. Together, these developments show that all eyes are on companies’ design choices and risk-mitigation measures.
The following actions highlight the growing risks for AI companies providing services to young users:
Private Litigation: Private plaintiffs have recently sued AI companies under novel legal theories, including strict products-liability claims, alleging that they or their loved ones were harmed by interactions with AI chatbots and companions. Most notably, in August 2025, the parents of a 16-year-old boy who died by suicide filed a lawsuit against OpenAI, Inc., alleging that the company made design choices intended to foster psychological dependency in vulnerable minors, and that its chatbot encouraged self-harm. See Matthew Raine, et al. v. OpenAI, Inc., et al., No. CGC-25-628528 (San Francisco Cnty. Super. Ct.), filed Aug. 26, 2025. The complaint raises an unsettled but far-reaching question about the scope of AI chatbot liability: whether an AI company can be treated as a “manufacturer” subject to strict products-liability standards.
Federal Government: In September 2025, shortly after the Raine lawsuit was filed, the Federal Trade Commission (FTC) opened a broad inquiry under Section 6(b) of the FTC Act into seven companies offering consumer-facing AI companion tools. The agency demanded extensive information, including how the companies monitor and mitigate potential negative impacts on children; what design and safety features they employ; and how they monetize user engagement.[1] Section 6(b) orders are powerful investigative tools: the FTC does not need an underlying law-enforcement purpose, it can compel compliance, and it can share its findings with law enforcement agencies. The inquiry also signals that regulators are no longer waiting for a specific violation before probing industry practices on child safety.
State Attorneys General: State Attorneys General (AGs) are also intensifying their scrutiny. In August 2025, a bipartisan coalition of 44 AGs, including those from California, New York, Pennsylvania, and Virginia, sent an open letter to a number of leading AI companies. The letter warned that the companies will be held accountable for harm their AI systems cause to children, and emphasized that “any conduct that would be unlawful – or even criminal – if done by humans is not excusable simply because it is done by a machine.”[2] The warning is credible: in recent years, state AGs have aggressively pursued social media and technology companies for alleged failures to protect children. For example, in 2023, New Mexico’s AG sued Meta and its CEO, alleging that the platform failed to protect young users from child-exploitation risks. See, e.g., State of New Mexico v. Meta Platforms, Inc., et al., No. D-101-CV-202302838 (Santa Fe Cnty., N.M.). Similar actions targeting AI developers may follow.
State Legislatures: State legislatures have also begun enacting targeted AI-safety statutes specifically addressing companion AI products. For example, in November 2025, New York’s General Business Law Article 47 took effect, requiring all companies offering and operating “companion AI bots” to implement safety protocols for crisis prevention, and to provide clear, repeated notifications during extended use that users are interacting with AI. The State Attorney General is charged with enforcing the statute, and companies can face civil penalties of up to $15,000 per day.
Key Takeaways
Although many of the above actions are still pending, they collectively demonstrate the escalating legal risks for AI operators and developers that generate, promote, or personalize content for young and vulnerable users. Child safety is a bipartisan priority, and stakeholders, including regulators, have made clear that algorithmic design and safety choices, not merely individual misuse, will be the focus of future enforcement and litigation. For AI companies, including downstream licensees and operators, proactive safety and compliance measures are essential. Regulators expect companies to have clear, documented safety and mitigation policies and procedures, along with evidence of good-faith efforts to mitigate risks. Building these safeguards today can establish a key foundation, and a ready defense, for future inquiries and enforcement actions.
[1] https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
[2] Open letter from a bipartisan coalition of 44 State Attorneys General to leading AI companies (Aug. 2025).
_____________________________________________________________
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors
