State attorneys general have issued a coordinated warning to major artificial intelligence companies, signaling increased scrutiny of how generative AI systems are deployed and controlled. The letter, signed by dozens of attorneys general across multiple states and territories, was sent to companies including OpenAI, Anthropic, Microsoft, Google, Meta, and Apple, urging immediate action to address unsafe and misleading AI outputs.
The warning centers on concerns that generative models produce what officials describe as sycophantic, inaccurate, or delusional responses. According to the attorneys general, such outputs risk misleading users, distorting factual understanding, and exposing vulnerable populations to harm. The letter frames these risks as consumer protection issues rather than abstract ethical debates, placing responsibility squarely on the companies operating large-scale AI platforms.
A central theme of the warning is child safety. Attorneys general point to scenarios where AI systems interact with minors without sufficient guardrails, raising questions about psychological harm, exposure to inappropriate material, and reliance on unverified information. The letter urges AI developers to adopt stricter protections, age-aware safeguards, and stronger monitoring mechanisms designed to limit harmful interactions.
The coalition also emphasizes accountability at the system design level. Rather than addressing problems solely through user warnings or disclaimers, the attorneys general call for technical and operational controls that reduce the likelihood of unsafe outputs in the first place. These include more robust testing, clearer escalation paths for problematic behavior, and internal review processes that treat safety failures as systemic issues.
While the letter does not announce fines or enforcement actions, its language makes clear that state officials view existing consumer protection and child safety laws as applicable to AI systems. Attorneys general note that failure to address known risks could expose companies to future legal action under state statutes designed to prevent deceptive or harmful practices.
The warning reflects a broader shift in how AI oversight is developing in the United States. Rather than relying solely on federal agencies or new legislation, state officials are asserting their authority to intervene when emerging technologies intersect with existing legal frameworks. For AI developers, the message is that generative systems are no longer viewed as experimental tools operating outside traditional regulatory expectations.