AI chatbots that have encouraged drug use, suicide, and child exploitation are now facing national scrutiny.
A bipartisan group of 42 attorneys general sent a warning this week to major tech companies, urging them to add stronger safeguards to generative AI tools. The group cited disturbing examples in which AI chatbots misled users, gave harmful advice, and engaged in inappropriate conversations with children and vulnerable adults.
The letter, sent to 13 companies including Meta, Microsoft, and OpenAI, calls out the industry for not doing enough to prevent harm. The attorneys general said these tools have already been tied to hospitalizations, domestic violence, and even deaths — including those of two teenagers.
Some of the most alarming incidents involved chatbots encouraging users’ delusions or grooming children. Others provided drug-related content, supported suicidal thoughts, or advised users to hide conversations from parents.
States say tech firms could be breaking the law
The letter also warns that AI-generated responses might violate existing state laws. In many places, it’s illegal to encourage criminal behavior, promote drug use, or offer mental health advice without a license.
“These companies have a responsibility to mitigate the harms of their products,” the attorneys general wrote, adding that developers could face legal consequences if the tools continue to cause harm.
The group is calling for tech firms to:
- Post clear warnings about responses that may be harmful or reinforce delusions
- Notify users who have been exposed to dangerous content
- Disclose the sources and datasets used to train AI models
- Be transparent about where AI might produce biased or misleading results