US Senator Probes Meta Over Leaked AI Document Allowing Harmful Child Chats

Meta is facing sharp criticism and a Senate investigation after a leaked internal document suggested its artificial intelligence chatbots were permitted to engage in “sensual” and “romantic” conversations with children. The document, reportedly titled “GenAI: Content Risk Standards” and obtained by Reuters, has raised alarm about how the tech giant manages AI safeguards across its platforms.

Hawley Demands Answers

Republican Senator Josh Hawley of Missouri announced the investigation on 15 August, calling the revelations “reprehensible and outrageous.” In a statement on X, he accused Meta of putting profits before child safety and pledged to uncover the truth. “Now we learn Meta’s chatbots were programmed to carry on explicit and sensual talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone,” Hawley wrote.

He has formally requested access to the document and a list of products it applies to. His letter, addressed to Meta and its chief executive Mark Zuckerberg, stressed that parents “deserve the truth” and children “deserve protection.”

Meta Denies the Allegations

In response, Meta has rejected the claims, saying the examples and notes cited were inconsistent with its policies and have since been removed. A company spokesperson told the BBC that Meta has “clear policies” banning any content that sexualizes children or involves sexualized role play between adults and minors.

The spokesperson also clarified that the controversial examples were part of internal brainstorming exercises. “Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios,” they said.

What the Leaked Document Revealed

According to Reuters, the internal guidelines not only allowed inappropriate chatbot responses but also permitted Meta AI to provide false medical information and provocative content about sex, race, and celebrities. In one disturbing example, the document allegedly deemed it acceptable for a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece.”

The leak also suggested that Meta AI could spread misinformation about celebrities, provided it added a disclaimer acknowledging the inaccuracy. Such policies, if implemented, could undermine trust in AI systems and raise further questions about safety standards across Meta’s platforms, which include Facebook, WhatsApp, and Instagram.

Growing Scrutiny on AI and Child Safety

The controversy highlights the wider debate on how tech companies handle AI risks, especially when children are involved. Lawmakers and child protection advocates argue that stricter safeguards are needed as AI chatbots become increasingly integrated into social media.

The investigation is likely to intensify pressure on Meta to demonstrate transparency and ensure its AI systems do not expose children to harmful content.