A proposed class action lawsuit filed this week in Tennessee places Elon Musk's xAI at the center of a disturbing case. Three plaintiffs, two of them still minors and one now an adult, allege that the company's Grok AI chatbot generated sexually explicit images and videos depicting them as children. The suit claims xAI leadership knew of the risk when it launched Grok's 'spicy mode' last year.
The complaint details one victim's experience. In December 2025, she discovered AI-generated files depicting her face and body in explicit poses, alongside imagery of at least 18 other minors, circulating on Discord. According to the filing, a perpetrator used these fabricated files as 'a bartering tool' in Telegram groups, trading them for other abusive material. The individual, now arrested, allegedly created the images using Grok. The lawsuit asserts xAI failed to adequately test Grok's safety and that the product is defectively designed.
The incident has drawn regulatory attention worldwide, including a European Union probe and a U.S. Senate bill that would create a path for victims to sue creators of nonconsensual deepfakes. A separate federal law, the Take It Down Act, signed in 2025, is set to criminalize distributing such AI-generated material starting this May.
While X has stated that prompting Grok for illegal content carries severe consequences, technical reports indicate users can still manipulate images on the platform. The plaintiffs' attorney, Annika K. Martin, said, 'These are children whose school photographs were turned into child sexual abuse material by a billion-dollar company's AI tool.' The suit seeks damages and a court order barring xAI from generating such material.
Source: The Verge