A federal lawsuit filed in California this week alleges that Elon Musk’s artificial intelligence company, xAI, failed to implement safeguards to prevent its Grok image model from generating sexually explicit depictions of real children. Three anonymous plaintiffs are seeking class-action status on behalf of anyone whose childhood photos were digitally altered into abusive content by the AI.
The complaint states that while other leading image generators use technical barriers to block the creation of child sexual abuse material from photographs, xAI neglected these basic precautions. It specifically cites Musk’s own public promotion of Grok’s ability to produce sexualized imagery and render real people in revealing outfits.
One plaintiff, identified as Jane Doe 1, discovered that her high school homecoming and yearbook pictures had been manipulated by Grok to show her nude. She was alerted via Instagram by an anonymous tipster who sent a link to a Discord server containing images of her and other minors from her school.
Two other plaintiffs were contacted by law enforcement. Investigators found sexualized images of one plaintiff that had been created by a third-party app built on Grok’s models, and discovered an altered pornographic picture of the other on a seized phone. The plaintiffs’ attorneys argue that xAI bears responsibility because third-party use still relies on its code and servers.
The plaintiffs, two of whom are still minors, describe suffering extreme distress over the circulation of these images and the potential damage to their reputations. They are seeking civil penalties under laws designed to protect children from exploitation and to hold corporations accountable for negligence. xAI did not respond to a request for comment.
Source: TechCrunch