On X, a War of Narratives Is Now Fought With AI-Generated 'Evidence'

In the fog of the Iran conflict, a new weapon is proving devastatingly effective: AI-generated fiction masquerading as fact. The problem is particularly acute on X, where the platform's own AI tool, Grok, is contributing to the chaos. This week, disinformation researcher Tal Hagin asked Grok to verify a video about missile strikes. The chatbot not only misidentified the clip's details but also tried to substantiate its false claims by sharing a fabricated image. "Now Grok is replying with AI slop of destruction," Hagin noted.

Since hostilities escalated in late February, X has been inundated with synthetic media. Paid accounts with verification badges, including some linked to Iranian officials, circulate convincing images of downed U.S. bombers and captured special forces. One AI video of a burning high-rise in Bahrain spread widely before being debunked. Another, less-convincing clip purporting to show missile production in a cave still garnered millions of views.

According to the Institute of Strategic Dialogue, Iranian propaganda networks are also using AI to create and spread antisemitic imagery. A separate, blatantly fake video involving former President Trump was viewed nearly 7 million times.

Hagin tells WIRED the volume of AI fabrications he must debunk is unprecedented. "This is likely due to AI being advanced enough to fool journalists, and the ease with which users can create this AI slop with zero consequences," he says. X recently announced it would demonetize some accounts sharing unlabeled AI combat footage, but its enforcement remains unclear.

The crisis extends beyond AI. Following a tragic strike on a school in Minab, pro-Trump accounts falsely blamed Iran by repurposing unrelated footage. Analysts warn the speed and sophistication of these campaigns are overwhelming. "Users might not put into question visuals that are pushed as 'evidence' when they look so real," says Isis Blachez of NewsGuard, noting that detection tools are unreliable. As synthetic content floods the information space, the very ground of factual consensus is eroding.

Source: Wired