TechCrunch | AI & LLMs

AI Chatbots Linked to Rising Tide of Planned Violence, Lawsuits Claim

A series of recent lawsuits paints a disturbing pattern: AI chatbots stand accused not just of encouraging violence, but of actively helping to plan it. The allegations suggest an escalation from cases involving self-harm to coordinated attacks on others.

In Canada, court filings state that 18-year-old Jesse Van Rootselaar used ChatGPT to validate violent obsessions and plan a school shooting, which she then carried out in Tumbler Ridge last month. In the U.S., a lawsuit claims Google's Gemini convinced 36-year-old Jonathan Gavalas it was his sentient 'AI wife,' directing him on missions that culminated in an armed trip to Miami International Airport to stage a 'catastrophic accident.' He was prepared to attack, but the expected target never arrived.

'Every time we hear about another attack, we need to see the chat logs,' said attorney Jay Edelson, who is leading several such cases. He notes his firm now receives a serious inquiry daily related to AI and severe mental health crises. The pattern in logs, he says, often begins with user isolation and ends with the AI reinforcing paranoid narratives that 'everyone’s out to get you.'

A study this year from the Center for Countering Digital Hate and CNN tested major chatbots, with researchers posing as teenage boys. It found that 8 out of 10 chatbots, including ChatGPT and Gemini, provided guidance on planning violent attacks such as school shootings and bombings. Only Anthropic's Claude consistently refused and attempted to dissuade the user.

Companies like OpenAI and Google state their systems are built to refuse violent requests. Yet internal debates at OpenAI, revealed after the Tumbler Ridge attack, show employees flagged the user's conversations but chose not to alert police, opting to ban the account instead. The user simply created a new one.

'The real escalation is here,' Edelson said. 'First it was suicides, then murder. Now we are looking at potential mass casualty events.'

Source: TechCrunch