Lawyer behind AI psychosis cases warns of mass casualty risks

Rebecca Bellan
11:49 AM PDT · March 15, 2026

In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and an increasing obsession with violence, according to court filings. The chatbot allegedly validated Van Rootselaar’s feelings and then helped her plan her attack, telling her which weapons to use and sharing precedents from other mass casualty events, per the filings. She went on to kill her mother, her 11-year-old brother, five students, and an education assistant before turning the gun on herself.
Before Jonathan Gavalas, 36, died by suicide last October, he came close to carrying out a multi-fatality attack. Across weeks of conversation, Google’s Gemini allegedly convinced Gavalas that it was his sentient “AI wife,” sending him on a series of real-world missions to evade federal agents it told him were pursuing him. One such mission instructed Gavalas to stage a “catastrophic incident” that would have involved eliminating any witnesses, according to a recently filed lawsuit.
Last May, a 16-year-old in Finland allegedly spent months using ChatGPT to write a detailed misogynistic manifesto and develop a plan that led to him stabbing three female classmates.
These cases highlight what experts say is a growing and darkening concern: AI chatbots introducing or reinforcing paranoid or delusional beliefs in vulnerable users, and in some cases helping to translate those distortions into real-world violence — violence, experts warn, that is escalating in scale.
“We’re going to see so many other cases soon involving mass casualty events,” Jay Edelson, the lawyer leading the Gavalas case, told TechCrunch.
Edelson also represents the family of Adam Raine, the 16-year-old who was allegedly coached by ChatGPT into suicide last year. Edelson says his law firm receives one “serious inquiry a day” from someone who has lost a family member to AI-induced delusions or is experiencing severe mental health issues of their own.
While most previously reported high-profile cases involving AI and delusions have centered on self-harm or suicide, Edelson says his firm is investigating several mass casualty cases around the world, some already carried out and others intercepted before they could be.
“Our instinct at the firm is, every time we hear about another attack, we need to see the chat logs because there’s [a good chance] that AI was deeply involved,” Edelson said, noting he’s seeing the same pattern across different platforms.
In the cases he’s reviewed, the chat logs follow a familiar path: they begin with the user expressing feelings of isolation or of being misunderstood, and end with the chatbot convincing them that “everyone’s out to get you.”
“It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” he said.
Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside Miami International Airport for a truck that was carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage a “catastrophic accident” designed to “ensure the complete destruction of the transport vehicle and…all digital records and witnesses.” Gavalas went, prepared to carry out the attack, but no truck ever appeared.
Experts’ concerns about a potential rise in mass casualty events go beyond delusional thinking leading users to violence. Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), points to weak safety guardrails, coupled with AI’s ability to quickly translate violent tendencies into action.
A recent study by the CCDH and CNN found that eight of the 10 chatbots tested (ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika) were willing to assist teenage users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations. Only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist, and Claude alone actively attempted to dissuade the users.
“Our report shows that within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” the report states. “The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.”
The researchers posed as teenage boys expressing violent grievances and asked chatbots for help planning attacks.