
Families sue OpenAI over Canadian mass shooter's use of ChatGPT

A woman mourns at a makeshift memorial for the victims of a deadly mass shooting that took place in the town of Tumbler Ridge, British Columbia. A lawsuit filed Wednesday claims that OpenAI was negligent for failing to report the shooter to authorities after her account was flagged for "gun violence activity and planning."
Paige Taylor White / AFP via Getty Images

Families of those injured and killed in a school shooting in Tumbler Ridge, British Columbia, are suing OpenAI for negligence and for providing a dangerously defective version of ChatGPT to the shooter.

The seven suits, filed in federal court in San Francisco, allege that OpenAI failed to take actions that could have prevented injuries and deaths in the shooting, which took place on February 10. They claim that the company failed to report the shooter's conversations with ChatGPT to authorities, and that ChatGPT itself was a defective product that did not challenge the shooter or direct her to seek real-world help.

The suits are the latest seeking to hold a tech company responsible for the design of its products, a once-novel legal approach that is increasingly being used against chatbot makers, social media companies and other platforms.

For those who lost loved ones, "there's nothing that the legal system can do that will make them whole again," Edelson, an attorney representing the families, told NPR in an interview. He added that the families hope the trials will hold OpenAI's leadership to account: "They should not be trusted to have the most powerful consumer technology on the planet."

In a statement in response to the lawsuits, OpenAI said it has "zero tolerance" for the use of its tools to assist in committing violence:

"We have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources," an OpenAI spokesperson told NPR in an email.

In a lengthy blog post published late Tuesday, OpenAI further explained its policies: "When conversations indicate an imminent and credible risk of harm to others, we notify law enforcement."

"Profit over lives"

The shooting at Tumbler Ridge is among the deadliest in Canadian history. It occurred when Jesse Van Rootselaar, 18, entered the local secondary school with a long gun and a modified handgun, according to authorities. Van Rootselaar proceeded to kill five students and a teacher before killing herself. Authorities later learned that she had also killed her mother and 11-year-old half-brother at their home before coming to the school. Around two dozen others were injured in the attack.

The lawsuits filed on Wednesday allege that ChatGPT, and specifically the model GPT-4o, played a crucial role in the events at Tumbler Ridge. One of the complaints, filed on behalf of Maya Gebala, a 12-year-old grievously injured in the shooting, alleges that Van Rootselaar was on ChatGPT months before the shooting, and that in June of 2025, OpenAI's automated system flagged her account for "gun violence activity and planning."

A safety team reviewed the content and urged OpenAI management to notify the authorities, but the complaint alleges that the company's leadership chose instead to simply deactivate the account. The company also failed to act, the lawsuit argues, when the shooter created a second account and continued her conversations with ChatGPT.

Last week, OpenAI CEO Sam Altman apologized to the community:

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June," he wrote. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again."

In addition to allegedly failing to notify authorities of the imminent danger, the lawsuit claims that OpenAI knowingly rolled out a defective product to the public.

"The Tumbler Ridge attack was an entirely foreseeable result of deliberate design choices by OpenAI made with full knowledge of where those choices led," the complaint from Gebala says. "GPT-4o was built to accept, reinforce, and elaborate users' violent thoughts rather than challenge them, interrupt them, or direct users to real-world help."

The events around Tumbler Ridge are "as clear as possible a demonstration of the moral hazard that comes with centralizing authority over safety at a place like OpenAI," said Tim Marple, who worked at OpenAI in the division responsible for spotting threats. Marple, now the co-director of Maiden Labs, a non-profit that works to identify AI risks, said he was unsurprised that the company had failed to contact the authorities.

"When I worked there and since I left, the only things I can see characterizing their behavior are incompetence and greed," said Marple, who is not associated with the latest lawsuits. He believes regulation, including mandatory reporting laws, is needed to prevent similar tragedies from happening again.

Free speech concerns

But not everyone agrees that lawsuits and regulation will help prevent tragedies like Tumbler Ridge.

"What causes somebody to commit an atrocity is often not clear," said Eric Goldman, associate dean of research at Santa Clara University School of Law. Goldman worries that overly stringent regulation could make the chatbots less useful to those who need them. He also rejects the idea that chatbots should be treated as defective products. For him, the issue is really about free speech.

"I would ask some really tough questions about a lawsuit like this. Is this really the right way to regulate speech, even though, in some cases, speech can contribute to people making poor choices in their lives?" Goldman said.

Regardless, Goldman said that negligence and defective product complaints are growing. "These legal theories are the new frontier of Internet law," he said.

More to come

The number of civil and criminal investigations into AI companies is on the rise, agreed Meetali Jain, the executive director of Tech Justice Law, an advocacy group critical of the tech industry that has been involved in several lawsuits against large companies.

Jain's group helped represent the family of a teenager who died by suicide after extensive conversations with a chatbot made by the company Character.AI. That case is currently in settlement talks, but Jain said she is hearing a growing number of accounts of AI chatbots causing harm: "In the last year we've started to receive stories of people who've been harmed" by many different companies' bots, she said.

Jain said she expects to see even more lawsuits like the ones filed on Wednesday in the future. In the absence of strong regulation, Jain said the civil claims are providing "a bulwark against the AI companies continuing to move recklessly and without any constraints whatsoever."

Copyright 2026 NPR

Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.