Sam Altman

SAN FRANCISCO — A new group of lawsuits has taken aim at OpenAI, the creator and operator of ChatGPT, as families of victims of a Canadian school shooting say the company and its billionaire founder should pay for allegedly refusing to alert Canadian authorities to the growing threat posed by the accused transgender shooter.

On April 29, attorneys from the firm of Edelson P.C., of San Francisco, filed seven lawsuits in San Francisco federal court on behalf of families and victims of the February shooting at a public school in the community of Tumbler Ridge, British Columbia, which killed six people, including five children, and wounded 27 others.

Tumbler Ridge is located in far eastern British Columbia near the provincial boundary with Alberta. The mining community of 2,000 people lies more than 400 miles from the city of Edmonton, Alberta, and more than 700 miles from Vancouver, British Columbia.

On Feb. 10, 2026, the shooter, identified by authorities and published reports as 18-year-old Jesse Van Rootselaar, opened fire with a "modified rifle" at Tumbler Ridge Secondary School, where Van Rootselaar had formerly been a student.

According to published reports, Van Rootselaar, a biological male, identified as a transgender woman.

Before attacking the school, Van Rootselaar had first killed his mother and half-brother.

Published reports refer to Van Rootselaar using female pronouns, while the legal complaints refer to Van Rootselaar using variations of the pronoun "they."

Van Rootselaar committed suicide at the school during the attack.

According to the complaints, Van Rootselaar had allegedly used OpenAI's chatbot artificial intelligence ChatGPT to assist in planning the Tumbler Ridge attack.

The complaints assert Van Rootselaar's activity was flagged by an OpenAI automated system as a growing threat in June 2025, about eight months before the attack.

However, the company allegedly chose not to notify the Royal Canadian Mounted Police (RCMP) of the threat. According to the complaint, executive leadership at OpenAI allegedly overruled a "specialized safety team who reviewed" Van Rootselaar's interactions with ChatGPT. That team had allegedly "determined that the Shooter posed a credible and specific threat of gun violence against real people."

According to the complaints, executive leadership at OpenAI, allegedly including founder Sam Altman, chose not to notify the RCMP because "warning the RCMP would set a precedent: OpenAI would be compelled to notify authorities every time its safety team identified a user planning real-world violence."

The lawsuits allege this would in turn "require a dedicated law enforcement referral team tasked with reporting OpenAI's own users to authorities."

"And the public would finally see what OpenAI was desperately trying to hide: that ChatGPT is not the safe, essential tool the company sells it as, but a product dangerous enough that its makers routinely identify its users as threats to human life."

The complaints assert the matter "was a question of corporate survival," as the company nears a potential initial public offering (IPO) with potentially more than $1 trillion in play.

Rather than notifying the RCMP, the complaints claim, OpenAI instead "deactivated" Van Rootselaar's account. However, Van Rootselaar then allegedly simply created a new account, following instructions provided by OpenAI and ChatGPT, and "continued using ChatGPT to plan the attack."

The lawsuits further assert that OpenAI repeatedly lied about Van Rootselaar's use of ChatGPT in planning the attack, allegedly claiming the company had "banned" Van Rootselaar but that the shooter had somehow "'evaded' the company's safeguards..."

However, as Canadian authorities continued their investigation, Altman eventually published a "letter to the community," in which he said: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June."

The lawsuits claim, however, that OpenAI has taken no steps to either institute new law enforcement referral policies, or to implement safeguards in which ChatGPT will refuse to "discuss violence" with potential shooters, like Van Rootselaar.

The lawsuits seek unspecified damages for wrongful death, plus potential punitive damages.

They also seek court orders requiring OpenAI to bar users from re-registering after they have been deactivated for "violent misuse;" to refer to police any users flagged for potential crimes or potentially violent crimes; and to reprogram ChatGPT and any successor A.I. programs not to "prioritize agreeable, validating responses over public safety in conversations that present a serious risk of harm," among other desired requirements.

Plaintiffs in the actions are represented by attorney Ali Moghaddas, of Edelson P.C., of Los Angeles; and attorneys John M. Rice and Mallory K. Hogan, of Rice Parsons Leoni & Elliott, of Vancouver, British Columbia.

The lawsuits come as OpenAI and other operators of chatbot A.I. programs already face a mounting number of claims in connection with murders, suicides and other violence.

OpenAI, for instance, is facing lawsuits from family members of Stein-Erik Soelberg, who killed himself after murdering his mother. According to those complaints, ChatGPT allegedly fed Soelberg's delusions and paranoia in the years following his divorce, leading him to the murder-suicide.

And in Florida, that state's attorney general has launched a criminal investigation into OpenAI over an accused shooter's use of ChatGPT before an attack at Florida State University in which two people were killed.
