
Altman Apologizes for Not Alerting Law Enforcement About ChatGPT Account Used in Canadian School Shooting


OpenAI CEO Sam Altman has apologized to members of a Canadian community where a mass shooting took place earlier this year for not flagging the ChatGPT account of the shooter to law enforcement.

“The pain your community has endured is unimaginable,” Altman wrote in a letter shared Friday on social media by British Columbia Premier David Eby. “I have been thinking of you often over the past few months.”

Eight people were killed in the Feb. 10 massacre in the small community of Tumbler Ridge in northeast British Columbia. Six people were fatally shot when 18-year-old Jesse Van Rootselaar opened fire at Tumbler Ridge Secondary School, authorities said, and the shooter’s mother and 11-year-old brother were killed at a nearby residence. Van Rootselaar died of a self-inflicted gunshot wound, officials said.

Altman wrote in the letter, dated Thursday, that Van Rootselaar’s ChatGPT account had been banned in June 2025 — about eight months prior to the shooting. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said.

In February, OpenAI told CBS News that Van Rootselaar’s account had been flagged last year by automated abuse-detection tools and human investigators who identify potential misuse of ChatGPT for violent activities. OpenAI said the account was then banned for violating its usage policies.

OpenAI said that the company had weighed whether to flag the account to law enforcement, but had determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, failing to meet the threshold for referral.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” OpenAI said in a statement to CBS News in February following the shooting. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

OpenAI says that ChatGPT is trained to discourage real-world harm and is instructed to refuse to help when it detects illicit intent. Users who indicate plans to harm others are flagged to human reviewers, who determine whether a case poses an imminent threat of physical harm and should be referred to law enforcement, according to the company.

Altman wrote in his letter that OpenAI will remain focused on preventative efforts “to help ensure something like this never happens again.”

“I want to express my deepest condolences to the entire community,” Altman said. “No one should ever have to endure a tragedy like this.”

News Desk
