
Altman Apologizes for Not Alerting Law Enforcement About ChatGPT Account Used in Canadian School Shooting


OpenAI CEO Sam Altman has apologized to members of a Canadian community where a mass shooting took place earlier this year for not flagging the ChatGPT account of the shooter to law enforcement.

“The pain your community has endured is unimaginable,” Altman wrote in a letter shared Friday on social media by British Columbia Premier David Eby. “I have been thinking of you often over the past few months.”

Eight people were killed in the Feb. 10 massacre in the small community of Tumbler Ridge in northeast British Columbia. Six people were fatally shot when 18-year-old Jesse Van Rootselaar opened fire at Tumbler Ridge Secondary School, authorities said, and the shooter’s mother and 11-year-old brother were killed at a nearby residence. Van Rootselaar died of a self-inflicted gunshot wound, officials said.

Altman wrote in the letter, dated Thursday, that Van Rootselaar’s ChatGPT account had been banned in June 2025 — about eight months prior to the shooting. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said.

In February, OpenAI told CBS News that Van Rootselaar’s account had been flagged last year by automated abuse-detection tools and human investigators who identify potential misuse of ChatGPT for violent activities. OpenAI said the account was then banned for violating its usage policies.

OpenAI said that the company had weighed whether to flag the account to law enforcement, but had determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, failing to meet the threshold for referral.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” OpenAI said in a statement to CBS News in February following the shooting. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

OpenAI says that ChatGPT is trained to discourage real-world harm and is instructed to refuse to help when it detects illicit intent. Users who indicate plans to harm others are flagged to human reviewers, who determine whether a case poses an imminent threat of physical harm and should be referred to law enforcement, according to the company.

Altman wrote in his letter that OpenAI will remain focused on preventative efforts “to help ensure something like this never happens again.”

“I want to express my deepest condolences to the entire community,” Altman said. “No one should ever have to endure a tragedy like this.”

News Desk
