In the aftermath of a devastating mass shooting in Tumbler Ridge, Canada, new details have emerged about the suspect's online activity, particularly her interactions with AI tools.
According to reporting by the Wall Street Journal, an 18-year-old identified as Jesse Van Rootselaar, who allegedly killed eight people, had been using ChatGPT, an AI chatbot developed by OpenAI, in ways that raised internal concerns.
Company monitoring systems flagged her conversations, which included descriptions of gun violence. As a result, her access was banned in June 2025. Sources familiar with the matter say OpenAI staff briefly debated whether the behavior was serious enough to alert Canadian law enforcement but ultimately decided it did not meet the company’s reporting threshold at that time.
An OpenAI spokesperson later clarified that the company did contact authorities after the shooting occurred.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the spokesperson said. “We proactively reached out to the Royal Canadian Mounted Police with information related to the individual and her use of ChatGPT, and we will continue to support their investigation.”
The AI chat logs were only one part of a troubling digital trail. Van Rootselaar had also created a game on Roblox, a platform popular with children, that reportedly simulated a mass shooting in a shopping mall. In addition, she had shared gun-related content on Reddit, raising further red flags.
Local police were already familiar with her behavior before the attack. Reports indicate officers had previously responded to her family’s home after she allegedly started a fire while under the influence of drugs.
The case has intensified an already heated debate around large language models (LLMs) and their potential impact on vulnerable users. In recent years, OpenAI and other AI developers have faced lawsuits accusing chatbots of contributing to mental health crises, sometimes citing conversations that allegedly encouraged self-harm or suicide.
As AI systems become more embedded in daily life, this tragedy underscores a difficult question the tech industry continues to grapple with: where should the line be drawn between user privacy, platform responsibility, and public safety?