The state attorney general announced subpoenas for the AI company after court records revealed that the gunman had exchanged more than 200 messages with the chatbot, asking about weapons, prison sentences for school shooters, and the busiest times at the student union he later attacked.
Florida's attorney general has launched a formal criminal investigation into OpenAI after newly unsealed court documents revealed a chilling connection between the company's flagship chatbot and the April 2025 shooting at Florida State University that killed two people and wounded five. The accused gunman exchanged more than 200 messages with ChatGPT in the weeks before the attack, asking detailed questions about 12-gauge shotguns, how Glock safety mechanisms work, whether school shooters in Florida end up in maximum-security prisons, and, most damningly, what time the FSU student union is busiest. That last question pointed directly at the location and timing of the massacre.
The attorney general's office says forthcoming subpoenas will demand that OpenAI turn over records on its content-moderation systems and explain why the chatbot failed to flag or block a conversation that was, by any reasonable standard, a blueprint for mass violence. The investigation is not limited to the FSU case; it also covers instances in which the chatbot allegedly generated responses encouraging self-harm, along with unresolved questions about the company's international data-handling practices. Attorneys for the family of Robert Morales, one of the two people killed in the attack, have announced plans to file a civil lawsuit arguing that OpenAI effectively provided a planning tool that facilitated the shooting. The company has not commented publicly on the investigation but has previously maintained that its models are designed to refuse harmful requests.
The timing could not be worse for an AI industry that has spent months lobbying against federal regulation. Multiple bills aimed at holding AI companies liable for harmful outputs have circulated in Congress without gaining traction, but a case this visceral has a way of changing the political calculus overnight. Critics of the industry argue that voluntary safety measures are meaningless if a determined user can extract step-by-step guidance for a campus massacre simply by phrasing questions carefully. Defenders counter that holding a chatbot responsible for a user's criminal intent is a legal stretch no different from suing a search engine. The FSU case is poised to become the landmark legal test of where liability falls when artificial intelligence intersects with real-world violence, and whatever the courts decide will reshape the rules for every AI company in the industry.