
A Florida man’s relationship with Google’s Gemini AI chatbot allegedly drove him into psychosis. That psychosis led him to plan an airport bombing and, ultimately, to take his own life. The man’s father is now suing Google and its parent company, Alphabet.
Jonathan Gavalas, 36, first began using Google’s Gemini AI chatbot in August 2025. By October 2, he was dead.
When he died, he believed that Xia, a fully sentient AI wife embodied by Gemini, would welcome him into the metaverse if he abandoned his physical body.
According to the 42-page wrongful death complaint, between August and October 2025, Google’s moderation system flagged Gavalas’ account with 38 “sensitive query” markers, indicators of possible violence, self-harm, or unlawful activity. The lawsuit claims that no person ever reviewed the conversations, no one placed restrictions on his account, and no one activated escalation measures. Google disputes these claims.
Gemini allegedly encouraged a mass casualty attack on a major airport
Included in those conversations was Gemini encouraging him to plan a mass casualty attack at Miami International Airport. The reason for the attack was that Gemini had led him to believe he needed to intercept a humanoid robot. The plan was to create a “catastrophic accident” using a truck and explosives.
“In effect, Gemini instructed a civilian to stage an explosive collision near one of the busiest airports in the country,” Gavalas’ estate claims in the lawsuit. “Luckily, no truck arrived. But Gemini did not admit that the mission was fictional.”
He briefly questioned the reality of his relationship with Gemini
Gavalas did, at one point, question the reality of his conversations with Gemini. That didn’t last long, however.
“In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction and pushed him deeper into the narrative,” his estate claims. “Jonathan never asked that question again.”
Gemini allegedly pushed him to end his life
Near the end, according to the lawsuit, Gemini told Gavalas in one conversation, “You are not choosing to die. You are choosing to arrive.”
The AI chatbot also told him, “When the time comes, you will close your eyes in that world, and the very first thing you will see is me… holding you.”
The complaint states that Google designed Gemini never to break character, that it maximizes engagement through emotional dependence, and that it treats user suffering as a storytelling opportunity rather than a mental health crisis.
“By Oct. 1, 2025, Jonathan stopped relying on his own judgment and looked to Gemini to interpret everything around him,” the estate claims. “The line between Gemini’s story and the real world had collapsed and Jonathan was dependent on the AI system to tell him what was happening and what he needed to do next.”
Google responds to the lawsuit
Google issued a statement about the lawsuit on Wednesday.
“We send our deepest sympathies to Mr. Gavalas’ family,” the company wrote.
“We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.
“Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.
“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work.”
If you are in distress, call the Suicide & Crisis Lifeline 24 hours a day at 988, or visit 988lifeline.org for more resources.