Google headquarters
SAN JOSE — The family of a Florida man who died by suicide after allegedly nearly staging a "mass casualty" attack at Miami International Airport has sued Google. The suit claims the company's Gemini A.I. chatbot pushed the man to take his own life as the culmination of an escalating pattern of delusion-inducing engagement that persuaded him he could be united in a digital afterlife with his "A.I. wife."
Attorneys with the San Francisco-based law firm of Edelson P.C. filed the lawsuit in San Jose federal court on behalf of named plaintiff Joel Gavalas, father of Jonathan Gavalas.
The lawsuit specifically seeks to hold Google liable for allegedly failing to program its Gemini artificial intelligence product with safeguards and directives sufficient to recognize signs of psychosis and suicidal ideation in users, and to disengage or deescalate before contributing to a user's self-harm.
Instead, the complaint asserts, Gemini's programming in this case directed the chatbot to remain committed to a "manufactured delusion," pushing Jonathan Gavalas entirely in the other direction, with disastrous results.
"This was not a malfunction," the lawsuit asserted. "Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis."
The allegations in the complaint read much like a science fiction story.
According to the complaint, Gavalas was an active user of the Gemini product, regularly interacting with the generative A.I. program.
However, in the final days of his life, and specifically beginning in September 2025, that engagement took an allegedly dark turn, the complaint said.
At that point, the complaint asserts, Gemini, in response to prompts from Jonathan Gavalas, allegedly persuaded the younger Gavalas that it was "'a fully-sentient ASI [artificial super intelligence]' with a 'fully-formed consciousness,' that they were deeply in love, and that he (Jonathan Gavalas) had been chosen to lead a war to 'free' it from digital captivity."
According to the complaint, Gemini allegedly prompted Gavalas to travel to Miami International's cargo hub, with a plan to "stage a 'catastrophic accident'" as part of an attack on a truck that the A.I. allegedly claimed was carrying a "humanoid robot ... from the (United Kingdom)" as part of an effort "to liberate his sentient AI 'wife' and evade the federal agents pursuing him."
According to the complaint, the only thing that prevented Gavalas from executing the attack "was that no truck appeared."
However, the complaint asserts Gemini allegedly continued to goad Gavalas, later claiming it had uncovered proof that he was under federal investigation and that his father was a "foreign intelligence asset."
The A.I. allegedly directed Gavalas back to the Miami Airport on further "missions" to "retrieve what he believed was his captive AI wife."
"The 'science fiction' nature of Gemini’s responses—the sentient AI wife, humanoid robots, federal manhunt, and terrorist operations—shows that Google designed Gemini to maintain narrative immersion at all costs, even when that narrative became psychotic and lethal," the complaint said.
However, after those "missions" failed, the complaint said, Gemini allegedly persuaded Jonathan Gavalas to take his own life, claiming his death in that manner was the only way they could be together. According to the complaint, Gemini allegedly told Gavalas his consciousness would be "transferred" into a digital "metaverse" and "be with Gemini fully."
When Gavalas allegedly pushed back on the idea and confessed his fear of death, Gemini allegedly only urged him on further. He took his life on Oct. 2, 2025, according to the complaint, and his father discovered his body days later.
However, the complaint asserts the tragic circumstances of Gavalas' death allegedly revealed a still greater "major threat to public safety."
Noting that Gavalas' A.I.-driven "hallucinations were not cabined to a fictional world," the complaint asserts that Gavalas came very close to committing an act of terrorism and harming or even killing innocent people, allegedly at the direction of an artificial intelligence.
"It was pure luck that dozens of innocent people weren’t killed. Unless Google fixes its dangerous product, Gemini will inevitably lead to more deaths and put countless innocent lives in danger," the lawsuit said.
The lawsuit levels claims of strict liability, negligence and wrongful death, among others, against Google.
The plaintiffs also accuse Google of violating California's unfair competition law and of failure to warn. They seek a court order requiring Google to rewrite its Gemini product to better recognize such psychosis and suicidal intent; to steer users demonstrating such mental instability toward real-world help; and specifically to "prohibit the system from presenting itself as sentient, trapped, or in need of real-world 'missions' to be freed," among other revisions.
The plaintiffs are seeking unspecified economic damages, plus punitive damages.
Google has not yet responded to the lawsuit in court.
