Social media activist and filmmaker Robby Starbuck has filed a defamation lawsuit against Google LLC, accusing the company’s artificial-intelligence systems—namely its chatbots and language models—of producing and disseminating entirely false, damaging criminal and financial accusations about him.
The complaint, filed on October 22, 2025, in the Delaware Superior Court, seeks more than fifteen million dollars in damages.
According to the filing, Starbuck alleges that queries to Google’s systems, including Bard, Gemini, and Gemma, produced outputs branding him a “child rapist” and a “serial sexual abuser” and linking him to a crypto “pump and dump” scheme—claims he flatly denies.
He states that Google’s AI fabricated fake court records, invented supporting evidence, and impersonated major outlets including Newsweek, The Daily Beast, and The Tennessean to make the story appear legitimate.
Starbuck’s complaint notes that no such reports or accusations have ever existed, and that he has no connection to any criminal or crypto-related misconduct.
He says he repeatedly contacted Google between 2023 and 2025 to demand removal of the false material, but alleges that the company’s AI continued to generate these defamatory narratives for users.
The suit further claims the fabrications reached an estimated 2.8 million people.
Google, through spokesperson José Castañeda, acknowledged that large language models may produce so-called “hallucinations” or factually incorrect statements, but maintained that the company has implemented measures to reduce such occurrences.
He added that Google is reviewing Starbuck’s claims and will respond appropriately through the legal process.
Legal analysts have described the case as a potential landmark in defining liability for AI-generated speech.
Defamation law has rarely been tested against autonomous systems that produce text without human authorship, and this lawsuit could determine whether technology companies can be held responsible for reputational harm caused by their algorithms.
In a public statement, Starbuck said: “No one—regardless of political beliefs—should ever experience this.
Now is the time for all of us to demand transparent, unbiased AI that cannot be weaponized to harm people.”
His legal team argues that Google’s negligence and insufficient oversight have enabled what they call “algorithmic defamation.”
The case follows other controversies surrounding Google’s AI models, including an incident in which its Gemma system allegedly generated false sexual-misconduct claims about Senator Marsha Blackburn.
Google has since restricted access to that model.
As the Starbuck lawsuit moves forward, it is drawing attention from policymakers, journalists, and AI ethicists worldwide.
The outcome could set the first major precedent for accountability when artificial intelligence—not a human being—creates and spreads defamatory lies.
For now, Robby Starbuck maintains that every accusation cited by Gemini is fictional, and his case aims to ensure that no one else endures the same harm.
If this does not warrant serious legal consequences for the Google executives responsible for the product, then something is profoundly broken in what passes for a justice system.