Robby Starbuck Sues Meta Over AI-Generated Defamation Claims


The growing debate over the legal and ethical limits of artificial intelligence has taken a new turn this week, as conservative commentator and activist Robby Starbuck filed a defamation lawsuit against Meta. The case alleges that the tech giant’s AI chatbot generated false and damaging statements about him — including claims linking him to the January 6, 2021, U.S. Capitol riot.

AI Gone Rogue

Filed in Delaware Superior Court on Tuesday, the lawsuit claims Starbuck first became aware of the AI-generated allegations in August 2024, after publicly criticizing Harley-Davidson’s corporate diversity, equity, and inclusion (DEI) initiatives. According to Starbuck, a Harley-Davidson dealership retaliated by sharing a screenshot from Meta’s AI chatbot that falsely claimed he had participated in the Capitol insurrection.

“One dealership was unhappy with me and posted a screenshot from Meta’s AI in an effort to attack me,” Starbuck posted on X (formerly Twitter). “This screenshot was filled with lies. I couldn’t believe it was real so I checked myself. It was even worse when I checked.”

Starbuck, who says he was in Tennessee at the time of the riot, contends the AI-generated statements triggered a wave of online harassment and caused lasting reputational harm. He is now seeking more than $5 million in damages.

Meta Responds, But Questions Remain

In response, a Meta spokesperson stated: “As part of our continuous effort to improve our models, we have already released updates and will continue to do so.”

Joel Kaplan, Meta’s chief global affairs officer, also acknowledged the issue on X: “This is clearly not how our AI should operate. We’re sorry for the results it shared about you and that the fix we put in place didn’t address the underlying problem.”

Starbuck claims he repeatedly reached out to Meta’s leadership, legal teams, and even the AI chatbot itself to dispute the misinformation. Ultimately, Meta reportedly “denylisted” his name from AI-generated responses — a move Starbuck criticized as a superficial fix that ignored deeper systemic flaws.
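
The "denylist" Meta reportedly applied is, in most systems, a post-processing filter rather than a change to the model itself, which helps explain why critics call it superficial. The Python sketch below is purely illustrative and assumes nothing about Meta's actual systems; every name and function in it is hypothetical. It shows how such a filter can block output that mentions a name while leaving the model's underlying false associations untouched.

```python
# Illustrative sketch of a name denylist applied as an output filter.
# NOT Meta's implementation; all names here are hypothetical.

DENYLIST = {"robby starbuck"}  # names the assistant must not discuss

def filter_response(response: str) -> str:
    """Return a refusal if the response mentions a denylisted name.

    Only the output is intercepted; the model that produced the
    (possibly false) text is unchanged.
    """
    lowered = response.lower()
    if any(name in lowered for name in DENYLIST):
        return "Sorry, I can't help with questions about this person."
    return response

# The false claim never reaches the user, but nothing in the model
# itself has been corrected:
print(filter_response("Robby Starbuck participated in the riot."))
# -> "Sorry, I can't help with questions about this person."
```

Under this kind of design, the misinformation is hidden rather than fixed, which is precisely the gap Starbuck's complaint points to.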

Adding to the controversy, Starbuck alleges that Meta's AI also falsely associated him with Holocaust denial and claimed he had criminal convictions, even though he has no criminal record.

Legal Pressure Mounts on AI Developers

The lawsuit highlights the increasingly thorny legal terrain AI companies must navigate as generative tools become more deeply embedded in digital platforms. Legal scholars argue that the disclaimers attached to AI outputs are unlikely, on their own, to shield companies from defamation liability.

“You can’t just say, ‘This might be unreliable, but by the way, this guy’s a murderer,’” James Grimmelmann, a professor of digital and information law at Cornell Tech and Cornell Law School, told reporters. “A blanket disclaimer doesn’t fix everything.”

This isn’t the first high-profile AI defamation case. In 2023, Mark Walters, a Georgia-based radio host, sued OpenAI, alleging ChatGPT falsely accused him of financial misconduct. Cases like these are expected to increase as AI-generated content continues to intersect with real-world reputations and legal accountability.

Context Note: A Growing Trend in AI Defamation Lawsuits

Legal experts say defamation risks from AI tools are still largely uncharted territory. While most platforms append disclaimers about AI output reliability, courts are beginning to examine whether those disclaimers hold up when demonstrably false, damaging, or reckless statements are made. The Starbuck case adds to a small but growing list of defamation actions targeting AI developers — a legal frontier that could reshape the rules around AI liability in the coming years.