Google’s AI Overview falsely stated that legendary singer Diana Ross was arrested for cocaine possession and entered rehab in 1992—claims not supported by any known record.

The claim, surfaced through Google’s new “AI Overview” feature in response to a basic search by The Dallas Express on July 30, 2025, read: “Yes, Diana Ross has publicly admitted to struggling with drug use in the past. In 1992, she was arrested for possession of cocaine and later entered rehab.” It added that she had become an advocate for drug prevention.

That account is not true.

The Breakdown:

  • Google’s AI falsely claimed that Diana Ross was arrested for cocaine in 1992.
  • Public records and media archives do not show any such arrest or rehab admission.
  • Google cited unrelated sources—including one about George Floyd’s ex-girlfriend.
  • Legal experts say such AI-generated statements could meet the threshold for defamation.
  • Google responded that it corrected the issue and uses these mistakes to improve systems.

A review of news archives on both Google and LexisNexis by The Dallas Express turned up no evidence that Ross was arrested for cocaine or entered rehab in 1992. While the Motown icon did check into rehab in 2002, reports at the time pointed to prescription drugs and alcohol, not cocaine.

The AI system, when asked for its sources, listed four.

One was indeed about Diana Ross, the singer, but three others were irrelevant or entirely erroneous. One concerned Courtney Ross, George Floyd’s former girlfriend, testifying during the trial of Derek Chauvin. That article didn’t mention cocaine or anyone named Diana.

The AI’s narrative strayed further.

When asked where Diana Ross lives, the system ignored a 2025 report about her Florida property listing, referenced her Detroit birth instead, and bizarrely concluded: “She also lived in Central Park, as evidenced by the ‘Diana Ross Live in Central Park’ event, according to Wikipedia.” When pressed, the system contradicted itself, stating she never lived in Central Park and only performed there.

Ross was convicted of DUI in Arizona in 2004 but has never been arrested for cocaine.

The Dallas Express found a news story about a pregnant Galveston-area woman, also named Diana Ross, who was convicted of possessing crack cocaine in 2006. However, Google’s AI Overview never cited or referenced that story, so it does not explain the system’s confusion.

Defamation or a Fluke?

Under Texas law, defamation includes false statements in writing or digital form that injure a person’s reputation or expose them to ridicule or financial harm. Notably, “a libel is a defamation… that tends to injure a living person’s reputation and thereby expose the person to public hatred, contempt or ridicule,” Texas Civil Practice and Remedies Code Section 73.001 states.

Diana Ross, as a public figure, would need to meet a high legal threshold known as “actual malice” to prevail in court. That means proving Google’s AI either knew the information was false or acted with reckless disregard for its truth or falsity.

A similar case played out recently in Georgia.

After ChatGPT falsely claimed radio host Mark Walters had embezzled from a nonprofit, Walters sued OpenAI for defamation. The lawsuit was dismissed. The court found OpenAI lacked the “state of mind” necessary for defamation and emphasized the disclaimers shown to users.

However, that may not be the final word.

In Anderson v. TikTok, a federal appeals court found that TikTok’s algorithms were responsible for promoting content that led to harm, rejecting Section 230 protections that typically shield platforms from liability for third-party content. The court ruled that TikTok’s algorithm acted more like a speaker than a host.

Under Section 230 of the Communications Decency Act, platforms generally can’t be sued for content they didn’t create. But that protection doesn’t extend to first-party speech, meaning content the platform itself generates or promotes. The question now before courts is whether AI-generated summaries—like those from Google’s AI Overview—constitute first-party speech.

If so, Google could lose Section 230 immunity, legal scholars say.

Even the AI itself seemed to understand the legal stakes. When asked, “Is it defamation to say someone was arrested for drugs?” the response was: “It could be defamation… if the statement is false and harms their reputation.”

In Ross’ case, it’s unclear whether the damage is legally actionable, but the reputational risk is real.

In a statement to The Dallas Express, a Google spokesperson said: “The vast majority of AI Overviews are factual and we’ve continued to make improvements… When issues arise… we use those examples to improve our systems, and may take action under our policies, as we did in this example.”

The Dallas Express reached out to Miss Ross’ representatives but received no comment.