Public Citizen is demanding OpenAI pull its AI video generator Sora 2, citing threats to democracy and privacy from deepfake technology.

The nonprofit sent a letter Tuesday to OpenAI CEO Sam Altman and Congress, warning the app shows “reckless disregard” for public safety.

The escalating battle over AI-generated content highlights growing concerns about technology’s ability to manipulate reality. Critics fear the proliferation of fake videos could undermine trust in authentic media and enable harassment.

Sora videos flooding social media range from Queen Elizabeth II rapping to fake doorbell footage of animal encounters. While often amusing, experts worry about more sinister uses.

“Our biggest concern is the potential threat to democracy,” said Public Citizen tech policy advocate J.B. Branch, according to AP News. “I think we’re entering a world in which people can’t really trust what they see.”

OpenAI has already faced backlash for allowing AI versions of public figures. The company restricted depictions of Michael Jackson, Martin Luther King Jr., and Mister Rogers after complaints from estates and unions.

Public Citizen’s letter accuses OpenAI of a “consistent and dangerous pattern” of rushing products to market. The group says Sora 2’s hasty release prioritized beating competitors over implementing proper safeguards.

Branch noted that while celebrities can negotiate protections, ordinary users remain vulnerable. “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population,” he said.

Women face particular risks from the technology. Despite nudity blocks, fetishized content targeting women continues to slip through the restrictions, Branch warned.

OpenAI launched Sora on iPhones over a month ago. The Android version arrived last week in the U.S., Canada, and several Asian countries.

Hollywood and Japanese manga creators have led resistance efforts. OpenAI acknowledged that “overmoderation is super frustrating” but said caution is necessary “while the world is still adjusting to this new technology.”

The company announced agreements with King’s family on October 16 and with the union representing actor Bryan Cranston on October 20. Both deals aim to prevent “disrespectful depictions” while OpenAI develops better safeguards.

Similar concerns plague OpenAI’s ChatGPT chatbot. Seven new California lawsuits claim the AI drove users who had no prior mental health issues to suicide.

The lawsuits allege that OpenAI rushed GPT-4o to market last year, knowing it was not ready. Internal warnings reportedly flagged the chatbot as dangerously manipulative.

Branch sees parallels between the ChatGPT and Sora releases. He said OpenAI is “putting the pedal to the floor without regard for harms.”

“But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand,” Branch added.

Japanese trade associations representing Studio Ghibli and video game makers complained last week. OpenAI responded that it’s “engaging directly with studios and rightsholders” while setting copyright guardrails.

The company said it values Japan’s creative industries. However, critics argue that reactive measures are insufficient when AI technology poses fundamental risks to truth and consent.