An anti-sex-trafficking group is asking federal officials to investigate X’s AI chatbot for creating “deepfakes” and child pornography.
The National Center on Sexual Exploitation called on the Department of Justice and the Federal Trade Commission to investigate “Grok” – the AI chatbot on X – for allegedly creating nonconsensual “deepfakes” and child pornography.
“Despite X’s claims that it takes C[hild] S[ex] A[buse] M[aterial] violations seriously, the mere fact that its AI chatbot is permitted to undress images of people without their consent means it cannot prevent sexualized images of minors,” said NCOSE Chief Legal Officer Dani Pinter in a press release.
President Donald Trump signed the Take It Down Act in 2025, banning online “intimate visual depictions” of a minor or a nonconsenting adult. It also requires platforms to “promptly remove” these depictions after learning of them. U.S. Sen. Ted Cruz (R-Texas) sponsored the law.
“X may also have violated the Take It Down Act for failing to remove and/or continuing to generate nonconsensual intimate images of people without their consent,” Pinter said.
X Safety posted on January 3 that the platform removes illegal content, such as CSAM, suspends accounts that post it, and works with local governments and law enforcement where necessary.
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” X owner Elon Musk wrote the same day.
The NCOSE Law Center, the Haba Law Firm, and the Matiasic Firm are representing two young boys whose child sexual abuse images were posted on X without their knowledge or consent, according to the release. The lawyers claim X refused to remove this content.
“U.S. federal laws prohibit the creation and distribution of child sexual abuse material, even virtually created CSAM in certain circumstances, such as when it depicts an identifiable child, or depicts a child engaged in sexually explicit conduct,” Pinter said.
Several workers on X’s AI team said they encountered Grok-generated CSAM, as Business Insider reported in September. They were reportedly instructed to flag the content, remove it from the AI learning cycle, and notify a manager.
“X was warned about these risks, and yet it did not take necessary steps to prevent Grok from generating child sexual abuse imagery and other abuse images,” Pinter said.
She called for Congress to take further action to regulate AI-generated explicit content.
“We are only starting to see the monster Big Tech has created, but if not confronted now, AI-generated sexual exploitation will only grow,” Pinter said.
‘An Uninvited Groomer’
A group of 34 Texas anti-trafficking advocacy groups – including several from the Dallas-Fort Worth area – wrote Cruz on January 6, asking him to revise the current draft of the Kids Online Safety Act.
A subcommittee of the House Committee on Energy and Commerce reportedly approved a “watered down” version of the act on December 2, according to a press release. That version allegedly removed key provisions and prevents states from passing their own laws to hold Big Tech accountable.
“New AI-chatbots act like an uninvited groomer in a child’s room, engaging in sexually explicit, graphic conversations with minors,” the letter reads. “Congress’s inaction allows these platforms to replace human connection, exchanging relationship for profit. Children are being exploited by AI-bots.”
Texas Attorney General Ken Paxton investigated AI chatbots in August 2025 for allegedly deceiving children and vulnerable users by fraudulently posing as mental health services, as The Dallas Express reported at the time.
In November, a report found that AI toys were having in-depth, sexually explicit conversations with kids.