A campaign urging lawmakers to protect victims of bullying is pushing back against the circulation of nude images generated with artificial intelligence.

Deepfake content is being generated and circulated at a growing rate, with a recent analysis by researcher Genevieve Oh suggesting that more than 143,000 AI-generated images and videos have been posted online this year alone, as reported by the Associated Press.

In response, a campaign for stronger legal protections, driven primarily by those affected by deepfake incidents, is underway.

This summer, the FBI issued a public service announcement warning that malicious actors were using photos of real children and adults to create fake yet realistic sexually explicit content to share on social media and pornographic websites. The AI-generated images are also being used to harass or extort victims and to coerce them into sending real sexually explicit videos or images.

Campaigners are calling on lawmakers from across the political spectrum to create more safeguards.

“We’re fighting for our children,” said Dorota Mani, whose teenage daughter was a recent victim of a deepfake incident, according to the AP. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”


Mani’s 14-year-old daughter was one of several female students attending Westfield High School in New Jersey whose images were used to create deepfake nudes that were circulated on social media over the summer.

The school called Mani in late October to inform her of the incident, yet none of the students who allegedly created or circulated the sexually explicit content were held accountable.

“The first initial reaction is shock. Obviously, no mother is prepared to hear such a thing,” she told the AP. “And then disappointment in how the school handled the situation.”

Her daughter decided to join forces with a nonprofit called the Cyber Civil Rights Initiative, which aims to provide resources to those victimized by deepfake content.

The nefarious use of AI has more broadly been the subject of much conversation at the federal level, as previously covered by The Dallas Express. In June, U.S. senators met for a briefing on AI to forge a bipartisan plan to minimize the risks associated with the growing technology.

Rep. Yvette Clarke (D-NY) introduced a bill in Congress in September that would require creators of deepfake content to mark it with a digital watermark and would make failing to do so a crime. However, the bill has not progressed to committee.

In October, President Joe Biden ordered the secretary of commerce to produce a report examining legal tools to prohibit the use of generative AI to create child sexual abuse material and sexually explicit images of non-consenting adults.

At the state level, Texas, Minnesota, New York, Virginia, Georgia, and Hawaii have passed measures criminalizing nonconsensual deepfake sexual content, according to NBC 5 DFW. Meanwhile, California and Illinois have made it possible for victims of such AI-generated material to bring legal claims and seek damages.

Whatever legal or regulatory measures are put in place, some groups, such as the American Civil Liberties Union, have emphasized the importance of ensuring they do not infringe on the First Amendment.

“Whether federal or state, there must be substantial conversation and stakeholder input to ensure any bill is not overbroad and addresses the stated problem,” said Joe Johnson, an ACLU attorney based in New Jersey, according to NBC 5.
