A nonprofit found dark web guidance instructing pedophiles on how to use "nudification" tools to generate fake nudes of children and extort them.

With the advent of artificial intelligence, there has been a considerable rise in criminals generating realistic child sexual abuse material and posting it online. These fake yet realistic images have been used to blackmail and manipulate victims into sending the perpetrators graphic material or money, with a growing number of such incidents being reported to authorities worldwide.

Earlier this year, the FBI issued a public advisory warning about malefactors targeting minors in online communities for sextortion. Some incidents have led to victims committing suicide, as covered by The Dallas Express. The warning pointed to a 20% rise in reports involving adolescent males from October 2022 to March 2023 compared to the previous six-month period.

Since its inception nearly three decades ago, the Internet Watch Foundation (IWF), a charitable organization based in the UK, has launched several projects to protect children from pedophiles and harmful content online. The group’s analysts spearhead this mission by creating tools to root out and delete abusive material, maintaining a hotline for anonymous tips and reports, and collaborating with public officials to increase legal protections for children.

IWF told The Guardian that it had found an online manual of nearly 200 pages detailing how to use AI tools to remove clothing from images of subjects in their underwear. The anonymous author went on to advise readers on how to use the doctored images to manipulate victims into sending graphic content in return, allegedly boasting of having gotten several 13-year-old girls to send nudes.

“This is the first evidence we have seen that perpetrators are advising and encouraging each other to use AI technology for these ends,” a spokesperson for the organization said, per The Guardian.

IWF reported finding more child sexual abuse material online last year than ever before, a total of 275,652 web pages. A whopping 92% of the images had been "self-generated," although they likely involved coercion, extortion, or grooming of the victims. This is a considerable uptick from 2022, when 78% of the material was self-generated. Children aged 11-13 were the most common victims, and 94% were female. IWF also noted a 65% increase from the year prior in reports of self-generated material featuring children aged 7-10.

The organization reported that a record-breaking share of child abuse material found in 2023 was classified as “Category A,” which IWF defines as “involving penetrative sexual activity; images involving sexual activity with animals or sadism.” Of the self-generated material found, 21% fell within Category A, 23% within Category B for “non-penetrative sexual activity,” and 56% within Category C for being “indecent.”

The access children have to social media, and thus, arguably, the access child predators have to them, has been at the center of many policy debates. In Florida, Gov. Ron DeSantis recently signed a bill into law that bans minors under the age of 14 from maintaining social media accounts and requires 14- and 15-year-olds to obtain parental consent to use such platforms. Supporters of the ban claim it will empower parents and protect children from online predators.

As covered by The Dallas Express, stakeholders in Texas are weighing a similar social media age restriction measure. A recent operation in North Texas resulted in the arrest of 15 men accused of soliciting minors for sex online.

“There’s a reason when you walk in and you buy a case of beer or pack of cigarettes or a lottery ticket or a pornography magazine, you have the cashier there to check your ID. Because it’s a longstanding legitimate role of government to protect children from harmful activities,” Rob Henneke, executive director and general counsel of the Texas Public Policy Foundation, recently told The Dallas Express.
