
Board To Probe Meta’s Response to Deepfake Porn

Meta applications | Image by Chesnot / Contributor/Getty Images

An independent board is examining how Meta handled reports of AI-generated deepfake pornography on its platforms.

Meta created the Oversight Board, a group of 40 members “from around the world that represent a diverse set of disciplines and backgrounds,” to help it decide “what to take down, what to leave up, and why,” per the group’s website.

The Oversight Board will use “its independent judgment to support people’s right to free expression and ensure those rights are being adequately respected. The Board’s decisions to uphold or reverse Facebook, Instagram and Threads content decisions will be binding, meaning Meta will have to implement them, unless doing so could violate the law,” the Board’s website continues.

The Oversight Board released a statement on April 16 announcing that it will examine two recent cases involving deepfake pornographic AI images of female public figures posted on Meta’s platforms.

The first case involves an AI-generated image of a nude woman posted on Instagram that resembles a public figure from India. A user reported the content to Meta as pornography, but the report was automatically closed when Meta did not review it within 48 hours. The user then appealed Meta’s decision to leave the content up; that appeal was also automatically closed. The user then appealed to the Oversight Board.

According to the Board, Meta determined that its decision to leave the content up was an error and has since removed the post.

The AI-generated image was shared on an Instagram account that only posts deepfake images of Indian women.

The second case involves an AI-generated pornographic “image of a nude woman with a man groping her breast.” The image was posted on Facebook and created to resemble an American public figure named in the caption.

A different Facebook user had already posted the image in question, and it was subsequently taken down for violating Meta’s bullying and harassment policy, specifically for “derogatory sexualized photoshop or drawing.”

Once removed, that image was added to a media-matching bank, which allows Meta’s automated enforcement systems to find and remove copies of previously banned images.
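Meta has not published the internals of this matching system, but the general technique it describes is well established: compute a compact perceptual fingerprint of each banned image and compare new uploads against the stored fingerprints, so that re-encoded or lightly altered copies still match. The sketch below is a minimal illustration in Python of that idea, using a simple 8x8 average hash; the function names, file paths, and distance threshold are hypothetical assumptions for the example, not Meta’s actual implementation.

```python
# Minimal sketch of perceptual-hash matching against a bank of banned images.
# Requires Pillow (pip install Pillow). All names, paths, and thresholds here
# are illustrative assumptions, not Meta's actual system.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a grayscale size x size grid, then set one bit per pixel:
    1 if the pixel is brighter than the grid's mean, else 0 (a 64-bit value)."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_bank(path: str, bank: set[int], threshold: int = 5) -> bool:
    """True if the image is near-identical to any previously banned image.
    A small threshold tolerates re-encoding, resizing, and minor edits."""
    h = average_hash(path)
    return any(hamming(h, banned) <= threshold for banned in bank)


if __name__ == "__main__":
    # Hypothetical usage: bank the hash of a removed image, then screen uploads.
    bank: set[int] = set()
    bank.add(average_hash("removed_image.jpg"))  # placeholder path
    if matches_bank("new_upload.jpg", bank):     # placeholder path
        print("Automatic removal: matches a previously banned image.")
```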

In this case, Meta’s systems found and removed the image. The user who posted it appealed the removal, but the appeal was automatically closed, and the user then took the case to the Board.

The Board is asking for public comments to help it reach a decision in these two cases.

Specifically, the Board is seeking comments that address how deepfake pornography harms women, especially women who are public figures; contextual information about the use of AI-generated pornography globally; strategies for how Meta can address deepfake pornography on its platforms; and the challenges of relying on automated systems that close appeals within 48 hours if no review has taken place.

As part of its decision-making process, the Board will also provide recommendations to Meta regarding content policies and more. Unlike the Board’s binding decisions on individual cases, its recommendations are not binding, though Meta must respond to them within 60 days.

Meta responds to the Board’s recommendations by implementing them in whole or in part, assessing feasibility, confirming that the recommendation is work Meta already does, or taking no further action.

The prevalence of artificially generated pornographic images has become a significant concern in recent years with the increasing accessibility of AI technology.

Last year, a campaign was created by those affected by deepfake incidents to gain more legal protections, as previously reported by The Dallas Express.

Dorota Mani, one of the campaign creators, is the mother of a 14-year-old girl, one of several female high school students in New Jersey whose images were used to create deepfake nudes that were circulated on social media.

The school notified Mani about the deepfake nudes; however, none of the students who allegedly created or circulated the sexually explicit content have reportedly been held accountable.

“We’re fighting for our children,” said Mani, according to The Associated Press. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”

In October 2023, President Joe Biden issued an executive order with the goal of making AI “safe and secure.”

“Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use,” the order reads.
