After the Taylor Swift scandal, Meta's “supreme court” takes up false images sexualizing women

The pornographic “deepfakes” of the famous American singer Taylor Swift, shared en masse on X, caused a scandal across the Atlantic at the end of January. As the practice spreads on social networks, Meta's Oversight Board (covering Facebook and Instagram) announced on Tuesday, April 16, that it would take up two cases concerning fake sexually explicit images of female public figures.

The two cases were selected “to assess whether Meta's policies and their enforcement are effective in addressing the problem of sexually explicit images generated by artificial intelligence (AI)”, said the board, nicknamed Meta's “supreme court”, in a press release.

READ ALSO : “Deep fakes”, massive misappropriations, freewheeling fake news… why OpenAI’s Sora will bug the real world

Set up by the social media giant and made up of independent members, the board is responsible for deciding thorny questions of content moderation. The rise of generative AI, which automates the production of sophisticated content, has given new impetus to the phenomenon of “deepfakes”, or “hyperfakes”, in particular manipulated and sexualized images featuring women, created for purposes of intimidation or harassment.

The first case chosen by the Meta Oversight Board involves an AI-generated image of a naked woman posted on Instagram, “resembling an Indian public figure”, the press release indicates. A user had complained that the Californian company did not remove the image. “Meta determined that its decision to leave the content up was in error and removed the post for violating its rules on bullying and harassment”, the board notes.

READ ALSO : Deepfake: fake faces inspire more trust than real ones

The second case concerns an image posted in a Facebook group dedicated to AI content creation, showing “a naked woman with a man groping her breasts”. The woman “resembles an American public figure”, who is also named in the caption. Meta had removed the image and added it to a content bank that forms part of its enforcement system, used to automatically find and remove from its platforms images already identified by employees as violating its rules.

Last January, a fake pornographic image of American superstar Taylor Swift was viewed 47 million times on X (formerly Twitter) before being deleted by the social network, some fifteen hours after it was posted online.

The affair aroused indignation from her fans, many public figures and even the White House. According to a 2019 study by Dutch AI company Sensity, 96% of fake videos online are non-consensual pornography, and most of them depict women, famous or not.

READ ALSO : “AI reduces our thoughts, our language to stereotypes calculated on past averages”

Also on Tuesday, April 16, the Conservative government of the United Kingdom announced that it intends to penalize the production and publication of fake pornographic images, or sexual “deepfakes”. Creating such an image, even without the intention of sharing it, will soon constitute a criminal offense punishable by a fine, while sharing an image one has created will carry a prison sentence, the Ministry of Justice announced in a press release, stating that it wants to fight in particular against “abuse against women”.
