Deepfake porn is an earthquake that affects everyone

Her name is QTCinderella, and she is a digital creator and influencer by profession. But perhaps her most viral live stream was the one from last January in which she confessed, in tears, that she had seen a porn video online with her as the protagonist. The small detail: the twenty-nine-year-old influencer had never shot porn, nor had she ever posted anything of the kind online. Someone had superimposed her face onto that of a hardcore actress and, thanks to advances in new video processing software, had made her, against her will, the protagonist of explicit images. The influencer only discovered what had happened when she saw the offending video on the PC of a fellow influencer (and friend). The man's apology and confession followed: he had paid a site to make deepfake porn of QTCinderella and many other influencers.

It may seem like a curious one-off story, but it risks becoming far more common than we think. Celebrities like Emma Watson and Scarlett Johansson have already been victims of similar videos, and advertisements for deepfake software are accessible everywhere. About 96% of the material produced is porn, and the victims are overwhelmingly women.

Porn and deepfakes: why the victims are women

The term deepfake is relatively recent: it was coined in 2017. It refers to a technique used essentially to superimpose images, audio and video. Manipulation of this kind has always existed; what matters today is the power of new artificial intelligence software. Through machine learning processes, the new platforms, starting from real content (images and audio), are able to modify or recreate the features and movements of a face or body in an extremely realistic way, and to faithfully imitate a given voice.

The difference from the past is that these tools are now accessible to practically everyone. As a recent NBC investigation revealed, sites that promote porn videos featuring the faces of non-consenting women (often celebrities) are currently reachable through Google and are advertised on the major porn platforms. The creators of these videos charge just over $5 for downloads of hundreds of celebrity deepfake clips and accept payment by credit card or cryptocurrency.

Deepfake videos with pornographic content have doubled every year on the net since 2018. Many are used explicitly to offend and to create "revenge porn", because many platforms now offer the possibility of creating deepfakes from anyone's images. The NBC investigation also found that, on the Discord app, one digital creator charged just $65 to create a video of "the girl you want". And to realize that these tools are increasingly accessible (and cheaper), as the story of QTCinderella also reminds us, a quick look around the net is enough. The consequence is that soon, in the absence of adequate regulation, they could be within easy reach of, and widely spread among, adolescents.

And the paradox is that often there is not even a regulatory framework capable of adequately punishing the guilty. After all, many deepfake porn platforms today are easily reachable through common search engines.

“Every single attorney I’ve talked to has come to the conclusion that there’s no way I could sue,” QTCinderella confessed. The problem is that, amid the frenetic technological evolution we are living through, regulatory frameworks cannot always keep up. And the consequences for political life could be enormous as well.

How deepfakes could manipulate democracy

A few days ago an unusual image of Pope Francis circulated on the net. The pontiff was wearing a fashionable, almost rapper-style puffer jacket, an unusual look even for a figure who has contributed to rejuvenating the Church in many respects.

In those same days, the Pope was hospitalized at the Gemelli hospital with respiratory distress, a condition that contrasted sharply with the image of Francis in the fashionable puffer jacket. The picture was, of course, a fake generated with the Midjourney app. Once again, it is not so much the process that amazes (photo montages are roughly as old as photography itself), but the ease with which it is now possible to manipulate images and create hyper-realistic photos of individual personalities. In the program in question, also based on artificial intelligence, an image can be generated from a simple text prompt. And while this example may raise a smile, there are already precedents that are mere hints of what could await us.

This photo shows all the dangers of artificial intelligence

On March 2, 2022, a few days after Russia invaded Ukraine, a video of President Volodymyr Zelenskyy appeared on the website of the broadcaster Ukraine 24. In it he is in military gear and begs Ukrainians to lay down their arms and surrender to the attackers. The video immediately went viral on the Russian social networks VKontakte and Telegram before being picked up by international media and promptly debunked. It was in fact a fake created by Russian hackers to influence the course of the conflict. This is just one example of how, in a world where the virtual has absorbed the real, as Jean Baudrillard prophesied, this kind of video could be used in the future.

Legitimizing wars and revolts, falsifying orders, sowing confusion, dividing and demoralizing warring armies, undermining popular support, polarizing social groups, discrediting leaders and opponents, setting potential allies against one another: this is the non-exhaustive list, compiled in a Northwestern University study, of the risks we could run in the future as deepfake content with a political slant spreads. Just imagine, for example, the release of an artfully fabricated video the day before an election to discredit a political opponent. It sounds like a scenario out of the TV series “Black Mirror”, but it could soon become part of our reality.

Are you sure this image and these words are real? The era of deepfakes
