In late January, X (formerly Twitter) blocked searches for one of the most famous female pop artists in the world, Taylor Swift. The action came after the social media site was suddenly flooded with deepfake pornography of Swift.
The term deepfake refers to media that has been digitally manipulated to replace one person’s face and likeness with another’s. Deepfake content is certainly not new to the internet, with the technology likely being introduced around 2014. Initially, concerns centred on the technology’s potential for political manipulation and the spread of disinformation. But as it developed and became more widely accessible, deepfake pornography was born.
Deepfake porn did not emerge until 2017, when a Reddit user posted doctored porn clips depicting the faces of many female celebrities, including Swift. Since then, the genre has grown rapidly, with the world’s leading pornography website, Pornhub, even having to ban it in February 2018. However, this did not stop the technology’s spread. Just last year, Twitch streamer ‘Atrioc’ was caught, live on stream to thousands of viewers, looking at deepfake porn of his friend and fellow streamer ‘QTCinderella’. The content was hosted on a website dedicated to posting deepfake porn of hundreds of female streamers, influencers and celebrities.
Research conducted by Sensity AI found that 90% of this non-consensual porn targets women. It is undeniable that deepfake pornography has become another form of gender-based violence: a power that abusers can exert over unsuspecting victims. The idea that anyone could take your likeness and depict you in sexually explicit images and videos, potentially damaging your digital footprint forever, is a growing fear for many women. However, what sets most women apart from the likes of Taylor Swift and ‘QTCinderella’ is that they do not have the public support to take down this disgusting, life-ruining content.
As technology continues to advance, deepfakes will only become more realistic and more accessible to the general public. There are even mobile apps which allow users to create deepfake content at the click of a button. This opens up a dangerous avenue of digital abuse, where we could see deepfake technology being used for revenge porn or blackmail.
Unfortunately, deterring deepfake content is incredibly difficult. Legal systems already struggle to regulate cybercrime as a whole, and platforms can never guarantee that the content will be completely removed — this much we’ve seen with Swift’s incident. Banning the use of deepfake technology is also tricky, as the majority of the code can be openly accessed by the public.
What can be effective is the development of deepfake detection programmes, which would allow major tech platforms to identify the content, immediately remove it and block it from being searched. However, detecting this AI-manipulated content will only become more difficult as the technology advances.
Women are disproportionately feeling the effects of deepfake technology. If major legislative and regulatory changes are not made, more and more women may find themselves a target of this non-consensual pornography.