Forget the ‘Fake News’ craze for a minute: Deepfakes are the new threat to democracy, to truth and especially to privacy… and women are the main targets.
Deepfake videos, named after the artificial intelligence “deep learning” technology used to create them, have become increasingly common since they first emerged on the web in late 2017. The software manipulates or synthesizes video to depict events that never actually took place, and it is getting harder to distinguish the fake from the real.
Taken from Bloomberg’s YouTube video (Sept. 27, 2018).
Since then, Amsterdam-based cybersecurity company Deeptrace Labs has tracked the rapid rise of deepfake videos online, and the results are alarming: CNN reports an 84 percent increase in deepfake videos available on the internet since December 2018, growing from 7,964 to nearly 15,000. And the numbers keep rising.
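For readers who want to check the math, the 84 percent figure lines up with those counts. A minimal sketch, assuming 14,678 as the exact count behind the “nearly 15,000” above:

```python
# Quick sanity check of the growth figure cited above.
# 7,964 is the December 2018 baseline; 14,678 is assumed here to be
# the exact count the article rounds to "nearly 15,000".
old_count = 7_964
new_count = 14_678

increase = (new_count - old_count) / old_count
print(f"Deepfake videos grew by {increase:.0%}")  # prints: Deepfake videos grew by 84%
```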
So what exactly does this mean?
While there is considerable panic online about the potential threat deepfake technology poses to presidential elections, political campaigns and political disinformation, a new report by Deeptrace states that 96 percent of deepfakes circulating on the web are pornographic.
That is not surprising, since the technology’s roots trace directly to a Reddit account called deepfakes, which began posting fake porn videos in November 2017, created with software that swapped the faces of adult film performers with those of well-known female celebrities.
While this kind of fabrication can affect anyone, women are overwhelmingly the primary targets of this non-consensual pornography, which is quickly spreading across internet platforms.
The report from Deeptrace also notes that the technology used to create fake sexually explicit content is becoming increasingly accessible. One example is the computer app DeepNude, which generates fake nude images from photos of fully clothed women.
And if that isn’t scary or disturbing enough, the report’s authors told Quartz: “The software will likely continue to spread and mutate like a virus, making a popular tool for creating non-consensual deepfake pornography of women easily accessible and difficult to counter.”
This means that women are once again bearing the brunt of a fake media movement meant to belittle, sexualize and expose them, all while forcing them into a vulnerable position by invading their privacy.
Sound familiar? Probably because women are most often the targets of sexual attacks, and the internet’s rapid growth has made this an increasingly common narrative.
The key factor in all of this deepfake panic circles back to an important conversation that women have only recently begun to reclaim: the topic of consent.
While deepfakes have the power to create fake news, push forward a problematic agenda and incite violence within countries and social groups, they can also ruin reputations, particularly women’s, given that the vast majority of them are pornographic.
Rana Ayyub, an investigative journalist and writer from India, experienced this firsthand. Writing for The Huffington Post, she detailed how she was targeted in a deepfake porn plot intended to silence her after she publicly criticized the Indian government for shielding the accused in a child sexual assault case.
Image via Rana Ayyub’s Twitter (December 30, 2018).
“The entire country was watching a porn video that claimed to be me and I just couldn’t bring myself to do anything,” she wrote. Local law enforcement was not willing to help Ayyub, either. Eventually, the United Nations intervened but it was too late.
“Now I don’t post anything on Facebook. I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change. I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated,” she continued.
As Ayyub’s story shows, the effects of being the victim of a targeted deepfake attack are significant and often traumatizing. And while major news outlets have written hundreds of articles about deepfakes, college students seem not to care.
Why?
Largely because most news coverage of deepfakes centers on two things: wealthy politicians who possess the means to defend themselves against the threat, and the technology’s potential to interfere in electoral politics.
The average college student might think, ‘Well, how does this affect me?’ and, as far as that coverage goes, it really doesn’t. The real threat, however, comes from what is so rarely discussed: how deepfakes can also affect regular people, such as women, people of color and members of the LGBTQ community.
Oftentimes, these groups are targeted in malicious attacks because they have fewer legal protections and, on average, fewer financial resources.
As the New York Daily News reports, “the people who are most vulnerable to being targeted by deepfakes are those without the means to control what counts as evidence about them.”
And while California is trying to curb the growing threat of deepfake media by passing a law that allows residents to sue if their image is used in sexually explicit content, the law “will face a number of roadblocks,” says Jane Kirtley, a professor of media ethics and law at the University of Minnesota’s Hubbard School of Journalism and Mass Communication.
Speaking to The Guardian, Kirtley said she is skeptical that California will be able to truly enforce the law, mainly because of free speech protections that favor political speech.
The overall theme of this article is rather simple: women are the victims of a harsh crime that is becoming more powerful, more invasive and harder to defeat.