OpenAI just released a program called “Sora”: a text-to-video model that lets you generate videos simply by typing in a prompt. While it’s still in its early stages, Sora can already produce scarily realistic and detailed videos. The program was announced merely two weeks ago and has already been plastered all over the internet. But what does this mean for us? We’ve only recently begun to accept that artificial intelligence is an integral part of our lives online, whether we like it or not. So what implications could fake videos have? The truth is, artificial intelligence affects everyone, but some people are at a higher risk than others.
Recently, an X account uploaded AI-generated photos of Taylor Swift being groped in the stands at a football game. The account has since been deleted, but not before the photos were up long enough to begin circulating on every platform. They weren’t very realistic, but the goal was never for people to think they were real. The point was to damage her image and laugh at the idea of inflicting something like this on her. While female celebrities, especially Swift, are used to harmful threats and rhetoric, this takes it to an entirely new level. Ever since programs like ChatGPT and QuillBot were popularized, people have questioned how tools like these can and will impact society. Do the benefits outweigh the drawbacks?
Deepfakes are altered audio, photos, or videos in which the voice or face of the original person has been swapped with someone else’s. They are typically used to spread misinformation, propaganda, and fake celebrity porn. Deepfake celebrity porn is incredibly common: porn stars’ faces and voices are replaced with those of popular celebrities, oftentimes women. Deepfake misinformation and propaganda have also found a cozy home on the internet, with videos of popular politicians and presidents, most notably Barack Obama, Donald Trump, and Joe Biden, commonly circulating online. Creators overlay these politicians’ faces and voices onto someone else to make it seem as if they are saying, essentially, whatever the creator wants them to say. They could make it look like a politician is declaring war, endorsing another politician, making sexist or racist remarks, or announcing “news” about another country’s politics. Deepfakes have been around much longer than most advanced AI programs, so they’re typically much more realistic. The first instance of deepfakes online was in November of 2017, when an anonymous Reddit user shared the technology; for reference, ChatGPT was released in November of 2022. It can be incredibly difficult to tell the difference between a real photo and a deepfake, and as technology evolves, it only gets harder.
Another concern in the AI world is family content. Family vloggers and content creators who constantly upload images of their children have always been controversial; they are often criticized for publicly sharing their young kids on platforms where predators can find them. These parents have come under more and more fire recently as AI programs become increasingly prevalent. The main criticism is that they’re putting their children in danger by uploading their faces, and sometimes very vulnerable moments, to social media platforms available to the public. Recently, some of these families have discovered AI-generated photos of their children on darker corners of the internet. Typically it’s a photo they posted themselves, but with the children’s clothes removed. Other times, it’s an original AI-generated image showing the children in compromising positions. This is the most abhorrent use to come out of this technology.
Similarly, women who post bikini photos on the internet are no longer safe. Some are now finding their bikini photos posted by anonymous accounts on platforms like Reddit or X, but with the bikini removed using AI. Hyper-realistic, but fake, nudes of women are now circulating on the internet. The scariest part may be that they aren’t just going after celebrities. Deepfake porn has long targeted female celebrities to fulfill some sort of fantasy or perversion, but AI nudes are now targeting random women with small accounts and barely any posts. It often isn’t even someone they know; there are random people out there targeting strangers’ photos just to hurt them. If this is what’s happening with AI photos, it’s easy to picture how much worse it can and will get with AI videos.
Revenge porn refers to sex tapes leaked by someone’s partner after, for example, a messy breakup or a fight. It is meant purely to hurt and humiliate a person who never consented to the content being shared. It could be sent to a few friends or posted publicly on the internet; both cause trauma and severe mental health consequences. Because of this technology, women everywhere could wake up one day to find fake sex tapes of themselves uploaded to the internet by strangers. They most likely won’t be believed when they insist the videos are fake, because of how realistic the videos are; people will point to the intricate details AI video can include as proof that it’s real. These women will see how many views the fake videos got and have to live with the humiliation, knowing there’s no use even trying to defend themselves. Meanwhile, money is lining the pockets of OpenAI and other big tech companies, which are entirely unaffected by the consequences of the technology they helped create.
As AI video technology grows more and more realistic, many predict that the movie industry will start replacing paid workers with AI that can do the same work for free. Movies will be soulless, devoid of any human emotion. Artists who dedicated their lives to videography and creativity will be pushed out of film, cinematography, art, and virtually every other job that still allows for passion and creativity in our society. Media literacy will go out the window. People will rarely be able to tell the difference between a real video of a politician giving a speech and a fake one, or whether a country is actually going to war or the video of tanks storming the border is AI. Propaganda will skyrocket, and fake videos used to justify attacks and hatred will pop up all over the world.
Is there really a good reason for AI technology? And if so, do the benefits outweigh the consequences enough to make it worth it? Most people would say no. There is ultimately very little reason to care so much about developing AI technology, except of course for the tech bros who make millions off of it. As much as I hate when women are told how to avoid sex crimes, because it is absolutely never their fault, I want to advise women to take this technology into account when posting anywhere on social media. We are used to this happening to big celebrities, which is awful in and of itself, but it has progressed to the point where regular girls with a few hundred or a couple thousand followers have to worry too. Setting your accounts to private and deleting any photos you feel are at risk of being manipulated are measures you can take to greatly reduce your risk of being targeted, if you choose to take them. The internet has never been an entirely safe place for women, but the danger has reached a new peak, one every woman should be aware and informed of, whether you let it affect your decisions or not.