If you've ever asked a class GroupMe for help on a college writing assignment, chances are someone has responded, "just use ChatGPT." ChatGPT, which is free to use online, is built on GPT-3.5, a language model created by the Silicon Valley research company OpenAI. Since its public release on Nov. 30, 2022, ChatGPT's human-like writing abilities have prompted headlines like "The College Essay Is Dead." Four months later, on March 14, OpenAI released GPT-4, which is available online through ChatGPT Plus, a $20-per-month subscription. GPT-4 is considerably more powerful than its predecessor; notably, GPT-4 scored around the 90th percentile on the bar exam, while GPT-3.5 scored in just the 10th percentile.
But what even is a language model like ChatGPT? GPT-4 and its predecessors are part of what is called "generative A.I." Until very recently, most of our daily interactions with artificial intelligence were with ranking systems: think of the Facebook, YouTube, and TikTok algorithms that use your browsing data to decide which preexisting internet content to show you. In contrast, generative A.I. systems are trained on enormous datasets scraped from across the Internet and can create new content based on the directions you give them. Generative A.I. chatbots can do anything from writing poetry to creating recipes to coding. Fundamentally, these chatbots work like a very advanced autocomplete, using the patterns in their training data to predict which words are most likely to come next after your prompt.
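To make the "advanced autocomplete" idea concrete, here is a minimal, illustrative Python sketch (a toy example of my own, not anything OpenAI actually uses): it counts which word tends to follow which in a tiny sample text, then "predicts" the most common continuation. Real language models like GPT-4 do something conceptually similar, but learn vastly richer patterns from enormous datasets using neural networks.

```python
from collections import Counter, defaultdict

# A tiny toy "training set" standing in for a model's training data
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word (a simple bigram model)
next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (it followed "the" twice; "mat" and "sofa" once each)
print(predict_next("on"))   # "the"
```

Chaining such predictions word after word is, in grossly simplified form, how a chatbot "writes": it repeatedly asks what most plausibly comes next and appends the answer.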
But what are the shortcomings of using the Internet as a training tool? Existing research has established that facial recognition technology is less accurate at recognizing darker-skinned people and women, demonstrating how technology can reflect, and even exacerbate, existing societal inequalities when nothing is done to counteract them in training datasets. Additionally, there is plenty of material on the Internet that you would not want a chatbot to regurgitate. A TIME Magazine investigation found that to build an A.I. tool that could teach ChatGPT what not to repeat, OpenAI outsourced the work of sifting through text from the worst parts of the internet to Kenyan workers, paying them less than $2 an hour.
The source databases of generative A.I. programs have also prompted immense pushback from artists. Several text-to-image generators, including OpenAI's DALL-E, Midjourney, and Stable Diffusion, have been released recently. Stability AI, the creator of Stable Diffusion, has already been sued by Getty Images for using its images in a training dataset without permission or compensation. As @StevenZapataArt put it in a YouTube video titled "The End of Art: An Argument Against Image AIs," "if you wanted to build an ethical [text-to-image generator], you would build it on a foundation of public domain and creative commons images, embellish it with images your company produces internally, commission artists to create training images for you, or compensate artists who opt-in to have their images added to the dataset." Beyond Getty, many individual artists are voicing concerns and filing lawsuits of their own because their art, including copyrighted work, was used to train these models without any permission or compensation.
Some say that "generative A.I." should instead be called "imitative A.I.," given that it can't produce anything truly new and only creates mashup imitations of existing human work. A.I. content generators are, for example, particularly strong with prompts to create "x" in the style of "y." The YouTube channel @demonflyingfox uses a combination of A.I.-powered image, animation, and speech programs to create comedy videos like Harry Potter characters in a Balenciaga ad or Joe Rogan interviewing Jesus Christ. @demonflyingfox told TechCrunch that he deliberately uses outdated software and outlandish premises to make clear that the videos are not real. Even so, in just the past few weeks, people, myself included, have mistakenly assumed certain viral A.I.-generated images were real. Photoshop has been around for years, but the speed at which A.I. is advancing raises the question of whether, one day, A.I. could create entire TV shows and novels, or have enough of what feels like a personality to be our companion.
So why is generative A.I. development moving so quickly? Part of the reason is that big tech companies, primarily Google, Facebook/Meta, and Microsoft, are competing for dominance in the A.I. field the way Google currently dominates search, where it controls 93% of the worldwide market share. After OpenAI, once a minor player in Silicon Valley, captured global attention with ChatGPT, Google released its own less successful chatbot, Bard. Additionally, Google and Microsoft have announced that they will add generative A.I. tools to their products. However, many worry that this push to release new A.I. systems as quickly as possible means safety and ethical issues are not being fully considered and accounted for. Over 1,000 people, including Elon Musk, signed a letter calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Others in the field of A.I. ethics have countered the pause letter. Among them is Timnit Gebru, the former leader of Google's A.I. ethics team, who was pushed out of the company after co-authoring a paper about the dangers of large language models and criticizing Google's diversity efforts. She and her co-authors argue that the letter's exclusive concern with hypothetical future dangers is "the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today." Many instead highlight the need for legislation mandating regulation and transparency of A.I. systems and their development; a first step has been taken with the White House's "Blueprint for an A.I. Bill of Rights."
For all the discussion of how the Internet has changed society, most of what we interact with online is still human-made content, even if an A.I. algorithm decides what we see. In the future, will our devices be inundated with A.I.-generated how-to articles, emails, art, music, shows, teachers, and friends, and what would the implications of that be? Google CEO Sundar Pichai said in 2018, "A.I. is probably the most important thing humanity has ever worked on; I think of it as something more profound than electricity or fire." Big tech companies have overdramatized the world-changing potential of their technologies to appeal to investors before, but many still believe we are only at the tip of the iceberg of A.I. development, and I suppose time will tell to what degree, if any, Pichai is correct.