I’m sure that many of you reading this article have grappled with the question of Artificial Intelligence, commonly referred to as AI, in at least some capacity. Whether it is debating the academic ethics of using AI to complete an assignment, worrying about its implications for your future career field, learning about the vast environmental and ethical costs of such technology, or simply delving into what “Artificial Intelligence” – a misnomer in and of itself – actually means, the questions that arise surrounding Artificial Intelligence are extensive.
As a writer, the implications of AI – particularly Large Language Models (LLMs) and Generative AI systems like ChatGPT – have weighed heavily on my mind these past few years. From watching the Writers Guild of America go on strike in 2023 over issues of AI (among much else), to following a number of AI copyright lawsuits that will set compelling precedents for the future of this technology, to taking a class in the Textual Studies Department at the University of Washington entitled “Artificial Intelligence and Human Creativity in a Historical Perspective,” this topic has evidently grabbed my attention in more ways than one.
While taking this course in the TXTDS Department, I had the opportunity to attend an event organized by the UW AI Task Force entitled “AI, Society, and the Path Forward.” This online webinar featured a conversation between Margaret O’Mara, a Professor of History at UW, and Sam Altman, an American entrepreneur and the CEO of OpenAI.
Notably, OpenAI is the Artificial Intelligence-focused company that created ChatGPT, launching a new wave of passionate discourse about the power of machine learning, Generative AI, and Artificial General Intelligence (AGI). As its CEO, Sam Altman has often been positioned as one of the leading figures in Artificial Intelligence research and development, making this conversation one I simply could not miss.
Sitting in front of a brick wall, with plants on either side of him and a stack of books in the back corner, Sam Altman joined the Zoom webinar with a reserved sheepishness I wasn’t necessarily expecting from a man of his stature. Yet, as he began answering Professor O’Mara’s questions about Artificial Intelligence and OpenAI, Altman gained a confident resolve that immediately shifted my perception of him – and of the conversation ahead of us. I suppose I went into the webinar anticipating a back-and-forth exploration of what AI truly means for society, neglecting to consider who exactly I was listening to. As the CEO of a company championing the benefit of producing “AI systems that are generally smarter than humans,” Sam Altman presented his perspective exactly as you would imagine: unabashedly in favor of the future that AI will bring us.
Throughout the conversation, Altman continuously returned to the sentiment that technology and society grow together – it is in this coevolution, he believes, that developments such as AGI will change the world for the better. When asked about the impact that AI will have on human-centered skills, he argued again for this coevolution, explaining that advancements in such technologies will allow us to “think better, do more.” He even affirmed that for creatives, the development of AI technology will “enhance creative work” and ensure “creatives create more and better.” This was an interesting claim indeed, but one made without any real explanation of how it would happen – a theme I noticed throughout the entire webinar.
Another topic Altman touched on was the ethical impact of AI technologies such as ChatGPT. The question of where these language models get their training data, and how this data may prioritize hegemonic viewpoints, is a major point of contention in the AI conversation, but – as I somewhat expected by this point in the 45-minute webinar – it did not seem to concern Altman. Notably, “AI Ethics” is a phrase Altman disagrees with, arguing instead for the term “AI Safety.” Regardless of the wording, Altman frequently circled back to his belief that with the newest models of ChatGPT, there is “more control in the hands of the users” when it comes to dealing with bias and harmful AI output. In his perspective, users of ChatGPT have the agency to input any prompt they want into the chatbot, which can then pick up on or reflect certain worldviews and points of view in order to best “help” that individual user.
Perhaps I’m overly critical, but the tendency of algorithms to reflect back what we already believe seems to be a big enough issue for social media platforms, never mind for a resource like ChatGPT. Will this not create a loop of echo chamber-esque responses that reaffirm our own (often unintentional) biases? And with so many proponents of AI advocating for the “positive change” it will create in the educational landscape, how does this idea of “AI Safety” or “AI Ethics” come into play? How can we be responsible users of a technology that, as OpenAI believes, will one day be smarter than us?
Is creating a technology seemingly smarter than humans even a good idea?
These questions – and many more – arose for me while listening to the UW AI Task Force’s conversation with Professor O’Mara and Sam Altman. Though I am apt to be more critical of Machine Learning and Generative AI, I appreciated hearing Altman’s distinct perspective on an issue that has truly captivated me.
While I am unsure what the future of Artificial Intelligence will look like, there is no doubt it will be a complicated one. That said, I plan to continue finding resources to educate myself and stay informed, combating the obscurity and inaccessibility that surround the topic of Artificial Intelligence.
If anything in this article has interested you, I encourage you to do the same!