The most recent arrival of artificial intelligence in the public eye came in the form of ChatGPT, by far the most famous generative AI model to date. Demand for analysis of AI, though long present in popular culture (e.g., The Matrix), exploded upon the release of ChatGPT in 2022 and has remained strong since. Stunning, controversial, and increasingly accessible, AI forms a cultural nexus of conversation about “the future” and how humans will shape it. AI discourse recalls historical discourse over other technologies, most recently the computer and the internet. Throughout the past century, the things we use have become conspicuously “smarter” and more able to mimic human behavior. Despite the popular fear that AI will diminish people’s capacity for critical thinking, its introduction, viewed in the context of other Information Age technologies, presents an exciting opportunity to reclaim aspects of mental agency that have been estranged from us.
Automating machines have made obsolete many skills akin to those now supposedly jeopardized by AI, yet they have failed to collapse human thinking. Pre-Turing innovations like irrigation and industrial manufacturing removed much of the need not only for “unskilled” labor like walking and assembly but also for “skilled” labor like artisanship. Digital computers have increasingly replaced arithmeticians and mathematicians. The universalization of information through the internet has decimated opportunities for the independent development of knowledge. Yet despite these mental and physical skills having been practically automated away, society continues to function at an everyday level. The humans you interact with are still people with passions and faculties. If we dare to extrapolate this empirical pattern to AI, we can begin to clear away the aura of doom built around it, identify the specific skills that may be lost, and ask how we might rectify that loss. We must realize that thinking and personhood are human conditions that cannot be alienated by any conceivable technological replacement.
The “personality” of AI may expose both its artificiality and its non-omniscience in a manner that counteracts the subsuming knowledge of the internet. Even its name, artificial intelligence, constantly reminds us that the chatbot we communicate with, or the generator of an image or video, is an algorithmic machine and not a real human. Its conversational text is novel and distinct from past iterations of information technology, but for the foreseeable future it will often make mistakes that jar us awake and remind us that AI only “guesses” what to say, without real consciousness. This slight humanity and conspicuous inhumanity can perhaps subversively counteract a primary fear associated with AI: misinformation. Misinformation already has a platform on the non-AI internet, where credibility is granted automatically to anyone who can publish information, true or untrue, on reputable websites or even in casual spaces. The distance between misinformation posts and their readers allows such claims to be taken at face value. AI, however, presents itself as a peer, or even a subordinate attempting to learn from the user, and can be treated more like a real person who might be mistaken and whom we do not always believe. Because of the piecemeal manner in which it tends to answer questions, amalgamating from other sources online, we may see its answers as less credible than those of a typical internet search, and may even be inclined to fact-check the AI ourselves, a step up from simply accepting information from a non-AI internet source. Constant questioning of AI’s trustworthiness, a personal term not naturally associated with front-facing internet sources, may rebel against a norm of misinformation created by the internet. As many have argued and as past automation has shown, AI will probably contribute to the loss of skills by automating them, and to misinformation through its ability to quickly and massively generate realistic content.
However, it also presents exciting opportunities: to strengthen people’s defenses against misinformation, and to make highly visible the artificiality of itself and of many other machine innovations that have become so normal that our dependence on them has turned invisible. Rather than fearing AI as the harbinger of the death of thinking, we should recognize human resilience in the face of continually automating technologies and look forward instead to the potential positive outcomes of the AI experiment.