This article is written by a student writer from the Her Campus at St. Andrews chapter.

To us humanities students, AI feels like a grave imposition – an inescapable doom, rendering our already slim job-market pickings even slimmer and our skillset increasingly obsolete.

If ChatGPT is already writing our job applications for us, who is to say AI won’t take the place of journalists, authors, and other editorial positions?

That leap, however, has a problem. AI text is recognisable, lacking the humanity that makes writing so captivating, and reading so enjoyable and infectious.

Whilst we have long tolerated the burden of having to decipher the language of tech and computer programming, the future of AI inverts this dynamic, teaching the AI to speak our language instead.

However, AI needs tweaking and training – it must be taught how to think and write like us, a job that cannot be fulfilled by STEM-oriented individuals. As affirmed by IBM AI Chief Matt Candy, “the jobs of the future will be filled by those who can work with AI using language and creative thinking nurtured in liberal arts degrees.”

So how do English majors (and other writing-focused humanities students) come into the picture?

Conor Grennan, Head of Generative AI at NYU Stern, reminds us that we must first treat AI as if it has a human mind; the future of coding, he argues, is in “natural language.” Yet AI struggles to approach and understand empathy, and as such possesses severely limited communication skills.

This opens a new role in the tech sphere that requires little to no IT skills whatsoever – the prompt engineer. Already a six-figure job, prompt engineering is tailored to humanities majors: feeding large language models, such as ChatGPT, questions and information to help augment their ability to think and behave like a human.

Whilst it sounds very doable, many struggle to communicate with the AI as if it were a person. As Grennan summarises, our neural pathways lead us to miscategorise ChatGPT, subconsciously treating it like Google rather than as a peer. Approaching AI this way (even subconsciously) is the “absolute worst approach.”

Tech specialists may be able to break this barrier by viewing AI as a cerebral robot, rather than a search engine, but humanities students will be able to view AI as, and communicate with it like, a human.

Further, as large language models face challenges with clear and effective communication, as well as with the accuracy of their information, the need for individuals with language-centric expertise becomes amply apparent.

In sum, whilst we English majors have grown to fear an AI-dominant future, and perhaps still should, we can at least feel some security in knowing our skillset is highly coveted and employable in this new intelligence age.

Rhiannon Peacock

St. Andrews '25

Rhiannon Peacock is a fourth year at the University of St Andrews studying International Relations and English, and serving as the Editor-in-Chief of our chapter!