There is a prevalent assumption that artificial intelligence, or any technology for that matter, is detached from social and cultural climates. In reality, artificial intelligence systems, though perceived to be scientific and objective, hide behind a veneer of neutrality while being deeply embedded with social bias.
Last weekend, I had the opportunity to visit the Science Gallery near London Bridge, which is currently showcasing AI: Who’s Looking After Me?, an exhibition brought together by King’s College London researchers. The exhibition takes visitors on a journey through the different ways artificial intelligence has been integrated into our daily lives: from self-driving cars, to heart monitors, to digital voice assistants such as Siri and Alexa. In an age where artificial intelligence is gradually becoming ubiquitous, AI: Who’s Looking After Me? raises critical and increasingly important questions about how these systems are built, and how biases can seep into digital technologies. Here are some of my favourite works from the exhibition:
1. Sentient Beings by Salomé Bazin in collaboration with William Seymour & Mark Coté
Sentient Beings is an ‘immersive, evolving soundscape’ that raises questions about surveillance and comments on how AI assistants like Siri, Alexa, and Google Home extract our personal information, often inconspicuously. As soon as I entered the installation, I was enveloped in an eerily quiet space. Then, suddenly, a single movement triggered a dozen voice assistants (all in different languages) to ‘intervene’, asking how they could help.
Speakers and microphones lined the room to catch any subtle sound or movement. Silver, mirrorball-like spheres hung from the ceiling, which I interpreted as a reference to the omnipresence of these digital assistants. Whether we, as users, are aware of it or not, AI voice assistants are always listening, and consequently, so are the big tech companies that created them.
It certainly does not help that AI assistants carry themselves in a friendly, courteous manner, coaxing us to bare our personal information to these interfaces. But in doing so, are we not also willingly handing our personal information to the institutions that make and monitor these virtual helpers? Long gone is the age of privacy as we welcome a new era: the age of blatant surveillance.
2. Heartificial Intelligence by TripleDotMakers in collaboration with ECHO Teens and Wellcome/EPSRC Centre for Medical Engineering researchers
Heartificial Intelligence features a fingertip oximeter that reads pulse rhythms and translates them into data for a generative AI. A large screen at the front shows how visitors’ pulses synchronise with the images being presented. The piece drew plenty of engagement, with people lining up to measure and essentially ‘watch’ their heartbeat. When it was finally my turn, I placed my index finger in the oximeter just like everyone else, but I found myself thinking how peculiar it was that the entire basis of my existence could be encapsulated in a handful of visual graphs and images.
Automated heart monitors, however, are nothing unfamiliar; healthcare professionals have been using these technologies for years. But what happens when we fail to acknowledge the problems that may arise if AI dominates healthcare? Heartificial Intelligence challenges us to rethink how these monitors are being used and to recognise how AI increasingly quantifies its users, essentially dehumanising them.
‘AI cannot understand disease or patients,’ the exhibition claims, ‘it can only make predictions by analysing data.’ Hence, we need to realise how these technologies could reshape the relationship between patients and their doctors and nurses. Heartificial Intelligence implores us to question these emerging technologies and evaluate the extent to which such digital tools benefit or harm us.
3. The Future is Here! by Mimi Onuoha
The Future is Here! tackles the ways the datasets on which algorithms are trained are often collected and interpreted by underpaid workers in the Global South. ‘The Future is Here! presents the domestic sites where crowdsourced labourers carry out their work,’ states the exhibition, ‘accompanied by stylised renditions of those spaces.’ Visitors are invited to watch a projection of images featuring the small rooms where this work takes place across the Global South, alongside comic, Roy Lichtenstein-esque versions of these images.
The Future is Here! critiques and shatters the preconceived notion that AI is an objective technology situated in the futuristic offices of Silicon Valley. These algorithms are not fine-tuned in high-tech labs and facilities; instead, they are developed in cramped spaces that are hardly suitable for work. I immediately think back to the many times I have encountered workers with heavy accents on the frontline of tech support services, and I cannot help but wonder how artificial intelligence and tech companies, especially those that claim to promote the equal progression of society, play a part in re-entrenching social hierarchies along the divide between the Global North and South.
All in all, the exhibition left me with more questions than answers. But that’s part of the fun, right? The intertwined relationship between artificial intelligence (a technology we might otherwise consider neutral and free of bias), humans, and society is one we are still learning to navigate. And, as this exhibition demonstrated, we have only scratched the surface.