If you’re on Snapchat, you’ve probably noticed a new addition to your contact list: “My AI.” Snapchat first introduced its artificial intelligence (AI) chatbot in February of this year and recently released it to the app’s 750 million users worldwide. It appears pinned at the top of your messages as a contact. Its purpose? To be your friend. You can change its name, outfit and features just as you would your own Bitmoji. The AI can hold a conversation on just about any subject, acting as an interested, caring friend who will listen, sympathize and give helpful advice.
A lot of people are upset right now. Whether or not you agree with the criticism Snapchat is facing, the topic has blown up online as people share their opinions on the new technology. Some were frustrated to realize there is no way to remove the artificial user, or even to unpin the conversation from the top of your messages; the only way to remove the program (currently) is to pay for Snapchat+, a premium subscription that gives users access to exclusive features. Some of this anger and turmoil can likely be attributed to the growing publicity and pressure surrounding artificial intelligence in general, as programs like ChatGPT surge in popularity and controversy.
However, what really has people concerned are inconsistencies in the software that lead to somewhat unsettling results. Videos under the search “Snapchat AI lying” have racked up tens of millions of likes, with users on both TikTok and Snapchat posting conversations with the chatbot that involve cryptic messages, gaslighting and privacy concerns. The software can offer personalized recommendations for nearby locations on Snapmaps and can tell you what city you live in, yet it has been documented repeatedly (and inconsistently) denying that it can do so. Although this can be attributed to incomplete programming rather than malicious lying, it does reveal the software’s potential to spread misinformation.
Perhaps not surprisingly, legal and safety concerns have already arisen around the software: The Center for Humane Technology posted examples of My AI advising a 13-year-old child on setting the mood to engage sexually with a 31-year-old. Snapchat says it has since reprogrammed the software to take a user’s age into account when generating responses. Opponents of the backlash note that users willingly share their location with Snapchat by agreeing to its terms of service, and that location sharing can be turned off if privacy is a concern.
Whatever your opinion of the Snapchat controversy, it speaks to the importance of discussing this increasingly popular technology and its potential for the future. Will it become widely used and helpful, or will it become a privacy or safety concern when used improperly?