Let’s talk about the dangers of human bias in AI

In light of recent reports of a Google engineer being suspended after claiming that an AI chatbot had become sentient, the dangers of AI are in the spotlight once again. Most experts have derided the claim, but the fear of Terminator-esque scenarios, where artificial intelligence becomes self-aware and threatens the survival of humankind, lingers. However, as true as it is that we should approach AI with caution, it may not be for the reasons you expect. While it is widely accepted that concerns about world domination are a long way from venturing outside the sci-fi genre, it is easy to forget that AI is already causing real harm.

As the term suggests, machine learning means teaching an algorithm to recognize certain patterns and act on that information. Thanks to machine learning, Siri detects your speech, Shazam tells you which song is playing in the background within seconds, and self-driving cars are no longer futuristic. Much has been, and will be, achieved with AI. However, to recognize a pattern, the algorithm needs to learn from a set of data. And this is where we meet a major ethical issue. Contrary to popular belief, data can't escape bias.

Who is the beneficiary?

Rashida Richardson, a civil rights lawyer who focuses on technology and algorithmic bias, breaks down these issues in an interview with Payaar to the People: “if you are building tech to solve problems for a world that you think is mostly white and male, then of course it’s not going to work for entire parts of the population.” Let’s dissect.

One of the more controversial uses of AI is facial recognition programs in law enforcement. As you might have guessed, facial recognition is taught by exposing the algorithm to a large set of pictured faces. The problem is that white men tend to be overrepresented in these data sets. As a result, the algorithm identifies such faces with significantly higher accuracy than the faces of women and POC. In other words, when AI is used to identify a police suspect, people from marginalized groups are more likely to be misidentified.
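
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (synthetic data and scikit-learn, not a real face recognizer): a toy classifier trained on a demographically skewed dataset ends up noticeably more accurate for the overrepresented group than for the underrepresented one.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    # Toy "face features" for one demographic group, with a simple label rule.
    X = rng.normal(loc=center, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > center * 5).astype(int)
    return X, y

# Training data: 900 samples from group A, only 100 from group B.
X_a, y_a = make_group(900, center=0.0)
X_b, y_b = make_group(100, center=3.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on balanced held-out sets: accuracy is typically much higher for
# the group that dominated the training data.
X_a_test, y_a_test = make_group(500, center=0.0)
X_b_test, y_b_test = make_group(500, center=3.0)
print("accuracy, overrepresented group A:", model.score(X_a_test, y_a_test))
print("accuracy, underrepresented group B:", model.score(X_b_test, y_b_test))

The numbers here are invented; the point is simply that a model optimized on skewed data quietly optimizes for the majority within that data.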

To make matters worse, POC are often overrepresented in the databases that are searched for matches, which further increases the probability that people with darker skin are misidentified. As pointed out by John Loeffler, this can have detrimental consequences, especially as “unconscious bias may also have made the witnesses and officers more likely to accept that grainy footage and AI-matching at face value.”

Photo: Camilo Jimenez

Predictive policing

Similarly, police departments around the world use AI to predict where crime is likely to occur and who the likely perpetrator is. Once again, the algorithm is fed biased data. Crime statistics are skewed because areas where the majority of residents are of lower-income and minority backgrounds are often the targets of overpolicing. The result is a vicious circle: the algorithm learns that there is a high probability of crime in these areas and predicts crime there; the police continue to focus their attention and resources on those areas; and, thus, they continue to find crime there.
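
The loop is easy to see in a toy simulation (hypothetical numbers, sketched in Python): two districts with identical real crime rates, but patrols allocated according to where incidents were previously recorded. Because recorded incidents depend on where officers are sent, the initial skew never corrects itself.

import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.5, 0.5])   # two districts with identical real crime
patrol_share = np.array([0.7, 0.3])      # district 0 starts out over-policed

for step in range(10):
    # Recorded incidents reflect both actual crime and how intensely each district is policed.
    recorded = rng.poisson(100 * true_crime_rate * patrol_share)
    # Next round's patrols are allocated in proportion to what was recorded.
    patrol_share = recorded / recorded.sum()
    print(f"step {step}: recorded={recorded}, next patrol share={patrol_share.round(2)}")

Even though both districts are identical in reality, the simulated system keeps sending more officers to district 0, and district 0 keeps producing more recorded crime, which is exactly the circle described above.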

Another example, as described by Richardson, is police departments with a history of racist stop-and-frisk policies. The data sets are biased in the sense that POC are disproportionately represented, even if that does not reflect the real crime rate. The algorithm in itself is not racist, but it is fed racist data and therefore helps perpetuate these harmful biases.

Richardson points out that one of the overarching issues is that we treat machine learning as if it were neutral when, in reality, it can act as an extension of the systemic discrimination of marginalized people. She says: “People assume that if something is based on data it is more objective than human judgment, and better. They think predictive policing is better than the beat cop. In reality, these technologies just displace discretion. They are not objective. They are human creations that reflect flaws and problems humans have.”

At the end of the day, becoming aware of these flaws means we can train better algorithms, which is why it pays to learn about ethical AI from figures such as Rashida Richardson; Safiya Umoja Noble, author of the best-selling Algorithms of Oppression: How Search Engines Reinforce Racism; and Timnit Gebru, computer scientist and tech activist. Nonetheless, it is worth remembering that we cannot discuss ethical AI without examining how to create structural changes beyond tech.
