Majd Alkawass | Staff Writer
As we speak, machines are making decisions on our behalf in an attempt to make human life easier. These days we use computers to automate almost every part of our lives, and we have even started to incorporate what we call artificial intelligence into nearly every task and decision-making process that comes to mind. This movement has helped educate millions of people, save lives, develop vaccines, and prolong human life.
But every invention is a double-edged sword. Most machine learning (ML) models learn from the data we present them with. For example, if we want a model that can tell us whether a given picture shows a dog or a cat, we train the model on thousands of pictures of cats and dogs. The model tries to predict the answer; if it is wrong, we tell it so, and this cycle repeats thousands of times until the model’s mistakes are minimized. However, this method of learning comes with a downside, one that becomes a real problem when we depend on ML models to make complicated decisions with significant social ramifications: whether to hire someone, give them a loan, send them to prison, or grant them parole.
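For the technically curious, here is a minimal sketch of that predict-check-correct loop in Python. The numeric “features” standing in for pictures, the labels, and the simple update rule are all invented for illustration; real image classifiers are far more elaborate, but the basic cycle is the same.

```python
# A toy "predict, check, correct" loop: a perceptron-style classifier
# trained on made-up numeric features standing in for image data.
# Everything here (features, labels, update rule) is illustrative only.

# Each example: (feature_1, feature_2, label) where label 1 = dog, 0 = cat.
examples = [(0.9, 0.2, 1), (0.8, 0.3, 1), (0.7, 0.1, 1),
            (0.2, 0.9, 0), (0.1, 0.8, 0), (0.3, 0.7, 0)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(100):                      # repeat many times
    mistakes = 0
    for x1, x2, label in examples:
        score = weights[0] * x1 + weights[1] * x2 + bias
        prediction = 1 if score > 0 else 0    # the model's guess
        if prediction != label:               # "you made a mistake"
            mistakes += 1
            error = label - prediction        # nudge the model toward the right answer
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error
    if mistakes == 0:                         # stop once the mistakes are gone
        break

print("learned weights:", weights, "bias:", bias)
```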
An example is COMPAS, a widely used algorithm that assesses whether a convict or a defendant is likely to commit future crimes. Most of the time, the algorithm is used to inform parole decisions for prisoners. At first, you might think this must be a fair assessment, given that it is done by a machine that only understands logic and facts. Indeed, many thought so for quite some time. However, a report by ProPublica found that, when the types of mistakes COMPAS made were inspected, black defendants were almost twice as likely as white defendants to be mislabeled as “likely to reoffend,” and they were treated more harshly by the criminal justice system as a result.
At this point, an important question comes into play. We have already established that machines only understand logic and facts, so how can the results generated by a machine be illogical? Consider this example: imagine a kid who spent his whole life in an environment where everybody told him cats are poisonous. Naturally, this kid will always avoid cats and grow up to treat them differently. In other words, the kid will develop a bias against cats. The same thing happens with machine learning: if the data an ML model learns from includes implicit biases, the model will learn those biases and become biased too. In fact, COMPAS was trained on historical data about apprehended and prosecuted citizens, data that is fundamentally tainted by racial inequalities in the criminal justice system.
Black people are arrested more often than white people, even though both groups commit crimes at similar rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That is exactly the context that gets lost on an algorithm (or an engineer) taking those numbers at face value. Hence, the harsh treatment of black people in the criminal justice system was embedded in the very data COMPAS learned from. And just like the kid who learned to avoid cats from what he was told, COMPAS reflected that embedded discrimination against black people in its decision-making process.
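To see how this happens in practice, the sketch below fabricates a dataset in which one group is flagged as having reoffended more often than the other for the same underlying behavior, trains an ordinary off-the-shelf classifier on those records, and then compares how often genuinely low-risk people in each group get flagged anyway. The groups, numbers, and classifier are all made up for illustration; this is not COMPAS or its data, but the error-rate gap it produces mirrors the kind of disparity ProPublica measured.

```python
# Synthetic demonstration: a classifier trained on biased historical labels
# reproduces that bias as unequal error rates. All data here is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (hypothetical groups)
behavior = rng.normal(0, 1, n)         # "true" underlying risk, identical across groups

# Biased historical labels: at the same level of behavior, group B is recorded
# as having reoffended more often (standing in for over-policing in the records).
flag_prob = 1 / (1 + np.exp(-(behavior + 0.3 + 0.8 * group)))
label = rng.random(n) < flag_prob

# Train an off-the-shelf classifier on the biased records.
X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# "False positives": people whose true behavior is low risk but who get flagged anyway.
low_risk = behavior < 0
for g, name in [(0, "A"), (1, "B")]:
    mask = (group == g) & low_risk
    print(f"group {name}: share of low-risk people flagged = {pred[mask].mean():.2f}")
```

Running this prints a much larger share of wrongly flagged people in group B than in group A, even though the two groups behave identically by construction; the model simply learned the skew in its training labels.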
Such problems can be mitigated by auditing datasets for biases, imbalances, or other imperfections before a model is trained on them (a simple example of such an audit is sketched after this paragraph). Another solution, currently an active branch of research, is developing glass-box ML models whose decision-making processes humans can understand and justify. However, with or without these solutions, we are presented with a dilemma: Should we move back to human-based decision-making systems, or try to improve the automated systems and reduce the overall bias?
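The dataset audit mentioned above can start very simply: before any model is trained, tabulate how many examples each group contributes and how often the outcome label appears per group. The table and column names below are hypothetical, chosen only to show the shape of such a check.

```python
# A minimal pre-training audit: compare how well each group is represented
# and how often each group carries the positive label.
# The DataFrame and column names ("group", "label") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "label": [0, 0, 1, 0, 1, 1, 0, 1, 1, 0],
})

audit = df.groupby("group")["label"].agg(
    count="size",          # representation: how many rows per group
    positive_rate="mean",  # base rate: how often the positive label appears
)
print(audit)

# Large gaps in either column are a prompt to ask where the labels came
# from and whether they encode historical bias rather than ground truth.
```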