Can Algorithms Discriminate?

Rachel Hendrix
5 min read · Jul 10, 2020


Image Source: University of Notre Dame

It is human nature to carry biases. Everybody holds prejudices, or at a minimum partialities, that usually stem from who we are and how we were raised. So, to combat these intrinsic biases, we rely on computers, data, evidence, statistics, and the like. We have essentially centered many parts of our lives around computer algorithms on the notion that technology can't be biased because it uses science and math, objective methods of arriving at conclusions. Any time you use your phone or computer, an ATM, or the mileage calculator in your car, you are using an allegedly unbiased algorithm.

However, these algorithms aren’t always completely objective.

Machine learning systems (a kind of algorithm) pick up on patterns through exposure. If you feed a computer enough different pictures of cars, it will eventually recognize the pattern and be able to identify cars in new pictures. Algorithms therefore rely on the data you plug into them to spit out answers. But if that data is biased in one way or another, then the algorithms are simply proliferating those biases.
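To make that concrete, here is a deliberately tiny, purely illustrative sketch (not any real system): a "model" that only learns which label is most common for each group in its historical data. If the history is skewed, the learned rule repeats the skew.

```python
from collections import Counter

# Hypothetical historical records of (group, past decision). The labels are
# deliberately skewed against group "A", standing in for biased past decisions.
history = ([("A", "high risk")] * 70 + [("A", "low risk")] * 30
           + [("B", "high risk")] * 30 + [("B", "low risk")] * 70)

def train(records):
    """'Learn' the most common label for each group in the training data."""
    counts = {}
    for group, label in records:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(history)
print(model)  # prints {'A': 'high risk', 'B': 'low risk'}: the skew is reproduced
```

Real systems are far more sophisticated than this frequency count, but the underlying dependence on historical data is the same, which is why biased inputs produce biased outputs.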

Here is an example:

Risk assessment tools are commonly used in local justice systems. Defendants are given a test with a wide range of questions, some about themselves personally and some about how they would react in a hypothetical situation, and then an algorithm takes the defendants' answers and predicts their likelihood of recidivating (reoffending after being released from prison). Defendants are labeled as low risk, medium risk, or high risk of recidivating, and this assessment can be given to judges during criminal sentencing. Judges can use risk assessments to help decide whether to set bail, how long a sentence should be, or whether to use an alternative to imprisonment.
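For a rough sense of the shape of such a tool, here is a simplified sketch: questionnaire answers are combined into a numeric score, and the score is bucketed into the three labels judges see. The questions, weights, and thresholds below are invented purely for illustration; commercial tools use proprietary models that are not public.

```python
# Hypothetical example only: questions, weights, and thresholds are made up.

def risk_score(answers, weights):
    """Combine questionnaire answers into a single number via a weighted sum."""
    return sum(weights[question] * value for question, value in answers.items())

def risk_category(score):
    """Bucket the numeric score into the three labels shown to judges."""
    if score < 3:
        return "low risk"
    if score < 6:
        return "medium risk"
    return "high risk"

weights = {"prior_arrests": 1.5, "age_under_25": 1.0, "unstable_housing": 0.5}
answers = {"prior_arrests": 2, "age_under_25": 1, "unstable_housing": 1}

print(risk_category(risk_score(answers, weights)))  # "medium risk" (score 4.5)
```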

COMPAS is a risk assessment algorithm from the company Equivant (previously known as Northpointe) that is widely used for these assessments. However, the independent nonprofit organization ProPublica published an exposé in 2016 demonstrating that the COMPAS algorithm is racially biased.

I’ve linked the exposé here, but essentially ProPublica discovered that black defendants were nearly twice as likely as white defendants to be mislabeled as high risk, while white defendants were much more likely than black defendants to be mislabeled as low risk. The result is harsher criminalization and imprisonment of black defendants, and comparatively lenient treatment of white defendants who go on to reoffend.
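The disparity ProPublica describes is a difference in error rates between the two groups. As an illustration of how those rates are computed, here is a short sketch; the counts are invented for this example and are not ProPublica's actual data (see their published analysis for the real figures).

```python
# Invented counts for illustration only.
# "False positive" = labeled high risk but did not reoffend.
# "False negative" = labeled low risk but did reoffend.
groups = {
    "black defendants": {"fp": 45, "tn": 55, "fn": 28, "tp": 72},
    "white defendants": {"fp": 23, "tn": 77, "fn": 48, "tp": 52},
}

for name, c in groups.items():
    fpr = c["fp"] / (c["fp"] + c["tn"])  # mislabeled high risk, among non-reoffenders
    fnr = c["fn"] / (c["fn"] + c["tp"])  # mislabeled low risk, among reoffenders
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```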

In case one example of algorithmic bias wasn’t enough for you, feel free to check out Automating Inequality by Virginia Eubanks. Chapter 4 in particular examines an algorithm used in the child welfare system, the Allegheny Family Screening Tool (AFST), that is biased against poor families.

So, now we have established that algorithms, tools we have deemed unbiased and objective, actually do discriminate and do carry biases, just as humans do. This leaves us with the question: should we continue to let algorithms make decisions for us?

On one hand, we should leave decision making to algorithms because humans can carry explicit biases. A particularly prejudiced individual might, say, deliberately subordinate women in the workplace because he believes their capacities for critical thinking and decision making are inherently inferior to men’s. I’m not saying that every person carries explicit biases, but some people do, and those people may well hold positions of power and be making critical decisions for us.

Furthermore, algorithms are only biased because our data is biased; the actual algorithm itself isn’t consciously discriminating against people for one reason or another. Therefore, we should continue to use algorithms because, if we can find a way to remove biases from our data, then this tool will be extremely useful.

On the other hand, there is a saying that algorithms are like a “black box.” People feed the computers data, and they spit out answers. We don’t know how they arrived at these answers, but we trust them blindly because we assume the evidence doesn’t lie. And we let them make these life-changing decisions for us at critical times, such as during sentencing in a courtroom.

Don’t we deserve an explanation of the reasoning behind the life-changing decisions that are made for us? Don’t we deserve an answer beyond “well, the algorithm said so”? Humans are biased, but we can track a human’s train of thought, and potentially discard the ultimate decision if that train of thought proves to be biased.

Furthermore, Benjamin Eidelson argues in his essay Treating People as Individuals that the moral wrong of discrimination is that it fails to treat people as individuals. Discrimination makes gross generalizations and assumptions about people based on their identity or membership in a socially salient group instead of treating them as their own persons. Eidelson stipulates that autonomy is an important element of treating people as individuals: we must recognize a person’s capacity to be a free agent and make autonomous decisions, and recognize that they are a cumulative product of free choices.

Using Eidelson’s definition, algorithms are morally wrong because they fail to treat people as individuals. Algorithms don’t get to know the individuals they are making life-changing decisions for, and they aren’t concerned with those individuals’ capacity to make autonomous decisions. They simply assume that people will uphold certain patterns of behavior, failing to see that they might break out of those patterns and exercise their own autonomy. The COMPAS risk assessments, for instance, ask questions such as whether the defendant’s parents went to jail or whether the defendant’s friends take drugs. These questions are about circumstances in the defendant’s life, not about the defendant themself or their autonomous choices. Algorithms like COMPAS treat people as an accumulation of factors outside their control, which is effectively the opposite of treating them as autonomous individuals.

Now, I’m not arguing that we end the use of all algorithms; that simply isn’t feasible given how automated present-day society is. However, we should be particularly prudent about the overuse of algorithms in moments of critical decision making. Judges should not rely solely on COMPAS risk assessments during sentencing. Social workers should not rely solely on family screening algorithms to decide whether to open an investigation into a family. Doing so would knowingly allow a biased, non-human system to override our human decision-making capacities, the upshot of which might be an even more discriminatory and prejudiced society than the one we already live in. Instead, algorithms should be used to supplement human decision making, with the knowledge that they may be proliferating biases kept firmly in mind.

References:

Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 9 July 2020.

Eidelson, Benjamin. “Treating People as Individuals.” Philosophical Foundations of Discrimination Law, edited by Deborah Hellman and Sophia Moreau, Oxford University Press. SSRN, 25 July 2013, https://ssrn.com/abstract=2298429.

Equivant. “Response to ProPublica: Demonstrating Accuracy Equity and Predictive Parity.” Equivant, 1 Dec. 2018, www.equivant.com/response-to-propublica-demonstrating-accuracy-equity-and-predictive-parity/. Accessed 9 July 2020.

*Special thanks to Harvard Secondary School for introducing machine learning and algorithmic fairness to me!
