If you are arrested and charged with a crime, you generally experience the following sequence of events:

  • A grand jury decides whether there is sufficient evidence for a trial.
  • A judge decides whether you can be out on bail while awaiting a trial.
  • If the punishment is sufficiently severe, a jury decides whether there is sufficient evidence to prove guilt. Otherwise, a judge decides.
  • A judge decides the sentence (possibly sending you to prison).
  • While in prison, you periodically meet with a parole board, a group of individuals who try to assess whether you have been “rehabilitated” and can “contribute” to society.
  • If parole is granted, the parole board also decides what restrictions should be placed on your actions so as to prevent you from being tempted to commit a future crime.

So if you think about it, your fate is twice decided by a jury of your peers, twice decided by an individual (the judge), and once decided by a panel of (hopefully) experts.

These people use their impression of you and of the presented evidence of your crime to make these very important choices. As you’ve probably seen from the news or know from personal experience, when people make judgments about others they are often quite biased, even unconsciously.

Enter machines. Machines, on the face of it, don’t seem biased – they do what they’re told based on data. If you’re a scientist who cares about criminal justice, you might look at our broken system and say “what if we used data to decide the fate of others, instead of relying on biased humans?”

This has led to a swath of research and commercial software. The outcome these tools focus on is criminal recidivism, i.e. will this person go on to commit more crimes?

From a criminal justice perspective, this is quite a simplification: it treats the purpose of imprisonment solely as the prevention of future crimes. You could argue that there are outcomes of much greater diagnostic and societal importance, e.g. ability to contribute to society.

But let’s pretend for the sake of argument that this is the best outcome to focus on. Indeed, it is the outcome that parole boards use to decide whether to grant parole. What other problems could arise from the data treatment?

In machine learning we usually use the word bias to describe a property of models – specifically how restricting the class of models may lead to underfitting. Something we think about less often is bias in the training set.

Let’s do a thought experiment: let’s say I wanted to divide data into two classes. Class one is “will commit another crime” and class two is “will not commit another crime.”

If I told you that 80% of my data samples were labeled “will commit another crime,” what kind of model do you think I would learn? A model that labels everything as “will commit another crime” would be right 80% of the time, and therefore my model is very likely to do just that. This is something most MLers find obvious and quite intuitive.
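Here is a minimal synthetic sketch of that thought experiment (made-up data; scikit-learn assumed purely for convenience): a classifier that always predicts the majority class already hits about 80% accuracy, and a model trained on uninformative features ends up behaving the same way.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: roughly 80% of labels are 1 ("will commit another crime").
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))            # features carry no real signal here
y = (rng.random(n) < 0.8).astype(int)  # ~80% positive labels

# A classifier that always predicts the majority class scores ~80% accuracy...
majority = DummyClassifier(strategy="most_frequent").fit(X, y)
print("majority-class accuracy:", majority.score(X, y))    # ~0.80

# ...and a real model fit to uninformative features converges to the same behavior.
model = LogisticRegression().fit(X, y)
print("logistic regression accuracy:", model.score(X, y))  # also ~0.80
```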

What is a little trickier is when certain feature values, say membership in a particular group, are overrepresented in the data set. It can be hard to predict how models will exploit this kind of feature-level bias.
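To make that concrete, here is a deliberately simplified synthetic sketch (all numbers made up). The label skew is injected purely through a group indicator, a stand-in for biased data collection rather than any real behavioral difference, and the trained model leans on that feature and flags the two groups at very different rates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, synthetic data: the genuinely predictive "signal" feature is
# distributed identically across groups, but the labels are skewed by group
# membership, standing in for biased data collection rather than real behavior.
rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)   # 0 or 1: a sensitive attribute (or a proxy for it)
signal = rng.normal(size=n)          # a feature with real predictive value
y = (signal + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([signal, group])
model = LogisticRegression().fit(X, y)
print("coefficient on signal:", model.coef_[0][0])
print("coefficient on group :", model.coef_[0][1])  # the model leans heavily on group

pred = model.predict(X)
print("flag rate, group 0:", pred[group == 0].mean())
print("flag rate, group 1:", pred[group == 1].mean())  # far higher
```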

This is the case with criminal recidivism data sets. Due to systematic racial injustice, black people are much more prevalent in criminal data than white people. This means that the data is heavily skewed towards black people being labeled future criminals.  A model can easily exploit this and still perform well. The question of importance for recidivism prediction is: how can we ensure our results are unbiased along dimensions such as race?

One answer is to look at false positive and false negative rates. A false positive in this case would be labeling someone as “will commit another crime” when it is not true. A false negative, on the other hand, is failing to flag someone who does go on to commit another crime. A gap in these rates between groups is the biggest indicator that a model is exploiting bias or skew in the data.
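In code, both rates are easy to compute from true and predicted labels. This is a generic sketch, not anything COMPAS-specific; the labels and predictions below are made up.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Fraction of people who did NOT reoffend but were flagged 'will commit another crime'."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

def false_negative_rate(y_true, y_pred):
    """Fraction of people who DID reoffend but were not flagged."""
    positives = (y_true == 1)
    return np.mean(y_pred[positives] == 0)

# Toy usage with made-up labels (1 = "will commit another crime"):
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
print(false_positive_rate(y_true, y_pred))  # 2 of 5 true negatives flagged -> 0.4
print(false_negative_rate(y_true, y_pred))  # 1 of 3 true positives missed  -> ~0.33
```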

In the case of COMPAS, the best-known software of this kind (and not for good reasons), the developers optimized for overall error (although they refused to share exactly what loss function they used). The unfortunate result:

  • Black people had much higher false positive rates than white people
  • White people had much higher false negative rates than black people

What is important to know here is that by COMPAS’s own error criterion, black and white defendants looked equal. It was only when you looked at false positive and false negative rates that you could see the imbalance. (For a recounting of the full back-and-forth between COMPAS and ProPublica on this, you can refer to this summary at 538.)
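Some hypothetical numbers (not the actual ProPublica figures) make the distinction concrete: two groups can have identical overall error while their false positive and false negative rates diverge sharply.

```python
# Made-up confusion matrices for two groups, chosen so that overall error matches
# exactly while the false positive and false negative rates do not.
group_a = dict(tp=500, fp=300, fn=100, tn=1100)   # gets over-flagged
group_b = dict(tp=300, fp=100, fn=300, tn=1300)   # gets under-flagged

def rates(c):
    total = c["tp"] + c["fp"] + c["fn"] + c["tn"]
    return {
        "error": (c["fp"] + c["fn"]) / total,     # overall error, the criterion discussed above
        "fpr":   c["fp"] / (c["fp"] + c["tn"]),
        "fnr":   c["fn"] / (c["fn"] + c["tp"]),
    }

print(rates(group_a))  # error 0.20, FPR ~0.21, FNR ~0.17
print(rates(group_b))  # error 0.20, FPR ~0.07, FNR 0.50
```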

COMPAS has not only affected people on whom the algorithm is used. A biased algorithm gives racism and prejudice a so-called scientific basis, which can have far-reaching consequences. In fact, an “expert” witness has testified that race plays a causal role in criminality.

Alexandra Chouldechova proves very elegantly that fairness cannot be judged by overall error alone: when two groups have different base rates, a score that looks fair by an aggregate criterion such as equal predictive value must end up with unequal false positive or false negative rates. If we want to ensure a system is fair, we need to ensure that the outcomes are equitable, and here they clearly are not. This kind of analysis is extremely valuable, and more predictive technologies are following suit.
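Her argument rests on a simple accounting identity (my paraphrase, not a quote from the paper) relating a group’s base rate p of reoffending, the score’s positive predictive value (PPV), and its error rates: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). Plugging in made-up numbers shows that holding PPV and FNR fixed while the base rate changes forces the false positive rates apart.

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
# where p is the group's base rate of reoffending.
def fpr_from_identity(p, ppv, fnr):
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Made-up numbers: the score treats both groups identically on PPV and FNR,
# but the groups' base rates differ.
ppv, fnr = 0.6, 0.3
print(fpr_from_identity(p=0.5, ppv=ppv, fnr=fnr))  # base rate 0.5 -> FPR ~0.47
print(fpr_from_identity(p=0.3, ppv=ppv, fnr=fnr))  # base rate 0.3 -> FPR 0.20
```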

So, how can we move forward?

  1. We need to stop pretending that data is unbiased. The old adage “Data doesn’t lie” should be amended to “Data doesn’t lie…about who collected it.”
  2. We should acknowledge that feature-level bias impacts results.
  3. When analyzing algorithmic performance, especially for societal impact, we should be doing fairness analysis along common axes of bias, as Chouldechova did for COMPAS.
  4. When developing models, we should aim for transparency and interpretability, which will make it easier to correct for bias. (Think Cynthia Rudin)

As a machine learner and a general believer in the bias of humans, I’m primed to support a data-driven approach to important societal decisions, whether it’s parole decisions or more general policy-making. However, it’s naive to think that data science can enter the social and economic sphere without contending with major issues. The very bias we are trying to bypass is what created our data set.

This naturally leads to the question: is this where we as a society should put our efforts? Maybe before we pour all our energy into applying a band-aid to a broken system, we should ask ourselves: what is the intended effect of prison, and what is the actual effect? It’s possible that going to prison increases the likelihood that someone will commit further crimes. That certainly complicates the problem.

Before we can model who should go to prison and for how long, we should model the prison system itself. If we want prisons to rehabilitate instead of punish, we should evaluate how effective they are. In terms of societal good, I think this would have much broader impact.

To apply machine learning to issues of bias, we need to understand how bias affects our data and therefore our models. But to build models that promote societal good, we need to look at modeling the status quo for potential change, not simply upholding it.