Bias in Automated Criminal Risk Assessment

In 2016, Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner from ProPublica published the article "Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks."

In the article, they explain how some courts in the USA use algorithms to score the risk posed by defendants. That score helps judges decide, based on the individual's risk, whether to grant freedom or parole during the trial, and it can even support the judge's decision at the moment of sentencing.

The risk assessment system was found to “falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” The system errs not only across races but also across genders.
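
To make the quoted claim concrete, here is a minimal sketch, using hypothetical counts rather than ProPublica's actual figures, of how one compares false positive rates across groups: the share of defendants who did not reoffend but were nevertheless flagged as high risk.

```python
# Minimal sketch: comparing false positive rates across two groups.
# All counts below are hypothetical placeholders, not ProPublica's data.

def false_positive_rate(flagged_non_reoffenders: int, unflagged_non_reoffenders: int) -> float:
    """Share of non-reoffenders who were wrongly flagged as high risk."""
    return flagged_non_reoffenders / (flagged_non_reoffenders + unflagged_non_reoffenders)

# group -> (non-reoffenders flagged high risk, non-reoffenders flagged low risk)
groups = {
    "group A": (450, 550),  # hypothetical counts
    "group B": (230, 770),  # hypothetical counts
}

for name, (fp, tn) in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(fp, tn):.0%}")

# A gap like 45% vs 23% is the kind of disparity ProPublica described:
# one group is wrongly labeled a "future criminal" at roughly twice the rate of the other.
```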

The problem lies in the data collected, from both a predictive and a legal viewpoint. An analysis of the questionnaire that defendants had to answer reveals questions related to their previous behavior, but also questions about elements that are irrelevant to the criminal sanction, such as:

  • family (questions 31-38),
  • friends (questions 39-44),
  • housing situation (questions 54-64),
  • social environment (questions 65-70),
  • education (questions 71-79),
  • work experience (questions 80-94),
  • psychological state (questions 95-102),
  • social support (questions 103-111), and
  • approach to the obedience of the law (questions 127-137).

Most of these questions are biased against poorer groups in society. Family support, housing, education, work experience: these are areas where less privileged segments of the population are weaker. If the criminal system sanctions those in a less privileged situation based on conditions they cannot control by their own will, then the system is punishing poverty.

The discrimination problem in this case has two levels. First, certain groups were catalogued as riskier merely because of their origin, race, or other non-behavioral conditions. The criminal sanction is then based on personal characteristics rather than on actions, on intentional conduct; it is a punishment over a state that does not depend on the subject. This idea really matters because the system was (and still is) punishing attributes of the person that are not controllable.

Second, the predictive power of these algorithms was very low: only 20% of the people predicted to commit violent crimes actually went on to do so. The model is not only punishing individuals based on their attributes; those attributes barely correlate with the crimes actually committed.
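
A 20% success rate corresponds to a precision (positive predictive value) of 0.2. The short sketch below, again with hypothetical counts rather than ProPublica's actual data, makes explicit what that means: roughly four out of five people flagged as future violent offenders never went on to commit a violent crime.

```python
# Minimal sketch of what a 20% predictive success rate means.
# The counts are hypothetical placeholders, not ProPublica's data.

flagged_as_violent = 1000               # people predicted to commit violent crimes
actually_committed_violent_crime = 200  # flagged people who actually did

precision = actually_committed_violent_crime / flagged_as_violent
print(f"Precision (positive predictive value): {precision:.0%}")             # 20%
print(f"Flagged people who did not go on to reoffend: {1 - precision:.0%}")  # 80%
```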

The question is how far this kind of example is from the pseudosciences that today seem so irrational, such as phrenology, where some believed an individual's criminality could be read from the shape of their skull, or trials by ordeal, where people were convicted based on their ability to heal their skin after being burned, or on whether they sank in water if their souls were pure.

A couple of months ago I was teaching a class to a group of high school students. After I explained this case, one of the students asked me what happened to the people convicted with the help of these kinds of systems. He was right; I had never thought about it. The question is: what happens when someone is convicted by a judge, or a “sentencer”, who during the decision-making process is under a fatal mistake or terribly wrong? Can we annul sentences because the judge was not in a sound state of mind when making the decision? I am honestly not sure, but it seems like an interesting topic. If you have any paper about it, please share it in the comments.
