Justice by Algorithm: The Limits of AI in Criminal Sentencing

author: Isaac Taylor

The article’s central argument is that although algorithms could bring real benefits to the criminal justice system, regulations governing their use must be established so that the fairness and ethicality of justice decisions are not undermined. Taylor begins by noting that the criminal justice system is heavily rooted in human decision-making, which inevitably leaves the process vulnerable to bias and partiality. For example, one study found that judges were more likely to make favorable parole decisions immediately after lunch. As laughable as that seems, it illustrates how real the problem is. This is where AI enters the picture: one way to reduce human error and bias would be to expand the use of algorithms that can predict, calculate, and formulate these decisions without the outside factors that influence human judgment.

It may seem like a perfect plan, but the solution has drawbacks. One is that reliance on algorithms may weaken societal condemnation and limit the “expressive function” of punishment. Taylor draws on Joel Feinberg’s theory that what differentiates punishment from a mere penalty is the collective negative reaction and attitudes toward the crime; the social meaning of imprisonment is very different from that of being issued a fine. For condemnation to be effective, a figure must be in a position to carry it out, and it must be made public to some degree. When judges do not make sentencing decisions themselves and instead rely on an algorithm’s recommendations, the sentences they hand down can no longer express condemnation: the judges do not engage with the reasoning behind the sentence, and so cannot convey any negative reactive attitudes (like condemnation).
This leads to the author’s central idea of meaningful public control. The conclusion Taylor draws in this section is that to bear moral responsibility for a decision, one needs meaningful human control over it. For instance, judges who simply adopt an algorithm’s recommended sentence lack moral responsibility for the sentencing decisions themselves. Although they have control over whether or not to use the algorithm, by choosing to use the technology they surrender their own judgment, and with it, their moral responsibility for each individual decision. Essentially, these judges may be morally responsible for their choice to employ algorithmic recommendations, but not necessarily for the individual sentences that follow. Furthermore, Taylor argues that meaningful control should rest with public agents, such as judges or public officials who have the authority to act on behalf of the wider community. Finally, he elaborates on public responsibility for these algorithms. He references an externalist view on which representatives act in the community’s name if their behavior reflects public standards. He then presents two reasons why algorithms, in contrast, may not count as actions of the whole community: if an algorithm is created through machine learning, where no human directly codes the decisions, it can be difficult to hold anyone (including the public) accountable for its outcomes; and if private companies develop the algorithms, it is doubtful whether they can function as public agents, and thus whether their actions genuinely represent the community.
Taylor ends this discussion by affirming that while it is possible to involve the private sector in developing sentencing algorithms, maintaining meaningful public control might mean restricting the freedom these companies have in the design process. If that is not enough to preserve public oversight, the task of developing these algorithms might need to be assigned to government agencies.

I enjoyed this article because it examined a distinctive aspect of the ethics of recidivism algorithms, focusing on their broader impact on the community and the challenges of public accountability. It was interesting to see the author’s perspective on this, and I feel I learned a great deal about the topic from reading it.