Algorithms pervade our lives today, from music
recommendations to credit scores and, now, bail and sentencing decisions. But
there is little oversight of or transparency into how they work. Nowhere is
this lack of oversight more stark than in the criminal justice system. Without
proper safeguards, these tools risk eroding the rule of law and diminishing
individual rights, Wired reports.
Currently, courts and corrections departments around the US
use algorithms to determine a defendant’s “risk”, which ranges from the
probability that an individual will commit another crime to the likelihood a
defendant will appear for his or her court date. These algorithmic outputs
inform decisions about bail, sentencing, and parole. Each tool aspires to
improve on the accuracy of human decision-making, allowing for a better
allocation of finite resources.
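To make the contrast that follows concrete, a simple, transparent instrument might look like the sketch below: a hypothetical point-based score in Python whose factors, weights, and thresholds are invented for illustration (they are not drawn from Compas or any real tool), but whose every step can be read and challenged.

```python
# Purely illustrative, hypothetical point-based risk score -- not Compas or any real tool.
# Every factor, weight, and threshold is visible and could be challenged in court.

RISK_WEIGHTS = {
    "prior_convictions": 2,   # points per prior conviction
    "failed_to_appear": 3,    # points per past failure to appear in court
    "age_under_25": 4,        # flat bump if the defendant is under 25
}

def risk_score(prior_convictions: int, failures_to_appear: int, age: int) -> str:
    """Return a risk label from a transparent, additive point total."""
    points = (
        RISK_WEIGHTS["prior_convictions"] * prior_convictions
        + RISK_WEIGHTS["failed_to_appear"] * failures_to_appear
        + (RISK_WEIGHTS["age_under_25"] if age < 25 else 0)
    )
    # Thresholds are arbitrary placeholders, not validated cut points.
    if points >= 10:
        return "high risk"
    if points >= 5:
        return "medium risk"
    return "low risk"

print(risk_score(prior_convictions=3, failures_to_appear=1, age=22))  # -> high risk
```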
Typically, government agencies do not write their own
algorithms; they buy them from private businesses. This often means the
algorithm is proprietary, or “black boxed”: only the owners, and to a
limited degree the purchaser, can see how the software makes decisions.
Currently, there is no federal law that sets standards or requires the
inspection of these tools, the way the FDA does with new drugs.
This lack of transparency has real consequences. In the case
of Wisconsin v. Loomis, defendant Eric Loomis was found guilty for his role in a drive-by shooting.
During intake, Loomis answered a series of questions that were then entered
into Compas, a risk-assessment tool developed by a privately
held company and used by the Wisconsin Department of Corrections. The trial
judge gave Loomis a long sentence partially because of the “high risk” score
the defendant received from this black-box risk-assessment tool. Loomis
challenged his sentence because he was not allowed to assess the algorithm.
Last summer, the state supreme court ruled against Loomis, reasoning that knowledge of the
algorithm’s output was a sufficient level of transparency.
By keeping the algorithm hidden, the Loomis ruling leaves
these tools unchecked. This is a worrisome precedent as risk assessments evolve
from algorithms that are possible to assess, like Compas, to opaque neural
networks. Neural networks, deep-learning models loosely modeled on the
human brain, cannot be transparent by their very nature. Rather than
being explicitly programmed, a neural network forms its own connections as it learns from data.
This process is hidden and always changing, which risks limiting a
judge’s ability to render a fully informed decision and defense counsel’s
ability to zealously defend their clients.
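By contrast, and again using invented numbers rather than any deployed system, the sketch below computes a “risk” probability with a tiny neural network. Unlike the point table earlier, no individual weight corresponds to a factor a lawyer could name or cross-examine; the score emerges from layers of unnamed intermediate values.

```python
# Hypothetical sketch of why a neural-network risk score resists inspection.
# The weights are random stand-ins; in a trained network they would be learned
# from data, and no single weight corresponds to a named, explainable factor.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)  # hidden layer: 3 inputs -> 8 units
W2, b2 = rng.normal(size=8), rng.normal()             # output layer: 8 units -> 1 score

def nn_risk(features: np.ndarray) -> float:
    """Map three numeric inputs to a 'risk' probability between 0 and 1."""
    hidden = np.tanh(features @ W1 + b1)   # eight unnamed intermediate signals
    logit = hidden @ W2 + b2               # a single number with no direct meaning
    return float(1.0 / (1.0 + np.exp(-logit)))

# Same three inputs as the point-based sketch above (priors, failures to appear, age),
# but the path from input to score runs through weights no witness can explain factor by factor.
print(nn_risk(np.array([3.0, 1.0, 22.0])))
```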
Consider a scenario in which the defense attorney calls a
developer of a neural-network-based risk assessment tool to the witness stand
to challenge the “high risk” score that could affect her client’s sentence. On
the stand, the engineer could tell the court how the neural network was
designed, what inputs were entered, and what outputs were created in a specific
case. However, the engineer could not explain the software’s decision-making
process.
With these facts, or lack thereof, how does a judge weigh
the validity of a risk-assessment tool if she cannot understand its
decision-making process? How could an appeals court know if the tool decided
that socioeconomic factors, a constitutionally dubious input, determined a
defendant’s risk to society? Following the reasoning in Loomis, the court
would have no choice but to abdicate a part of its responsibility to a hidden
decision-making process.
Already, basic machine-learning techniques are being used in
the justice system. The not-far-off arrival of AI in our courts creates two
potential paths for the criminal justice and legal communities: either blindly
allow the march of technology to continue, or impose a moratorium on the use
of opaque AI in criminal justice risk assessment until there are processes and
procedures in place that allow for a meaningful examination of these tools.