Like in the U.K., AI is proliferating at all levels of the
U.S. justice system. Risk assessments assist in bail,
sentencing and parole decisions. Police
are dispatched into communities at an algorithm’s insistence.
And facial
recognition is being rolled out by law enforcement at the federal,
state and local levels.
The challenges and harms of these technologies are
well-documented. Facial recognition and risk assessments show racial
bias. Complex
algorithms are not built to “explain” their conclusions, which closes
a part of an otherwise open court process. Even if AI software is
“explainable,” private companies shield their software from scrutiny by
claiming it as a trade secret, even when the software is used by a public agency.
These challenges are compounded in the U.S. because federal
and state lawmakers are using algorithms as a public policy crutch.
At the federal level, the
First Step Act, passed in 2018, aims to release more people from federal
prison with the assistance of a risk assessment tool. In a similar vein,
California passed a major bail reform bill, SB-10, that
is now on the ballot. If ratified, the law would require local agencies to use
risk assessment tools in lieu of cash bail.
In both cases, drafters of these otherwise decent laws made
the bet that an algorithm can stand in for existing processes and policy
choices. At the same time, neither law provides legal standards on how the tool
should be built, what oversight and transparency are needed, or how to assess an
algorithm’s efficacy, including its impact on the legal rights of the accused.
Handing off this type of rule-making to an agency is
standard in legislation; however, there’s evidence that agencies are also being
deferential to the technology. In New York, for example, the Department of
Corrections and Community Supervision has put in place multiple layers of
bureaucracy to limit human override of the agency’s risk assessment
tool.
This legislative and regulatory trend outsources decisions
traditionally made by publicly accountable individuals to private companies.
Algorithms are opinions expressed through math, and when an algorithm is used
for a public purpose, everything from the problem definition to the data used
to build the algorithm is a public policy concern. As the report from London
notes, the value-based decisions that ultimately make up an algorithm are
“usually not between a ‘bad’ and a ‘good’ outcome, but between different values
that are societally held to be of similar importance.”
Take, for example, defining fairness when using a risk
assessment for bail. As University of Pennsylvania criminology professor
Richard Berk wrote with
colleagues in 2017, there are six types of fairness and not all are compatible
with each other. A government could decide that fairness is achieved when a
tool provides the same accuracy for two protected groups, like men and women.
Or fairness can be attained if the error rates are the same among groups, even
though it might mean more men than women are incarcerated.
These two outcomes are not compatible, so a choice needs to
be made. Depending on the community, either could be the “right” decision, but
it’s not a decision to be left up to a software company.
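To make that tension concrete, here is a small Python sketch with invented numbers (not drawn from Berk’s paper): the same hypothetical tool is equally accurate for two groups, yet it wrongly flags people in one group as high risk at roughly twice the rate of the other.

```python
# Toy numbers (invented for illustration, not from Berk et al.) showing that a
# risk tool can be equally accurate for two groups while producing very
# different error rates for them.

def rates(tp, fp, tn, fn):
    """Return overall accuracy and false-positive rate from a confusion matrix."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    false_positive_rate = fp / (fp + tn)  # share of truly low-risk people flagged high-risk
    return accuracy, false_positive_rate

# Hypothetical outcomes of the same tool applied to two groups of 1,000 people each.
group_a = dict(tp=300, fp=200, tn=400, fn=100)
group_b = dict(tp=200, fp=100, tn=500, fn=200)

acc_a, fpr_a = rates(**group_a)
acc_b, fpr_b = rates(**group_b)

print(f"Group A: accuracy = {acc_a:.2f}, false-positive rate = {fpr_a:.2f}")
print(f"Group B: accuracy = {acc_b:.2f}, false-positive rate = {fpr_b:.2f}")
# Both groups: accuracy = 0.70, yet Group A's false-positive rate (0.33) is
# roughly twice Group B's (0.17): equal accuracy does not imply equal error rates.
```

In general, tuning the tool to equalize the false-positive rates would push the accuracies apart, which is exactly the kind of value choice that should not be delegated to a vendor.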
Beyond the legislative and executive branches, unjustifiable
trust in these tools extends to the judiciary. The Supreme Court of Wisconsin
in 2016 decided that the lack of transparency of a risk assessment tool used at
sentencing did not infringe on a defendant’s due process. A California appeals
court in 2015 reached a similar conclusion regarding DNA testing software, which
was used to convict a man of rape and murder.
Collectively, these approaches to legislating and judicial
decision making are regrettable—but fixable.
To be clear, the use of algorithms is not fundamentally the
problem. The problem is the lack of accountability, effectiveness, transparency
and competence surrounding these tools, as defined by the IEEE’s
comprehensive principles on the ethical use of AI in legal systems.