Sara Fathallah writes in Inquest: In prisons and jails across the United States, authorities now use automated systems to transcribe phone calls and visitation videos and to flag words or phrases deemed risky. For decades, correctional facilities recorded and reviewed calls manually, but AI-driven systems now allow authorities to scan millions of minutes of conversations in real time.
In the
2010s, prisons began using a biometric technology called voiceprinting, which
identifies individuals based on the unique characteristics of their voices. It
allows correctional facilities to identify who is speaking on any given call
and to search for other calls featuring the same voice. Texas-based Securus
Technologies, one of the largest providers of prison phone services in the
United States, supplies sophisticated voiceprinting services to hundreds of
correctional agencies.
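At a technical level, speaker-recognition systems of this kind typically reduce a voice recording to a fixed-length numeric "voiceprint" (an embedding) and then compare recordings by vector similarity. The sketch below illustrates only that comparison step; the embedding values, function names, and threshold are invented for illustration and do not describe Securus's actual system.

```python
import math

def cosine_similarity(a, b):
    # Similarity of two voiceprint embeddings: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_speaker(print_a, print_b, threshold=0.85):
    # A system flags two calls as the same speaker when similarity exceeds
    # a tuned threshold; the threshold trades false matches against misses.
    return cosine_similarity(print_a, print_b) >= threshold

# Illustrative 3-dimensional embeddings (real systems use hundreds of dimensions).
call_1 = [0.9, 0.1, 0.4]
call_2 = [0.88, 0.12, 0.41]   # a similar voice
call_3 = [0.1, 0.9, -0.3]     # a different voice

print(same_speaker(call_1, call_2))  # True
print(same_speaker(call_1, call_3))  # False
```

Searching a database for "other calls featuring the same voice" amounts to running this comparison between one enrolled voiceprint and every stored recording, which is why large voiceprint databases make mass matching cheap.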
There is
no scientific consensus on the validity of automatic speaker recognition, and
experts recommend exercising extreme caution when using voice recognition as evidence in
court. Even Securus’s 2016 patent acknowledges that “each given person’s vocal
tract characteristics actually vary in a number of ways depending on time of
day, how much the person has been talking that day and how loud, whether or not
the person has a cold,” and other factors. But prisons continue to collect
voiceprints and build growing databases; at least 200,000 voiceprints have been
stored thus far. Sometimes, prisons pressure incarcerated people to surrender voice samples by threatening those who decline with a complete loss of communication privileges. In other instances, they enroll incarcerated people in voice
recognition programs without their knowledge or consent. New York alone, for
example, had already enrolled 92 percent of its incarcerated population by 2019.
In some
jurisdictions, voiceprinting systems can be used to identify both incarcerated
people and the individuals who speak to them. As representatives from the Electronic Frontier Foundation point
out, such technologies can potentially be used to “profile anyone who has a
voice that crosses into a prison, including all the parents, children, lovers,
and friends of incarcerated people.” Advocates fear that authorities
might flag individuals who are in touch with multiple incarcerated people,
searching for patterns and ways to crack down on prison organizing.
Today, a
growing array of wearable technologies—ankle monitors, bracelets that measure
blood alcohol levels, smartphones themselves—are used to track people at nearly
every stage of the criminal legal process.
A new
generation of compulsory biometric devices, however, pushes far into dystopian
territory, raising questions about how much biological information the carceral
state feels entitled to collect. Some of these tools, already being tested in
U.S. jails and prisons, take the form of rigid wristbands that monitor heart
rate, skin temperature, cortisol levels, and so-called “activity” or stress
indicators. According to the ACLU, they represent “not just a privacy invasion
but an assault on inherent human dignity and autonomy.”
In some
research initiatives, the data gathered by biometric devices is already being
analyzed and operationalized. In Indiana, a team of computer scientists and
developers at Purdue University utilized such data in 2020 to train an AI
algorithm to predict recidivism. According to the team’s press release, the project—funded by the Department of
Justice and conducted in collaboration with county-level corrections and law
enforcement agencies—harvested data such as stress and heart rates via wearable
bracelets and smartphones. The stated goal was to determine which physiological
indicators are linked to an individual’s “risk of returning to their criminal
behavior.”
But as
scholar Brian Jefferson notes in Digitize and Punish, algorithms used for carceral ends are
not “simply mathematical objects” but rather “artifacts of governance designed
to achieve specific objectives.” By focusing on internal, physiological states
rather than structural conditions—such as access to housing, employment, health
care, or social support—these models dismiss decades of work investigating
recidivism and its social and economic causes. Those causes, as AI
researchers Os Keyes and Chelsea
Barabas have noted, are already well understood. What remains
unsettled is why emerging technologies continue to search for answers inside
the body, rather than in the systems that shape people’s lives.
Across
these examples, a shared pattern emerges: the encoding of the body as evidence,
often without the knowledge, consent, or recourse of those involved. This
process strips people of their autonomy, dignity, and right against
self-incrimination. Whether through DNA, eye movements, or physiological
indicators of stress, these systems recast human bodies as sites of suspicion,
deception, threat, or risk. Rather than eliminating human bias, they
redistribute and reinforce it.
“Crime
prediction algorithms,” Ruha Benjamin aptly explains, “should more accurately be called
crime production algorithms.” Biometric tools are likely to expand
further across the criminal legal system as police departments, courts, and
prisons increasingly turn to AI-driven surveillance and predictive technologies. These
tools are being deployed most aggressively in communities that are already
heavily policed and disproportionately criminalized. Preparing for—and
resisting—this expansion requires a broader understanding of biometrics beyond
facial recognition alone, including the many ways bodily data can be collected
and put to use. Fighting to ban facial recognition is not enough; it must be part of
the larger fight to stop carceral biometrics and advance digital
abolition.
