Sunday, March 23, 2025

Fingerprint evidence not infallible

Fingerprints have been a police tool for more than a century, and for much of that history they were considered infallible, according to Science News.

Limitations of fingerprint analysis came to light in spectacular fashion in 2004, with the bombing of four commuter trains in Madrid. Spanish police found a blue plastic bag full of detonators and traces of explosives. Forensic experts used a standard technique to raise prints from the bag: fumigating it with vaporized superglue, which adhered to the finger marks, then staining the bag with fluorescent dye. The treatment revealed a blurry fingerprint.

Running that print against the FBI’s fingerprint database highlighted a possible match to Brandon Mayfield, an Oregon lawyer. One FBI expert, then another, then another confirmed Mayfield’s print matched the one from the bag.

Mayfield was arrested. But he hadn’t been anywhere near Madrid at the time of the bombings; he didn’t even hold a current passport. Spanish authorities later arrested someone else, and the FBI apologized to Mayfield and released him.

The case highlights an unfortunate “paradox” resulting from fingerprint databases, in that “the larger the databases get … the larger the probability that you find a spurious match,” says Alicia Carriquiry. She directs the Center for Statistics and Applications in Forensic Evidence, or CSAFE, at Iowa State University.

In fingerprint analyses, the question at hand is whether two prints, one from a crime scene and one from a suspect or a fingerprint database, came from the same digit (SN: 8/26/15). The problem is that prints lifted from a crime scene are often partial, distorted, overlapping or otherwise hard to make out. The expert’s challenge is to identify features called minutiae, such as the place a ridge ends or splits in two, and then decide if they correspond between two prints.

Studies since the Madrid bombing illustrate the potential for mistakes. In a 2011 report, FBI researchers tested 169 experienced print examiners on 744 fingerprint pairs, 520 of which were true matches. Each examiner reviewed a subset of roughly 100 pairs, and 85 percent of examiners missed at least one true match. Examiners can also be inconsistent: In a follow-up study, the researchers brought back 72 of those examiners seven months later and showed them 25 of the same fingerprint pairs they had seen before. The examiners changed their conclusions on about 10 percent of the pairs.

Forensic examiners can also be biased: when they think they see a very rare feature in a fingerprint, they may mentally assign that feature more significance than others, Quigley-McBride says. No one has measured exactly how rare individual features are, but she is part of a CSAFE team quantifying those features in a database of more than 2,000 fingerprints.

Computer software can assist fingerprint experts with a “sanity check,” says forensic scientist Glenn Langenburg, owner of the consulting firm Elite Forensic Services in St. Paul, Minn. One option is a program known rather informally as Xena (yes, for the television warrior princess) developed by Langenburg’s former colleagues at the University of Lausanne in Switzerland.

Xena’s goal is to calculate a likelihood ratio, a number that compares the probability of the fingerprint looking as it does if it came from the suspect (the numerator) with the probability of it looking as it does if it came from some random, unidentified individual (the denominator). The same type of statistic is used to support DNA evidence.
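Written compactly (my notation, not Xena’s), with E standing for the observed agreement between the crime-scene mark and the suspect’s print, the ratio is:

```latex
\mathrm{LR} \;=\; \frac{P(E \mid \text{the suspect left the mark})}{P(E \mid \text{a random person left the mark})}
```

A ratio far above 1 favors the suspect as the source; a ratio near or below 1 does not.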

To compute the numerator probability, the program starts with the suspect’s pristine print and simulates various ways it might be distorted, creating 700 possible “pseudomarks.” Then Xena asks, if the suspect is the person behind the print from the crime scene, what’s the probability any of those 700 could be a good match?

To calculate the denominator probability, the program compares the crime scene print to 1 million fingerprints from random people and asks, what are the chances that this crime scene print would be a good match for any of these?

If the likelihood ratio is high, that suggests the similarities between the two prints are more likely if the suspect is indeed the source of the crime scene print than if not. If it’s low, then the statistics suggest it’s quite possible the print didn’t come from the suspect. Xena wasn’t available at the time of the Mayfield case, but when researchers ran those prints later, it returned a very low score for Mayfield, Langenburg says.
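The two-step logic described above can be sketched in a few lines of Python. This is a toy illustration only, not Xena’s actual algorithm: the prints, the similarity score, and the distortion model are all invented for the example, and real systems work on actual minutia configurations.

```python
import random

random.seed(42)

# Toy model: a print is a set of minutia IDs; similarity is the fraction
# of the mark's minutiae that also appear in a reference print.
def similarity(mark, reference):
    return len(mark & reference) / len(mark)

suspect_print = set(range(20))            # the suspect's pristine print
crime_mark = set(range(12)) | {998, 999}  # partial mark plus two noise points

observed = similarity(crime_mark, suspect_print)

def pseudomark(reference, drop=0.4):
    # Simulate distortion: each minutia is lost with probability `drop`,
    # and two spurious noise points (IDs 998, 999) are picked up.
    kept = {m for m in reference if random.random() > drop}
    return kept | {998, 999}

# Numerator: if the suspect left the mark, how often would a distorted
# "pseudomark" of the suspect's print score at least as well as the
# crime-scene mark did? (Xena simulates 700 such pseudomarks.)
hits = sum(
    similarity(pseudomark(suspect_print), suspect_print) >= observed
    for _ in range(700)
)
numerator = (hits + 1) / (700 + 1)        # +1 smoothing avoids zeros

# Denominator: how often would the crime-scene mark score that well
# against prints from random, unrelated people?
hits = sum(
    similarity(crime_mark, set(random.sample(range(200), 20))) >= observed
    for _ in range(10_000)
)
denominator = (hits + 1) / (10_000 + 1)

likelihood_ratio = numerator / denominator
print(f"likelihood ratio is roughly {likelihood_ratio:.0f}")
```

In this contrived setup, distorted versions of the suspect’s own print routinely score as well as the crime-scene mark while random prints essentially never do, so the ratio comes out well above 1, the pattern Langenburg describes for a true match.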

Another option, called FRStat, was developed by the U.S. Army Criminal Investigation Laboratory. It crunches the numbers a bit differently to calculate the degree of similarity between fingerprints after an expert has marked five to 15 minutiae.

While U.S. Army courts have admitted FRStat numbers, and some Swiss agencies have adopted Xena, few fingerprint examiners in the United States have taken up either. But Carriquiry thinks U.S. civilian courts will begin to use FRStat soon.

