The Georgetown Law Center on Privacy & Technology issued a report (with its own vanity URL!) on the NYPD’s use of face recognition technology, and it starts with a particularly arresting anecdote:
On April 28, 2017, a suspect was caught on camera reportedly stealing beer from a CVS in New York City. The store surveillance camera that recorded the incident captured the suspect’s face, but it was partially obscured and highly pixelated. When the investigating detectives submitted the photo to the New York Police Department’s (NYPD) facial recognition system, it returned no useful matches.1
Rather than concluding that the suspect could not be identified using face recognition, however, the detectives got creative.
One detective from the Facial Identification Section (FIS), responsible for conducting face recognition searches for the NYPD, noted that the suspect looked like the actor Woody Harrelson, known for his performances in Cheers, Natural Born Killers, True Detective, and other television shows and movies. A Google image search for the actor predictably returned high-quality images, which detectives then submitted to the face recognition algorithm in place of the suspect’s photo. In the resulting list of possible candidates, the detectives identified someone they believed was a match—not to Harrelson but to the suspect whose photo had produced no possible hits.2
This celebrity “match” was sent back to the investigating officers, and someone who was not Woody Harrelson was eventually arrested for petit larceny.
GARBAGE IN, GARBAGE OUT: FACE RECOGNITION ON FLAWED DATA
The report describes a number of incidents that it views as problematic, and they basically fall into two categories: (1) editing or reconstructing photos before submitting them to face recognition systems; and (2) simply uploading composite sketches of suspects to face recognition systems.
The report also describes a few incidents in which individuals were arrested based on very little evidence apart from the results of the face recognition technology, and it makes the claim that:
If it were discovered that a forensic fingerprint expert was graphically replacing missing or blurry portions of a latent print with computer-generated—or manually drawn—lines, or mirroring over a partial print to complete the finger, it would be a scandal.
I’m not sure this is true. Helping a computer system latch onto a possible set of matches seems like an excellent way to narrow a list of suspects. But of course no one should be arrested, let alone convicted, based solely on fabricated fingerprint or facial “evidence.” We need to understand the limits of the technology being used in the investigative process.
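To make those limits concrete, here is a minimal sketch, with entirely made-up data, of how a face recognition search generally works under the hood: the probe photo is reduced to an embedding vector and compared against a gallery of enrolled faces. This is not the NYPD’s system or any vendor’s API; every name and number below is an illustrative assumption. The point it shows is that the system returns a ranked list of “candidates” no matter what you feed it.

```python
# Toy sketch of face recognition as nearest-neighbor search over embeddings.
# Purely illustrative: random vectors stand in for real face embeddings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery of 10,000 enrolled faces, 128-dimensional embeddings.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe: np.ndarray, top_k: int = 5):
    """Return the top_k gallery indices ranked by cosine similarity."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe
    best = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in best]

# A clean probe of an enrolled subject scores near 1.0 against its own record.
clean_probe = gallery[42] + rng.normal(scale=0.05, size=128)
print("clean probe:", search(clean_probe))

# A heavily degraded (or substituted) probe still returns a ranked candidate
# list -- the system always produces "matches" -- but the scores are low and
# the ranking is effectively arbitrary. The output looks the same either way.
degraded_probe = rng.normal(size=128)
print("degraded probe:", search(degraded_probe))
```

The candidate list is only as meaningful as the probe image behind it, which is exactly why substituting a Woody Harrelson headshot, or a composite sketch, produces output that looks authoritative but isn’t evidence of anything.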
As technology becomes more complex, it gets harder to understand how it works, and how it fails. License plate readers are fantastically powerful technology and have helped solve terrible crimes. But the technology stack makes mistakes, and you cannot rely on it alone.
There is no difference in principle between face recognition, genealogy searches, and license plate readers: they are powerful tools, but they are not perfect, and, crucially, they can be far less accurate when used on minority populations. The benefits are remarkable, but using powerful tools requires training. Users need to understand how the technology works and where it can break down. This will always be true.