Coello stresses that New York police do not use facial recognition matches as conclusive evidence to arrest someone. “It’s only a lead for detectives,” he says. “We point them in the right direction.” The people who work in facial recognition are all detectives, and they do legwork beyond the photos, conducting detailed searches of a possible suspect’s background, like their address, to aid the investigation. “No one is going to go four towns over to hold up a liquor store,” Coello says.
Even with clear matches, though, the facial recognition team says the person is only a possible suspect. The department says it has misidentified someone via the technology just five times, most recently in March 2012.
“It’s just a tool. It’s not DNA, and it’s not fingerprints,” says Stephen Capasso, the former commanding officer of New York City’s Real Time Crime Center, which includes the facial recognition unit. Still, “I think our usage of facial recognition is going to be increasing.”
Most of us encounter facial recognition in perfectly law-abiding modes, like photo tags on Facebook and in photo apps like Google Photos, where software algorithms parse our pictures and suggest names for the people in them. Facebook launched its photo-tagging tool in late 2010, and it’s become a routine feature for many users. It is arguably the first mass consumer use of facial recognition. It likely won’t be the last.
How We Got Here
Travelers, for instance, might encounter facial recognition algorithms at airports. In Australia at the end of 2013, P. Jonathon Phillips walked through SmartGate, an automated border control system being used in Australia’s eight major international airports to speed customs processing for people from eight countries, including the U.S.
Phillips put his passport in the kiosk and looked at a camera, which automatically matched his face with the image on the passport. He was through SmartGate in five minutes. He knew about the system, but still, “I was amazed when I saw this!” he says. “I’ve been in the facial recognition field for 23 years. We started out with ‘can you recognize?’ algorithms. When you go someplace and it actually happens …”
Phillips is arguably the most influential scientist in facial recognition. He started his work in 1993, launching the FERET (Face Recognition Technology) program for the Army Research Laboratory, the first such program. Back then, they were testing algorithms against a database of about 1,200 faces, mostly college student volunteers from George Mason University. He’s now an electronic engineer at the National Institute of Standards and Technology, and he manages NIST’s facial recognition challenges.
When he started, verifying passport photos presented a difficult problem. Now, many facial recognition algorithms are better than humans when it comes to recognizing a person looking straight ahead under good lighting conditions.
Facial recognition algorithms don’t “see” anything, of course. Faces and their features are broken down into strings of numbers representing individual pixels: their colors and their positions in what will mathematically correspond to a face. Algorithms must first find a face, then find the eyes and other features that human brains take in at once. One early technique came via a linear algebra representation called eigenvectors, which let researchers compare similar objects as long as they were precisely aligned. Think driver’s license and passport photos, or mug shots, all of which feature a face looking straight ahead. Researchers used these techniques to create eigenfaces, which look ghostlike to human eyes but give algorithms a reference representation of a face to compare with a new face.
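The eigenface idea can be sketched in a few lines of code. This is a minimal illustration, not the software any particular vendor uses: it builds "eigenfaces" from tiny synthetic images via principal component analysis, then matches a probe image to a gallery by distance in eigenface space. The image sizes, the choice of five components, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

# Synthetic stand-ins for aligned face photos (real systems would use
# passport-style or mug-shot images, all facing straight ahead).
rng = np.random.default_rng(0)
n_faces, h, w = 20, 8, 8                 # twenty tiny 8x8 "faces"
faces = rng.random((n_faces, h * w))     # each row: one face's pixel values

# 1. Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. The principal components of the centered data are the "eigenfaces".
#    SVD yields them directly as the rows of Vt.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:5]                      # keep the top 5 components

# 3. Represent any face by its projection onto the eigenfaces.
def encode(face):
    return (face - mean_face) @ eigenfaces.T

# 4. Recognition: compare a probe face to the gallery by distance
#    in the low-dimensional eigenface space.
gallery = encode(faces)
probe = faces[7] + rng.normal(0, 0.01, h * w)   # slightly noisy copy of face 7
distances = np.linalg.norm(gallery - encode(probe), axis=1)
best_match = int(np.argmin(distances))
print(best_match)   # the probe should land closest to face 7
```

The alignment requirement in the text shows up here implicitly: the subtraction and projection only make sense if pixel i means the same facial location in every image, which is why early systems worked best on straight-ahead, consistently framed photos.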
It helps the technology that faces are relatively straightforward to analyze. Eyes and mouths sit in consistent places, and face shapes don’t vary much — you’ll never find someone with a face shaped like a square, a star or a hexagon. By the mid-1990s, facial recognition was a hot technology, and several startups formed to commercialize it.
“That was just fun, the first stage of a new technology sprouting up,” says Brian Martin, senior director of research and technology at MorphoTrust, the dominant provider of facial recognition software to government and law enforcement. In 1998, he received a Ph.D. in condensed matter physics from the University of Pittsburgh. A year earlier, he had started working at Visionics, an early facial recognition startup. Its first product? A biometric screen saver that used your face as your computer password. Martin says it had two big selling points: You didn’t have to remember your password, and it would take pictures of anyone who tried to break into your computer. But the low-resolution cameras of the day meant the technology was especially prone to image-quality problems.