
What Happens When Your Face Is Your Password?

Unfortunately, the answer doesn’t involve the Faceless Men.

Security breaches are weekly news at this point, sometimes barely covered outside of the tech press. We’ve all gotten used to regularly changing our passwords, and using different passwords for different accounts. But that’s the advantage of passwords: You can change them, quickly and easily. What are you supposed to do if suddenly that password isn’t just a password? If that thing that’s now out there on the darknet — or god knows where else — isn’t merely a string of random letters and numbers, but rather, something much, much more personal. What happens when that password is your face?

In the wake of Apple’s announcement yesterday that its new iPhone X could be unlocked with face-recognition technology called Face ID — rather than the now-ubiquitous fingerprint-scanning Touch ID — this question is suddenly, and more than a little creepily, real. We’re facing (sorry) a major shift in security technology as we know it, one where we must paradoxically entrust the majority of our most private data to one of our most public attributes.

The problem isn’t, necessarily, Apple itself. Face ID is — the company assures us, fairly credibly — incredibly secure. Even more so than Touch ID. The problem is that countless apps — which hold everything from your personal photos to your banking information — will soon begin adopting facial verification as their primary mode of authentication. The rest of the digital world has a much less commendable track record on security than Apple, and not every company that’s looking to add this hot new feature will have the capability (or, frankly, the drive) to spend the extra money and time necessary to design a safe, secure, privacy-protecting system. Terrifyingly, this dynamic is already unfolding: Samsung, the world’s largest phone manufacturer by volume, has introduced a shockingly insecure face-recognition system on its new phones, seemingly only to compete with Apple.

But it can get much worse than badly implemented software. Imagine a breach at a company that used centralized facial authentication — meaning, a form of facial verification where your identity info is stored in a central location, usually one of the company’s servers. If, say, Equifax had used biometrics as a means for its customers to access their credit information — which seems like something that could easily become a rather common practice in three to five years — and that information was hacked alongside the rest of its supposedly “secure” data earlier this month, up to 143 million faces could be out there, ripe for the taking. Although what could be done with this data depends on how it was collected and stored, in most scenarios hackers could use it to re-create your likeness and gain access to your other facial-authentication-protected accounts.
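To make concrete what “centralized” means here, consider a minimal sketch (in Python) of a server that keeps a face embedding, a numeric summary of your face, on file for every user and compares against it at login. This is hypothetical, not Equifax’s or anyone else’s actual system; the user name and the 128-dimensional embedding are stand-ins. The point is that, unlike a hashed password, the stored value has to stay comparable, so a database dump hands an attacker the biometric itself.

```python
# Hypothetical sketch of *centralized* facial authentication -- not any
# real company's system. The server keeps a face embedding (a numeric
# "summary" of your face) for every user, which is exactly what a breach
# of this database would expose.
import numpy as np

# Server-side database: user -> face embedding enrolled at signup.
enrolled = {
    "alice": np.random.rand(128),  # stand-in for a real 128-d face embedding
}

def authenticate(user: str, submitted: np.ndarray,
                 threshold: float = 0.6) -> bool:
    """Accept the login if the submitted embedding is close enough to the
    one on file. Unlike a password, the stored value can't be a one-way
    hash -- it has to stay comparable -- so a dump of `enrolled` hands an
    attacker a reusable model of every user's face."""
    stored = enrolled.get(user)
    if stored is None:
        return False
    distance = np.linalg.norm(stored - submitted)
    return distance < threshold
```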

This touches upon the crux of the issue with facial verification: You can change your password, update your phone number, or even apply for a new Social Security number — but you can’t get a new face.

How the hell do you secure your face? 

People have written novels’ worth of words on the many ways to maintain password security, but how on earth are you supposed to protect your face? We upload pictures of our faces everywhere. There are countless apps dedicated to satiating our thirst for selfies — not to mention all the apps not dedicated to them to which we upload a stream of selfies anyway — and that makes it more than a bit difficult to talk about facial security in any sort of serious manner. (We also, you know, walk around with these things attached to our heads, which is not exactly conducive to the whole privacy thing.)

This actually poses a fairly serious issue for makers of facial-verification technology, as many versions can be duped with photos of the owner that are easy to find online. If you’re looking to break into a Samsung Galaxy Note 8, for example, it’s as easy as holding up a photo of the owner — but even for advanced models like Apple’s iPhone X, our digital presence creates a number of security risks.

I asked Premkumar Natarajan, the executive director of USC’s Information Sciences Institute, to walk me through some hypothetical hacks:

“Let’s say I steal somebody’s phone and I sort of know who the person is,” Natarajan began. “If I go to Facebook and get a few of their photos, I can, in principle, create a pretty high-fidelity 3-D model of that person, because [modern] graphics technology has come to a point where we can print really high-quality 3-D faces — or even bodies of people — even when only given a few different exposures. And now, you can also use reasonably off-the-shelf technologies to prepare a 3-D rendering and print out a 3-D mask.”

This sort of endeavor would easily pass the vast majority of liveness tests — the checks software performs to make sure the subject is a live, real person — and grant the spoofer easy access to the victim’s phone. Apple, for its part, insists that Face ID was trained for thousands of hours on lifelike 3-D-printed faces to ensure that only a real, live human face will work — but it’s hard to say for sure until third-party professional hackers and security experts take a crack at breaking the system.
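To see why a lifelike mask is so effective, it helps to look at what the simpler liveness tests actually measure. The sketch below is purely illustrative, not Apple’s or any vendor’s real checks: one test looks for motion between camera frames, the other for genuine depth. A flat printed photo fails the depth test, but a rigid 3-D mask held in a moving hand passes both.

```python
# Two naive liveness checks, for illustration only -- not Apple's or any
# vendor's actual tests. A printed photo fails the depth check; a rigid
# 3-D-printed mask held by a person can pass both.
import numpy as np

def moved_between_frames(frames: list[np.ndarray],
                         threshold: float = 5.0) -> bool:
    """'Alive' if consecutive camera frames differ enough -- i.e., the
    subject (or the spoof in the attacker's hand) visibly moved."""
    return any(
        np.mean(np.abs(curr.astype(float) - prev.astype(float))) > threshold
        for prev, curr in zip(frames, frames[1:])
    )

def has_real_depth(depth_map: np.ndarray, min_variance: float = 10.0) -> bool:
    """'Real' if the depth sensor sees actual 3-D relief rather than a
    flat photo or screen. A 3-D-printed mask has genuine relief."""
    return float(np.var(depth_map)) > min_variance

def naive_liveness_test(frames: list[np.ndarray],
                        depth_map: np.ndarray) -> bool:
    # Both checks must pass -- and a good mask passes both.
    return moved_between_frames(frames) and has_real_depth(depth_map)
```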

Facial-verification software works fine … if you’re a white male.

I’ll admit, the first time I heard this argument, even I was a bit skeptical. It didn’t seem possible that a face-mapping algorithm could be biased against a particular race or gender — I mean, that’s the beauty of mathematical models, right? But as I spoke to a number of experts from across the cybersecurity field, the reality of this deficiency became painfully clear.

“We know that all of these algorithms do have racial bias,” Natarajan explained to me. “And the racial bias comes because of (at least) two sources: One is that your training data [for the facial-verification technology] may not reflect the diversity of users in an appropriate way — because collecting diverse training data is often hard, and any slight bias in the training data will affect you.”

“The second piece has to do with skin tone,” he continued. “I am darker-skinned than a Caucasian person, so my skin reflects light less than an object that is ‘brighter’ than me. So obviously there will be issues.”

In other words, if you’re different from the standard pool of data used to test the system — which, let’s face it, is likely to be rather white-male-oriented if it’s coming out of Silicon Valley — then you’re likely going to have a harder time using facial verification. And if you think of this system not as we know it today, but as we’re likely to interact with it three to five years from now, it graduates from a minor nuisance when unlocking your phone to the beginning of what could easily become an extremely problematic form of systemic discrimination.
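The bias Natarajan describes isn’t mystical; it shows up directly in a system’s error rates, and auditing for it can be as simple as computing the false-rejection rate separately for each demographic group on a labeled test set. A minimal sketch, using synthetic stand-in data:

```python
# How skewed training data shows up in practice: per-group false-rejection
# rates. The data below is synthetic, for illustration only; a real audit
# would use a labeled benchmark of genuine login attempts.
from collections import defaultdict

# (demographic group, was this genuine user's attempt accepted?)
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
]

totals, rejections = defaultdict(int), defaultdict(int)
for group, accepted in results:
    totals[group] += 1
    if not accepted:
        rejections[group] += 1

for group, n in totals.items():
    print(f"{group}: false-rejection rate {rejections[group] / n:.0%}")
# An unbiased system shows roughly equal rates across groups; a gap here
# is exactly the effect unrepresentative training data produces.
```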

There are basically no laws whatsoever surrounding this. 

I reached out to Neema Singh Guliani of the ACLU to learn a little bit more about the legal parameters currently in place surrounding the use of facial-recognition and facial-verification software, and was honestly shocked to hear that there are essentially none.

“In terms of law-enforcement access to face recognition [technologies],” Guliani told me, “one of the major concerns has generally been that there really isn’t a good legal framework around when and how the government can use it.”

Guliani likened the severity of the situation to the crisis our country experienced back in the ’80s over unrestricted wiretapping by both government officials and private citizens. But while that crisis spurred the creation of the Electronic Communications Privacy Act, no such legislation exists (or has even been proposed) to lay out protections for the sharing of our biometric identifiers.

This is an issue only further compounded by the stark reality that the most comprehensive facial-recognition databases in our country are maintained by government agencies and state law enforcement — two groups not exactly known for their top-notch cybersecurity practices. What happens if an incident like the 2015 breach of the U.S. Office of Personnel Management’s databases — wherein the fingerprints of over 5.6 million individuals were obtained by an unknown source — occurs again, but at one of the countless offices that has access to American face maps? How on earth are consumers supposed to ever put faith in facial-verification software again when a single breach means you’re compromised for the rest of your life?

These kinds of concerns are likely why it has taken biometrics this long to become mainstream. “Centralized biometrics never caught on until now because big companies said, ‘I don’t want to store everybody’s biometrics. I mean, if we get hacked, that’s it. We can’t tell all these people to make a new fingerprint,’” George Avetisov, CEO of HYPR, told me. “So, the shift that we’re hopefully going to see now is that a lot more companies are going to start to adopt decentralized security — the way Apple is doing, the way Samsung is doing — and I think from the companies that don’t … I don’t want to be a fearmonger, but I think [from them] we’re going to see a lot more Equifax-type scenarios.”
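To make the distinction Avetisov is drawing concrete, here is a rough sketch of the decentralized model, loosely in the spirit of FIDO-style authentication (my framing, not HYPR’s or Apple’s actual protocol): the biometric match happens on the device, and the server only ever stores a public key, so a server breach exposes no faces at all.

```python
# Sketch of the decentralized model Avetisov describes -- illustrative,
# loosely FIDO-style, not HYPR's or Apple's actual protocol. The face
# match happens on the device; the server stores only a public key, so
# a breach of the server exposes no biometric data.
from typing import Optional
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- On the device, at enrollment ---
device_key = Ed25519PrivateKey.generate()   # lives in secure hardware
server_keeps = device_key.public_key()      # all the server ever stores

def local_face_match() -> bool:
    """Stand-in for the device's own on-device face matcher."""
    return True

# --- On the device, at login ---
def device_login(challenge: bytes) -> Optional[bytes]:
    if not local_face_match():              # biometric never leaves the phone
        return None
    return device_key.sign(challenge)       # only a signature is sent

# --- On the server ---
challenge = b"random-nonce-from-server"
signature = device_login(challenge)
assert signature is not None, "face match failed on device"
server_keeps.verify(signature, challenge)   # raises InvalidSignature if forged
print("Authenticated; a database dump here yields only public keys.")
```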
