Curse and Blessing of Biometric Authentication

Andrea Beskers
Published on Apr 28, 2022

Everybody wants the benefits of biometric authentication, but nobody wants to pay the potential privacy price. What are the risks for individuals and for society at large? To address these challenges, Mike Kiser, Director of Strategy and Standards at SailPoint, will give the keynote speech "New Face, Who Dis? Privacy vs. Authentication in a World of Surveillance" on Tuesday, May 10, at the European Identity and Cloud Conference 2022.

To give you a sneak preview of what to expect, we asked Mike some questions about his presentation.

Your talk describes a tension in how facial recognition is used: authentication versus privacy. Can you give a high-level description of this split?

Sure. Facial recognition technology is like a lot of technology: it's neither moral nor immoral in and of itself, right? It depends on how it's used. Really, there are two sides to this kind of technology, which often has machine learning underpinning it. On one side, it can provide a means of authentication, preferably on a locked-down, secure device, maybe in a secure enclave. In that case, it gives a one-to-one match of a person to an identity. That's a pretty great form of biometric authentication, and it opens up new opportunities and new markets; I'll talk about that in my session.

The flip side is one-to-N matching, and by that I mean more what people think of when they think of facial recognition: picking an individual out of a crowd, or identifying you as you walk through a shopping center, an airport, or some other surveillance-type scenario. Especially given some of the flaws underlying the technology, these uses can be problematic. So, as with a lot of technologies, there is a tension between making things easier for us to use and more secure, and having a side that, if it's misused or abused, can be dangerous, particularly for certain populations.
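To make that one-to-one versus one-to-N distinction concrete, here is a minimal sketch of the two matching modes as they are typically implemented over face embeddings. None of this comes from the interview: the vector size, names, data, and threshold are all illustrative.

```python
import numpy as np

# Stand-in for embeddings produced by a face-recognition model
# (e.g., a FaceNet- or ArcFace-style network); random data for illustration.
rng = np.random.default_rng(seed=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# --- 1:1 verification (authentication) ---
# Compare a live capture against the single template enrolled on the device.
enrolled_template = rng.normal(size=128)
live_capture = enrolled_template + rng.normal(scale=0.1, size=128)  # same face, capture noise

THRESHOLD = 0.8  # illustrative; real systems tune this against false accept/reject rates
print("1:1 verified:", cosine_similarity(live_capture, enrolled_template) > THRESHOLD)

# --- 1:N identification (surveillance-style matching) ---
# Search the same capture against a gallery of N enrolled identities.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
gallery["enrolled_user"] = enrolled_template

best_match = max(gallery, key=lambda name: cosine_similarity(live_capture, gallery[name]))
print("1:N best match:", best_match)
```

The privacy asymmetry falls out of the structure: 1:1 verification needs only one template, which can stay in a secure enclave on the user's device, while 1:N identification requires a central gallery of many people's biometrics and can be run against anyone who walks past a camera.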

Various providers have halted or abandoned facial recognition research and products over the past few years. Does that mean that research into this technology is too problematic from an ethical perspective? 

Like I said, technology is not necessarily moral or immoral; it's how we use it. And with a lot of these burgeoning, rapidly developing ideas, the technology is outstripping our thinking about how it could be used. So part of this is, you know, needing an IRB (institutional review board) to provide oversight for some of these projects. Another interesting split is solution development versus toolset development. What I mean is: if you are a vendor developing a solution for a particular use case, like authentication, you can put boundaries around it to make sure it's only doing what you intended it to do. If, however, you're providing the underlying technology, that's a different thing entirely, because you're not really sure how that componentry will be embedded in other systems. So there are interesting usage and deployment ethics involved here as well. I don't know that it's too problematic in and of itself; like just about anything, from facial recognition all the way back to, say, fire, it can be used for a good purpose or an evil purpose.

Do you believe that the public at large – particularly the younger generation – is actually concerned about their biometric privacy? 

Yes, I think a lot of people are concerned about their biometric privacy, including younger generations, who are much more accustomed to having their faces and their biometric information out there. I've done some research, and I've developed a mobile app to demonstrate some of the ways to potentially get around this using adversarial techniques. No one says, "I don't want that." They all say, "Yes, I expect to have a right to privacy," almost independent of their actions. So everyone wants it. But the question is: what's the cost, and how are you going to define harm, for individuals and for society at large? Those are open-ended, huge questions that are worth discussing.
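As a rough illustration of what "adversarial" means here (this sketch is not Mike's app; the toy embedding network, the single FGSM-style gradient step, and the epsilon budget are all assumptions), the idea is to perturb an image slightly so that its face embedding drifts away from the original, making automated matching less reliable:

```python
import torch
import torch.nn as nn

# Toy stand-in for a face-embedding network; a real demo would load a
# trained model such as an ArcFace-style CNN.
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
embedder.eval()

face = torch.rand(1, 3, 64, 64)               # the original face image
original_embedding = embedder(face).detach()  # what a matcher would store

# FGSM-style step: move pixels in the direction that lowers the cosine
# similarity between the perturbed image's embedding and the original.
perturbed = face.clone().requires_grad_(True)
similarity = torch.cosine_similarity(embedder(perturbed), original_embedding)
similarity.backward()

epsilon = 4 / 255  # perturbation budget, kept small so the change is subtle to humans
adversarial_face = (perturbed - epsilon * perturbed.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_similarity = torch.cosine_similarity(
        embedder(adversarial_face), original_embedding
    )
print(f"similarity before: {similarity.item():.3f}, after: {new_similarity.item():.3f}")
```

Against this toy linear model, one small step already lowers the similarity; against a real, robust matcher the effect is far less dependable, which is the "don't rely on it alone" caveat that comes up below.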

Is it "too late" for securing biometrics – particularly in a generation of "selfies" (aren't most people's photos and faces already out there?) 

Yes and no. Oddly, in my research I discovered that if you really want to hide yourself, the best way to do that is to put yourself in stock photography, because then you're everywhere and nowhere at the same time; people have a hard time figuring out which is the real you. That said, there are lots of photos out there with people tagged in them, so there may be problems in hiding your facial biometric. Still, there are use cases that this kind of adversarial research can address. Think about it: if you can identify someone and you can identify their location, that's a dangerous combination. Think of stalkerware, or of people pursuing other people. It's not about foolproof prevention. Even if you had no photos out there at all, you shouldn't rely on these techniques to always protect your biometrics; it's always a defense-in-depth approach. The idea is more to keep the discussion going, show a path forward, and advance the good uses of this technology through a community approach.

What is the one thing you would like people to take away from EIC?

The one thing I would like people to take away from EIC is that there's always something more to learn: always some boundary to push, always some new tool or new approach that we haven't thought of. And the way to do that is in community, because we learn from each other. That gives us boundary checks on new ideas and new technology, to make sure we haven't forgotten some of the boundary cases or potential side effects that we wouldn't have caught acting on our own. So learn new things together.
