We're gonna start tonight, as most talks about facial recognition do, with naval warfare. This is how naval warfare used to be fought: two fleets lined up right across from each other, firing at point-blank range until one was destroyed or one retreated. All this changed,
however, in 1805 at the Battle of Trafalgar, when Admiral Lord Nelson chose instead to send his fleet directly into the enemy lines. Does anybody know what happens when you send a large fleet directly into an enemy fleet? You get a hot mess, right? Friend and foe mixed together, hard to know what's going on, smoke and the fog of war. Nelson knew this was going to happen, and so he introduced an innovation before the battle took place. Legend says that he painted all his ships in this color scheme, now known as the Nelson Chequer. And really, what this was was innovative authentication, right?
Because in a single glance it worked, and it worked because it was accurate and it was easy to use. It was accurate because, at least originally, only his ships were painted this color. And it was easy to use because, like I said, in a single glance every ship, every crew, every sailor down to a man could look up, know if it was foe or friend, and take immediate action. It worked so well that Nelson raised one flag hoist, the only instruction he gave in the entire battle. He said, look, you're all empowered; everyone needs to do their duty.
Now, if you know history, that didn't last long, right? It soon faced issues in both areas, accuracy and ease of use. Accuracy suffered because other navies weren't dumb; they started painting their ships similarly to disguise them. And it was also almost too easy to use, right? Because it wasn't just the British sailors that could use it. As soon as five or ten minutes passed, every sailor on the enemy ships could do the same thing. Facial recognition is in a similar position. It's an innovation that, when it works well, is fantastic: when it's accurate, right?
And when it's easy to use. Most of the time this is in a one-to-one use case. Think of border controls that you sail through if you're a European citizen. Sorry, some of you have done Brexit.
I know, fighting words, fighting words. Or Face ID, when you look at your phone, right? It's something you're going to do anyway; by looking at your device you get easy access, instant authentication. Now, facial recognition has also faced some challenges in both of these areas over the past few years.
And it's come about with the rise of an additional use case: not that one-to-one relationship, but more of a one-to-N relationship. In other words, instead of taking one photo that's already known and comparing it to a photo that's submitted, either from a camera or at a border control, that type of thing, now one image is submitted and then it's checked against a whole database of others. And instead of just returning "yes, this is this person," you get back a response of the top 5, top 50, top 100 potential matches with certain confidence percentages, right?
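To make that one-to-N idea concrete, here's a minimal sketch, assuming faces have already been reduced to fixed-length embedding vectors by some recognition model; the names and data below are made-up placeholders, not any real system:

```python
# Minimal sketch of one-to-N matching over face embeddings (toy data; a real
# system would use embeddings produced by a face-recognition model).
import numpy as np

def top_k_matches(probe, gallery, names, k=5):
    """Return the k most similar gallery identities with similarity scores."""
    # Cosine similarity between the probe and every gallery embedding.
    probe_n = probe / np.linalg.norm(probe)
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery_n @ probe_n
    order = np.argsort(sims)[::-1][:k]
    return [(names[i], float(sims[i])) for i in order]

# Toy usage: a noisy "new photo" of identity 42 comes back as a ranked list
# of candidates with scores, not a single yes/no answer.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))             # 1,000 known identities
names = [f"id_{i}" for i in range(1000)]
probe = gallery[42] + 0.05 * rng.normal(size=512)  # a new photo of id_42
print(top_k_matches(probe, gallery, names))        # [('id_42', 0.99...), ...]
```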
And so this use case runs into trouble because of difficulties in those two areas. First, accuracy. It was realized to be suffering in 2018, when a researcher at MIT named Joy Buolamwini stared into a facial recognition system and realized it couldn't detect her face at all. She is a dark-skinned woman, and it wasn't until she put a white mask, like what you see here, directly in front of her face that it finally clicked in and recognized her. She and her team did research and realized that a lot of these systems were trained on data sets largely consisting of white males.
So as a result, if you are a white male, facial recognition can work pretty well for you. She discovered that it suffered with dark-skinned people, and especially women. She published a report in 2018, and to their credit, organizations responded, right? They went and they fixed their algorithms.
They did research. They improved how the machine learning worked. And that was great. But these systems had already been sold to law enforcement. The result?
Again, multiple false arrests: three Black men in the States by 2020 alone. Because what was happening is law enforcement would get these results back, take the top hit, say "this must be our guy," and go and arrest him, right?
Now, in 2020, there was a large discussion about this. Several reports came out, and several companies put sales of this technology to law enforcement on hiatus. IBM left the space entirely. And in 2021, Facebook realized that it had a treasure trove of photos with already-identified faces. You remember that feature in Facebook where you click on the photo of all your friends and say, this is Larry, this is Mary Beth, this is whoever? No permission, no asking your friend.
Now there's a whole trove of pictures out there waiting to be used, pre-tagged, and that became important.
And Facebook realized this, because it wasn't just bias, right? Just like the Nelson Chequer, that innovative scheme, was really easy to use and to misuse, the same was true with facial recognition technology. You started having companies like Clearview AI go out to social media sites (Facebook, Instagram, also Venmo and LinkedIn), take publicly available photos from them, and build a database to do facial recognition against. The latest estimate is about 20 billion images in their database.
And they're still selling this to law enforcement in the States and to some international governments. And in the US, the government is starting to catch up slightly, right? Several cities have banned the use of some of this facial recognition technology. Several countries, such as Canada, Italy, and recently Australia, have said, you're violating the rights of our citizens by having their images in your database, and demanded that Clearview
and some of these other companies retract those images and remove them from their databases. So we've seen that enterprises have started to react.
The government is starting to take notice and react, but it's a long process, right? And to illustrate how complicated it is right now: you know, the European Union has passed the Artificial Intelligence Act, which splits live-camera face recognition from after-the-fact recognition. In other words, if there's a live camera in a public space, it shouldn't be allowed to recognize you in real time; but if you're suspected of a crime, investigators can go back and use some of that data to check you.
But as part of those discussions, they were discussing a ban on facial recognition technology entirely. At the same time, Prüm II is being discussed, which is the agreement between law enforcement agencies in Europe, and they're discussing sharing photographic data, facial data, between countries through a broker.
So if you are a suspect in Spain, the idea is that your face can be used to check who you are against databases throughout Europe. So it's both: people realize there's a fundamental right at stake, and yet they're moving ahead with some of these use cases, right?
So we've seen that facial recognition technology is not necessarily good or bad, right? Just like the Nelson Chequer,
it's an innovation that's complicated: when it's good, it's great, and when it's dangerous, it's obviously negative. And we've seen that enterprise is playing a role, and has a role to play, in securing its technology and any databases it has full of facial data. We've seen that government is coming into play. But you know, most of us in this room have some control over those policy decisions, but not a ton, right? And so there's a third group that can be involved in advocacy, in enforcing the privacy of their own faces, and that's individuals.
And this is where something called adversarial research comes in. Now, just like the Joy Buolamwini story I told you, historically it's been focused on dealing with bias: making the technology more accurate, making sure it works equally and fairly for everyone, and addressing accuracy issues. The recent research, though, is moving into dealing with that "too easy to use" problem, the misuse of facial data.
And it's an attempt to give individuals agency. If you upload your photo to social media, how do you protect it against being used by people you never intended, to surveil you, or by law enforcement to mark you as a suspect? In short, how can we give individuals agency? I'm gonna take you through about four brief papers from the last several years. The first one is particularly interesting; it's not a formal paper. It came about when I was doing my own research into privacy via obfuscation, which I talked about here at EIC.
And I made a fake identity for my research.
She's a graphic designer in Austin. Her name was Janet. She likes jogging. She was all over social media. I had to find a photo for my fake identity, so I did what anyone would do: I went to open-source stock photography and took a picture of the woman I called Janet, right? My friend called me up several days later and said, you don't want to use Janet.
I said, what do you mean? He said, have you done an image search on her?
I said no. And so I went and checked. I didn't find a thousand hits where she'd been used on webpages. I didn't find 10,000. I found 25 billion instances of her picture on the web. And this is because any bot, anyone making a fake ID, goes out and does what I did, right? Goes out to stock photography, steals pictures, crops them, and uses them. So the moral of this first story is that if you really want to protect your facial privacy, put yourself in stock photography: you will be everywhere. She was a lobbyist in Washington, DC. She was a novelist in the south of France.
She was a sex worker in Vegas. She was everywhere and nowhere. I don't really recommend this, but it's an interesting idea: open-source yourself.
Now, on to more practical approaches, right? And I'm going to give you a grid so we can work through these really quickly. My opinion is that all of these techniques need to be five things. They need to be mobile-based, because who doesn't primarily take pictures with their phone camera and upload them directly? They have to be fast, for the same reason: I'm not gonna wait a day for things to be processed before uploading to Instagram. They need to be dynamic; in other words, every picture needs to be uniquely protected. If I use the same mask every time, it's not gonna work out for me.
They actually need to protect against the big three facial recognition systems. And then they need to be transparent; in other words, I can't put a horrifically disfigured picture of myself online, because that will scare my mother, right? There's some use you want to retain.
The first one came about around 2014. This was very simple: just geometrically modifying pictures, ideally single pixels, sometimes in a grid pattern. Super fast. It worked with something called the fast gradient sign method, and it actually worked.
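To give you a feel for it, here's a hedged sketch of the fast gradient sign method in PyTorch; the model and label here are placeholders, not any particular vendor's system:

```python
# Sketch of the Fast Gradient Sign Method (FGSM). "model" stands in for a
# face classifier and "label" for the identity it currently predicts.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge every pixel by +/- epsilon in the direction that increases loss."""
    # image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor of class ids.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One cheap step along the sign of the gradient: tiny, fast pixel changes
    # that, originally, were enough to flip the classification.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```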
But facial recognition systems caught up to it quickly. So it's just fast, right? It wasn't on mobile, it wasn't dynamic, it wasn't really protective, and it wasn't transparent, because it introduces visible artifacts. The second technique came a bit later: the generative adversarial network. What this does is take inputs and use some machine learning, a discriminator and a generator (I won't go into the math). Basically, it gives you a lifelike image, but a lifelike image of no one. The website thispersondoesnotexist.com is where every picture in this presentation that's not of me or Janet came from. They're all fake people. They don't exist.
It creates the image by retaining some of the high-level attributes and generating all the low-level ones. This might work out, but it's really expensive to do if you want it to be truly photorealistic. And it's also basically just an avatar, right? An icon for you. And then you would still be trackable. So I'm gonna call this mobile, because it is on mobile, and fast enough to be usable at icon level, but it's not dynamic, it's not truly protective, and it's definitely not transparent, because it's not you.
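If you're curious, the core training idea fits in a few lines. This toy sketch is purely illustrative; the sizes are tiny and nowhere near the StyleGAN-class models behind that website:

```python
# Toy GAN sketch: a generator learns to make images a discriminator can't
# tell apart from real ones. Sizes are illustrative (28x28 images as vectors).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):  # real_images: (batch, 784) scaled to [-1, 1]
    batch = real_images.size(0)
    fake = G(torch.randn(batch, 64))
    # Discriminator: score real images toward 1, generated ones toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator into scoring its fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```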
The next one brings us up to 2020. This is Camera Adversaria, an Android app that adds procedural noise to mess with facial recognition. The one thing you need to know: facial recognition first detects faces, right? That's a face. And then it classifies faces: whose face is it? Makes sense.
Two steps: detect faces, classify faces. This approach messes with the detection. So you can see here that the screwdriver is misidentified as a quill. That's a great example, but it's easier to see when you use someone like me, because you know what I look like.
You can see that originally my suit was identified as a suit, right? But as you increase the procedural noise, now I'm wearing chain mail, or a field of corn, or my personal favorite, a leatherback turtle, right? So it works, at least on that level. It's mobile, and therefore it's fast enough, because it's an Android app. It's dynamic, because procedural noise (the same thing used for terrain generation in video games) is unique per image. But it's not truly as protective as it could be, especially if it's not generated correctly, and it's not transparent, because to really be effective you have to jack it up high enough that it visibly affects the picture.
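Here's a rough sketch of the idea, with simple value noise standing in for the Perlin noise the paper actually uses; the strengths and grid sizes are made up for illustration:

```python
# Sketch of procedural-noise perturbation in the spirit of Camera Adversaria:
# overlay smooth, structured noise so face *detection* degrades while the
# image still reads as a photo. Value noise stands in for Perlin noise here.
import numpy as np
from scipy.ndimage import zoom

def value_noise(h, w, grid=8, seed=None):
    """Smoothly upsample a coarse random grid: a cheap stand-in for Perlin."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1.0, 1.0, size=(grid, grid))
    smooth = zoom(coarse, (h / grid, w / grid), order=3)  # cubic interpolation
    return smooth[:h, :w]

def perturb(image, strength=0.08, seed=None):
    """image: float array in [0, 1] with shape (H, W, 3)."""
    h, w, _ = image.shape
    # A fresh seed per photo keeps the perturbation dynamic: no fixed pattern
    # that a recognizer could learn to ignore.
    noise = value_noise(h, w, seed=seed)[..., None]
    return np.clip(image + strength * noise, 0.0, 1.0)
```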
Now, in 2020 and 2021, there were two studies by different universities, called Fawkes and LowKey. There are links here in the deck, later on, if you're interested in reading through all the math. The upshot is what they would do: take your original photo or photos, then pull another person's set of images from an online database of other photos. Then they would do some machine learning so that your features, how you were classified, were cloaked. At the same time, though, they would use a loss function so that the changes were minimized for human vision.
So it still looked like you, but systems would identify you as someone else. When the models went out, scraped images off the interwebs, and trained themselves, the upshot was that when your original was submitted, they would classify you as someone different.
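The core loop is roughly this: a hedged sketch, assuming phi is some face feature extractor and target_embedding belongs to the decoy identity. The papers use a perceptual metric like DSSIM where this sketch uses plain L2:

```python
# Hedged sketch of a Fawkes/LowKey-style cloaking loop (not the papers' code).
# phi: a face feature extractor; target_embedding: the decoy identity's features.
import torch

def cloak(image, phi, target_embedding, steps=100, lr=0.01, lam=10.0):
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = (image + delta).clamp(0.0, 1.0)
        # Pull the cloaked photo's features toward the decoy identity...
        feature_loss = torch.norm(phi(cloaked) - target_embedding)
        # ...while penalizing visible change (the papers use a perceptual
        # metric such as DSSIM; plain L2 keeps this sketch short).
        pixel_loss = torch.norm(delta)
        loss = feature_loss + lam * pixel_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta).clamp(0.0, 1.0).detach()
```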
And this works pretty well. Fawkes did particularly well against Microsoft, until Microsoft adjusted their strategy a bit. LowKey actually claimed protection success rates around 90%, which is phenomenal for this kind of thing. Now, that said, Fawkes only disrupts classification, so that's a hit against it.
Fawkes is a Mac or Windows application, or you can compile the Python on your own; it's open source, which is great. But per image, it takes a solid six minutes on the low setting, right? So it's not very practical, because who takes pictures on their phone, downloads them to a computer, adjusts them, and then uploads them? You just don't do that. The other is LowKey, which affects both detection and classification, but it's a hosted web service. So you're kind of back to your original problem: sending your photos out into who knows where, right?
To give you a sample of what this looks like: again, you have to stare at me for a bit. This is my unaltered photo, and what I'm about to show you are the low, medium, and high settings for cloaking. If you keep it at low, there are some changes, but it might be passable, right? But as it gets higher and higher, I start to resemble someone who's been hit about the head with a lead pipe, or a cartoon villain, right? And so this is not something you would want to put up online in large format. So: it's dynamic, it's protective, it could be transparent, but it's not mobile or fast, right? And we're going for all five of these.
We still don't quite have a good answer. And over the last year and a half, I've actually done a research project on the side. You can go out and look at it.
I'd love to hear your feedback. It attempts to use some faster methods, and a combination of some of these approaches, to mess with both detection and classification. I talked about it at Black Hat and DEF CON last year, because this is something that's needed. You don't have to explain this use case to people; they all want it, if only they could have it. Now, you should be asking yourself: is this practical? My answer is no, not yet.
The technology isn't there yet; on mobile devices the processing power isn't achievable. At the same time, some of these approaches could be used by enterprises to protect the faces that they do have, similar to hashing data at rest, something like that.
And really, ultimately, we all have some role to play, right? Enterprises need to secure their technology and the facial databases, the faces they have, even if it's just an HR system. Government needs to address our rights: human rights to privacy in your biometric information.
And then individuals need to be given agency. They need to be given empowerment, and they need to be made aware of the possibilities for protecting their own faces
even as they upload them, especially their children and other people who are unaware and unprotected. And as we do that, as we move towards that kind of environment where facial recognition is a tool for elevation and good use cases, as opposed to a weapon for exploitation, what we're really doing is raising a flag just like Nelson did, right? And saying: it's expected that everyone will do their duty. And that's what we need to do if we are to use technology and innovation like facial recognition wisely. Thank you.