Great, thank you.
So, hi everyone. My name's Andrew Hughes. I'm the VP of Global Standards at FaceTec. We do 3D face liveness and biometric verification. So I'm a standards guy, right? Those of you who have worked with me before know this. I work mostly on ISO standards, focusing on identity management, digital credentials, mobile driver's licenses, and now, for the last six months, on biometrics. So my travel schedule is full, going to all these different standards meetings. I also contribute as a member of the Kantara Initiative. It's an international association whose mission is to grow and fulfill the market for trustworthy uses of identity and personal data. You'll hear more about this later on in the talk today.
I'm gonna give you an update about a new group at Kantara that's looking at deepfake threats to traditional ID proofing and verification systems.
Okay? We have not a problem but an opportunity. We hear at this conference, and around the world, that we're gonna get digital wallets, digital credentials, digital everything, digital transformation. It's awesome. It's great. It is really good. You can hear about eIDAS 2, the EU Digital Identity Wallet, and the ARF in other presentations; I heard a bunch today. It's all very exciting. There's opportunity.
Well, with opportunity also come some problems. As you can see in the headline there: in a universe where everything's digital, wallets and credentials and services, how are we gonna onboard humans and not deepfake bots? How will we prevent impersonation and account takeover? How are we gonna prevent fraudulent account creation? If you roll out digital everything, wallets, credentials, and services, without sufficient care and attention, all you're gonna do is increase fraud. You're gonna make it easier for the fraudsters to steal stuff. You can pick your own example here.
You know, this one: the UK basically had massive welfare fraud because they didn't have sufficient controls in place for their identity verification, and now they're gonna look at biometric verification to help solve that.
So it's actually an opportunity, but probably not for everyone here. There's an opportunity for the bad guys. So it's Christmas Eve for fraudsters and criminals. They're determined, they're inventive, you know, the world is their oyster. Lots of things to choose from.
This headline, if you recall, is about an employee who got on a Zoom call with executives from their company, who told the employee to transfer money to different accounts. And it seemed all normal. The only problem was, the employee was the only real person on the call. It was $25 million. Okay? So, goals for the session. It's all very exciting and interesting, right? You can tell by the exclamation marks; that's the defining characteristic. So, basically: alert you to the problem; show that there are solutions that are evolving and developing; and also, hopefully, persuade you to join the work group, the work at Kantara. We need companies and individuals that use ID proofing and verification services, vendors of those systems, and public sector and regulators, to help out and get a well-rounded view of the topic. Everyone take a deep breath; it's probably not as bad as this presentation makes it sound.
So again, a call to action.
So, in September 2023, we launched this group: Deepfake and AI Threats to ID Proofing and Verification. The first goal was to collect and curate reliable content about the topic. So, what is ID proofing and verification? Because it's not necessarily well known. What's a deepfake? How are proofing services vulnerable to deepfakes and AI? Basically, to collect true and real information about the topic, as ironic as that might be. We have some future work that we're starting to shape up for what goes beyond the base of content that we have. Okay? The group is very lively. We have a very good mix of experts, individuals who are articulate and like to debate points. So even those who don't speak up on calls tend to get a lot out of them.
So, just a quick detour for Kantara. The Kantara Initiative mission is to grow and fulfill the market for trustworthy uses of identity and personal data. We offer certification programs in the US and UK for credential service providers and identity service providers to demonstrate conformance to NIST SP 800-63 or the UK DIATF trust framework scheme. The reason the deepfake and identity proofing and verification group is in the scope of Kantara is because many of our stakeholders are providing proofing and verification services, and they're seeing these new challenges, new threats to their businesses, and they're having to adapt.
This year, the board formulated some organizational beliefs to help communicate where we fit in the multitude of associations and consortia that are out there today, and what value we present to different stakeholders. I'm not gonna go through these; the slides are downloadable. I'll let you review them at your leisure. So, enough about Kantara. Why now? What's the new elephant in the room?
I think that's the metaphor from the keynotes yesterday. Why are the ID verification industry and the credential management industry buzzing about deepfakes and threats? My view is simply that the broad alarm about generative AI has spilled over into many other contexts, making people think that gen AI is gonna destroy the world or create the world or whatever, and causing industries to take a look at their own stuff and see whether they're vulnerable or not. And I think this has something to do with why we're starting to see more focus on proofing, verification, and onboarding, and how they're vulnerable to deepfakes. You're gonna hear lots about content authenticity, because now that everything is in doubt, proof of content authenticity is rising in importance. Deepfakes fool humans and systems because they're realistic fakes of the things that we transact with, right? Documents, voices, photos, videos. And AI adds automation and scale to the threat.
Again, these are all downloadable. This slide is actually from a very good presentation about deepfake threats; I highly recommend grabbing it. And I like this particular definition: it's any believable media, right? So it's not just voice, not just video, not just documents; it's anything believable that's generated. In your specific instance, maybe some of these aren't relevant, but as you'll see, in proofing and verification they're all relevant to that industry.
So this is a typical picture of proofing and verification. How many people have seen this picture or similar ones?
Well, okay. Yeah, you guys, okay, good, good. It has pretty much all the elements you'll see in any diagram of proofing and verification. This one's from an ENISA report; that's the EU cybersecurity agency. ENISA has published a series of reports about threats and risks for remote ID verification systems. And they're doing it as the EU Digital Identity Wallet and eIDAS 2 come into play, because onboarding and biometric verification are areas of high concern.
So, just very quickly, what are the elements in traditional ID proofing and verification? I want you to put yourself in the in-person mindset, right? So: initiation, where a person reads the rules, what documents they have to produce, privacy policies, different acknowledgements, and so on. The verification service takes a look at the evidence, you know, makes sure it's valid, checks any attributes that were self-asserted by the person. Then they validate the evidence and attributes, hopefully against authoritative sources, and check that the evidence matches the person. And then you get your certificate or your proof or your access. Everyone, I hope, is fairly familiar with that. So, there's a bit of a problem.
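To make the flow concrete, here's a minimal sketch of the steps just described: initiation, validation against an authoritative source, and verification that the evidence matches the person. This is illustrative only; the data shapes and function names are my own assumptions, not from any standard or slide.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    doc_number: str       # identifier printed on the evidence
    attributes: dict      # self-asserted attributes to be checked
    validated: bool = False

def initiate(consent_given: bool) -> bool:
    # Initiation: the applicant reads the rules, required documents,
    # and privacy policies, and acknowledges them.
    return consent_given

def validate_evidence(evidence: Evidence, authoritative_source: dict) -> bool:
    # Validation: check the evidence and self-asserted attributes,
    # hopefully against an authoritative source.
    record = authoritative_source.get(evidence.doc_number)
    evidence.validated = record is not None and record == evidence.attributes
    return evidence.validated

def verify(evidence: Evidence, face_matches_document: bool) -> bool:
    # Verification: confirm the validated evidence belongs to the
    # person actually present.
    return evidence.validated and face_matches_document

def proof_identity(evidence, source, consent, face_matches) -> bool:
    # Only issue the certificate/proof/access if every step passes.
    return (initiate(consent)
            and validate_evidence(evidence, source)
            and verify(evidence, face_matches))
```

The point of the sketch is that each step is a gate: skip or weaken any one of them, and the whole chain degrades, which is exactly the problem with the permissive paths discussed next.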
So, I respect NIST, the US National Institute of Standards and Technology, and I respect the people who work there. This slide is not some sort of knock on their work. However, I took a look at NIST SP 800-63, which is the guidelines for digital identity; 800-63A is about proofing and verification. It tells you how to achieve different levels of proofing.
And this, as you're reading it, is permitted in the guidelines. No one in their right mind would implement it this way, right? It's insane to believe that this proves anything. But you can show up with a plastic photo ID and a paper utility bill, and the verification vendor can detect physical security features. You have to choose one of three methods, and one is: look at the physical security features, take a look at the core attributes, and compare them with credible sources, whatever those are. You kind of have to make sure the documents match each other, right? That's a good idea. And then compare the person's face to an ID photo. If you do this in real life, you can't identify anybody, right? Now imagine you have to do this when the person's on the other side of their mobile device and you can't see them. It's not in person; this is remote proofing. Okay?
So many trust frameworks and many schemes are similar to this. Everyone's got different additional protections, different controls, that they put in place to compensate for some of the weaknesses. But there's always a way to get to certain levels without doing very much checking. So why is this important?
Well, what's a deepfake? It's any believable media. So, video: okay, you're on the other side of a phone. It's a video. You can't see the person. You have to trust that the video is real. Audio, images, documents: fake documents are really easy and cheap to produce, right? If you choose to, you can pay $5 to generate a picture of a fake ID for many US driving licenses, for any state, or just do it for free yourself.
Deepfakes are easy to create, and cheap or free to create at scale. So, back to the ENISA report.
If all documents, images, photos, and videos look believably real, what are we gonna do? Right? Many implementations have simply moved the gather-the-documents step to the remote location, putting it on the other side of the user's mobile device. They don't really add stronger controls to address the new risks. They put the burden on the relying party, on the verification provider, to do the right thing and manage the risk. But the standards and guidance have not necessarily been updated, so they're getting quite out of date. This diagram is an updated version of the ID proofing and verification slide from ENISA. Notice that it now includes biometric matching of the individual to a reference biometric, either an image or fingerprints or a template. Also buried in there somewhere is acquisition of a genuine ID document, so an electronic signature, or from the NFC chip on a passport, let's say. The green boxes are protections for the operating systems and devices to prevent injection of deepfake videos and documents. And they recommend, obviously, going back to source systems. So again, this is a very good report; links are there. I highly recommend that you take a look at it. Okay?
You can tell I like the report, 'cause I keep taking pictures from it. This is just one example of a camera injection attack method. In this method, assuming that you have the ability to play back a video, a deepfake, what they indicate here is that instead of using the real physical camera in a mobile device, the attacker has installed a camera emulator, a virtual camera, or the entire device is an emulator, right? And quite often the OS can't tell the difference.
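As a very rough sketch of one defensive signal against this class of attack, a verifier can flag suspicious capture sources before trusting a video feed. Everything here is illustrative: the device names and the emulator flag are my own assumptions, and real vendors rely on far deeper signals (hardware attestation, sensor-noise analysis, and so on) than a name check.

```python
# Hypothetical names of virtual-camera drivers an attacker might use to
# inject a pre-rendered deepfake instead of a live camera feed.
KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "v4l2loopback"}

def is_suspect_capture_source(device_name: str, running_in_emulator: bool) -> bool:
    # An emulated device, or a known virtual-camera driver, suggests the
    # video may be injected rather than captured by a physical camera.
    if running_in_emulator:
        return True
    name = device_name.lower()
    return any(v in name for v in KNOWN_VIRTUAL_CAMERAS)
```

A blocklist like this is trivially evadable (rename the driver), which is exactly why the talk stresses that injection-attack detection is an active research area rather than a solved checklist item.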
Injection attacks are where all the leading remote ID proofing and verification vendors are researching and focused. We know that prevention against presentation attacks, like rubber masks and that sort of thing, works; those attacks are expensive and slow, and you can only do them one at a time. Injection attacks are the next phase of attacks that we have to worry about. So these are the companies that work on the liveness and biometric verification kinds of solutions to bind the person to the documents. So, is biometric linking the solution?
So, I come from the world of standards around digital identity and digital credentials. And I kind of noticed that there was this missing link, right? The link between the real human being and their digital stuff, which is why I moved over to FaceTec, 'cause companies like ours are solving this problem: the linking of the human to the digital credentials world. We have focused on, you know, the electronic records part, the binding-to-cryptographic-keys part, believing that control of the key proves the person. Well, of course it doesn't. It proves that you have the key; it doesn't prove that you are the person it was issued to. You know, you can use things like memorized secrets, like passwords, to link to a person. But let's not. Please don't. You can also use biometrics to bind the person strongly to the digital records, if you do it right. Okay?
And in the remote proofing context, strong binding using biometrics is essential to prevent fraud.
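Here's a minimal sketch of the idea that key control alone is not enough: a credential that binds a biometric reference at issuance, so presentation requires both proof of the key and a live face match. This is purely illustrative; real systems use fuzzy similarity scoring on biometric templates, not exact hash equality, and the hash below is only a stand-in for that comparison.

```python
import hashlib

def issue_credential(public_key: bytes, face_template: bytes) -> dict:
    # Bind a reference to the holder's biometric into the credential at
    # issuance, alongside the cryptographic key.
    return {"key": public_key,
            "face_ref": hashlib.sha256(face_template).hexdigest()}

def verify_presentation(credential: dict, proved_key: bytes,
                        live_face_template: bytes) -> bool:
    # Key control proves possession of the key; the live biometric match
    # proves the person. (Exact-hash matching is a stand-in here; real
    # biometric matching is a similarity score against a threshold.)
    key_ok = credential["key"] == proved_key
    face_ok = (credential["face_ref"]
               == hashlib.sha256(live_face_template).hexdigest())
    return key_ok and face_ok
```

The design point is the conjunction: a stolen key fails the face check, and a deepfaked face (assuming liveness detection holds) fails without the key.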
I think we're shifting beyond the question of whether or not it will help; it certainly does. So, the ENISA reports describe five categories of good practice for remote identity proofing. These are: environmental controls, like ensuring good lighting conditions; technical controls, like presentation attack detection controls and injection attack detection controls, you know, things like making sure the cameras are not bypassed, haven't been tampered with, that you haven't just plugged a mini-HDMI cable into the back of your Android phone; certainty over the identity documents, making sure they're electronically signed if you can, or at minimum checking back with the originating source that they're valid; and procedural and organizational practices. There are many pages of detail in the report.
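Those categories lend themselves to a simple coverage checklist. The category headings follow the ENISA framing just described, but the individual control names below are my own illustrative examples, not quoted from the report.

```python
# ENISA-style good-practice categories for remote identity proofing.
# Control names are illustrative assumptions, not from the report.
GOOD_PRACTICES = {
    "environmental": {"adequate lighting"},
    "presentation attack detection": {"liveness check on face capture"},
    "injection attack detection": {"detect virtual cameras and emulators",
                                   "verify capture-path integrity"},
    "document certainty": {"verify electronic signature (e.g. passport NFC)",
                           "confirm validity with originating source"},
    "procedural and organizational": {"operator training", "periodic audits"},
}

def category_coverage(implemented_controls: set) -> float:
    # Fraction of the five categories with at least one control in place.
    covered = sum(1 for controls in GOOD_PRACTICES.values()
                  if controls & implemented_controls)
    return covered / len(GOOD_PRACTICES)
```

A structure like this makes gaps visible at a glance; a deployment that only addresses lighting, say, covers one of five categories.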
Okay, so back to the work at Kantara: the current work and the near-future work of the work group. The Kantara process starts with someone having an idea of something good to do to improve the trustworthiness of information sharing, right, as the vision and mission say. We started a discussion group to examine the topic area, do some research, try to see if it's a real thing we should be working on, or, you know, if it's already been done, we can investigate it and stop. But if there's something here, if there's actually a business need for requirements and conformance programs around this area, we convert it into a work group. And that's kind of the stage we're at now. We're trying to decide the scope of the next piece of work, for the work group that will pick up this core, curated body of knowledge that we've created.
The work group will define some good-practice requirements, develop conformance assessment criteria, and start certifying providers of ID verification services, to make sure they're taking deepfakes and AI into account as they do remote ID verification. I mentioned before that the international standards are lagging badly, because it takes time to write standards, and the threat environment is moving faster than the standards writers are. I kind of have a personal stake in this: I wanna fix the international standards. The time is right, right now, in ISO, and other organizations like ENISA are doing this great work. The pressure to have solutions because of the EU Digital Identity Wallet is rising. The panic level is, you know, "never waste a good crisis," sort of thing. Whatever it is, the stars are lining up inside ISO to do this now. So what I'm trying to convince the discussion group to do, as we scope up the work group, is: while we're addressing the deepfake threats to identity proofing and verification, why don't we fix the international standards that define how to do good identity proofing and verification? It's not a small task, but standards are updated all the time, and now is the time to do it for this one. Kantara has a liaison with the ISO work group that does the work, and we can push the work through there. Happy to talk at length about this, by the way, as my friends know.
So, so far, the Kantara group's outcomes to date. Yeah, we've collected this body of knowledge; that's great. But actually, what we've done is we've become a group. We now know each other. We talk weekly on Zoom about things we care about and are expert in, in different areas. And we're teaching each other about the topic, about the technologies, about the approaches, challenging each other. I see a couple of people from the call in the room here. And really, there are two themes that have come out of the group. One is the proofing and verification processes, which I'm mostly focused on, but also the broader topic of biometrics and digital identity, right? So we're gonna see what else we can do on the biometrics and digital identity side. Are there missing requirements for using face biometrics to match people, or not? Right? That sort of thing. So if you want to be part of an opinionated expert group that has lively discussions, you know, I know where you can join. The QR code is a link to our wiki page. It's open access. Should be open access.
If you're interested in this topic, now's the time, right? Because if you join right now, you get to help decide the scope of the next piece of work, where we define the requirements and criteria for good practice.
And that's what I've got. So this is my contact information if you're, if you're interested. So thank you.
Thank you very much. Before we let you go, we do have a question from the audience. Okay. Thank you for being faster this time; now I can actually get your questions answered. Yes.
Regarding injection attacks, are there any initiatives looking at C2PA and content provenance?
Content provenance is a persistent topic in our group. You know, the C2PA association, some of the characters here, are talking about that. Right now we're not focused on document authenticity or signal authenticity as a specific element. As we get into the work group and define requirements, we may get to a point where we include those requirements. Having said that, because those solutions don't exist yet, 'cause they're under development, it would be very difficult to put them into requirements to certify against. But we'll definitely keep a spot open for them.
Great. Thank you.
Yeah, thanks, Andy. Thank you, everyone, and
Thank you.