I will have only very limited time, unfortunately, Ann, because I have to rush off to a panel session immediately afterwards in the main hall. So let's go through this. I've got quite a lot of slides with quite a lot of information. By all means take photographs of the slides, because I'm not going to read them to you and they will be available online anyway. But feel free to take photos. Just a few words of introduction. iProov is a ten-year-old company focused on the delivery of biometric verification. We supply companies worldwide.
We've won loads of prizes, we have lots of intellectual property and we are a very well-established, robust and well-funded organization. We're participating in one of the EU digital identity wallet projects. We are a market leader, a statement borne out by the breadth and depth of our customer base.
What you can see is that we are today providing what is effectively critical national infrastructure to some of the largest and most demanding customers worldwide.
These are very large scale deployments by governments, by banks such as UBS, and increasingly now for border control with people like the Department of Homeland Security and, in a few weeks' time, going live with Eurostar. We're also major players in the eIDAS regime. If I want you to take away anything from my presentation today, it is that biometrics today provide an extremely usable, inclusive, highly secure way of verifying identity at scale, both for enrollment and for ongoing authentication.
To do this, you need to use cloud-based biometric verification, for a number of reasons which we won't have time to discuss fully. It solves a lot of major problems, both cost-related and otherwise.
When done well, it is the answer to a great deal of your creation and operation of identities. But let's first address the question of why biometrics at all.
Well, there are three principal reasons. The first is that when you build a digital identity, you can accumulate all the attributes and credentials that you like, but they're always facts about a person. They're facts. But trust doesn't reside in facts. Trust resides in what goes on between people's ears. So in any true identity establishment and operation process, you have to bind those facts to a human being. It is inescapable. If you don't, you are falling down on your responsibility.
So biometrics are the way that you bind a human to the attributes of their digital identity. Now, quite a lot of modern identity verification relies on people: video identification, calling up agents, maybe in-person meetings.
That's all far better than this newfangled biometrics, isn't it?
Well, that's not correct. Biometrics work much better than people. Let's take a couple of examples. Face matching: nobody could be better than people, surely? Wrong. A study which is referenced here showed that skilled passport officers falsely matched 10% of the time and falsely rejected 25% of the time. Whereas modern biometric face matchers, and this is a revolution compared to 10 years ago, which is where many people are still stuck, have a false accept rate of about one in a million. So that's a hundred thousand times better than people. Their false reject rate is only a hundred times better than people. So people are rubbish compared to machines. And if you introduce a person into the loop, all you do is measure the performance of the person.
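For readers who want the arithmetic behind those ratios, here is a rough back-of-the-envelope check. The human figures are the ones quoted from the study above, the machine false accept rate is the approximate one-in-a-million figure just mentioned, and the machine false reject rate is an assumed illustrative value consistent with the "hundred times better" claim, not a number from the talk.

```python
# Rough back-of-the-envelope arithmetic for the error-rate comparison above.
human_far = 0.10        # passport officers' false match (accept) rate in the cited study: 10%
human_frr = 0.25        # their false reject rate: 25%

machine_far = 1e-6      # modern face matcher false accept rate: about one in a million
machine_frr = 0.0025    # assumed illustrative value, consistent with "a hundred times better"

print(f"False accepts: machines ~{human_far / machine_far:,.0f}x better")  # ~100,000x
print(f"False rejects: machines ~{human_frr / machine_frr:,.0f}x better")  # ~100x
```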
Secondly, when it comes to digital spoofs: it used to be the case that if you held a picture in front of your face, somebody on the other end of the line could probably tell.
Today they can't, because people don't do that anymore. Digital spoofs are undetectable to the naked eye. We know, because we've seen them, and you can't tell. So people are over. Not only do they make systems worse, but they're also incredibly expensive. They're also slow, and they're biased, that naughty word. Whereas biometrics aren't. So biometrics are much better than people at doing the job.
Secondly, they are complements to device factors. This is really important: device factors can be shared. The key to biometrics is that, at least without a scalpel and a great deal of blood, you cannot share, lose or steal your face. You can't do it deliberately, and that matters, because complicit credential sharing is a major issue, particularly in privileged access management. I hand over my credentials to my brother-in-law so he can go and do a second job, or, more frequently, to my secretary so she can run the expenses of a major treasury operation. Not through social engineering, which, as we know, has led to incredible numbers of frauds via the social engineering of OTPs. And not through what has undoubtedly happened and will happen more and more: the compromise of devices through shims and malware and man-in-the-middle interceptions. All of that is true, providing that when you do biometrics, you assure the genuine presence of the user. That is the core of the security model for biometrics, full stop.
And I'll tell you exactly what that means in a second. It is the core, providing that you don't do silly things like allowing biometrics to be bypassed with a PIN, which completely invalidates the security. But the point we need to make is that there is a common canard: "my face is my password. I can reset my password, but I can't reset my face." That's just wrong, because the security model of a password is that it's a shared secret, but my face is not a secret. It's on LinkedIn, it's on Facebook.
I can take a photograph of you sitting there and I've got it. There is no secret involved. So the security concept that says my face is my password is just false. The key to the security model of a face is that my genuine face is unique, and therefore the core of it is to assure that you are looking at a genuine face. So the security model for biometrics requires three tests to be passed.
Firstly, is it the right person, or am I subject to an impersonation attack? Well, as I've shown before, the NIST FRVT results today show that the performance of modern face matching is so far beyond what is required that that box is ticked: a solved problem.
Secondly, is it an artifact? Is it a photograph or a mask or an image on an iPad, which has been put in front of the camera of the device?
Or is it this 3D model of me that my wife absolutely refuses to have in the house?
That's a presentation attack. And largely, that problem is today solved. It's pretty easy to detect presentation attacks. But it increasingly doesn't matter, because presentation attacks aren't the threat anyway. The threats are digital injection attacks. Digital injection attacks are data streams containing deepfakes or other synthetic videos which never went anywhere near a camera.
In fact, they probably never originated on a handset. That's the problem which we have to solve. Why? Because there is a massive and growing threat to these mechanisms from artificial intelligence. The challenge we're dealing with is not Mission Impossible masks. The challenge we're dealing with is $10 face swapping and face emulation. What happens is that somebody steals a passport and produces a perfect facsimile of a moving, breathing victim from the passport photo, or else they doctor an identity document with a synthetic image of a person who never existed and produce a living, breathing version of that person who never existed.
So that when the fraud is discovered, there is absolutely no recourse to the perpetrator because the perpetrator doesn't exist. Real time generative AI creates complete scenes which are undetectable to the naked eye. That's what we're up against today.
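To pull those three tests together, here is a hypothetical sketch of how a verification decision might combine them. The class, field names, function name and threshold are my own illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class BiometricChecks:
    match_score: float          # 1:1 similarity between the live face and the reference face
    presentation_attack: bool   # photo, mask or screen held up to the camera?
    injection_attack: bool      # synthetic imagery injected past the camera?

def verify_genuine_presence(checks: BiometricChecks, match_threshold: float = 0.99) -> bool:
    """All three tests must pass: the right person, a real person, present right now."""
    if checks.match_score < match_threshold:
        return False   # test 1 failed: impersonation (wrong person)
    if checks.presentation_attack:
        return False   # test 2 failed: an artifact was shown to the camera
    if checks.injection_attack:
        return False   # test 3 failed: a digital injection attack
    return True

# Example: a perfect face match that is nonetheless an injected deepfake is rejected.
print(verify_genuine_presence(BiometricChecks(0.999, False, True)))  # False
```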
And mostly, the defense in most biometric systems against this is that people put obfuscated code into the device software, which detects whether the camera has been bypassed or not. When providers of biometrics say "we have defenses against injection attacks", what they normally mean is "we detect when something that isn't a camera has replaced the thing that is a camera". Which is fine until Android emulators are used, at which point that entire defense mechanism is nullified.
And we are seeing a huge rise in Android emulator attacks, because they work: they bypass those mechanisms. So that doesn't work either.
I'm going to show you in a minute some imagery based upon a victim, the source image, and the moving template, which would be the attacker, just so that you know. I don't know who the bottom two are, but that's me on the right, in case I'm unrecognizable. And that's because my face is, after all, a deepfake. And that on the left is Isabelle Miller, the CEO of the Biometrics Institute, who kindly agreed to be spoofed in this way. We are seeing now, and this is not fiction or futurism, large scale, repeatable attacks to subvert the genuine presence of a user. This is an example of a small scale, repeatable attack which my team put together one afternoon.
But I think the destruction of trust which deepfakes will perpetrate will mean that there is a flight back to quality, and there will be brands emerging that people go to because they know that they're not being lied to.
If you know Isabelle Miller, and maybe you do, you would find that extremely spooky. So that's Isabelle Miller's face pasted onto me, and it would pass a face matcher that looked for Isabelle. These attacks are happening now.
We recently published something called the Biometric Threat Landscape report, which had a number of findings in it. You're welcome to ask us for it and we'll send you the whole report. It contains an enormous amount of valuable information about what's going on. Let me just tell you briefly about three trends we saw.
Firstly, we saw a 149% increase in attacks originating from native devices. That's a very, very important statistic. Up until now, such injection attacks were coming from PCs, because it's dead easy to attack through PCs, and it was generally held that handsets were nice and robust: you could detect the bypass, and apps were locked down, so that was all fine.
Wrong: a 149% increase in attacks from handsets. When we see growth like that, it means something is working.
Another thing that's working is a quadrupling in the number of a particular kind of face swap attack. We see various forms of face swap attack, but this particular machine learning classifier works so well that in the space of six months its incidence quadrupled. Someone, somewhere, is being hit hard by that; that's why the increase has been exponential. There she is. You see, that's victim A pasted onto template B, as I showed you before. And finally, we're seeing spray attacks: bursts of a hundred or two hundred attacks, usually attacking motion-based liveness. They start in one place and then they spray every possible victim worldwide.
It's a classic symptom of an attack kit that has been proven to work, and they're just trying it out on every possible attack entry point to see where the vulnerabilities are.
Again, it's a classic symptom of a vulnerability that has been commercialized. Now you may well ask: how do we know this? The answer is that we, and I'm embarrassed to say we alone, operate something called an active biometric threat intelligence system.
So what happens is that every transaction, every biometric check that takes place for any customer on any of our instances anywhere in the world, is transmitted back to our security operations center, where it is triaged. To give you a sense of the scale, a few weeks ago we did a million transactions in a day, so the volume of these things is really, really big. And our triaging system has been developed over the years to be extremely sophisticated and to pull that down to the handful of attacks each day that contain some novel element.
Either a new mechanism or an experimental trajectory, something new. And from that, we extract knowledge about what the attacker is doing. We measure it, analyze it, recount it, write reports about it. And then, if they're doing something we hadn't expected, we learn from it and we update our systems accordingly. Sometimes we update our systems twice in a month. You may say this is ordinary, quite normal.
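As an illustration only, here is a conceptual sketch of that feedback loop: triage a very large volume of checks down to the few that contain something novel, analyze them, and feed the results back into the defenses. All function names, fields and data structures here are assumptions made for the sake of the example, not a description of the real system.

```python
# Conceptual sketch of the feedback loop described above; names and fields are illustrative.

def triage(transactions, known_signatures):
    """Reduce a very large volume of checks to those containing something novel."""
    return [tx for tx in transactions
            if tx["suspected_attack"] and tx["signature"] not in known_signatures]

def analyze(attack):
    """Placeholder analysis step: record the pattern and flag anything unexpected."""
    return {"signature": attack["signature"], "unexpected": attack.get("new_mechanism", False)}

def feedback_loop(transactions, known_signatures, defense_rules):
    for attack in triage(transactions, known_signatures):
        report = analyze(attack)                   # measure, classify, write it up
        known_signatures.add(report["signature"])  # remember this attack pattern
        if report["unexpected"]:
            defense_rules.append(report)           # schedule a defense update

# Example: one genuine check, one known attack, one novel attack.
txs = [
    {"suspected_attack": False, "signature": None},
    {"suspected_attack": True, "signature": "faceswap-v1"},
    {"suspected_attack": True, "signature": "emulator-x", "new_mechanism": True},
]
known, rules = {"faceswap-v1"}, []
feedback_loop(txs, known, rules)
print(len(rules))   # 1: only the novel attack triggers a defense update
```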
I mean, nobody would think of buying an antivirus system that didn't have something like this behind it, would you? It's bizarre: iProov is the only biometric verification company in the world that operates such a system.
How does our technology solve this problem, and continue to solve it, even as the hamster wheel runs faster and faster? How do we do it? Our technology is based upon some very advanced science. It's a very hard problem. For us, not for you. We make it easy for you.
For us, this is a very hard problem. It is the hardest single science problem in the whole of the identity ecosystem. And it's getting harder, because we, and we alone, are up against AI in its fullest sense. There is no accepted test regime. So anybody who says, oh, there's a good standard test regime for testing liveness, is not telling the truth. There are a couple of companies that purport to do things, but they're so far away from anything real that it doesn't give you any information.
The only way this technology is currently being assessed today is through bespoke programs by people like the US government or UBS or other very large organizations. That's why the only credible way in which you can assess the quality of a liveness provider today is to look at their existing clients and at whoever has certified them. eIDAS is interesting because eIDAS has created the world's first certification program.
Not for the science, but for the business processes that lie behind it, the business processes that keep the science up to date. That feedback loop I showed you before has been certified by TÜV SÜD, in order that we could get qualified trust service provider status and eIDAS level of assurance high. And we are the only biometric module with those certifications in the world.
The result, today, is that we have a system which is ubiquitous, a system that is inclusive, and a system that is secure.
We can explain to you later how it does it: it creates something called a one-time biometric through the use of controlled illumination on the face. It's a very difficult technology to make work, but it works fabulously well, to the extent that the number of false accepts we get every year can be counted on the fingers of one hand. This is what it looks like in practice. This is my colleague Darren, who is about to go through an entire onboarding process. And I'm not going to blow through my time, because it's fast.
The first thing he does is a standard NFC scan of his document.
So there he is, putting his phone to the document. It's doing an NFC read of the ICAO 9303 information, which gives all his document information plus his face in high resolution. Then he holds his face up to the camera. It shows an abstracted version of his face that avoids selfie anxiety.
It flashes a sequence of colors which reflect off his face, giving us a one-time code, and giving us key information about both the reality and the timeliness of his face. And that was it. That was the process. And that is the process that's relied on by governments. This is highly secure. We offer a range of different liveness solutions, some of which have lower ceremony and are less secure, because sometimes you don't need that level of security.
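For readers who want to picture the flow just described, here is a highly simplified, hypothetical sketch of the onboarding steps: an NFC read of the ICAO 9303 chip, then a one-time illumination challenge whose reflections confirm a real face captured right now. Every function, object and field name here is an invented placeholder for illustration; this is not iProov's implementation or API.

```python
# Hypothetical sketch of the onboarding flow; all device/server calls are placeholders.
import secrets

COLOURS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def onboard(phone, server):
    # 1. NFC read of the chip (ICAO 9303): document data plus a high-resolution face image.
    doc = phone.read_nfc_chip()                      # hypothetical device call

    # 2. Server issues a one-time colour sequence: the "one-time biometric" challenge.
    challenge = [secrets.choice(COLOURS) for _ in range(8)]

    # 3. The screen flashes the sequence; the camera records the light reflecting off the
    #    user's face, which helps prove the imagery is real and being captured at this moment.
    video = phone.capture_face_while_flashing(challenge)

    # 4. Server checks the reflections match the challenge and the face matches the chip photo.
    return server.verify(video, challenge, doc.face_image)
```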
Matching biometric security to the risk posed by each individual transaction is an important part of optimizing the user journey. There are things that are dangerous and that you cannot afford to get wrong, like the creation of an account at the beginning: enrollment must be secure, because the value to a money launderer or a fraudster of a controlled account is enormous. So that you have to make secure.
But if somebody is just gaining access to check their balance, the amount of harm they can do is very limited, and therefore you don't need such high security; a lower-security form, such as iProov's Liveness Assurance, which is quick and easy, though vulnerable in some cases, can be used. If you want to do high-risk things, you use high-security solutions; if you want to do lower-risk things, you use lower-ceremony solutions.
It's important to have that flexibility across the whole spectrum of liveness.
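Here is a minimal sketch of what that risk-based selection might look like in practice. The tier names, the list of high-risk actions and the mapping are illustrative assumptions, not a prescribed policy.

```python
# Illustrative risk-based step-up: choose the liveness level per transaction.
from enum import Enum

class LivenessTier(Enum):
    LOW_CEREMONY = "liveness_assurance"   # quick and easy, lower security
    HIGH_SECURITY = "genuine_presence"    # full one-time-biometric ceremony

def choose_liveness(action: str) -> LivenessTier:
    high_risk = {"enrollment", "add_payee", "large_transfer", "password_reset"}
    return LivenessTier.HIGH_SECURITY if action in high_risk else LivenessTier.LOW_CEREMONY

print(choose_liveness("enrollment"))      # LivenessTier.HIGH_SECURITY
print(choose_liveness("check_balance"))   # LivenessTier.LOW_CEREMONY
```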
Finally, what's really important in the end is that biometrics should deliver for the user. You need to look at the success rates for good people. Most of the industry is currently running at about 85%; iProov runs north of 98%. You need to look at the number of attempts to succeed: the industry runs at about 1.6, we run at about 1.2. These are important metrics, because they determine the quality of the user experience. You need to look at the handset coverage. You need to look at the bias.
You need to look at whether this is truly inclusive on a large scale. You need to look at whether people like using it.
Is it a good feeling when they do it? Has the provider worried about, has it nurtured, the user experience? Because this is something that your users, whether they're employees or consumers, are going to go through regularly. And finally, you need to look at it and say: this is going to be a core root of trust for my organization. Can I and my customers trust it? Biometrics can be usable, secure, inclusive and highly scalable, providing that you make the right choices. And when you do, you will transform the user journey and your own cost base. Thank you very much indeed.
Thanks Andrew.
Andrew, do we have some time for one question? Yes, absolutely.
Do we have a question? I can ask one question coming from our virtual attendees: what about face recognition? With that, you can almost always be tracked. Fingerprints are not exposed that easily.
Very interesting, thank you for that question. There is a fundamental difference between face recognition and face verification, a fundamental difference; let me explain what it is. In face recognition, the face is used to identify you out of the 7 billion people in the world: I know that that is Mike.
He hasn't consented, he wasn't informed it was going to happen, his privacy was violated, and he probably got no personal benefit. Face recognition was used to identify him. In face verification, the process begins with Mike identifying himself in some way. Maybe he asserts a username, or he identifies himself as an account owner. And then, with his permission, his face is used to confirm and corroborate his assertion. He was asked for consent; he gave it, and he collaborated. He got personal benefit, because he was both made safer and given access to a service more quickly than was otherwise possible. And his privacy was certainly assured, because we did not identify him, we authenticated him.
So although face matching technology happens to be used for both of them, there is a fundamental difference between face recognition, which is, in my opinion, a bit creepy, and face verification, which produces nothing but good for people, and they recognize that fact themselves.
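To make the distinction concrete, here is a minimal sketch of one-to-many recognition versus one-to-one verification, using a toy similarity function. The matcher, threshold and gallery are placeholders for illustration only.

```python
# Toy contrast between 1:N face recognition and 1:1 face verification.

def similarity(face_a, face_b) -> float:
    """Stand-in for a real face matcher returning a similarity score in [0, 1]."""
    return 1.0 if face_a == face_b else 0.0

def recognise(probe_face, gallery: dict):
    """Face recognition (1:N): search a whole gallery to work out WHO this is."""
    best = max(gallery, key=lambda name: similarity(probe_face, gallery[name]))
    return best if similarity(probe_face, gallery[best]) > 0.9 else None

def verify(probe_face, claimed_name: str, gallery: dict) -> bool:
    """Face verification (1:1): the user asserts an identity; the face only confirms it."""
    return similarity(probe_face, gallery[claimed_name]) > 0.9

gallery = {"mike": "face_mike", "ann": "face_ann"}
print(recognise("face_mike", gallery))        # 'mike': identified without asserting anything
print(verify("face_mike", "mike", gallery))   # True: only confirms the identity Mike asserted
```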
Yeah.