Hello everyone. What if I told you that there is a simple technique that you can use to pretend to be in two places at the same time? My name is Parya and I'm one of the co-founders and the CEO of DuckDuckGoose. We are a startup based in the Netherlands and we work on developing explainable AI solutions to uncover realistic-looking deepfakes.
Of course, deepfakes and generative AI have a lot of positive implications, but they also have a lot of negative implications that can disrupt society as a whole. And one of those is identity fraud.
And today we're going to talk about that and go a bit into what the dangers are. Here you can see the agenda for today. First we'll start with two short demos to let you see what deepfakes are actually capable of doing when it comes to identity fraud. Then we are going to take a look at what kind of dangers are out there and how deepfakes can be misused.
From there on, we'll continue to see how our solution works and what the benefits of it are. And of course we will close with the future expectations, because this field of generative AI is not going to stop, at least not in the near future. But first things first, let's start with a live demo in which we will showcase how a deepfake can be used to join a video call. On the slide we say Zoom, but in a moment we will join a Teams call to show that you can be in a meeting with someone else's face on. And for that I need to switch to the Teams call.
I think that my colleagues are ready for that.
It's coming up.
Yes. Here we go. As you can see, there is someone in this Teams call, and if you look at her face, she has exactly the same facial characteristics as I do. I will take off my glasses so you can see it better. Now, she is not my twin, nor a lookalike.
In fact, as some of you might have guessed, she's a deepfake of me. And this deepfake model has been made by our team using only a five-minute video recording of me and an open-source deepfake technology called DeepFaceLive. I'm going to interact a bit with my deepfake so that you can see it is indeed live. By the way, her name is Andrea and she's one of my colleagues, as I said.
Andrea, can you hear me? Yeah. Okay. Could you please wave to the public? And could you please touch your face a bit to show what happens in terms of artifacts? As you can see, even when she touches her face, nothing really strange happens. And could you also please show us your fake identity card? It takes a moment for the webcam to focus, but as you can see, the face in the passport picture has also been deepfaked with my face.
Thank you Andrea. We've seen it. And last but not least, could you please show your own face? You saw a small artifact there.
So she's turning off the deepfake model. Yes, that's Andrea. Thank you so much, Andrea.
So this is just an example of how, for instance, German banks identify their new customers: you need to go into a live session with them, show your face and your ID card, and turn it a bit. Of course there were some artifacts that you've seen here, but I want to emphasize that we are not hackers nor criminals. We've done this purely to showcase how easy it is, and it was based on open-source software.
So the second demo that I will be showcasing is about spoofing a cryptocurrency platform. It's a prerecorded video of the screen of one of our laptops. And for spoofing such a cryptocurrency platform, you would only need this laptop with two virtual environments on it, a deepfake, and a fake passport or identity card. I will play the video and explain the steps to you.
So that's my co-founder Mark, and he's seeing himself in a virtual camera called OBS. It's a pretty well-known one, actually.
And what he's going to do in a bit is show you how an identity card can be used to log into a virtual smartphone environment. As you can see, he can see himself in the smartphone, in the Android phone, and he can also show his own identity card, with a deepfake face on it, to the virtual phone environment. Now of course, if you want to commit deepfake fraud and open an account, first we need a deepfake face. For that we go to a pretty well-known application called FaceApp. We choose two celebrities, random ones: Johnny Depp and maybe Robert Downey Jr.
And we morph their faces, so we have a new person that looks like both celebrities, at least to biometric systems. And here you see, by the way, an identity card with a random face on it; it's also a deepfake. The scariest thing here is that you can generate these kinds of identity cards really fast and in really high volumes. As you can see, within a couple of minutes we generate hundreds of these ID documents, and you can see them along with the corresponding selfies used to open accounts.
And now the whole process starts: we go back to the virtual phone environment. Here you see the identity cards once again.
And we have chosen this deepfake together with a real identity card, or rather a picture of it; it's not the identity card itself. We pretend that we are taking a picture with the smartphone, while actually we are basically taking a screenshot. Then we go to the selfie step. Again, we pretend that we are taking a real selfie, but in fact we are injecting a screenshot into the phone, and after just a couple of seconds, the identity verification step is done.
As you can see, the new account is ready, and now you can buy and sell crypto with it and also store it there.
And I want to emphasize that we've done these things a couple of times, but no one has ever warned us. No one out there said, "What are you doing, guys? There is a deepfake account in my database or in my client base." And that really scares us, because it means that there might be a lot of other people trying the same things with bad intentions. And because companies are not actively monitoring, they are not aware of the danger.
And there is also no real data or numbers backing up how big this problem is.
Now let's continue with a bit of theoretical background. I think all of us know that deepfakes can be misused to commit identity fraud, and roughly there are two types of misuse: in the identification process and in the authentication process. So let's take a look at the identification process. One of the use cases there is the synthetic identity use case, and of course also the identity theft use case.
These are of course not new, but with the emergence of deepfakes it's easier to make, for example, such a synthetic identity and use it for identity fraud. Another example: let's say you hand over your ID card to rent a car or for your hotel stay to identify yourself, and someone secretly takes a picture of your identity card. What they can do next is use this greyscale picture here, the small one, with deepfake technology to regenerate your picture, in full HD as you can see, and in full colour as well.
And then, using deepfake technology, they can make it move, even make it talk, and use this, for example, to open a bank account in your name.
Another example, which you've also seen in the demo, is that you can easily create a lot of synthetic identities. Here you can see my identity card, and we are replacing my face with a lot of deepfakes. This was done in maybe less than one minute.
We generated enormous numbers of them. And this is how you actually make a synthetic identity: by combining a fake face with real information.
And if you take a look at the authentication use case: once you have an account somewhere, you can use your biometrics to log into the system. And aside from deepfake faces, there is also the coming trend of deepfake voice. There is an example of a journalist who used his own voice to make a deepfake out of it by using an open-source platform.
Again, free for everyone to use, called ElevenLabs, and he could successfully use that deepfake voice to log into his Lloyds Bank account, which was of course shocking for everyone who heard about this matter.
And this is not only in the news headlines, because we at DuckDuckGoose do lots of research to see how big the scope of the problem is, and we have seen a clear trend over the past ten months.
If you look at random sample data sets from our customers and partners, with selfies from identity verification processes, there is an exponential increase in the number of deepfakes being misused to open, for example, a bank account. In the first case we analyzed, we found fewer than 0.1% deepfakes. But in the most recent case, again with production data from our customers, we found almost 5% deepfakes, which was terrifying. And these deepfakes were not necessarily caught by the traditional liveness check.
And therefore we at DuckDuckGoose strongly believe in multi-layered solutions. We know that there are a lot of liveness check providers out there that do a great job when it comes to detecting, for example, virtual cameras or external devices used to inject a video.
However, let's say that someone uses another method than the OBS method. One of the use cases we have also heard of is that, specifically on the dark net, people are talking about developing their own virtual camera devices so that they cannot be recognized by the liveness check.
And that's exactly where we think we can add value: by combining deepfake detection software with the liveness checks that are already out there and being used, we think you can create a deepfake-proof solution that is good enough to cope with the deepfake problem and also with its growing scale.
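To make the multi-layering idea concrete, here is a minimal sketch of how a deepfake-detection score could be combined with an existing liveness check in an identity verification flow. This is an illustration only, not our actual product logic; the two scoring functions and the thresholds are hypothetical placeholders for a real liveness-check provider and a real deepfake-detection model.

```python
# Minimal sketch of a multi-layered verification decision (illustrative only).
# check_liveness and check_deepfake are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    liveness_score: float   # 0.0 (replay/injection) .. 1.0 (live capture)
    deepfake_score: float   # 0.0 (authentic face) .. 1.0 (synthetic face)
    accepted: bool
    reason: str


def check_liveness(selfie_frames) -> float:
    """Placeholder for an existing liveness / injection-detection provider."""
    raise NotImplementedError


def check_deepfake(selfie_frames) -> float:
    """Placeholder for a deepfake-detection model."""
    raise NotImplementedError


def verify_selfie(selfie_frames,
                  liveness_threshold: float = 0.8,
                  deepfake_threshold: float = 0.5) -> VerificationResult:
    # Layer 1: existing liveness / injection check.
    liveness = check_liveness(selfie_frames)
    if liveness < liveness_threshold:
        return VerificationResult(liveness, 0.0, False,
                                  "failed liveness / injection check")
    # Layer 2: deepfake detection on the same frames.
    deepfake = check_deepfake(selfie_frames)
    if deepfake >= deepfake_threshold:
        return VerificationResult(liveness, deepfake, False,
                                  "face classified as deepfake")
    return VerificationResult(liveness, deepfake, True, "passed both layers")
```

The point of the sketch is simply that the two layers catch different things: the liveness check targets the capture channel (virtual cameras, injected video), while the deepfake check targets the content of the face itself.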
And let's talk a bit more about how our solution works. As I said at the beginning, we develop explainable AI solutions for deepfake detection, and there are three characteristics of this software that make it unique and also suitable for use as an additional security mechanism next to the liveness check. First of all, we make our tools explainable, meaning that the tool substantiates why something has been classified as a deepfake; in a bit I will show you an example. This has two benefits.
First of all, the user can understand why something is a deepfake and can also tell, for example, the owner of the selfie why there is a rejection in case of a deepfake, if they ask for it. And the second benefit is that we at DuckDuckGoose can see what the patterns of our neural network are and how it classifies something as a deepfake. That serves as a way for us to improve our accuracy rates.
Second of all, we keep our software up to date, meaning that we have advanced monitoring mechanisms to see which types of deepfakes are new, and we immediately add them to our coverage and protect our customers against new types of deepfake attacks as well.
And last but not least, our software is automated and scalable, meaning that we can cope with high volumes of selfie data and analyze them at a pretty fast speed. Here is an example of how our tool works: there you have a real video of one of our mentors, by the way, Joe Al.
And as you can see in the video next to it, Joe has a slightly different face. I'm not sure if you can recognize the face, but it's a deepfake of Jim Carrey. If you look carefully, you can see that the forehead and the chin of Joe are the same, but the middle part of the face has been replaced; that's the deepfake part. So if you give a real input to our tool, it will of course say that this is a real video and there is nothing to worry about.
But if the video is a deepfake, then you see this red rectangle there, and the explainable part in the corner pointing out that the middle part of the face is the most suspicious part to the neural network. Those are also the parts on which the neural network based its decision that this is a deepfake.
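As a rough illustration of this kind of region-level explanation, the sketch below shows generic occlusion-based attribution: mask one face region at a time and measure how much the deepfake score drops, so the regions the classifier relied on (for example the swapped middle of the face) light up. This is not our actual explanation method; `deepfake_score` is a hypothetical stand-in for a trained classifier.

```python
# Minimal sketch of occlusion-based explainability for a deepfake classifier
# (illustrative only; not the actual product's explanation method).

import numpy as np


def deepfake_score(face: np.ndarray) -> float:
    """Placeholder for a trained deepfake classifier (higher = more suspicious)."""
    raise NotImplementedError


def occlusion_heatmap(face: np.ndarray, patch: int = 32, stride: int = 16) -> np.ndarray:
    """Grey out one square region at a time and record how much the deepfake
    score drops; regions whose occlusion lowers the score the most are the
    ones the classifier based its decision on."""
    h, w = face.shape[:2]
    base = deepfake_score(face)
    heatmap = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = face.copy()
            occluded[y:y + patch, x:x + patch] = face.mean()  # grey out the patch
            drop = base - deepfake_score(occluded)
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heatmap / np.maximum(counts, 1)
```

Overlaying such a heatmap on the input frame gives exactly the kind of highlighted rectangle described above, which can be shown to the analyst or to the owner of the rejected selfie.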
So this is basically how our tool works in a nutshell. And if you take a look at the future expectations, we see that the accessibility of deepfake generation software is increasing enormously.
There are tons of online and even free tools out there that you can use to deepfake yourself, deepfake someone else, and do lots of things with it. And you don't even need a lot of technical knowledge to be able to use the software. Second of all, the quality of deepfakes is increasing as well, meaning that you can make hyper-realistic fake images and videos that are really hard to recognize, for the human eye at least.
And there are even super-resolution networks: let's say you have a bad-quality picture, you can use another AI system, a super-resolution network, to make your deepfake high quality.
And it's not all about faces and pictures and videos, because generative AI, as we all know, makes it possible to fake it all. It starts with faces and goes up to text, voices, and basically anything. So the multi-modality is also increasing, and it's a bit unpredictable, because the speed of these developments is really high. An example is ElevenLabs.
The software just suddenly popped up on the internet and everyone started using it; before that, we didn't even know it was possible. And aside from all the developments in AI, there are of course also some trends going on in the field of identity verification. We see more and more usage of digital identities everywhere, in a lot of industries. That is of course really a good thing, because it brings a lot of opportunities, but also a lot of dangers if you look at it from the perspective of, for example, a person with bad intentions.
And of course there are also some trends when it comes to increasing the usability and user-friendliness of this kind of software for digital identities. One of the trends we are seeing is a shift to web-based tools and solutions to make them usable for everyone. And that's even more dangerous, because web browsers are not as well protected as, for example, the applications you use on your mobile phone. Yet this trend exists, and we see that it is good for usability, as I said, but dangerous for the security side of the story.
And with this I come to the end of my presentation. If you're curious to talk about deepfakes and if you want to know more about our research or our solutions, please reach out to us. Thank you very much.
I think that we can take two questions, if there are any questions from the audience.
All right,
Well, first of all, thanks for the awesome presentation. Just about an hour ago, Martin Kuppinger was giving an overview of identity threat detection and response, and I asked him how he considered the risk factor of these upcoming, ever-improving deepfakes. His claim was that all the major vendors of, for example, conferencing software and the like will react very soon. So he said there may be a short peak, but they will very soon react and implement measures into their solutions that detect this kind of stuff.
So I'm really keen to know: what's your take on that claim?
Thank you for the question. I agree that the companies out there that develop liveness checks are really doing a great job, as I said. And with the emergence of deepfakes, we also see that there is more attention for, for example, injection detection or virtual device detection.
However, we also see from the business side that these companies are of course huge and have lots of internal processes going on that they need to handle first. Of course, they have an army of engineers, and if they wanted, they could let them work on it. But what we always say is that we at DuckDuckGoose have already been specializing in this matter for three years. So we are the specialized force, and through multi-layered solutions we can actually make this liveness check even better than it already is at this moment. So that's my view on it. I'm not sure if this answers your question.
All right,
Last question. Good. Is this not a matter of always playing a game with the people creating the deepfakes, where they will then again make their deepfakes better so that you can't detect them anymore?
This is what we describe as the cat-and-mouse game.
And this goes not only for deepfake detection, but for the whole cybersecurity industry, I think. So therefore, as I said, what we do is develop specialized mechanisms to track these new types of deepfakes as soon as possible and add them to our protection coverage. And of course there will be a certain point at which these deepfakes are not recognizable anymore. But research shows that when that point comes, there will be AI systems, based on research of course, that will be able to cope with those types of deepfakes as well.
So that's also why we are developing these solutions. Does that answer your question?
Okay, great.
Yep. Thank you, Parya.
Thank you. Yep.