We'll now move on to our next speaker, John Erik Setsaas, who is here, I think, yes. He'll be talking about AI in relation to the EU Digital Identity Wallet. Presumably we'll have the lights down again for this, so thank you, and welcome.
All right, thank you. Okay, so... Welcome to the Cyber Revolution Conference in Frankfurt. I'm honored to introduce my good friend and expert in financial crime, John Erik Setsaas.
Today, he'll explore how criminals are leveraging AI to assume any identity, posing a serious threat to initiatives like the EU Digital Identity Wallet. Remember, trust what you see and hear with caution. The stage is yours, John Erik.
Thank you, Jean-Luc. So where do I start with this one?
Well, for one, I'm a Star Trek fan, obviously, and I always wanted to be introduced by Jean-Luc. I also wanted to see how difficult it is to create something like this. And spoiler: it's actually quite simple. It's not perfect.
I mean, if you look closely, you can see the lip sync wasn't perfect. I didn't spend a lot of time on it. But clearly, I see this as one of the big challenges going forward: fraudsters will be using AI to trick us. I've already seen this, and I'm sure others have as well. We're getting the phishing emails.
Normally, they are very generic. But I received one that was very personal, addressing me by name, referencing a relative of mine who happened to be in the US, and so on. So they are getting better at targeting individuals specifically. A lot of this conference is about hacking systems, where criminals get access to the systems. And of course, we need to prevent that. And this is complex. It's resource intensive. You need good people and good technology to do that. But for the criminals, if they manage to get in, the return is high.
The criminals have realized there's an easier path, and that's to hack people. This is simple. You can automate it using AI. You don't get a lot from each person you hack, but you get a high return overall, because you can hack so many. And you can automate these processes.
Harari, the author of Sapiens and other books, has talked about this hacking of people. It's about knowing a lot about a person; then you can manipulate them using all of that information. And of course, with everything we're now spreading about ourselves, it's easy to profile people and gather the information needed to hack us. I see two different scenarios: one short term and one long term. Olga fraud, I think, is a very Norwegian term. Olga is a female name that was given to girls about 70 or 80 years ago, and that's why this has been named Olga fraud.
These are frauds that specifically target elderly women, tricking them into transferring their money. A typical example is the safe account fraud: you call, pretend to be from the bank, and say, hey, somebody has hacked into your account, the money is rolling out as we speak, you need to hurry and transfer it to a safe account. That's a typical example of Olga fraud. And it was interesting, because out of the cases that were looked at, there were actually seven women named Olga in the dataset.
CEO fraud we know. Same thing: you pretend to be the CEO to trick somebody into transferring money. The long-term frauds are interesting: the investment frauds and the dating frauds. The dating frauds can now be automated to a much greater degree. I don't know if anybody has tried Replika. If you don't have a real boyfriend or girlfriend, or you wish you had another one, well, you can download Replika. It's an AI boyfriend slash girlfriend, and it's actually quite good. Fraudsters can now use the same technology for these dating frauds, manipulating the victim over time without having to use people.
They just use the AI to get everything going and build up the relationship. And then they go in for the kill: they get the victim to start transferring money. And that's when they get people involved.
A sad side note on the dating frauds: we have a defense center, so we reach out to people and say, you know, this transaction of 10,000 euros, 100,000 euros, this is a dating fraud, we want to stop it. And the victim says, no, no, no, I'm not being defrauded, I'm sending this to my partner over in Singapore or somewhere, for their surgery or whatnot. So that's what we struggle with. That's how good the fraudsters are at manipulating. And of course, they're playing on emotions, they're playing on urgency. Some examples here.
A grandfather gets a text message from a grandchild saying, hey, I lost my phone, I need money, or other scams like that. Or a fake emergency, making something sound urgent. And that's the trick. That's how they play on our emotions.
Hi, this is John Erik. Hi, Dad. It's Shelby.
Oh, hi, Shelby. Good to hear from you. I missed you. You're on vacation in Spain, how are things going? Not so good, Dad. Somebody just stole my purse with my phone and all my cash and cards. I need to pay the hotel and the money I owe to my friends. Could you please send 1,000 euros to my friend's account? And this is how they do it. This was made using a tool called ElevenLabs, the same tool I used to create Jean-Luc Picard's voice at the beginning. And this was a real-time demonstration of how it's done. You can upload any voice.
So anybody who's been on a call with me: if I have your voice, I can upload it, and I can make this your voice. I can pretend to be anyone.
And again, they're playing on emotions. Think of Thinking, Fast and Slow by Daniel Kahneman. System 1 is the one that acts immediately. When a tiger jumps out of the bushes, you run. That's not the time to set up an Excel sheet to decide where you're going to go.
And, you know, people like me, we always make an Excel sheet to make decisions. You don't do that in an emergency.
You run, right? Are you sure you can't be manipulated? I'm not. We're all targets. I've used this a lot; it's now more than 30 years old: on the Internet, nobody knows you're a dog. I've added my own addition to it: now, with AI, you can be anyone you like on the Internet. And this is the challenge. We see two ways the criminals operate. One is identity theft, or more correctly, identity infringement, because they're not actually stealing anything. You haven't lost anything, but they are infringing on your identity.
This can be done through technical attacks, or by asking for the credentials. Typically, the fraudsters will call and ask for the credentials, and then they will log in, or set up a mobile banking app, et cetera. The criminal has control of your account.
However, we have been very good over these last few years at creating mechanisms to prevent that. We have two-factor authentication. We have biometrics. It's increasingly difficult for somebody to pretend to be me. We recently added something we call "user present", where, for critical transactions, we take a very quick, short video and compare it with a trusted photo from the passport we collected earlier.
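To make that a bit more concrete, here is a minimal sketch of what such a "user present" check could look like, assuming some face-recognition model has already turned a frame from the short video and the trusted passport photo into embeddings. The function names and the threshold are illustrative assumptions, not our actual implementation.

```python
# A minimal sketch of a "user present" check: compare a face embedding taken from a
# short live video with the embedding of the trusted passport photo collected earlier.
# The embeddings are assumed to come from some face-recognition model (not shown);
# the threshold below is an illustrative assumption, not a production value.

from math import sqrt

MATCH_THRESHOLD = 0.80

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def user_present(live_embedding: list[float], passport_embedding: list[float]) -> bool:
    """True if the face in the live video matches the trusted passport photo."""
    return cosine_similarity(live_embedding, passport_embedding) >= MATCH_THRESHOLD

# Example: a critical transaction is only released when the live face matches.
live = [0.12, 0.80, 0.55]       # illustrative embedding from the short video
passport = [0.10, 0.82, 0.53]   # illustrative embedding from the passport photo
print("user present:", user_present(live, passport))
```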
So this is becoming more difficult. So instead, the fraudsters are doing what we call APP fraud: authorized push payment fraud. This is where the fraudster tricks you into transferring money, or tricks you into doing something.
So, the typical safe account fraud is this. The fraudsters hack you and get you to do whatever they want you to do. And they are good at this.
And, of course, these mechanisms, they don't work anymore. It's not the criminal that's doing the transaction. It's the person themselves. And that's the challenge we're facing right now.
So the challenge has gone from "who is doing the transaction?", which we pretty much have under control, to "are they doing it for the right reason?" How do we find that out? How do we detect that? We cannot install software in people. We cannot install firewalls or monitoring software in people.
So what we do is monitor behavior. In financial institutions, all your transactions are being monitored, and if there is something fishy, a flag goes up. That's how we detect the dating frauds, for example. The problem is, when we call people and say we want to stop the transaction, they say no, it's not a fraud, and we have to let it go through.
So how is this related to the wallet? I live in Norway. I use BankID every day for many different purposes, and that's a typical target for fraudsters. They will typically try to trick me into using my BankID to authorize a payment, to take out a loan or something. The identity wallet will potentially be much more valuable. You can store much more information in there. And I'm still trying to get an idea of how a fraudster could trick me into doing things.
Of course, they could get me to transfer assets. If I have central bank digital currency in there, I could transfer that. They could trick me into sharing information. They could potentially set up fake relying parties.
So, get information and use that, and so on. So there's more value; it's going to be a more attractive target. And initially, I see the wallet as an identity card without a photo.
Of course, now we're adding that video to make it more difficult to misuse as well. The challenge is, if you cannot trust who is using the wallet, or whether they're using it for the right reason, how can you trust the claims? If an individual claims they're over 18, or claims anything else, and you don't have trust that this is the right user, that they're doing it for the right reason and not being tricked, how can you trust the information? How can you act upon it? And there are several reasons why this may be a problem. There was a survey done back in Norway.
One in five Norwegians share their BankID. BankID is my identity online.
And still, one in five are sharing it, with close relations, or with a public office they visit if they have problems, or something like that. That is a huge problem. There are also problems with close relations.
Typically, if you live in the same household, you have access to the same devices, and you can impersonate someone. There will be technical attacks, where you're attacking the infrastructure; I think that's the least of the problems.
But then there is, as I've talked about, the hacking of people, which is one of the big challenges. So how do we deal with this?
Well, this is a very simplified model of doing risk evaluation of events. We're getting some event: the user is doing something. In our case that would typically be transferring money, using their credit card or something. We attach some metadata to this.
Geolocation, what kind of device, and so on, and maybe other signals that we can look up as well. The risk engine uses rules and AI to analyze every transaction and determine whether it is risky. We also have an overview of rogue merchants, which was really interesting on Black Friday; there are always a lot of rogue merchants popping up around Black Friday, and we stop transactions to these based on that information. And based on all of this, a decision is made: let the transaction go through, or stop it. And our system, we deliver this to banks.
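To make that flow concrete, here is a minimal sketch in Python of that simplified model: an event with its metadata goes through hard rules, such as a rogue-merchant list, plus a scoring step, and comes out as allow, review, or block. The names, thresholds, and the toy scoring function are illustrative assumptions, not the actual risk engine we deliver to banks.

```python
# A toy sketch of the simplified risk-evaluation flow described above.
# All names, thresholds, and weights are assumptions for illustration only.

from dataclasses import dataclass, field

# Assumed blocklist of rogue merchants, refreshed continuously (e.g. around Black Friday).
ROGUE_MERCHANTS = {"too-good-deals.example"}

@dataclass
class Event:
    """A transaction event plus the metadata attached to it."""
    amount_eur: float
    merchant: str
    country: str                               # geolocation of the transaction
    device_id: str                             # empty string if the device is unknown
    usual_countries: set[str] = field(default_factory=set)

def rule_hits(event: Event) -> list[str]:
    """Hard rules that can block a transaction on their own."""
    hits = []
    if event.merchant in ROGUE_MERCHANTS:
        hits.append("rogue_merchant")
    return hits

def risk_score(event: Event) -> float:
    """Toy stand-in for the AI model: combines a few weak signals into a score in [0, 1]."""
    score = 0.0
    if event.country not in event.usual_countries:
        score += 0.4                           # unfamiliar geolocation
    if event.amount_eur > 5_000:
        score += 0.4                           # unusually large amount
    if not event.device_id:
        score += 0.2                           # unknown device
    return min(score, 1.0)

def decide(event: Event) -> str:
    """Combine rules and score into a decision: ALLOW, REVIEW (human analyst), or BLOCK."""
    if rule_hits(event):
        return "BLOCK"
    if risk_score(event) >= 0.7:
        return "REVIEW"
    return "ALLOW"

# Example: a large transfer from an unfamiliar country on an unknown device is flagged.
tx = Event(amount_eur=10_000, merchant="shop.example", country="SG",
           device_id="", usual_countries={"NO"})
print(decide(tx))  # REVIEW
```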
We stop about 90% of fraudulent transactions with that combination of AI, rules, and people. So these are all the fraud attempts we never hear about; that's what we stop before they even happen. Okay, so that's how we do it with financial systems. What about the identity wallet? What's one of the core ideas behind a wallet? You should not be tracked. And the idea is, specifically, you're not allowed to profile, you're not allowed to collect this information. And this then poses a risk: how do we detect these fraudulent wallet transactions when we're not allowed to implement these mechanisms?
This is going to be a challenge unless something is done about it. And I'm pretty sure the fraudsters are having a conference right now, looking at this and saying, wow, we're going to have a field day. This is going to be great. It's going to give us so many opportunities. Nobody's going to find out about fraudulent transactions. Okay. So there is no 100%, obviously.
We need to do as well as we can. The lock analogy: when you create a better lock, the lock pickers get better. You create a better lock, you get better lock pickers. At some point, the lock pickers say, we're going to break the window instead. And that's, again, what happened: people going from attacking systems, which is complex, to attacking people, which is simple. We did a survey asking people, this was again in Norway and Sweden, whether they were okay with sharing financial information to fight fraud. And 75% said yes, which is interesting.
What I also find interesting is that when we added the question, can we let AI analyze the data, only 50% said yes. So there is a perception issue about what AI can do with my data. Will AI figure out, oh, I was in this store and I bought a really nice coat for a woman, which was not in my wife's size? What will they derive from information like that? Is that fair? And who has control of the information? We want to educate people, and that's also a topic that has come up in several presentations here. We want to teach them how to avoid these problems. This campaign was launched earlier.
It's called Swindle, which means fraud: swindle.no. And three keywords: stop, think, and check, which is, in general, very good advice. The problem is, when the tiger jumps out of the bushes, you don't do that. Your emotions kick in, and you act on them.
Hi there, Jordan Belfort here, straight from Wall Street. I hope you were paying close attention to John Erik's playbook, because that's how we operate, and it works like a charm. I'm eagerly awaiting the EU Digital Identity Wallet rollout. Just think of the possibilities: even more people to trick, and all without those pesky fraud detection tools keeping tabs on us.
Soon, I'll be relaxing on my new yacht, living the dream. See you on the flip side, suckers. Thank you. Thank you. Do we have any questions? Yes.
So, there's the topic of unlinkability: you shouldn't collect a list of all the transactions someone did with their wallet, because that also gives away information. But that's on the data and transaction layer, which is not allowed. And it's the same with the Payment Services Directive 2, where payment service providers are allowed to go in through the back door at the bank and get information with the consent of the user, and where you also lose sight of things.
But I think that if you look at the fraud detection we have in banks, that's not just detection at the level of transactions, but also on behavior, on device, on IP, even on the target bank account. Well, that's also a transaction, but there's a whole host of fingerprints you can take from any transaction. And I think it's not a full 100% loss of all the options to detect fraud.
No, of course. I mean, any monetary transactions will still be monitored the same way we do it today. That's not going to change, obviously. But there are a lot of other things the fraudsters can do. And of course, we would like to share information between banks to get even better at this; we're not allowed to do that. But I think there are some challenges here beyond monitoring. I mean, if you do direct payments, if you have wallet-to-wallet payments, for example, how do you handle that?
How do you handle sharing of an asset if you have property ownership in your wallet and I transfer that to you? How can that be more?
No, no, no, I should never, because you're a non-fraudster, I shouldn't transfer it to Jacobo. Well, I don't think you're a fraudster.
Thank you, Erik. Thanks very much for a great presentation. Thank you.