All right.
My name is Jean-Luc Picard, and I am the captain of the USS Enterprise. I would like to introduce you to my good friend John Erik Setsaa, who is working with financial crime prevention, here at the European Identity and Cloud Conference. He will talk about using AI to commit and prevent fraud. Be aware that anyone can be fooled. Do not believe what you see or hear. But now, over to you, John Erik.
Thank you, Jean-Luc. I appreciate that introduction.
So why did I start with that? Well, one, I'm a Star Trek fan, obviously. And I wanted to play around and see how easy it is to create deepfakes, and spoiler: this is not difficult to make. I'm not an expert, and this is not perfect. Imagine what experts can do, and this is one of the challenges I see going forward: we will be fooled by AIs. I personally received my first AI-generated phishing email. The phishing emails we used to recognize were very generic; this one was specifically addressed to me, referencing a relative of mine, et cetera. This is coming. We will be fooled.
So traditional hackers will attack the systems, the system infrastructure. This is complicated and resource-intensive, but of course it's high return.
And this is not going away, especially since hackers are now starting to use AI to do this as well, which means AI is being used for defense too, and it's becoming a battle of AIs. What I see as the big worry is hacking people. This is a much easier way to get into the systems. You hack the people; you trick them into doing what you want them to do. And we see this happening all the time.
This is simple, this is cheap, and you can reach many using AI. Each one probably has a low return compared to the big hacks, but you can do so many of them.
And we see this in the news almost every day. People fall for phishing emails, they're being tricked into transferring money, et cetera. And this is what I call hacking people. I see the countdown timer is not running here. So, Harari mentioned this: hacking people, when you know enough about people. And it's easy to know about people today, right? You just check Facebook, and then you know enough about people to be able to hack them, and then you can manipulate them.
And this is a challenge. I found some research on online doctors run by AI versus physical doctors. And it was quite depressing to see that, for one, the AI doctors gave better answers, but also they were perceived to have more empathy. And of course, an AI does not have empathy; empathy is a human trait. But they were perceived as having more empathy, which means they will be really good at tricking us. The criminals will use smart profiling to know your weaknesses, to know where to attack.
And this, going forward, is what I see as one of the biggest challenges: that we will be hacked.
They're playing on our emotions. They trigger something. When you get a phishing email or a phone call from somebody pretending to be from the government or the bank, that triggers something, either some gain or some threat, and you react to that.
Some typical trends on this, some scams.
I mean, using fake emergencies is very typical. You know about the safe-account fraud, right? Then kids being in trouble, things like that. These are typical frauds that trigger your emotions; something is urgent. Kidnapping scams: I haven't really seen those coming to the Nordic region, where I'm from, yet, but this will be coming. The fraudsters will use your child's voice to call you. That means you get a call from your child, recognizing the voice, saying they are in trouble, they're being kidnapped. And then the fraudsters take over and demand money.
And of course this triggers emotions, and you want to react fast. And you only need one minute of voice to do stuff like this. For the deepfake I created initially, I took a voice sample from a Star Trek: The Next Generation episode. I used about one minute of it to input Jean-Luc's voice, and then I wrote the text I wanted him to say. This is easy stuff to do. Okay.
Hi, this is John Eric.
Hi John. This is Domi from the bank. How are you today?
Oh,
Hi, Domi. Good to hear from you. It's been a long time since we last talked. So how are things with you? I'm doing fine. So why are you calling me today?
Good to hear.
Listen, there's a problem with your bank account and I want to ensure that your money is safe.
Oh wow. I'm sorry about that.
You know, I'm really glad you're calling me. What do I need? What do you need me to do?
I'll guide you through the steps.
Okay. See what I did there? I could have gotten anybody's voice, and I tell people: anybody I'm in a voice call or a video call with, right now I have your voice samples. I can do this to anybody. This is using ElevenLabs, by the way. Really simple stuff: upload a voice, and you type what you want it to say. And it's available in, I think, 15 different languages. And this is what the fraudsters are doing. And like I said, they're playing on emotions.
If you have read Daniel Kahneman's Thinking, Fast and Slow (if not, I recommend it), it's a really good book on people: how we work, how we react. And he talks about System 1 and System 2 of the brain, which is of course just a way of describing it. Our System 1 is the always-on system that triggers when there's a tiger jumping out of the bushes.
System 1 tells us to run. We don't sit down, activate System 2, and do a rational: well, is this really a tiger? Is it dangerous? Et cetera.
No, we are triggered automatically by our System 1, and in most cases, our System 1 makes our decisions. It's really scary. We think we have a rational brain and make rational decisions. No, System 1 makes our decisions long before we have rationalized anything. And of course, there are not a lot of tigers running around today, so that is not the threat. The threat is Domi from the bank calling me: there's a problem with my bank account, I'm gonna lose money.
Or somebody calling: oh, you know, this great investment, you're gonna make a lot of money investing in this cryptocurrency, or they're showing you some golden diamond out there that they want you to fall for. So System 1 triggers on this. So, are you sure you can't be manipulated? We all can be manipulated. If you've seen me before, you've seen this one; I've been using it in a lot of my presentations: nobody knows you're a dog. I've taken the liberty of writing another caption for it.
With ai, you can be anyone you like.
So we see this in what I call the short-term frauds.
We have a name for this: the Olga fraud. Olga is typically the name of a woman who is 80 years or older, and the fraud is targeted towards the elderly. It's typically about, you know, I'm from the bank, your money needs to be transferred to a safe account, use your BankID, stuff like that. So they trick the person into transferring the money. Or the CEO fraud.
Again, typically the CEO sends an email telling you to transfer money. And then you have the long-term frauds: the investment frauds, the dating frauds. And imagine now: the dating frauds used to be very people-intensive, because you needed people at the other end to have the dialogue with the victim over a long time.
Hey, you can leave all that to AI now. You don't need people; you just set up AI to have all that initial dialogue. And when you feel, you know, this is going somewhere, this relationship seems to be good, then you bring in people, then you start asking for money. And this is not lost on anybody; it has been mentioned several times already. The CEO fraud: imagine the CEO is not sending you an email, but you get a video call with the CEO saying, you know, this is urgent, you need to transfer, et cetera.
Most people don't have that close a relation with the CEO, but they will know who they are, recognize their voice, and be fooled by this. Of course, you could argue this is a systemic problem: it shouldn't be that easy to initiate a transfer, it should be more complex, so we need to put defense mechanisms there. But the point is, it's so easy now to pretend to be anyone, to hack people. And that's what I call it: hacking people, convincing them to do what the fraudsters want them to do.
So how do we protect against this?
Well, that's the challenging part. There are huge numbers. This was a report I read last week from Nasdaq; these are global numbers we're talking about. This is big money. And they recognize that the dating fraud is the fastest-growing one. A lot of lonely people, easy to get them hooked.
You know, you have confirmation bias: you so want to believe that somebody loves you, and then they've got you hooked. I tried to make an example here showing the problem we're up against. I tried to make an AI generate a stack of money; for some reason I was not able to do that. But imagine this red block here is the value of the fraudulent transactions. The numbers are from Norway, but I think the ratio is gonna be the same elsewhere.
So we are looking for these red numbers, and the total is this one. That's the total transaction volume.
We can't even see that little red dot anymore. That's what we're up against. We don't want to block all these blue transactions; we don't want a lot of false positives. We're looking for those red ones. And that is the challenge we're up against. And how do we do that? We are unable to install firewalls and software in people's brains.
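To make that red-dot-versus-blue-block ratio concrete, here is a sketch of the base-rate problem in numbers. All figures are invented for illustration; these are not the Norwegian statistics from the slide:

```python
# Why low base rates make fraud detection hard.
# All numbers below are invented for illustration only.
total = 10_000_000          # transactions in some period
frauds = 1_000              # assume 1 in 10,000 is fraudulent
legit = total - frauds      # 9,999,000 legitimate transactions

# Suppose a detector catches 99% of frauds and wrongly flags
# only 1% of legitimate transactions (integer math to keep it exact):
true_positives = frauds * 99 // 100      # 990 frauds caught
false_positives = legit // 100           # 99,990 legitimate blocked

precision = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:,}  "
      f"precision: {precision:.1%}")
```

Even with a detector that is 99% right on both classes, under 1% of the flagged transactions are actually fraud, which is exactly the false-positive problem described above.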
I mean, we have been programmed over so many years. We are prone to emotions; we will trigger on emotional situations, like a danger or a promise of some value. So we need to monitor people's behavior, right? To have systems in place. And this is being done every day in your banks, your credit card companies: you are being profiled, your behavior is being monitored. And if there's abnormal behavior, there's gonna be a trigger, a flag on that. And AI is increasingly being used for that. AI is very good at finding patterns, finding deviations.
So it'll trigger. I mean, if I logged into my bank at two o'clock in the morning, that should be a flag, because I never do that. Or if I logged in from a Mac: I never do, I log in from my phone or my PC. So you will have triggers.
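The kind of triggers described here can be sketched as simple rules over a customer's usual login profile. A minimal sketch; the field names, usual hours, and device list are my own illustrative assumptions (real systems learn these per customer rather than hard-coding them):

```python
# Minimal sketch of rule-based login anomaly flags.
# Profile values below are illustrative assumptions, not a real system.
USUAL_HOURS = range(7, 23)              # this customer never logs in at night
USUAL_DEVICES = {"iphone", "windows-pc"}

def login_flags(hour: int, device: str, country: str,
                usual_country: str = "NO") -> list[str]:
    """Return anomaly flags for one login event against the profile."""
    flags = []
    if hour not in USUAL_HOURS:
        flags.append("unusual-time")    # e.g. a 02:00 login
    if device not in USUAL_DEVICES:
        flags.append("unknown-device")  # e.g. first login from a Mac
    if country != usual_country:
        flags.append("unusual-location")
    return flags

# A two-in-the-morning login from a never-before-seen Mac:
print(login_flags(hour=2, device="mac", country="NO"))
```

In practice such flags would feed a scoring or machine-learned model rather than block on their own, but the idea is the same: deviations from the individual profile trigger a closer look.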
You could also have deep fake detections.
So, I mean, there have been some discussions on that. Should deepfake detection be implemented in your video call software, so you can get a flag up saying, are you sure you're gonna trust this? The question is: if you are falling for a romance fraud, are you really gonna care about that? Isn't your confirmation bias gonna say, ignore that?
You know, he really does love me. But it's possible. There are some good companies out there doing deepfake detection, so it would be possible.
And then, you know, doing transcripts, summarizing the calls we have with customers. Interesting thing with romance frauds, by the way: we have a defense center, and sometimes we detect, you know, this is definitely a romance fraud. We call the victim and say, you know, you're being defrauded. And they insist they are not. They insist: no, this transfer needs to go through, because, you know, I have this friend, this romance, and this is for the surgery, or his mother, or whatnot. And that's what we meet: people won't even admit it when we show it to them face to face.
Of course, there are challenges with AI; we know about those. Privacy. False positives are a challenge. Bias. Regulations are coming now. And also explainability: when transactions are blocked, we want to know why, what happened, and that's also a challenge. Just a quick note on where we are: with our solutions now, we're blocking more than 90% of the attempts, so we have pretty good rates, but that still leaves 10%. And we are also now in a situation where more than half of the blocks are done by AI, so we're getting really good help on the AI side as well. But we would really like to get help from people, too.
And like I said, we have a defense center with people, using AI and all these tools to get stuff done. Of course, there's no such thing as 100%. I compare this with locks: I mean, you have a lock, somebody's gonna pick that lock; you make a stronger lock, well, you get smarter lock pickers, et cetera.
And this is gonna escalate until the lock picker says, you know what? I give up. I'm gonna break the window instead. And that's what's happening. We constantly need to be on our toes.
And that's also why we need a combination of AI and people, because people can easily adapt to new patterns. AI is gonna be slow at recognizing new patterns, but people will recognize them much faster. Regulation: we have the first AI Act, which is definitely good. AI is a race. Our disadvantage, working with financial crime prevention, is that we actually do follow the regulations. The criminals don't care, right? They don't care about regulations. And it has been mentioned earlier: I mean, we're using ChatGPT.
If I, you know, ask ChatGPT to create a phishing mail to steal somebody's money,
It's gonna say, you know what, I'm not allowed to do that.
Well, of course there is a FraudGPT out there that will create that for you. So the fraudsters are of course not complying with regulations; they have an advantage there. And if we slow down, we're just gonna give them a bigger advantage.
So that's a challenge with that. Okay, so I've talked now about money and financial transactions. I also want to touch on eIDAS and the wallet, which has been a big topic at this conference, which is good. I've been working with this for years myself, with the identity, the wallet. So what's the challenge there?
Well, I see the identity wallet like a passport without a picture. In most cases, you don't really validate who's using it. Well, you're using some two-factor authentication to know that you have access to the device, but you don't know it's the right user, right?
And if you don't know it's the right user, how can you trust the claims? If you don't know that the user is the owner, how can you trust "over 18"? And this is one of the big challenges going forward: knowing who is actually using it.
And of course we are going more and more towards biometrics now, where with high-risk transactions you will have to show your face to prove that it's actually me sitting there. Well, if my brain has been hacked by fraudsters, that doesn't help either. All the technical means we put in place, multifactor authentication, et cetera: no help at all, because I'm the one doing the transaction.
So the wallet: the point is, I'm gonna be in full control of the wallet. That's sort of a main premise. So if you're gonna trust information I deliver from the wallet, you need to know that I'm in full control.
Well, we're gonna have technical attacks; there will be hackers attacking the infrastructure. How do we handle that? Coming from Norway: we use BankID all the time, and one in five actually share their BankID for different purposes, because you want to help your parents, or you're living in a household and figure, hey, my spouse can pay my bills, et cetera. So one in five. This is against the terms and conditions, but still, this is a fact. It's also due to a lack of good authorization mechanisms, so you have to do that to get access.
But yeah, that is bad.
We also have what we call close-relation fraud, where people living in the same household know each other: they have access to the device, they know the password, et cetera, and are then able to get access. There was a report in Norway a year and a half ago or so about eID infringement, which talks specifically about the amount of close-relation fraud. And then, of course, hacking the brain: convincing people to do something that they really don't want to do.
And of course, with a wallet having potentially so much more information, it's gonna be a much more interesting attack target for fraudsters than the eID. So, just looking at how we do risk evaluation:
Well, we're getting a transaction, we have some metadata, we get some other signals, and we collect all this together. We know the time of day, we know your IP address.
You know, this meta information is really essential to detect and know if this is a fraudulent transaction. So we make a decision based on that.
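Combining those metadata signals into one allow-or-block decision can be sketched as a weighted risk score. A toy sketch only; the signal names, weights, and threshold are my own assumptions for illustration, and real engines typically use trained models rather than fixed weights:

```python
# Toy risk-score sketch: combine transaction metadata signals into a decision.
# Signal names, weights, and the threshold are illustrative assumptions.
WEIGHTS = {
    "new_payee": 0.3,        # first transfer to this account
    "unusual_amount": 0.3,   # far above the customer's normal amounts
    "unusual_time": 0.2,     # e.g. middle of the night
    "foreign_ip": 0.2,       # IP address from an unexpected country
}
BLOCK_THRESHOLD = 0.6

def assess(signals: dict[str, bool]) -> tuple[float, str]:
    """Sum the weights of the signals that fired and compare to threshold."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return score, ("block" if score >= BLOCK_THRESHOLD else "allow")

# A night-time transfer of an unusual amount to a brand-new payee:
print(assess({"new_payee": True, "unusual_amount": True, "unusual_time": True}))
```

The point of the sketch is the shape of the decision, not the numbers: several weak signals together push a transaction over the line, while any single one of them would be allowed through.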
Well, the challenge is, if you read eIDAS, it says specifically that you're not allowed to collect data with the purpose of profiling, which is exactly what we're doing in fraud prevention. We're collecting all this data, and we profile. I mean, you're all being profiled by your bank, by your credit card company, for the purpose of detecting fraud. eIDAS says specifically this is not allowed. Even though there was feedback saying, well, it must be possible to prevent fraud, that second part did not make it into eIDAS. This was one of the pieces of feedback on the eIDAS standard.
So as it stands right now, and I verified this with one of my legal contacts, it's not allowed to do this profiling. And that makes me really worried, because when people are being hacked, the only thing we have to go on is the profiling, to detect abnormal behavior. So on one side we want privacy, and I do understand the eIDAS view: you should not be profiled, we should not collect all this data about you. But then, what about fraud prevention? How do we handle that if we're not allowed to collect all this data? And this needs to be sorted out before we go live.
So that was the end of my presentation. Thank you.
Thank you, John. We have time for maybe one question. There's one online, so maybe you can briefly address that. Yep. So the question is: how can we prevent romance fraud, other than raising awareness? What can you suggest to a divorced Olga looking for love?
I mean, this is a challenge. I mean, awareness is important. We are running a campaign in Norway now that says: stop, think, check. Stop, think, check.
Which is good. I mean, do that. But a romance fraud is sliding in carefully. It's not about urgency, like the safe-account fraud; it's sliding in slowly.
You have the confirmation bias; you so want it to be true. But it's back to the check.
You know, check with friends, check with others: is this realistic? And if your bank calls and tells you this is a fraud, believe them, because in most cases we do know.
Thank you, John. All right, thank you. Thanks.