Hello. I'm John Tolbert, director of cybersecurity research here at KuppingerCole. And today I'm joined by John Erik Setsaas. Welcome, John Erik.
Thank you. So I'm John Erik Setsaas, I work with financial crime prevention at Tietoevry Banking and I'm really happy to be here. Thanks for having me.
Yep. Glad you're here. Long-time friend of KuppingerCole and an attendee of EIC, welcome. So, yeah, I thought we would talk a bit today about fraud. It's a subject that, unfortunately, we have to discuss, because it's become more and more prevalent in many different industries. Over the last few years, with the uptake of PSD2 and strong customer authentication, I think there have been some real gains in terms of helping prevent account takeovers, but that's not the end of the story. There's still lots and lots of fraud out there, and the fraudsters have started to shift directions and shift their techniques. What have you seen in that area?
So it's true. I mean, PSD2 is good, and so are the upcoming PSD3 and PSR. They're strengthening strong customer authentication, which is a really good step toward proving you are really the one doing this transaction, whatever the transaction may be. So that is important. I think the challenge we see now, and we can see it in the news almost every day, is attempts at tricking people into doing things. We're seeing phishing attempts, tricking people into clicking on links. We're seeing quishing, the QR code scams, which are basically just a link, right? And now AI is coming. You can use AI to impersonate anyone. And that's becoming a really big challenge now, in what I call hacking people: tricking them into transferring money to the fraudster's account, into doing things they really don't want to be doing.
Yeah. There are so many uses of AI by fraudsters. There's generating likenesses of people and trying to use that to fool facial recognition, so if you don't have good liveness detection on your facial recognition system, deepfakes can certainly defeat it. There's synthetic fraud, where if you have bits and pieces of information about someone, you can use something like an LLM to generate similar information that fits with it, making it easier to open a fraudulent account. Yeah, unfortunately, there's lots of use of AI by fraudsters these days.
Absolutely. I've personally received my first AI-generated phishing emails. The generic phishing emails we all recognize, but this one targeted me by name, referenced a relative of mine, and used information about me, targeting me specifically. And then you mentioned deepfakes, which of course are also a big challenge. Take CEO fraud, for example. In typical CEO fraud, somebody in the finance department receives an email from the CEO: hey, we just signed this really important contract, we need to transfer this money. Of course, it's not from the CEO, it's from the fraudster. But what if the CEO calls you on a video call? You can see their face. You can hear their voice. You recognize them. The CEO is not typically someone you meet every day, but you know them by face and voice. And this can be done with deepfakes. We already saw at least one case in Asia where they did that and got $25 million.
Yeah. The consequences are substantial. What sorts of technology do you think are the most important for trying to help prevent that?
I'm working with financial crime prevention, so we're monitoring financial transactions. That's the main tool we have: detecting deviations from normal behavior. I use this example: if I were to transfer money at 2:00 in the morning, that would be a signal, because I never log in to anything at that time. Or if I suddenly logged in from a Mac, that could be a signal. Or if I suddenly transfer a large amount of money to an account in a different country. We're also monitoring rogue merchants that are out there, so we're blocking them, and we recognize different kinds of fraud patterns. That's one tool that is really important, because fraudsters are tricking people, and you can't install a firewall in people's brains, right? So we monitor how you behave and, based on that, make a decision: is this fraudulent or not? Then we can flag the transactions.
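For readers who want a more concrete picture of the signal-based monitoring John Erik describes, here is a minimal sketch in Python. The transaction and profile fields, the point values, and the rogue-merchant list are all illustrative assumptions, not any real scoring model.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical transaction and customer records; all field names are illustrative.
@dataclass
class Transaction:
    timestamp: datetime
    amount: float
    destination_country: str
    merchant_id: str
    device_type: str

@dataclass
class CustomerProfile:
    home_country: str
    usual_devices: set
    typical_max_amount: float
    active_hours: range = range(7, 23)  # hours during which this customer normally transacts

ROGUE_MERCHANTS = {"merchant-9999"}  # example block list of known rogue merchants

def risk_score(tx: Transaction, profile: CustomerProfile) -> int:
    """Accumulate simple risk points from the kinds of signals mentioned above."""
    score = 0
    if tx.timestamp.hour not in profile.active_hours:
        score += 2   # e.g. a transfer at 2:00 in the morning
    if tx.device_type not in profile.usual_devices:
        score += 1   # an unfamiliar device
    if tx.amount > profile.typical_max_amount and tx.destination_country != profile.home_country:
        score += 3   # unusually large amount going abroad
    if tx.merchant_id in ROGUE_MERCHANTS:
        score += 5   # payee is a known rogue merchant
    return score

# A transaction scoring above some threshold would be flagged for review rather than approved silently.
```

In practice such rules are only one layer; production systems combine them with learned models and shared intelligence, but the principle of accumulating weighted signals is the same.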
You know, you mentioned something there that's really interesting, and I think there's a lot of room for further development, and that's around the signals. You're right, you can't put the firewall in someone's head. But as a community, as an ecosystem of different service providers, if we were able to share those signals amongst ourselves, different companies, different organizations, then if, let's say, a fraudster is attempting to break into my bank account, that information can be shared with retailers or others, so they know that this credential has been compromised, or seems to have been, or that we're getting signals that elevate the risk level. I did a report on fraud reduction intelligence platforms, and many of them use what I call in-network compromised credential services, so that they know if there has been an attempt to use a credential fraudulently amongst their customer base. But I think the real work comes in finding a way to share that information between fraud reduction intelligence platforms and other service providers, so that we can help bring down the overall level of fraud out there, because processing that risk information is really essential.
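As a hedged illustration of how a shared compromised-credential signal could be consumed without exposing raw credentials, here is a sketch of a k-anonymity-style range lookup. The endpoint URL and the "SUFFIX:COUNT" response format are assumptions modeled on publicly known breach-lookup APIs, not any specific platform's interface.

```python
import hashlib
import requests  # any HTTP client would do; assumed available

# Hypothetical shared fraud-intelligence endpoint; URL and response format are illustrative only.
INTEL_URL = "https://fraud-intel.example.com/compromised/range/"

def credential_seen_compromised(credential: str) -> bool:
    """k-anonymity style check: only the first 5 hex characters of the hash leave the client,
    so the shared service never learns the full credential being checked."""
    digest = hashlib.sha1(credential.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    response = requests.get(INTEL_URL + prefix, timeout=5)
    response.raise_for_status()
    # Response is assumed to be one "SUFFIX:COUNT" pair per line, as in HIBP-style range APIs.
    return any(line.split(":")[0] == suffix for line in response.text.splitlines())
```

The design point is that intelligence can be pooled across organizations while each participant only ever sees hash prefixes, which is one way to reconcile sharing with data-protection constraints.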
Absolutely. And you mentioned more signals. If the banking platform could get a telco signal saying, hey, you've been on the phone now for 20 minutes, you're still on the phone while you make this transaction, and by the way, this call originates from some third country, that would be an interesting fraud signal to bring in.
Yeah. You know, there are behavioral biometrics solutions out there that can make very good inferences about things like that. If you've got that telco signal about a phone call originating outside your country, and then you also look at how the person enters information during that call, say they start entering information into a website, you can look at keystroke and mouse usage and infer whether or not the person is doing it under duress.
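As a rough sketch of the keystroke-dynamics idea, assuming a made-up event format of (key, press_time, release_time) tuples, something like the following could flag a typing rhythm that deviates strongly from a user's own baseline. Real behavioral-biometrics products use far richer models; this only illustrates the concept.

```python
from statistics import mean, stdev

def timing_features(events):
    """events: list of (key, press_time, release_time) tuples, times in seconds."""
    dwell = [release - press for _, press, release in events]                   # how long each key is held
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]  # gap between consecutive keys
    return mean(dwell), mean(flight)

def looks_anomalous(session_events, baseline_sessions, threshold=3.0):
    """Flag a session whose typing rhythm deviates strongly from the user's own baseline,
    e.g. hesitant, dictated entry during a live scam call.
    Assumes at least two baseline sessions and a few keystrokes per session."""
    baseline = [timing_features(s) for s in baseline_sessions]
    current = timing_features(session_events)
    for idx, value in enumerate(current):
        history = [b[idx] for b in baseline]
        spread = stdev(history)
        if spread > 0 and abs(value - mean(history)) / spread > threshold:
            return True
    return False
```

Combined with a telco signal like the one described above, such a deviation would raise the risk score of the transaction rather than block it outright.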
Exactly. And that's also an interesting thing. You mentioned the sharing of information, and that's been an interesting challenge and discussion, because we have GDPR, which is good, obviously. It's about protecting me and protecting my information. The challenge is that it has, in a sense, blocked that sharing of information. As a bank, it would be interesting to share information about fraud with other banks or, as you said, other entities. But then the privacy officer says, no, no, no, we can't do that because of GDPR, which isn't necessarily right, because you could argue there is a legitimate reason, even a legal obligation, to share that information. Fortunately, this is now being addressed in the upcoming financial regulations: we need to share, we need to cooperate and work together on this.
Right. We definitely need that overall sense of privacy, but we don't need privacy versus security. They're not mutually exclusive. We need to find a way to make the technologies work together, so that we improve not only privacy but also security.
Exactly. And now I also see a concern. I mean, eIDAS is hot, it's all over the place, right? And it's all about privacy and, you know, protecting the individual, which is good. The challenge is that eIDAS is very clear: it's not allowed to collect information about people, about behavior, for the purpose of profiling people. But how about fraud detection then? Because that's exactly what we do when we do fraud detection: we profile. We know what's normal behavior for you, and we use that to detect fraudulent behavior. But eIDAS specifically forbids that. So I see that as a challenge going forward with that regulation. And also with the identity wallet, which potentially is going to hold a lot more information, it's going to be a much more interesting attack vector for fraudsters.
Well, that's exactly what I was going to say. That's going to be the thing fraudsters go after. And multifactor authentication is great, but, like we just said at the beginning, credentials can be phished away, and that's why it's really important to have unphishable credentials and to raise the overall user awareness of what fraud looks like these days. But it changes so quickly that it's difficult to keep up with the new vectors and the new techniques that fraudsters are using.
Right. And of course, raising awareness is important. I come from Norway, and there's a campaign there now: Stop - Think - Check. Because we see it in the news every day, somebody being, as I call it, hacked by fraudsters into doing something. And as you said, in that case the [...] doesn't help, because you are doing the transaction and you are entering the [...]; I'm just tricking you into doing that. So Stop - Think - Check is very good advice.
That's great advice. And that's something we all need to be vigilant about.
Absolutely.
Well, thanks, John Erik. Glad you're here at EIC, and I'm looking forward to your presentation on Friday.
Thanks again. Thank you for having me here, and it's good to be back at EIC.
Thank you.