So, thank you for joining us here. In this track, we're talking about trends and innovations with a heavier lean on things like deep fakes. How does identity verification play into this? How do we prepare the right people to make the right decisions about this?
So, there's several really interesting tracks that we'll be discussing today and there's always an opportunity for you to be involved in the conversation. So, please use your app. If you go into this session, you can enter in your questions. I'll receive them up here. If there's an abundance of time, I'll scan the room and if you have a question, you can raise your hand and I'll come to you with a microphone to ask those. But on the safer side, send those questions through the app and I'll receive those. That goes the same for our virtual attendees as well.
So, just because you're not here in the room with us doesn't mean you can't participate. So, we'll get started with our first session, which is a panel. Really a great treat. We've got a lot of really interesting people all in the room to give an executive alert: navigating AI-driven security threats for boards and C-suites.
So, we'll come over here to the panel and get to know everybody. Would you begin? Sounds good. I am Patrick Parker. I'm the CEO and co-founder of EmpowerID. Nice to be here. Absolutely. My name is Joseph Carson. I'm the Chief Security Scientist and Advisory CISO at Delinea. And I'm Andrew Hughes, VP of Global Standards at FaceTec and Chair of the Board of the Kantara Initiative. I'm Alexander Koch, VP of Sales EMEA at Yubico.
Thank you, all of you, for being here and bringing your different perspectives. And, Andrew, as I think you pointed out first, it's good to make sure that either we're on the same page or we understand where we're coming from when we look at this challenge.
So, AI-driven threats are here to stay. But how does the nature of AI security threats differ from the familiar ones? And how does that impact the organization's response? I can jump in on that. I'd say the first thing is that you're not going to be able to stop the employees from using AI.
I mean, none of us are going to be writing out documents anymore. So, if they're going to use AI, provide an approved and hopefully secure and governed channel in the organization that they can use, where you can at least know how they're using AI and implement governance policies. And then the other big thing is just the dynamic nature of AI, which we can get into later, is that you have new identity types.
So, when you're talking about zero trust and least privilege, there are all these dynamic identity types that might exist for a fraction of a second or a few minutes and then disappear. It may be related to a human identity, it may not be related to a human identity, it could be a hive of agents.
So, lots of new challenges that we just need to start talking about, make sure we have worked into our governance plans. Mm-hmm. Yeah.
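The ephemeral, machine-speed identities described here can be pictured with a small sketch. Everything below is hypothetical, a minimal illustration of issuing a short-lived agent identity with a time-to-live, not any real product's API:

```python
import time
import uuid

# Hypothetical sketch: an ephemeral credential for an AI agent that
# expires after a short TTL, so a compromised token has a small window
# and the identity disappears when the task does.
class EphemeralAgentIdentity:
    def __init__(self, owner, ttl_seconds=300):
        self.agent_id = str(uuid.uuid4())  # fresh identity per spawn
        self.owner = owner                 # may map back to a human, or not
        self.expires_at = time.time() + ttl_seconds

    def is_valid(self):
        return time.time() < self.expires_at

agent = EphemeralAgentIdentity(owner="jane.doe", ttl_seconds=300)
print(agent.agent_id, agent.is_valid())
```

Every spawn gets a distinct `agent_id`, which is exactly what makes these identities hard to govern with static access lists.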
Yeah, absolutely. I just want to add to that as well. You're absolutely right. We hear a lot of times about how the attackers are using AI, but they're not using it that much. The biggest risk for most organizations is employees taking company data and putting it into public AI engines in order to create new content. And that means we end up with a lot of company data now being exposed publicly. That's one of the major risks for organizations.
So, to your point, absolutely. They need guidelines and policies around what's acceptable use.
Now, in the threat landscape, attackers are not using it in the way that many people in the media assume, that malware is being updated in real time and generating these accelerated attacks. The way they're using AI today is primarily around identity compromise. It's around deepfakes. It's around things like business email compromise, or making phishing campaigns much more realistic.
So, they're using it to complement the existing techniques they use today. Absolutely.
Yeah, Andrew. So, in the last year or two, I've been focused quite heavily on identity assurance and proofing and onboarding, and now with the company I'm at, adding biometric matching and liveness verification into that mix.
So, of course, we see the rise of threats in faking real people. So, if you're doing remote identity proofing where everything looks real, your systems have to be able to detect it, and your humans can't anymore.
So, people are starting to have to shift the way they do remote onboarding, because if they had human agents in the mix, the humans can't cope when everything looks real. What are your thoughts on the Zscaler attack, where they used clips of the founder's voice and image and talked people into transferring money? I can't remember which attack that was. There's so many of them. That's sad, isn't it? Yeah. I can only agree with what was previously said. I think now the individual is getting more into focus, getting to the center of attacks.
Because if we look back at phishing attacks maybe two or three years ago, there were grammar mistakes in the emails. There was wrong spelling.
So, you immediately look at the email and say, oh, this is a spam email, so you delete it. Now it's getting more difficult for individuals to really understand what's going on. Is it a real thing? Is it not real? And with AI, it's getting much more detailed. It looks more real than it did before.
So, identity proofing and making individuals safer is much more important now. Also, looking at voice: voices can be easily cloned now.
So, we've seen different examples in the media in the past, where a voice or a quote was attributed to someone, especially politicians, and it was AI.
So, it seems like it's true. Absolutely. Just to add to that.
So, I'm based in Estonia. And for many years, the good thing about it is that the Estonian language is so complex. It is so challenging. It's very difficult. And for many years, the language has protected the country from phishing campaigns and social engineering, because it was very difficult to translate. You would need to pay someone to do a proper translation. And it was interesting that I was listening last year to the country's CERT statement. The CERT is the country's incident response team.
And they said that the language is no longer protecting society today because of the advancements in generative AI. It means that translations are so perfect that it's even better than Estonians' own language capabilities themselves.
So, it's interesting how advanced it's become. But the point is, it's really focused on those areas: things like deepfakes, business email compromise, or financial fraud. Those are the specific cases where it is being used. Because in most attacks today, the basic things still work. Password compromise, reusing passwords, credential compromise. The basics still work.
So, the majority of attacks are still using non-AI techniques. And even though it's not common, everybody loves to hear about a good new hack. There was a recent one called BlackMamba, where you download an innocuous executable that would pass any scanner because it didn't have any malicious code in it. All it did was call out to a well-known, trusted OpenAI API endpoint, which seems trustworthy. But that would dynamically generate code that could execute locally. And every time it ran, it would regenerate, so it's polymorphic, different code on every run.
So, literally, you couldn't really get a signature for it. So, it's not common yet, but it's basically a proof of concept for all the hackers out there to see what they could do.
Yeah, so, to pull in the different ideas and contributions from each of you, it's a question of reality. And are humans in these settings a good judge of reality?
And so, with that, how do you make an organisation resistant to that? How do you equip the people so that they are a good first line of defence, but also put in the safeguards to keep their errant and fallible decisions and behaviour from causing too much harm? I think anyone who has parents on Facebook knows that's impossible. LAUGHTER Friends of mine ask me about things, or share them with me, all the time, and I'm like, how did you possibly believe that was true?
So, it's a tough one. I mean, I had an idea yesterday during Mike's keynote, but he shot it down for a reason I hadn't thought about. The idea was that maybe everyone could have to digitally sign any content they create, so you could verify the identity of the content producer. But then he rightly pointed out: what about people in countries with dissidents, where free speech isn't allowed? That would really just shut all that down.
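The content-signing idea can be sketched in a few lines. A real scheme would use public-key signatures (for example, Ed25519) bound to a verified identity; the HMAC below is a shared-key stand-in, used only to keep the sketch self-contained and runnable:

```python
import hashlib
import hmac

# Hypothetical sketch of "sign your content so the producer is verifiable".
# HMAC with a shared key stands in for a real public-key signature here.
def sign_content(content: bytes, key: bytes) -> str:
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, signature: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign_content(content, key), signature)

key = b"creator-secret"
sig = sign_content(b"my article", key)
print(verify_content(b"my article", key, sig))       # signature matches
print(verify_content(b"tampered article", key, sig))  # tampering detected
```

The verification step is what breaks down for deepfakes today: without any signature, there is nothing cryptographic to check.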
So, think about those types of things. I think the good news, for everyone in the room, is that the advancements in defensive AI capabilities are accelerating beyond the ones being used for attack. Attacks using AI from the cyber-criminal side will come in the future, but the good news is that today it's the defensive side which is accelerating, which means organizations are looking at ways to use AI to do things quicker, more scalably, to make decisions much faster.
So, that's the good news: the defensive side is, at this point in time, looking to be ahead of the attacking side of things. So, I'm hoping that that pace will continue. Absolutely. I think from our point of view, customers are changing a lot.
So, we come from a more technical point of view, saying, okay, we have a phishing-resistant MFA, and it's important to have a phishing-resistant MFA. But now, with these AI-based attack vectors, companies are moving to phishing-resistant users. They put the individual in the middle of protection, which is important now, because the individual, in the end, needs to be protected. And if we look at organizations over the last years, probably they would start with the most exposed individuals, like privileged-access users or others.
Now, they say, we need to protect the entire organization. We need to take care of every individual, because AI especially uses the weakest point of entry. And this is a change, and it's also important for organizations to understand that they cannot simply select some populations. They need to protect them all. This is important. Absolutely. And... I got nothing. You got nothing. For the next one, I think he's got something. As Patrick pointed out, everybody loves a good hack. But what's not impossible, just sometimes a little harder, is finding good prevention.
What organizations are out there? What models are they following? What solutions are they using for good prevention? I think recently OWASP released the Top 10 list for LLMs, which is good.
I mean, one of them ties into least privilege. If you're going to have autonomous agents, definitely, and I hate to use a buzzword, but you want to reduce the blast radius, based on how much damage they could do with the permissions they have at any one point in time. So, in your zero trust, you're going to have lots of new points where you have these agents that are spawning dynamically and using different types of tools.
And you want to make sure that they are authorized to do that, how they're authorized, and that they have the least privileges possible to accomplish just the tasks they're authorized to do. Yeah, so it's a lot more to audit and encompass and try to control.
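A minimal sketch of that least-privilege idea for autonomous agents, with hypothetical names throughout: each agent instance carries an explicit set of tools it may call, and every call is checked against that set before it runs:

```python
# Hypothetical sketch of least-privilege tool authorization for an agent.
# The agent's scope is fixed at creation, so a hijacked "summarizer"
# cannot suddenly start deleting files.
class ScopedAgent:
    def __init__(self, name, allowed_tools):
        self.name = name
        self.allowed_tools = frozenset(allowed_tools)  # immutable scope

    def call_tool(self, tool, *args):
        if tool.__name__ not in self.allowed_tools:
            raise PermissionError(
                f"{self.name} is not authorized to call {tool.__name__}")
        return tool(*args)

def read_report(path):
    return f"contents of {path}"

def delete_file(path):
    return f"deleted {path}"

agent = ScopedAgent("summarizer", allowed_tools={"read_report"})
print(agent.call_tool(read_report, "q3.pdf"))  # allowed, returns contents
# agent.call_tool(delete_file, "q3.pdf")       # would raise PermissionError
```

The key design choice is that the permission check sits in the call path itself, not in a policy document nobody enforces.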
Yeah, absolutely. One of the things we can use today is the ability to quickly make predictions, reducing the amount of data we have down to things we can actually take real action on. And that's what's critical. For example, if you're looking at lots of auditing and SIEM information, logs and data, what are the things that really matter most? We can apply AI algorithms to that. And that's what organizations are going to be provided with: algorithms that will allow you to take your datasets and analyze them, in order to fundamentally understand what's important for you to take action on.
And that's where we get the real return on investment: reducing wasted time, allowing employees to focus on the things that matter, and getting to the point where you can apply zero trust principles or the principle of least privilege, so that when you're taking a specific action, there are other signals indicating that the action, that access, authentication, or authorization, is actually justified and approved.
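The idea of reducing a mass of log data down to actionable items can be sketched very simply. The example below is hypothetical and uses naive frequency-based rarity in place of a real AI algorithm, just to illustrate the shape of the approach:

```python
from collections import Counter

# Hypothetical sketch of "reduce the data to what matters": score log
# events by how rare their action is, and surface only the rarest few
# for a human to investigate.
def rare_events(events, top_n=2):
    counts = Counter(e["action"] for e in events)
    # rarer actions sort first, so anomalies surface at the top
    return sorted(events, key=lambda e: counts[e["action"]])[:top_n]

logs = [
    {"user": "alice", "action": "login"},
    {"user": "bob", "action": "login"},
    {"user": "carol", "action": "login"},
    {"user": "mallory", "action": "disable_mfa"},
]
for event in rare_events(logs, top_n=1):
    print(event["user"], event["action"])  # the one unusual action
```

A real pipeline would use learned baselines rather than raw counts, but the return on investment is the same: humans see four lines instead of four million.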
And that's where we're really getting to a point where that can be done in real time today, because of the power that we have with AI. That's a good point, this zero trust. This brings us back again to the individual, because with zero trust, you need to prove that it's a real individual and that it's the right individual.
And this, yeah, this is important for getting access to resources. And then...
Yeah, I was talking with a government recently about their services card application. They use human agents to interview applicants to activate their app. And they can't have enough agents, and the agents only work business hours. And they're starting to try to figure out how to deal with the onslaught and the scale of fakes, right?
So, if the agent is intermediated by remote devices and networks and all that, and they can't tell real from fake anymore, you can put biometric verification in front. If you can automate, say, 80% or 90% of the applications, then your human agents can deal with the exceptions, and maybe phone applicants up and use other channels to verify the human. It lets them focus their expensive time on the cases that can't be automated.
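That triage pattern, automate the high-confidence cases and route the exceptions to humans, might look like this in outline. The score name and threshold are illustrative assumptions, not taken from any real verification product:

```python
# Hypothetical sketch of automated onboarding triage: applications with
# a high biometric confidence score are auto-approved, everything else
# goes to a human agent for other-channel verification.
def triage(application, auto_approve_threshold=0.95):
    score = application["liveness_score"]  # assumed output of a biometric check
    if score >= auto_approve_threshold:
        return "auto-approved"
    return "route-to-human"

applications = [
    {"id": 1, "liveness_score": 0.99},  # clearly live, no human needed
    {"id": 2, "liveness_score": 0.60},  # ambiguous, needs an agent
]
for app in applications:
    print(app["id"], triage(app))
```

The threshold is the policy lever: raising it sends more cases to humans, lowering it accepts more risk in exchange for scale.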
So, that's part of the discussion: using new, better tools to offload the burden from the humans, so that they can focus on the things people are actually doing. Which actually reminds me of the project with eu-LISA, where they did the Schengen area passport systems. If you ever go through passport control in the EU, when they scan the passports, the system can automatically detect whether you have authorization to enter or not. And the only thing the border guards now need to do is look for the anomalies.
They look for the flags that say this is something suspicious. And it allows them to focus on who they should have additional questions for. And in that same process, when you think about border guards and immigration and customs controls, we apply those same principles in identity, authentication, access, and authorization. Did you want to jump in? Just one thing.
So, I mean, the vision of AI is that you pump all your critical business data, including logs, authorization data, and business information, into some vectorized store that you can put a RAG on top of for retrieval augmentation. And then you can talk to the AI, and the AI can access all your data and give you insight.
Well, if you have a hacker who's trying to map out your network, it's not the old BloodHound approach where they're trying to track through your Active Directory. Now they have an insight engine they can query to know who's doing what, to figure out who has the access they need. So it's a goldmine, a treasure trove, for hackers to instantly map your network, with you having done all the legwork. So you have to really think about how you're going to control authorization of who can access which data in this big, amorphous, vectorized data landscape. And there are cases of that, exactly.
So, to the point where if an organization becomes compromised and the attackers are able to extract that data, it's interesting, I've seen cases where the attackers have analyzed the financial records of the organization to know how much that organization can afford to pay in a ransom. I've also seen it where they've analyzed the data and determined that the organization had actually committed fraud. And therefore, they filed the 8-K form with the SEC themselves. The attackers filled it in for them and filed it directly.
So, you have the ability to say, yes, we're creating the data to make it valuable for us. But that data also becomes an attractive target for the attackers as well. Absolutely.
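The authorization problem raised here, controlling who can retrieve which data from a vectorized store, can be sketched as filtering candidates by the caller's entitlements before anything reaches the model. Everything below is a hypothetical simplification; keyword matching stands in for vector similarity:

```python
# Hypothetical sketch of authorization-aware retrieval: filter the
# candidate documents by the caller's group memberships *before* they
# ever reach the LLM, so the model cannot leak data the user can't see.
def authorized_retrieve(query, documents, user_groups):
    # enforce the ACL first, then do the (naive, keyword-based) retrieval
    visible = [d for d in documents if d["acl"] & user_groups]
    return [d for d in visible if query.lower() in d["text"].lower()]

docs = [
    {"text": "Payroll totals for Q3", "acl": {"finance"}},
    {"text": "Q3 marketing plan", "acl": {"marketing", "finance"}},
]
hits = authorized_retrieve("q3", docs, user_groups={"marketing"})
print([d["text"] for d in hits])  # only the marketing-visible document
```

The order of operations is the point: filtering after generation is too late, because by then the model has already seen the restricted content.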
Somehow, we're approaching the end of our time. It's been such an interesting conversation that we dove right in. So let's wrap it up with something that people can hold on to and remember. For the C-suite, for the C-levels, what recommendations do you have? What are their major takeaways? I'd say form a team now. Start mapping your vulnerabilities and your technologies, approving what you're going to use and what users are not allowed to use. And then start thinking through your red team and blue team exercises to get prepared for it. Absolutely. I think that's an amazing point.
In addition to that, while you get your team together, also define your guardrails and your acceptable-use guidelines within the organization. I think it's really important to set that right now, because it's really difficult to make changes later, once your employees are already used to a certain way of working. That cultural change is hard, so I think it's important to set the boundaries today.
OK, I'm a standards guy. So my answer is, fully fund your standards people to go write this. Please.
No, the techniques are changing, and the standards are out of date because they're not keeping pace. And there are many of us working on fixing them, so that we can have the rising-tide-lifts-all-boats scenario, especially on the proofing side.
Yeah, so my final note on this one: I think it's very important for the C-level to understand, make it easy, easy to adopt and easy to use for your employees. That's the most important thing. And not only looking at office workers that are used to working with IT, but also blue-collar workers, if you look at production areas. They need to have technology in place that helps them protect their environment, their access, their identity, that is easy to use and that will be accepted. Absolutely. Thank you so much to all of you. Thank you. Thank you. Thank you.