So obviously we're staying with the theme of cyber resilience, and we're also staying with the theme of AI, because our next speaker is going to be looking at the steps that you can take to prepare for the impact of AI. So please welcome the Regional Director of the Information Security Forum, Joshua Hunter. Just as a very brief introduction, my name is Joshua Hunter. I'm the Regional Director for the Information Security Forum. I come from an ex-military background and spent about seven years of my life in Eastern Europe.
No guesses where I might have been, but it rhymes with Ukraine. I then spent a lot of my time in the Middle East as well, on counter-ISIS operations against Daesh. Fast forward to today: I spent a bit of time with startup businesses, and now I find myself at the Information Security Forum. I'll go to the next slide just to give you a very brief overview of the ISF. We have just over 500 member organizations, and of the DAX 40, just over half sit within our membership.
We're very much governed and led by them, and very business-focused on those critical solutions, but very much a global network. So: if this is a problem for me in this country, what are you doing in Europe? What are you doing in America? What are you doing in the Far East? All those kinds of questions come up. One of the big things we do is look into the future, at future threats and things that are coming. To give you a bit of context, this is a report we do every year; this year we look out to 2027, and it's due to be released in January.
I thought I'd pick out a couple of the key themes we're looking at. Identity is one of them; geopolitics is another. I don't think that's a massive surprise to anyone here: you only have to look at your news apps this morning and see what's going on in France, what's going on in South Korea, all those different places. Very much that's going to shape everything going forward, from our supply chains to everything we look at, along with the impact of non-state actors against organizations.
That really resonates with what Mike said as well. Data, again: the integrity of data, the fact that it sustains every business, and protecting it is going to be a key thing. We categorize these threats into monitor, assess, prepare, and act. But it's very much an idea of, for your organization, how do you approach it? How do you deal with it?
About 140 CISOs from across ISF members are involved in this piece of work. We went to them and asked: what are you looking at? What are you acting on now? What is the focus for the next two years? These are the main themes that came out. Culture wars eroding security education: for the first time, we have three or four generations in the workforce with different views and different understandings of what counts as sensitive data for my organization, how that differs, and how we handle its release.
Especially when we talk about loyalty and all these different aspects, because of the polarization of things through social media. The centre ground is disappearing: you're either left or you're right, and the in-between doesn't exist in the way it once did. Then there are the constant cloud service movements, the push to have greener data centres, having to move that data, all those kinds of things as well.
AI software add-ons, copilots: the number of organizations I speak to where, from a very senior board level, they will come down and say, we're using AI in the business, make it safe. That's obviously a huge focus, and so is understanding a framework where you can do that. Cyber criminals and AI as a service: we already have ransomware as a service, and that's only going to compound and get worse. And then the other one is threat intelligence failing: misinformation, fakery, acting on completely the wrong thing because there is so much noise out there.
We then looked at the priorities: what do we see you really focusing on? Security culture and company loyalty training is a massive one. Deepfake awareness at finance and executive level: we hear all the stories. We do a ton of cyber simulation exercises for our members where we use deepfakes, and even the most seasoned people in the room will fall for them. That's something we're seeing quite a bit.
Global events and regulatory change: NIS 2, the CRA, all these different things coming out, and it's only going to expand. And then that big focus on resilience: what does a minimum viable operation look like, what's the minimum I have to have as a business? We talk with lots of banks: what's the minimum bank that we can move forward with? Once you understand where you sit as an organization with these threats, that's the right time to talk about the strategic challenges.
I think everyone in the room is quite aware of this, but it's really understanding what good practice looks like and how I can be in a position where I'm ready to face these. Which brings me on to the meat of what I'm talking about. This was put together by a former colleague of mine who has since retired, Andy Jones. Andy Jones, if you didn't know, was the CISO at Maersk when NotPetya happened.
We sat with him, and with a number of other organizations that were hit by this real extinction-level attack, to understand what they could do before it, during it, and afterwards to get the business back up and running: to achieve that idea that we have resilience, we know what to do, we can work through this and move forward. So it's a case of prepare, then the attack happens, then respond and resume, and we looked at the different things you can do within each stage.
The first thing we did was break it down into the five aspects we want to look at: what governance, leadership and direction we have, what processes we have in place, who our stakeholders are, what technology we have, and of course data, which is a really key one. Everything we do, we then look at through these lenses.
Within the prepare stage: build that threat profile, understand what's coming at the business and what's most relevant to you from all the information you have. Then the classic cyber hygiene: the list of assets, patches, all those kinds of things. And becoming more resilient: understanding what that looks like, what those processes are, what we're ready for as an organization, maybe what we've dealt with before, and learning those lessons.
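The cyber hygiene step mentioned here, keeping an asset inventory with patch status, can be sketched as a simple check. This is a minimal illustration only; the asset names, version numbers, and the `LATEST` feed are hypothetical stand-ins for a real CMDB or vendor patch feed.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    installed: tuple  # version as a comparable tuple, e.g. (2, 4, 1)

# Hypothetical latest-known-good versions; a real check would pull these
# from a CMDB or vendor advisory feed.
LATEST = {
    "web-frontend": (2, 5, 0),
    "db-server": (14, 2, 0),
}

def unpatched(assets):
    """Return assets running a version older than the latest known patch."""
    return [a for a in assets if a.installed < LATEST.get(a.name, a.installed)]

fleet = [Asset("web-frontend", (2, 4, 1)), Asset("db-server", (14, 2, 0))]
print([a.name for a in unpatched(fleet)])  # ['web-frontend']
```

The point is less the code than the discipline: the prepare stage assumes you can answer "what do we have, and is it patched?" on demand.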
The other big one, which I'm a huge fan of (probably the ex-military background), is exercise, exercise, exercise: very much that train hard, fight easy mentality. We do a ton of these, and we really go for extreme scenarios: tons of injects, looking at which key stakeholders are getting involved, what the press is asking, what it means for your shareholders, what it means for the organization, all those different aspects brought into it.
That builds confidence in the team: if this does happen, if this goes really bad really quickly, I know that person is doing that, I know I can rely on the SOC for this bit, and everyone knows what their role is, because they've rehearsed it, because they understand it, because they're prepared for it. That's really important. And then it's looking at the critical questions you need to ask. What if we don't get any help? What help do we have access to? Who do we have on retainer?
Who can stop this? If it's ransomware, for example, do we have a company that could potentially broker that deal?
I mean, no one pays the ransom, of course, but it's a very big business, so they have to be getting money from somewhere. So it's really understanding that, and then the backups, the huge thing Mike was talking about: what do they look like, how quickly can we get them, what steps would we have to take to restore them, and how can we validate them?
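The backup-validation point can be made concrete with a small sketch: compare a digest recorded at backup time against a digest recomputed from the stored copy. The snapshot contents here are a hypothetical placeholder; real pipelines would validate whole archive files the same way.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def backup_is_valid(backup: bytes, recorded_digest: str) -> bool:
    # A backup you cannot validate is a backup you cannot trust in a crisis.
    return sha256_hex(backup) == recorded_digest

snapshot = b"...nightly snapshot contents..."
digest_at_backup_time = sha256_hex(snapshot)  # record this alongside the backup

print(backup_is_valid(snapshot, digest_at_backup_time))                 # True
print(backup_is_valid(snapshot + b" corrupted", digest_at_backup_time))  # False
```

Running this kind of check routinely, not just during an incident, is what turns "we have backups" into "we have backups we can restore from".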
Responding to a crisis, the stage after it's happened, is very much about establishing roles: what are we doing, what is my responsibility, where does that sit, with everyone knowing what they have to do and a clear idea that this is the aim, this is what we're focusing on, this is how we're going to get through it.
And then the communication piece. There was a vendor outside that we were talking to earlier about having secondary, safe communications you can talk through, so that if people don't have the ability to use email or Teams, you know what that looks like, instead of falling back on things like WhatsApp and other such channels. The other big one is managing the human reaction.
People from a more security-focused background, if you get that ransomware, the skull and crossbones on the screen, we kind of get it. But for someone working in marketing or HR or other parts of the company, that's actually quite an upsetting thing to happen, so really understand how you need to deal with that, working with HR as well. The other aspect to look at is the hero culture: one person taking it all on on their own.
I was speaking to an Austrian company not that long ago; they had a really big nation-state attack within their systems, and it took them about six months to clear it out and make sure it was safe again.
They understood the roles, but it was the management of people, making sure one person didn't take it all on, because that burnout, that inability to keep going and to pass information on to other members of the team, is really something you have to manage and think about. If you get hit today, someone might hit you tomorrow, and so on; famously, when you're down, people will come and start kicking even more.
And then organizational governance, and, as we said, assessing the technology: identifying what is a viable system, what is safe and secure, what we can move on with, with the ultimate focus of ensuring that the company still exists at the end of it and doing everything you can to keep things running as normally as possible.
The last bit is the resume stage: really thinking about the systems restore, everything Mike spoke about, to get back up; good governance; resetting the behaviour; and then seizing the initiative, which is a big one. It's a very good opportunity to get more budget for your department, which is always quite critical, but it's also about refreshing that awareness, looking at how the incident occurred in the first place, and working out how we can step forward from there.
And then the plan to survive: what's the short term, what do we have to do to ensure we can learn from that attack scenario, take it, look forward, and work from there. So in summary, it's always a case of when, not if, and collaboration is always key.
We have lots of conversations between different people within organizations where, as Max mentioned in his earlier speech, you talk about when you messed up, when you didn't do things right, when you weren't prepared, when you didn't have the ability to respond.
And then practice, test, exercise: those tabletop discussions, maybe an immersive one here and there, but really get to the point where everyone is working through, this is my role within this plan, this is what it looks like, this is where I sit within it, so it's really clear. Taking people out of the equation is also a really powerful thing to do. Typically, a lot of organizations have the hero or the rock star, the fountain of knowledge who understands everything within the organization.
So have the ability to work through and take them out of the situation: okay, that person doesn't exist today, that person's ill, that person leaves the company tomorrow. What does it look like? Where are our shortfalls? Where can we learn and expand from there? And then, classically, never waste a good crisis. Never find yourself in the position where you don't take those lessons: learn from it, build those action points, get your incident response plans out, build on that and go forward.
I rattled through that relatively quickly, but yeah, that's very much it in a nutshell. Joshua, we don't have any questions online just yet, but we have some time in hand. Are there any questions in the room?
Of course, Mike, please. It would be interesting for you to describe some of the ways AI, gen AI, is being used. I'm asking for some examples of how the bad guys are using gen AI to make things worse during a crisis.
Yeah, so it falls into that exploitation: understanding when there is a crisis. We've seen examples where they've worked out who's likely to hold different responsibilities, and then it's engineering the responses those people might expect. So, as I was talking about, using a deepfake voice of a CEO or a CFO to say, look, if the ransomware demand sits between X and X, that's okay, I authorise the payment, because ultimately they're trying to get the money.
So it's basically exacerbating and playing on that weakness: you're stressed, you've got people asking questions, especially if it goes into the open sphere where people know about it. Another area you might talk on, because of your ex-military background: people in the computer industry are often very reticent. How do you make sure that you set objectives and describe them in a way which is unambiguous and motivational?
Yeah, that's a really good question. I think it's that clarity: this is where we're trying to get to. And it's part of the rehearsals; we always called them ROC drills, rehearsals of concept. It's effectively saying, at this point, this is what I expect to see. If I don't see that, then I have to go left or right, do whatever I need to do. It's exactly the same concept: if everyone is bought in, everyone understands where they sit, and things are delegated well, you can really move forward.
And then it becomes a point of, yes, I know what to do here, I don't have to ask for permission to do this, there's that real mission command, because that focus is on the ultimate goal. And that takes time to build, right?
In the UK, there is this concept of gold, silver and bronze command. Now, is that any good in these circumstances? What is your experience of all of this?
Yeah, the gold, silver and bronze model is classic. For anyone that doesn't know, it's basically an escalation: the classic question of, at what point do I go and wake up the CEO and say, this is big, you need to make a decision here? Those tiers are really useful, because again it's delegation, it's understanding that, and it shares the workload. It's all about having that conversation beforehand and setting clear parameters: at this point, we go and get him out of bed and he has to make those decisions.
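The "clear parameters agreed beforehand" idea can be sketched as a tiny lookup: encode the escalation thresholds before the incident so nobody is improvising them at 3 a.m. The tier names follow the gold/silver/bronze model discussed here, but the severity scale and thresholds are purely illustrative, not a prescribed standard.

```python
# Pre-agreed escalation thresholds (illustrative values on a 0-100 scale).
SEVERITY_TIERS = [
    (80, "gold"),    # strategic: board/CEO decisions, e.g. ransom or public statement
    (40, "silver"),  # tactical: incident commander coordinates the response
    (0,  "bronze"),  # operational: SOC handles within normal authority
]

def command_tier(severity: int) -> str:
    """Map an incident severity score to a pre-agreed command tier."""
    for threshold, tier in SEVERITY_TIERS:
        if severity >= threshold:
            return tier
    return "bronze"

print(command_tier(25), command_tier(55), command_tier(90))  # bronze silver gold
```

The value is in the conversation that produces the table, not the function itself: once thresholds are written down and briefed, "do I wake the CEO?" stops being a judgment call made under pressure.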
So if you use it in the right way, it's really good. If you don't, it becomes a cop-out: I'm not going to act because that's beyond my pay grade, et cetera. I really like it as an idea; it just has to be maintained, understood at all levels, and briefed up and down. At their heart, though, are AI-accelerated attacks any different to the other threats that organisations face, other than just being faster?
Or are there any metrics that CISOs can track to say, yes, we're coping with this brave new world of AI-accelerated attacks? Or is it just business as usual, just having to be aware that you have to be all the more prepared?
Yeah, I mean, there's an element of business as usual, for sure. But there is that ability to go deeper, to really hit, and again it comes down to the human factor: the ability to make people fall for things, to disengage, to discombobulate, to really mess things up.
But for everything new that comes in with AI for the adversaries, there's new AI tech to support CISOs too. So again it's looking, as we spoke about, at what tools I have, what access I have to them, and working from there. And in terms of support from the ISF, are there any places people can go to interact, as Mike was advising, to look at best practices?
Yeah, so the ISF has a framework which is the skeleton of everything we do, called the Standard of Good Practice. We have a huge area on AI as a topic as well, and it very much breaks it down. Everything within it is linked directly to NIS 2, DORA, PCI DSS, all the major frameworks, ISO 27000; it builds all of that in, but it looks at what is best practice, and that is then cross-referenced against our members to make sure it aligns with what they do, or what they think is best practice.
Okay, thank you very much indeed. Round of applause for Joshua Hunter.