We'll move smoothly into the panel now. You've just heard Thomas Tschersich talking about AI. I have two more very, very qualified CISOs in the room who will join me for the panel over here: Andrzej Kawalec, Head of Cyber Security at Vodafone Business — welcome, Andrzej. And Max Imbiel, whom you already saw on stage yesterday, the Deputy Group CSO of N26. And Thomas is joining us virtually as well, but by now we're used to doing these things both virtually and in person.
So, Thomas, building on what you just said in response to Max's question, and Berthold's question as well, let me start with a warm-up question for all of you. Max, let me start with you over here — and you're allowed to say more than one word: AI, threat or opportunity? Both. Two words. I definitely see and understand why AI can be a threat in certain respects — as Thomas just said, there will potentially be jobs impacted by the rise of AI. But on the other hand, AI will bring other opportunities as well, right? New job opportunities.
We heard this over the last two days: there will be new jobs, like prompt engineers, who basically try to come up with the right terms to feed the AI so that you really get the results you need. And of course there are opportunities in technical areas as well — technical advantages like having an AI write a program, and moreover a secure program. That will be a whole different area where we will see, again, threats, but also opportunities. So I think both.
And Andrzej, threat or opportunity? Massive threat, huge opportunity. And I think it comes back to something Thomas talked about: the differences in scale. If you look at how AI is being used by cybercriminals at the moment, they are automating the early stages of the kill chain to drive noise and volume of attacks. It's also being used to drive huge information operations, misinformation, and fake news.
Next year, in 2024, we've got 40 major elections around the world, covering 50% of the world's GDP — starting with Taiwan, then the US, India, possibly the UK — a series of elections which could be hugely influenced by information operations and fake news driven by AI at scale. In the 2016 US elections, I think the estimate was that 25% of all the data was fake news or linked to fake news sources, and 15% was bot-driven. I think we're going to see that threat at huge scale. The opportunity, though, speaks directly to something we addressed earlier: the talent gap.
AI will not take over the role of a CISO. But what about organizations that do not have a CISO? Look across Europe: even in the SME space alone, I think there are nearly 20 million businesses and organizations that do not have access to security advice. So how do they get it? Can we place a virtual CISO — a copilot, a bodyguard — next to those people, one that can advise them on what to do, how to understand phishing attacks, how to remediate an attack, how to respond in real time?
So I think the opportunity is really there: not to replace CISOs, but to scale CISO capability to places it would never reach otherwise. It's interesting — I was talking to Mark Hofmann about this yesterday, and he mentioned that one of their first use cases is an AI telling the IT person what controls to consider for their type of tool. So there is a lot of opportunity.
Max, before you respond to that — Thomas, I want to conclude the first threat-or-opportunity question. Any additional thoughts on top of what the two guys said and what you said earlier? I'm not so sure whether it's a threatening opportunity or an opportunistic threat. So maybe it's both. And it's happening already; that's the truth. Take, for instance, a major software supplier — pick one from the US West Coast — providing a software update. What they tell their customers is: here is a critical software update, you should implement it very fast.
What they don't tell you is how the underlying vulnerability works and what the threat is in detail. And what we see these days — and this is totally different from the past — is that a couple of hours after the software is released, we see fully automated attacks scanning the entire internet. Five years ago, it was months between the release of a critical update and the first exploits we saw in the wild. And this has changed purely because of automation — I won't even say AI, but automation on the threat actors' side as well.
And now think about what it will mean when they really use AI, which is much easier to use. So we need to catch up on the defense side with the technology. But I would clearly say it's also a huge opportunity for us as a community to build a new kind of capability here. Max? I just wanted to jump on what Andrzej said regarding organizations that don't have a CISO. You already have people out there offering their services as a vCISO — a virtual CISO. So that term has a whole new meaning now, right?
You could write a custom GPT that is a vCISO GPT and offer it to everyone out there, and they could prompt it: acting as a CISO, what should I do in this case? And I think that's exactly the point Thomas brought up — it hit some of the real highlights of his talk. Part of that, for me, is: is this a trusted virtual CISO? Is this a trusted avatar?
Because if so, then I can believe that the playbooks, the advice, the options I'm being given come from a trusted space. I wouldn't want to just ask anybody — ask the Microsoft paperclip — what I should do with my environment or my controls or remediation. But for a lot of people and a lot of organizations, even a simple question — is this phishing or not? — answered with a 90, 95% probability is a really big step forward.
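The phishing triage idea above — a quick yes/no with a confidence score — can be sketched in a few lines. This is an illustrative toy, not any vendor's product: a real assistant would call a vetted LLM or trained classifier, and every keyword and weight below is an invented assumption.

```python
# Toy sketch of instant phishing triage. Real systems would use a trained
# model or an LLM behind a vetted prompt; these keyword weights are made up
# purely to illustrate the "answer with a probability" idea from the panel.
SIGNALS = {
    "urgent": 0.2,
    "verify your account": 0.3,
    "password": 0.2,
    "click here": 0.2,
    "wire transfer": 0.3,
}

def phishing_probability(email_text: str) -> float:
    """Return a rough 0..1 phishing likelihood from keyword signals."""
    text = email_text.lower()
    score = sum(weight for phrase, weight in SIGNALS.items() if phrase in text)
    return min(score, 1.0)

def triage(email_text: str, threshold: float = 0.5) -> str:
    """Give the employee an in-the-moment verdict instead of a mailbox wait."""
    p = phishing_probability(email_text)
    if p >= threshold:
        return f"likely phishing ({p:.0%})"
    return f"probably benign ({p:.0%})"
```

Even a crude scorer like this demonstrates the shape of the round trip: the employee asks, and gets an answer in milliseconds rather than days.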
And you need in-the-moment prompting and advice, as well as, I think, some CISO guidance. And that's the point you made before, Andrzej — it's super important. Coming back to the deepfake point: Thomas is one of the major advocates of cybersecurity in the German market. Nearly everybody in that space knows him, and a lot of people beyond your clients, Thomas, know you as well. So if you are deepfaked giving guidance to those companies that don't have a CISO — good luck.
Yeah, there will be a lot of damage coming out of that. That's why we need to be mindful. I think we're all aware of this. Sorry — it's not only that.
If you ask ChatGPT, just as a simple example, how to build a bomb, you won't get an answer. But if you ask how to protect against a bomb, and ask a few more questions around it, you also get the advice on how to build one. And this is exactly the topic of the trustworthiness of these kinds of models and how to ensure it in the future. In my view, this is an unanswered question today. Absolutely. By the way, anecdotally: when I asked ChatGPT, as a bank, whether I should block it or not, I got a yes or a no depending on how I asked the question. You can even use ChatGPT to educate yourself.
So we've talked a lot about the threat. I want to have sort of one more round talking about the opportunity. Some of those were mentioned, but where are the big opportunities really for us as CISOs or for us as a sort of cybersecurity community to make use of AI? So you know, one of the topics that we're always talking about and always hearing is this topic of human factor, right? So the human factor in cybersecurity and security in general, and why this is so important.
So I guess one of the main benefits of AI could really be in training people — how to train them properly, and maybe even train them at the specific moments when it's needed. An idea that just popped into my head: an AI solution that could sit on your laptop, right?
Or even on your smartphone, and checks whether you're at home, in the office, or on the move somewhere, and then tries to determine whether what you're looking at on your screen, who you're speaking with, and the contents of what you're saying are appropriate for the situation and the location. And then it might simply shut down the machine, saying: look, you're sitting on a train, you don't have a privacy filter, and there's sensitive content on the screen — so I'm shutting off now, right?
Of course, that's not so easy to implement properly. But that would be a good, more preventive measure: leveraging an AI capability for scenario-based training of humans as well. We often say there's no patch for people, right? But I think AI is the patch for people.
You know — artificial intelligence, human stupidity; there's a lovely link between the two. Who would you trust more, a stupid person or an artificial intelligence? And therein, you're absolutely right. Up front, think about how you provide real-time, in-the-moment guidance that helps people make risk-based decisions. All of us in the room know that humans are really bad at making nuanced risk-based decisions. We jump off buildings, we cross the road when we shouldn't, we drink and eat things we shouldn't.
We're really bad at making risk-based decisions. AI guidance can give us those in-the-moment nudges. And at the other end — you talked about preventive measures — speed and automation of response would change the operational heavy lifting for security teams. The more we can automate the playbooks (there always has to be a person in the loop), and the more of the operational heavy lifting we can take on, the better we will be at patching people and systems and networks and infrastructures.
So I think there is an opportunity to change the operational way we engage with users, but also to change the efficiency and effectiveness of our delivery teams. I like the example Anthony from Elastic brought up this morning, where he showed that the alerts popping up in their SIEM could already be automatically routed, via AI, back to the source — to the employee — asking: hey, we saw this alert. Was that really you? Do you have a justification for this action?
And if the employee puts something in and sends it back, it goes through and says: okay, that's fine, then we don't treat this as an alert but as a false positive. That would otherwise have taken quite a few manual human steps. And it changes the speed of reaction, right? At the moment — I don't know what it is in your organization — the round trip on a phishing email is: a normal employee looks at it, doesn't know, sends it to the phishing mailbox, and may not get a response back for two days saying, no, that was okay.
Or no, that wasn't okay. I don't think that in today's world a 24-hour or 48-hour round trip for something that critical is appropriate or effective. In-the-moment, direct response is where we need to be. Yeah, I agree. Thomas?
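The alert-justification loop described here — send the alert back to the employee, and a plausible justification downgrades it to a false positive — could be sketched roughly as below. This is a hypothetical illustration, not Elastic's actual API; the `Alert` fields and status names are invented for the example.

```python
# Hypothetical sketch of the "ask the employee first" loop from the panel.
# Not Elastic's API: Alert's fields and the status strings are made up.
from dataclasses import dataclass

@dataclass
class Alert:
    id: int
    user: str            # the employee whose activity triggered the alert
    action: str          # e.g. "bulk data export"
    status: str = "open"
    justification: str = ""

def ask_employee(alert: Alert, reply: str) -> Alert:
    """Route the alert back to its source. A non-empty justification
    downgrades it to a false positive; silence escalates it to an analyst."""
    if reply.strip():
        alert.status = "false_positive"
        alert.justification = reply.strip()
    else:
        alert.status = "escalated"
    return alert
```

The point of the sketch is the routing decision, not the data model: the analyst queue only ever sees alerts the employee could not explain, which is what collapses the multi-day round trip.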
Yeah, I think we don't even have a clue yet about all the benefits we'll get in the future. Take a different example: think about a surgeon looking for breast cancer, judging from X-ray images. There is already technology that can do much better and see far tinier details than a surgeon could ever see. That's a clear benefit coming out of pattern recognition — the same technology we use in cyber defense. So it's the same kind of technology, now used for good, really helping and supporting people with their health.
And eventually we'll also learn a lot in those disciplines that we can bring back to the cyber domain. For example, we worked a lot with our T-Labs in Be'er Sheva in Israel, together with surgeons. They know pretty well when a patient is going to collapse, just from the parameters they're measuring — blood pressure, blood oxygen, and so on. Out of that, they built algorithms to predict when a patient is going to collapse, and they're able to foresee it five minutes before it really happens. And so they can save lives.
And we're working on taking those kinds of algorithms, bringing them back into cyber defense, and trying to predict when malicious activity is starting to spread through the networks — so we can stop it at the very beginning, not only after the first harm has been done to the infrastructure. So I see these benefits moving back and forth between disciplines, and we will learn a lot in the future from these technologies and their usage. Perfect. We've got prevent, detect, and respond. Absolutely.
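The cross-domain idea Thomas describes — watching measured parameters and raising an alarm while the trend is still degrading, before the actual collapse — can be sketched as a simple rolling-slope detector. This is a deliberately minimal illustration; real predictive models are far richer, and the window size and slope threshold here are arbitrary assumptions.

```python
# Minimal sketch of an early-warning detector: fire on a sustained downward
# trend in a measured parameter (blood oxygen, service health, traffic score)
# before the value actually bottoms out. Window and threshold are arbitrary.
from collections import deque

def early_warning(readings, window=5, slope_limit=-2.0):
    """Return the index at which a sustained downward trend is first
    detected, or None if the series never degrades that fast."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window:
            # average change per step across the window
            slope = (recent[-1] - recent[0]) / (window - 1)
            if slope <= slope_limit:
                return i  # predict trouble before the minimum is reached
    return None
```

On a steadily degrading series the detector fires several samples before the worst value — which is exactly the "five minutes before it happens" property the panel describes, transferred to any streamed metric.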
So let's take the next one. We've talked about the threat, we've talked about the opportunity, and we've talked about the fact that AI will not do our job. So, for maybe a final round in the next four minutes: what is our job? What is our role as CISOs when it comes to AI — maybe around the aspects of governance versus enabler? Andrzej, why don't you start? I think the role is to think carefully about how we extract the maximum value and bring that value forward.
And I think there are some fundamental things here: we can act as the conscience of our organisations. The value of AI is that, at scale, you can deliver personalised and targeted interactions. To do that, we need to create trusted data sets; understand, govern, and put rules in place for how those AI models work; and then provide that last mile — deliver it at scale and unlock that advice.
I think if we can do that for our organisations, and we can share information between us so that we create an even bigger, scaled, trusted knowledge base for our AI avatars or models to train and learn on, then we make a big difference for organisations, for our employees, and for all the people who consume our services. But those foundational elements are really important, and they won't necessarily come from the innovation side of a business or from rampant consumer adoption.
I think the trustworthiness, the scale and the sharing of information between CISOs is absolutely vital. Perfect. Thomas?
Yeah, for me, our role is simply to make the world a safer place. Sounds huge, but it's actually what you are doing, Carsten: you're making sure your customers aren't afraid of losing their money to threat actors. And by the way, Max is doing the same. And I'm doing the same here: making sure people are not afraid to use the Internet. I guess that is our role — really making the world a bit safer and supporting our business with trustworthiness.
Max, closing words for the panel. Closing words — looking back at the topic of AI and how we as CISOs should approach it: from my point of view, the I in CISO doesn't always stand only for information but, most importantly, for innovation. We've got to be the ones who are open to innovation and to implementing and using it securely and in a trustworthy way. A perfect closing. Thomas, first of all, thank you for joining remotely. I know that's difficult. We've done that for a long time, but back then everybody was remote.
But I think it felt like you were sitting next to me in that empty chair. So thanks for your speech, and thanks for being on the panel. Max, it was really a pleasure to have you here — as lively as yesterday; I really enjoyed it. And with that, I'm closing the last panel of these three days. Thank you very much indeed. Thank you for having me here.