Very happy to talk about AI, and I was thinking about how I could give a little bit of a twist to this, knowing that everyone is talking about AI in this context. A little bit about myself: I am with Vectra AI, which, not surprisingly, has some AI capabilities in its product. I'm the EMEA CTO. In my previous life I was running digital security at a large organization, and I'm planning to share a little bit of what I learned and what I've done over the past years in this context.
And to set it all off, I wanted to take you on a little journey looking at humans and technology in the context of AI, starting with the concept of unconscious bias, which I believe is known to most of you. I think it is always worth reminding yourself about these kinds of things.
A funny story: early in my career, I went to an event for the first time and registered, and the lady organizing it looked at me, and I could tell from the look in her eyes that she was somewhat puzzled.
I asked her, "Is there anything wrong?" And she said, "Well, you know, I thought CISO, large organization: fifty years old, gray hair, a man." Unfortunately, I didn't meet those categories. But that was her thinking, and at that point no longer an unconscious bias, because she had expressed it to me. These are the things that drive us.
And when we now look at AI, I think most of us, at least the older generation, instantly think of Terminator, Skynet, or other fiction, where anything around AI and robotics has some negative connotation, which instills a certain fear in us.
This is something that fundamentally drives our unconscious bias and belief system. But more specifically, when we look at AI, I think there is still a lot of room to look, learn, and understand what AI actually is.
Because interestingly enough, by the pure definition of AI, a simple lookup-table reflex agent is AI. But most of us would say, "No, that's just some simple automation, some simple linking of things." Yet when we look at things like generative adversarial networks, then we think this is AI, because it does something we couldn't imagine before, like the example up there with faces of people who don't exist, fully generated by AI.
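To make that definitional point concrete, here is a minimal sketch of a lookup-table reflex agent, the kind of system that technically qualifies as AI even though it feels like plain automation. The percepts and actions are, of course, invented for illustration:

```python
# A simple reflex agent: it maps each percept directly to an action
# via a fixed lookup table -- no learning, no reasoning, no state.
RULES = {
    "temperature_high": "turn_on_cooling",
    "temperature_low": "turn_on_heating",
    "temperature_ok": "do_nothing",
}

def reflex_agent(percept: str) -> str:
    """Return the action for a percept; fall back to doing nothing."""
    return RULES.get(percept, "do_nothing")

print(reflex_agent("temperature_high"))  # turn_on_cooling
```

By the textbook definition this is an intelligent agent, which is exactly why our intuition about "real" AI deserves a conscious second look.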
And I think there is a lot of room here to sit down, learn, and consciously push yourself in terms of how you look at AI and how you look at these things.
All of this, of course, also builds up to trust. How do you trust people? How do you trust that what I'm saying makes any sense or is true? Do I have the credibility?
Usually you get an introduction, you see someone's educational background, in my case AI and psychology. Does that make me any more trustworthy? And because I have a certain age, to come back to unconscious bias, am I more trustworthy for it? If you take this further: how do we apply these trust questions to technology, and specifically to AI? How do we build trust? How do we know that it actually does what we expect it to do?
And I think it is very interesting here, with a small example, to think about how AI operates.
Those models have built-in success criteria, and that is what they will optimize for, no matter what. So it seems as easy as just looking at the pictures: the model will understand, okay, it's a bus, it's a bird, it's a temple.
We can look at those pictures too and confirm that, yes, this is the right categorization. But if you add some jitter, some information that is invisible to us, the algorithm will be totally confused, while for us it's still the same picture. The important bit to understand is that our visual system is a totally different one from the "visual system" of an algorithm, because the algorithm looks at the bits and bytes, and that makes a difference.
Just because we don't see it or cannot process it doesn't mean it's not there. And the same holds in other aspects as well. It even extends into the physical world, as some of you may know.
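The "invisible jitter" effect can be sketched with a toy linear classifier. This is not a real attack on a vision model; real methods such as FGSM apply the same idea via the gradient of a network's loss, and the weights and input below are random stand-ins:

```python
import numpy as np

# Toy illustration of an adversarial perturbation against a linear
# classifier: tiny, invisible per-pixel changes that flip the class.
rng = np.random.default_rng(0)
dim = 1000                       # many "pixels", as in a real image
w = rng.normal(size=dim)         # the classifier's weights
x = rng.normal(size=dim)         # an input it currently classifies

score = w @ x                    # sign of the score decides the class
label = int(score > 0)

# Nudge every pixel by the same tiny amount eps, each in the direction
# that hurts the current class. Per pixel the change is negligible, but
# across many dimensions the effect adds up to eps * sum(|w_i|).
eps = 1.1 * abs(score) / np.abs(w).sum()   # just enough to cross over
x_adv = x - np.sign(score) * eps * np.sign(w)

adv_label = int(w @ x_adv > 0)
print(label, adv_label, eps)     # the class flips; eps stays tiny
```

The point of the sketch is the high-dimensional arithmetic: each coordinate moves by a fraction too small for us to notice, yet the sum of those moves is enough to push the score across the decision boundary.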
If you have automatic traffic-sign recognition and you put some stickers on a sign, that may already confuse the algorithm and turn the sign's meaning around. These are things we need to measure in order to build trust. And ultimately it is also all about delegation; I think delegation is one of the most challenging topics.
Once I moved from a technical role into a managerial role, the first thing I had to learn was to let go, to delegate. You need to accept that even though you think you do a hundred-percent job, the person you delegate to is maybe more junior and will, in your mind, achieve 80%. Again, unconscious bias.
But if you realize that you don't scale, you may come to understand at some point that an 80% result is better than a 0% result, because you simply couldn't handle the load yourself.
And the same question applies here: what is it that technology and AI can do for us? We have our unconscious bias, we build trust, and then we're happy to delegate. But what can we reasonably delegate to AI today? I think there is something that we as humans bring, which is culture.
We bring a different level of judgment and reasoning, and I think this is one of the most dangerous aspects of large language models and generative AI: we believe there is meaningful reasoning behind them, but there is not. There is a kind of reasoning, the kind that lets the algorithms score their points and moves them in the direction they were developed for. But that doesn't necessarily mean the outcome or output they create is what we would consider reasonable, or good.
However, they bring speed, scale, and complexity that are unprecedented. This is nothing we can match, even if we throw tens of thousands of people at it. This is what machines are built for and what they do better. So from a security-operations point of view, if we bring those two aspects together, and we heard it just before and on the other days as well, this is where we can actually move forward. We have to move forward.
Based on all of this, I was thinking about what we need to do, in security but also beyond, as IT leaders and business leaders: how do you actually focus and bring this to life? Back in the day, and I still think this is important today, I was one of the first to hire someone into the security team who looked into data problems across the entire group. I beat the rest of the organization by, I think, two months.
The point is, we had someone looking into this, someone who built knowledge and capability around AI. At the same time, we also revisited the key skills we needed in the security team. Most traditional security teams are born out of network security; they have good knowledge of the infrastructure, endpoints, servers, and so on. Important knowledge.
However, coding is key. Bringing it in helps you to be prepared, because in the end, and this is why being the first mover is so important, if you're not ready to engage with the business, you're going to have a problem: you'll be running after the fact when they come to you asking, "How do I secure AI?"
"Is that secure? What data can I use for training? What data can I use for learning?" And so on. So being prepared and being the first mover is super important.
What we did: we didn't start by thinking, "We need the greatest and biggest AI approach and AI model." We looked at the operational tasks we wanted to simplify. It was a mix of automation and of bringing in some intelligence. We were looking for high-workload tasks and activities that added little value to the team, and in security operations, but also in audits and assessments, there are many instances of low-value work with a high workload.
So one thing we looked into was using natural language processing and parsing for emails.
People reported suspicious emails, phishing, and so on.
You may have a tool that filters a lot of these out, but there's still something left. And working in a multinational, user-centric environment, you want to give feedback. So we developed a tool that automatically parsed all the reported emails and tried to determine: is this spam, is this a false positive? It also had an automated feedback loop to the user in their own language. It recognized the language, French, English, Spanish, German, and then used templates in that language to reply.
Automating all of this took away a great deal of workload, and it helped the team play with what automation is, what crosses over into AI, and to familiarize themselves with the challenges around it.
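The shape of that pipeline can be sketched roughly as follows. To be clear, this is a simplified stand-in, not our real implementation: the keyword-based language detector, the phishing markers, and the templates are all toy placeholders for proper NLP components:

```python
# Sketch of a reported-email triage loop: classify, detect language,
# reply from a template in the user's language.
LANG_HINTS = {
    "fr": {"bonjour", "merci", "signalement"},
    "de": {"hallo", "danke", "meldung"},
    "es": {"hola", "gracias", "aviso"},
}

TEMPLATES = {
    "en": "Thank you for your report. Verdict: {verdict}.",
    "fr": "Merci pour votre signalement. Verdict : {verdict}.",
    "de": "Danke für Ihre Meldung. Ergebnis: {verdict}.",
    "es": "Gracias por su aviso. Veredicto: {verdict}.",
}

PHISHING_MARKERS = {"password", "urgent", "verify your account"}

def detect_language(text: str) -> str:
    """Toy language detection: match against per-language hint words."""
    words = set(text.lower().split())
    for lang, hints in LANG_HINTS.items():
        if words & hints:
            return lang
    return "en"  # default

def classify(text: str) -> str:
    """Toy verdict: flag known phishing markers, else false positive."""
    lowered = text.lower()
    if any(marker in lowered for marker in PHISHING_MARKERS):
        return "phishing"
    return "false positive"

def triage(reported_email: str) -> str:
    """Return the templated user feedback for one reported email."""
    verdict = classify(reported_email)
    lang = detect_language(reported_email)
    return TEMPLATES[lang].format(verdict=verdict)

print(triage("Bonjour, merci de vérifier ce mail urgent"))
```

In practice each toy function would be replaced by a real component (a trained language identifier, a spam/phishing classifier), but the loop itself, classify, localize, respond, is what let the team automate the high-volume, low-value work.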
And pointing at this interaction between technology and user: I think this is really where AI comes into play to make a key difference.
But moving first also means that you fail. You want to be the innovation driver, and I'm confident that security can and should be the innovation driver in a lot of cases, overcoming security's image as a roadblock or a burden.
If you engage in this and test it, you will see there's a lot to gain; however, you will fail. We also bought a solution, "bought" in quotes, because it was given to us at low cost, essentially no cost. And it just sucked; it was not delivering its value.
The company providing it also realized this after some time.
So yes, we failed. But that's okay: in the end we tried, we hoped it would give us an advantage, and it was not a huge drama strategically, because it was something we could easily onboard and offboard within the security operations process without bringing everything to a halt. Luckily, we also didn't burn a lot of money in this context. In the end, what we learned, and what is very important, is that it's all about outcome.
We talk a lot about AI here, this model or that, and what it does and doesn't do. In the end, I think we need to be honest: we want to understand it, we should understand it, again, trust, and be confident and comfortable delegating to it. But it's all about the outcome.
So we should focus on the outcome that is actually delivered by any solution we buy, rather than on the approach.
And if we're honest with ourselves, dealing with a lot of tools, technologies, and topics in our day-to-day lives, we focus very much on the outcome, and we're not ratholing every single time into all of the details, because we trust. And here, too, it's very much about looking for the unlikely champion, the unlikely talent.
It's not necessarily the ones who are loudest in saying, "Here we go, we have the solution for everything." It's really about talking to your peers, listening to them say, "Hey, we're working with this company, this vendor, have a look, because they're really making a difference in terms of outcome." So if I summarize: the key is that you understand and manage your belief system.
Going back to how we look at all of these things: really challenge yourself and ask, "Hey, what is the driver behind me?"
"What is shaping my decisions?" Pull your network in and have conversations around this. And then, when it comes to implementing and picking solutions: test the outcome, don't test the output. It's easy to throw in a solution, especially in security operations, that looks like a Christmas tree and shows you all of these things, which might be great. But that is output. The outcome is what you want to measure, and it is also what you want to measure when it comes to AI-driven solutions.
And from my personal experience, especially in security operations, red and purple teaming is the way to go for testing the outcome of your entire operation: your organization, your processes, your technology stack.
And when you start selecting your tools, this is something you should do and encourage, because it gives you a good understanding: does the tool do what it claims, or what the salespeople claim it will do? And last but not least: embrace the technology evolution, maybe revolution, and accept setbacks.
I think that's the hardest part for most of us: realizing that not everything actually delivers the value it should. We've seen a lot of topics and solutions over the years that we're still holding onto, because we implemented them. For many of us, if we implement and deliver a large project, there's this tendency of "this is my baby, I don't want to give it away."
But if you regularly challenge whether what you decided and did yesterday is still applicable today and will still add value tomorrow, I think that gives a different dynamic in the team. It allows you the agility and flexibility to embrace this and to be at the forefront of the conversations with your business peers and partners about what we want and can do with AI, and what we may not be able to do with it.
On that note, thank you very much for your attention, and I'm happy to answer any questions if there are any.