Welcome to the KuppingerCole Analyst Chat. I'm your host. My name is Matthias Reinwarth. I'm an Advisor and Analyst with KuppingerCole Analysts. Today we want to cover a topic that many vendors, many news outlets, everybody's talking about. We want to talk about the involvement and the integration of machine learning, of AI into technologies, into almost any technology - we want to focus on a specific one.
When you have listened to this episode, I hope that you know more about the role that machine learning and AI can play in strengthening cybersecurity infrastructures, namely SOARs. And we want to make sure that we also understand this abbreviation afterwards. Not everything called AI or machine learning that we see in reality actually is one. Where people talk about AI, sometimes it's just
really good technology, but not necessarily machine learning. There's even a term for that that I learned from our colleague, John Tolbert: AI washing. So you just call it AI because that's more sexy, more nice, more modern. In the end it is not, but it still works fine. But this is not what we are talking about. We are talking about the real use, the real-life use of machine learning in cybersecurity infrastructures. And for that, I have invited
our expert in that area. Please, I want to welcome Alejandro Leal. He is a Research Analyst with KuppingerCole. Hi, Alejandro.
Hey Matthias, happy to be here.
Great to have you. And now that I've hopefully made everybody eager to learn more about the topic: you are covering the area of cybersecurity, of SOCs, of SIEMs, of SOARs. How do you see generative AI really transforming the day-to-day responsibilities of people working in cybersecurity, working in SOCs?
Does it go beyond simple task automation? Is that really changing? Is it really a game changer in security?
Well, that's a good question. I guess many people like to say that it's a game changer. So just a bit of background: I've been doing research on SOAR. It stands for Security Orchestration, Automation, and Response. For the past one or two months, I've been doing a Leadership Compass on the topic, which will probably be published by the end of September. During my research, I had the chance to talk to 12 or 13 vendors.
And one of the things that stood out to me was that many of them are emphasizing the integration of LLMs into their solutions. And that's something that is kind of new, because I wrote a Leadership Compass SOAR report two years ago and there was really no mention of that at all. So we see that, you know, there's a lot of hype in the market.
Not only in SOAR, but overall, as you just said in the introduction. So essentially, for SOC analysts, those that use SOAR solutions, you could say that generative AI could potentially offer a remarkable leap forward to improve their operations and become more efficient. It means being able to automate
repetitive tasks and also get more guidance from these LLMs. Some of them come in the form of chatbots, so they're very intuitive. Also, if you are a new user of the platform or you just recently joined a SOC analyst team, many of these tools provide basic guidance on how to use the platform.
By using these LLMs, many of these analysts have more time to focus on the more creative and strategic dimensions, such as identifying new threats, planning mitigations, or creating new defense strategies. For SOAR, the use of generative AI
extends beyond mere automation. Beyond basic guidance on how to use the platform, security analysts can also get help with threat detection, incident response, creating and generating playbooks, summarizing events, and creating reports. Some vendors are very excited about these new tools; some even use the term hyper AI. Talking to some of our colleagues, like John, for example, we agreed that it's perhaps a bit premature to use such terms, but many of the vendors are pushing them in their marketing, let's say. But there are also some other vendors that are a bit more cautious: they're waiting to see how the industry evolves, and they're more focused on meeting their customers' expectations when it comes to the use of generative AI. So I think this balanced approach, in a way, reflects a careful consideration of both the opportunities and the challenges of integrating these tools into SOAR.
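To make the playbook-generation idea more concrete: a generated playbook could, once parsed into a structured form, look something like the minimal sketch below. All step names, fields, and the toy condition handling are hypothetical, not taken from any vendor's product.

```python
# A minimal sketch of a SOAR playbook as structured data. Everything
# here (names, fields, the condition strings) is illustrative only.

PHISHING_PLAYBOOK = {
    "name": "suspected-phishing-triage",
    "steps": [
        {"action": "enrich", "target": "sender_domain"},
        {"action": "check_reputation", "target": "url"},
        {"action": "quarantine", "condition": "reputation == 'malicious'"},
        {"action": "notify", "target": "soc_channel"},
    ],
}

def run_playbook(playbook, context):
    """Walk the steps in order, skipping any step whose condition does
    not hold against the facts gathered so far in `context`. Returns
    the actions that fired; a real engine would execute them."""
    fired = []
    for step in playbook["steps"]:
        cond = step.get("condition")
        # Toy evaluation only -- a real engine would use a safe
        # expression parser rather than eval().
        if cond is None or eval(cond, {}, dict(context)):
            fired.append(step["action"])
    return fired
```

With a "malicious" reputation verdict in the context, all four steps fire; with a "clean" verdict, the quarantine step is skipped.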
Right, and you've mentioned that. And it's a topic that I also really cover at KuppingerCole, and have done for many years, because when I started at university, I actually started at an institute, the German DFKI, the German Research Center for Artificial Intelligence, way back in 19... So a long time ago. Since then, I've been covering the topic of AI more or less closely. And of course, in the last five to
six years, this has become an important topic. And one important aspect I really want to stress is that everybody, every company, every vendor, every end-user organization, makes sure they do AI right. And that is important from very different angles. First of all, it needs to provide value. That, of course, is important.
But on the other hand, there are lots of challenges when you integrate generative AI, for example, into a technology. Things can go wrong, and there are lots of aspects that need to be covered, from bias to ethical use to effectiveness: are the results actually beneficial? All of these aspects, I think, are also reflected in the feedback from the vendors that you just mentioned: some are more hesitant, some push harder.
In general, from your research, what are the specific challenges when you integrate generative AI into these SOAR platforms regarding, for example, data security, compliance, but also efficiency?
Yeah. So I agree with you: these tools need to create value, and that reflects why some vendors are more careful, and also shows that maybe some other vendors are really pushing these terms. But in a way, as you know, generative AI requires access to vast amounts of data to learn and make decisions.
So data and privacy were probably the main topics of conversation that I had with many of these vendors, especially when I asked them: what's the feedback from your customers? Of course, ensuring that data is handled securely and in compliance with privacy regulations is a must. Another topic of conversation, and a challenge that I discussed with these vendors, was bias.
So there's a risk that these models may develop biases based on the data they were trained on, based on historical data. Many of these vendors need to thoroughly assess the data. They need to implement quality controls to ensure that the information provided is going to be useful and is going to provide value.
So I think that many of these vendors need to be more transparent about the way the tools are supposed to provide value. They also need to address some of these challenges, and especially they need to talk with their customers about what the requirements are to be compliant and to ensure that data is handled securely.
Yeah, there's also the need, in my opinion, for a sort of online community. Many of these SOAR vendors have their own online community where they share content, expertise, and experiences. So I think having this feedback loop can also help in mitigating biases, by sharing experiences from different organizations.
Right, and I think that's also the important part, because I like to think of AI more as an acronym for augmented intelligence. So it's really adding value to what a human user does. It's not taking over the full work, but it's doing the heavy lifting, the repetitive, the boring stuff. But in the end, and you've mentioned that already, when it comes to the more precise detail work, the final 20%, then it's
still up to the SOC analysts, to those who are using the SOAR tool, to do the analysis, supported and augmented by AI. And that directly leads me to the question: the new gold, when it comes to finding the right people on the market to support you in AI, is prompt engineering. So when you're good at prompt engineering, I think you should probably
have no problem finding a new position, a new job. How important is prompt engineering for SOAR? I think this is a growing market. There's a lot still to do, right?
Yes, yes, of course. But maybe before jumping into prompt engineering, I forgot to mention something important: many of these SOAR vendors are integrating their tools, their solutions, with other tools like OpenAI, for example, or Microsoft Copilot. And one of the things they're now doing, many of these vendors, I'd say maybe one third or even half of them, is putting their own proprietary LLM on their roadmap.
Because they understand that many of their customers are a bit hesitant when it comes to data and privacy. So that's also one of the things, let's say, that they're planning to do to address this challenge. But going back to prompt engineering: I know you're a fan of prompt engineering. I think you had an EIC session on it last year, if I'm not mistaken. As many people know, prompt engineering is designed to structure
text in a way that is going to be easily understood by the system. It's designed to optimize language, right? To increase the performance of the system without much computational resources and without spending too much time amending certain things. But like you said, prompt engineering is one of the skills that many SOC analysts could learn:
whenever they have, let's say, the free time that these LLMs create by automating many of their repetitive tasks, maybe they can also increase their own skills and leverage the tool by implementing specific, well-designed prompts to improve the responses and the productivity of whatever the SOC analyst is trying to do or find out. So yes, it's a very important skill.
But it requires continuous, I don't want to say training, but continuous learning on how to fully leverage these tools. Maybe you have something to add. I know that you have a lot of experience in this area.
I think using AI has nothing to do with magic, or with some technology taking over the work that you would otherwise do. You need to tell the technology what to do, and this is the prompt engineering part of things. Of course, there are prompts that you can reuse, so you don't need to reinvent the wheel from scratch. But when it comes to really fine-tuning a prompt to what you really need, that can be a lot of work. So it's not just something that you do in five minutes and then it will do the rest of your work.
A good prompt can take days to create, to test, to test-drive, and to make sure that it's efficient and follows all the rules that you've mentioned. So I think there is a lot of work in there. And in the end, we are still talking about a technology that is not necessarily deterministic, so it may behave differently tomorrow. So you need to adapt your prompts as well, so that they continue to do the job you want them to do over time.
But that also leads me to a topic that you just mentioned; we're jumping around a bit between topics. As you said, some are integrating existing platforms via APIs, like OpenAI or Microsoft Copilot; others are creating their own solutions. And these are two different aspects: A, getting the functionality, and B, using proven technology and making sure that you deal with all the implications that the use of AI has.
The other option that you mentioned, and I think that's an interesting point, is proprietary, self-trained LLMs that are integrated somewhere. And then we get to, and we have that on the Microsoft and OpenAI side as well, the issue of the black box: not really understanding what the AI actually does, how it processes the data, and how the actual prompts interact with the LLM to get to the proper output. I think these are challenges that are still under
evaluation in many organizations, be it customers, be it end users. So from your experience, and also from your conversations with all these vendors: how should organizations make sure that they, again, do this right, that they balance innovation with the necessary responsibility when deploying SOAR? What would be quick recommendations? I know that's a difficult question,
but where to start, where to find this balance?
Yeah, I think it requires a sort of sober approach, to not be overly hyped about the technology. Like you said, and as we've been saying in our research, there's no magic tool that will solve all your problems and implement security across your entire infrastructure. So I think a combination of implementing automation tools and at the same time promoting a cybersecurity culture is essential.
As we hear many times in this industry, the human is the weakest link in cybersecurity. But if we equip SOC analysts and users with the right tools and at the same time promote a cybersecurity culture, like helping them gain skills in prompt engineering, for example, that could be a good combination for balancing innovation and the challenges. And
going back to this approach: many of the vendors are now initiating the process of acquiring their own proprietary LLMs. Some of them are doing both: they're integrating their solution with third-party tools and also launching their own proprietary LLM. One of the vendors said something interesting: they want the LLM
to be a sort of coach, an offensive player, a defensive player, even the referee, even the medical team of a team, you know? So it seems like many of them have this vision of how the role of the SOC analyst will change in the coming years. But I think, again, to my first point, we need a more sober approach, to ensure that we understand customers' expectations,
especially across different industries, to be able to balance these innovative features with the challenges that we have discussed in this podcast.
Exactly. And I think one aspect that is also of importance: we are now adding another type of technology, new components, to this SOAR infrastructure. And I think, with you being a cybersecurity analyst and KuppingerCole covering cybersecurity topics at a very detailed level, it's important to understand these components that we are adding. So the supply chain of AI, this tool chain,
needs to be properly secured as well. It's a new technology, and we need to apply the same strict, rigorous cybersecurity rules as when we deal with other technologies: API security, and, apart from prompt injection, which is the simplest attack vector, protecting the overall infrastructure and the databases that are behind it,
everything when it comes to the APIs and the data storage, but also just the simple things: making sure that data is maintained properly, maybe anonymized or pseudonymized when it is not needed in verbatim form. All these aspects add to the complexity of cybersecurity, while we are improving our cybersecurity posture through AI in the first place. So it gets more complex, but we need to do this right from a cybersecurity perspective
as well. So this is really an interesting and evolving topic. We are also looking into securing LLMs and have research on that, because this is a new component, and as LLMs become more important, they are of course a new attack surface that we need to protect. What's your point on that?
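The pseudonymization just mentioned could be sketched, very roughly, like this. The regexes, salt handling, and token format are simplified assumptions for illustration, not production-grade PII removal or any vendor's implementation.

```python
import hashlib
import re

# A rough sketch of pseudonymizing log lines before they leave the
# infrastructure toward an external LLM: usernames and IP addresses
# are replaced by stable salted-hash tokens.

SALT = b"rotate-me-per-deployment"  # hypothetical; load from a secret store

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
USER_RE = re.compile(r"user=(\w+)")

def pseudonym(value: str) -> str:
    """Stable short token: the same input always maps to the same
    token, so events still correlate after replacement."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def pseudonymize(line: str) -> str:
    # Replace IP addresses and usernames with salted-hash tokens.
    line = IP_RE.sub(lambda m: "ip-" + pseudonym(m.group(0)), line)
    line = USER_RE.sub(lambda m: "user=u-" + pseudonym(m.group(1)), line)
    return line
```

Because the tokens are stable, the LLM can still notice that the same user shows up across several events, without ever seeing the real identifier.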
Yeah, absolutely. I think there are more challenges that perhaps we didn't fully discuss, given the amount of time that we have. We talked about data and privacy, but I think there are also some social, political, and economic challenges when it comes to the use of these tools. We were talking about art earlier, and there was this session that I had at EIC last year on
generative AI. And one of the slides we had was a picture of Fountain by the French artist Marcel Duchamp. It was, I think, exhibited in New York in the early 1900s. And many of the art critics at the time were saying that that was the end of art, the culmination of centuries of art, because it was just a fountain. So
one of the questions that I raised in this session was: with generative AI and all this, I guess you can say, art that the system makes, what does that mean today from a philosophical and an artistic perspective? I don't know the answer, but I think it's a really cool thing to think about.
Yeah, I think there are many challenges that are not necessarily connected to cybersecurity, but this is a global phenomenon that is really affecting many different industries and many different areas of our lives.
Absolutely, and I think we should, and need to, continue this discussion. We need to focus it sometimes on specific aspects, like we did today, looking at SOAR and at the SOC leveraging this technology to improve process quality and the cybersecurity level. But as you mentioned, there are so many aspects where large language models really add to our daily life. And I think we are not yet fully equipped
to deal with that, so we need to continue that discussion. Thank you very much, Alejandro, for being my guest today. That was an interesting discussion. The research is not yet published, it's really hot off the press, so to speak; it's not there yet, but it should be available in a few weeks. So when you, as the audience, are listening to this episode or watching it on YouTube, please check back whether Alejandro's report is already there,
and have a look especially at that interesting aspect of integrating generative AI into SOARs. That would be really important. And if you have any questions or topics to cover, or if you think there were aspects that we missed in this episode, please leave a comment below the video on YouTube or drop Alejandro or me a mail. We are really interested in using that for further episodes.
So it's really also a podcast for the audience, of course. We want to pick up your topics. For the time being, Alejandro, I'm looking forward to having you back soon for more on AI, generative AI, and machine learning in cybersecurity. Thank you.
Thank you, Matthias.