Let's start. Given that most of you are from companies, and all of you are involved in some way in AI, you will probably all be more evangelists of AI than opponents, but anything critical is also good. As I said, a quick round of introductions, 30 seconds per person, and maybe a first initial statement on what you think about topic one. Do you want to start?
Yeah.
Hi, I'm William Ho. I'm from Belgium. I studied computer security at university, and I'm a co-founder of li; we work in the area of identity and access management, an important part of cybersecurity, and from time to time we employ machine learning. I'm not exactly a big opponent of artificial intelligence, but given the recent buzz around it, I would rather call it artificial magic than artificial intelligence.
Okay. Janne?
Yes. My name is Janne. I'm the CEO and founder of Minori, with a background of about 15 years in the fields of data mining, machine learning and AI, which is a kind of evolution.
Minori is actually one of the pioneers in combining computer vision, or byte-level analysis, with machine learning to govern data across platforms and at scale, specifically around dark data, which is mainly unstructured. And today we are actually launching new capabilities across many cloud vendors and companies that are moving into the cloud, to enable them to adopt the cloud in a much more protected way.
Okay.
My name is Darma. I'm CEO and co-founder of L7 Defense. L7 Defense developed a unique, novel unsupervised learning approach, which is now used for network and application security. The discussion about AI is inherent to what we are doing in the company, because we are an AI-based company. It is not something aside from what we do; it is what we do. That's it, basically.
Martin Mongol. I am from DriveLock, which is an endpoint protection vendor.
And I think that AI or machine learning will not bring fundamental changes, but it can really support existing technologies, especially when you think about the administration effort needed to get things handled.
Richard, from the University of HIA. I'm mainly interested in two topics I'm working on: one is the linkage between cognition, psychology and behavior in decision making; the second one is drones, and decision making between user and machine.
Okay.
So maybe let's start with one point, because I think we all know, and sometimes we are part of it, that AI is a buzzword which is used sometimes correctly and sometimes less correctly.
When would you say we should really start talking about AI, and when is something not AI at all? Where would you draw the line, so to speak? At what point should a vendor really use the term AI? Who wants to answer? You look like you want to answer.
I would answer that from the perspective of the customer, who cannot necessarily tell whether this is a machine learning algorithm or just a bunch of regular expressions, et cetera. I think there are two main parameters he needs to look for. One is the time to value of the product, and the other is the level of effort he needs to invest on an ongoing basis in order to benefit from that value. By looking at those, he will very quickly answer the question of whether this is a machine learning or AI-based product.
Other perspectives?
Yeah, I would add to that. I think this is a very good perspective from the customer's side, and I would formalize it as follows. One factor is the TCO, the total cost of ownership. Traffic is coming in and out, your systems are constantly changing, and so on; if, in that situation, this application or system dramatically reduces the cost needed, if your TCO stays roughly constant over time, you might have an AI system at hand. That is one. The second thing is how secure you feel while this system is operating for you. If you only partially feel good about it, you might have something, but I wouldn't call it AI, because it cannot adapt itself to a landscape that is constantly changing on the attacking side.
So cost of ownership and safety, security by design: those are the two factors that will basically differentiate between the normal and the AI.
So, in fact, this adaptive element: a system that really learns, adapts to situations, augments. We are not talking about the full AI vision, where a system behaves like a human; we are talking about applied AI for certain use cases. But use cases where it is not only about repeating the same task, but about learning, about really taking some workload from the human. Not only as a tool, but as something which grows, gets better, understands, and adapts to scenarios. That would be the point where we should start talking about AI, in contrast to just another analytics or statistical technology.
Okay. So when looking at AI and the future of cybersecurity: if you had to pick one single thing where you would say this is where the impact is biggest, what would it be? Do you want to start? And we will go around from there.
Looking at it at a higher level than machine learning, because the way I see it, AI is the extension of machine learning: I think it can really give us something which I call harmonization of threat perception, which doesn't exist today. What do I mean? The three tensions I spoke about: the subjective component of threat perception and risk assessment, the relative component, and the dynamicity. AI can harmonize these. When, in a business, there are big arguments and everybody says this is not the risk, and so on, it can help in harmonizing that conflict.
So, saying, in fact, that it could help us deal with a lot of things better, more structured, and less biased. The better human, so to speak, because it helps us overcome some of the human failures.
Yes. Unless we spill over our biases into the programs.
Yeah, we do need to do it right, or we might do exactly that. Martin?
I think you talked in some of your presentations earlier about the different layers where we apply, let's say, security protection.
And instead of adding an additional AI layer on top of everything, I think it is key to involve machine learning and AI capabilities in each of these layers, to improve the security of each layer and so, in total, increase the entire security.
Okay.
There are two factors I would like to mention that, as we said, will affect the future of AI. One is the complexity of systems, and with that goes scale. Think about a data center ten years ago: several hundred servers.
Now it is hundreds of thousands of servers that need to be operated, at that scale and complexity, with virtualization coming alongside. Now, the true story will be how to operate this stuff. You need AI alongside operations, and cybersecurity is right there. And if AI systems are not able to spread themselves automatically within the data center, following the dynamics of instances coming up and going down, it won't happen. Nobody will be able to keep up; there is already a crisis in the operations world.
And it will only increase if AI systems do not close this gap quite rapidly.
So, in other words, AI is the one thing which can help us close the skills gap if we do not have enough people.
Definitely
Janne?
I completely agree, and I want to add: I think scale is the key for the near future. Getting into dark places in the network: not just dark data, but also dark IT as we know it.
But I think the next stage, which is more interesting, is automatically triggering security controls. That is real decision making, and the AI for it is already there. I know it is already there; it has not been adopted yet, but it will be very soon.
Okay.
I agree with all my colleagues here. I would also mention the talent shortage, or fixing some part of the talent shortage, especially given the enormous amounts of data that today's security systems are generating, together with the human interplay factor. For my part, the most important thing about machine learning specifically is that it works well in combination with humans, especially because machine learning is very good at establishing some sort of baseline out of that huge pile of data and then giving you the anomalies.
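The "baseline, then anomalies" idea mentioned here can be sketched in a few lines of Python. This is only a minimal illustration of the principle, not any vendor's actual method; the event counts and the z-score threshold below are invented for the example:

```python
# Minimal sketch of "learn a baseline, then flag anomalies":
# fit a simple statistical baseline from historical event counts,
# then report values that deviate strongly from it.
from statistics import mean, stdev

def find_anomalies(history, current, threshold=3.0):
    """Return entries of `current` whose z-score against the
    baseline learned from `history` exceeds `threshold`."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in current if abs(x - mu) / sigma > threshold]

# Baseline: normal daily login-failure counts (invented numbers).
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
today = [13, 15, 97]  # 97 falls far outside the learned baseline

print(find_anomalies(baseline, today))  # -> [97]
```

Real security products use far richer models, but the division of labor is the same: the machine condenses the pile of data into a baseline, and the human only reviews the outliers it surfaces.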
Okay.
Maybe something to add to the scale point. Yeah, we have the scaling problem on the one hand. But on the other hand, we also have a problem around complexity, because if we apply, let's say, really working rules which cover all these kinds of exceptions, we end up with a huge amount of complexity. And if you combine that with scaling, then I think solutions like machine learning and AI are key to getting that managed.
Okay.
So, I think we touched on this human factor, and there are two other elements where I'm curious about your perspectives. When we look at the human interface, the human element: to what extent do you believe AI can help us, in the cybersecurity context, in interfacing with humans? We obviously already see chatbots and the like in other areas, which I believe is one important aspect. The other aspect, I think, is what is going on when you look at what IBM Watson is doing.
The first IBM Watson use case wasn't structured data; it was unstructured data. You have masses of documents around certain types of things, which apparently is another angle. Who can read as fast as Watson can? A few of us, at least.
So how do you see these aspects of human interfaces and AI in cybersecurity? Is that also where you would say we might really get a big benefit?
I can give you a use case from the field that we see. It is not necessarily exactly what you mentioned, but we see that AI can bridge the gap between the business user and the security analyst, who have been very far apart until now.
I mean, in order to maintain a list of rules and keywords and policies, the analyst was very reliant on the business user, and vice versa: the business user was reliant on the technological skills of the security analyst. AI can actually bridge over that. It can structure the results in a way that the business data owner can consume and make decisions on very easily,
instead of asking somebody to write a rule, getting back the results, analyzing the results, changing the rule again, and so on and so forth, which for at least the past 20 years required a lot of resources from each side and made the protection fail, or not cover all the aspects it should. So making the business data owner and the security analyst one unit: I think that is the main effect.
So this reminds me a little bit of something: someone told me he heard that something like 90% of the IBM Watson workload is really marketing chatbots, more or less. But instead of marketing chatbots, we might have, so to speak, security chatbots: translating in every way and by every means, not only from language A to language B, but condensing content, translating it into a form which is understandable by the recipient. And obviously the business user speaks a different language than the security user.
That is something where you would say this is exactly where this technology comes in. I think that would be a very logical thing, because when you look, for instance, at the IBM Watson use cases, most of them are in some way about how to deal with this human interface and translate stuff. So I like that idea very much.
And you?
I would also add to that. I think AI can give the end users, and especially the business users, focus on the things that are really important. Especially if you look at security systems: even with advanced algorithms, you get a lot of results. And then again the question is, even if I understand the results, which are the most important ones? I think AI can at least give you focus there by pinpointing: okay, maybe you don't have the time to cover all of them, but at least focus on the top five, for example.
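The "focus on the top five" idea can be illustrated with a small, hypothetical sketch: give each alert a score and surface only the highest-scoring ones to the business user. The alert names and scores below are invented for the example:

```python
# Hypothetical sketch of alert prioritization: rank the many results
# a security system produces by an anomaly/severity score and show
# the user only the top few.
from heapq import nlargest

def top_alerts(alerts, n=5):
    """Return the n alerts with the highest score."""
    return nlargest(n, alerts, key=lambda a: a["score"])

# Invented alerts with invented scores, for illustration only.
alerts = [
    {"id": "login-anomaly",   "score": 0.91},
    {"id": "port-scan",       "score": 0.40},
    {"id": "data-exfil",      "score": 0.97},
    {"id": "policy-drift",    "score": 0.15},
    {"id": "odd-admin-hours", "score": 0.66},
    {"id": "dns-tunnel",      "score": 0.83},
]

# Show only the three highest-scoring alerts to the business user.
for alert in top_alerts(alerts, n=3):
    print(alert["id"], alert["score"])
```

The hard part in practice is of course the scoring model itself; the ranking step shown here is trivial once a meaningful score exists.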
Just to add to the point about focus: I think synergism, meaning synergism in how information is processed, in the processes and in the outputs, goes beyond the concept of focus, which is valuable in itself. Synergism is something else; it is not just what you are given, it is far beyond that, much more than that.
I think another point is when you think about anomalies. There is a way to react to anomalies, but very often you also face a certain behavior which is not an anomaly.
And then think about smart advisors which can point you to these behaviors and say: okay, if you changed this or that, maybe that could increase your security level. At that point you are not talking about anomalies, which you could simply block.
So basically there are, I would say, three elements where you would say we need to make, or should make, the next step in AI. Currently, I would say, most of what I see in security is about dealing with large amounts of structured data,
doing better statistics, so to speak: identifying the anomaly, the outlier, things like that. But beyond that it is about this human interface: condensing information, translating information into a form the recipient can really digest, which might be the first step. The second would be advice, and the third would be decision making. That is where you would say the real potential, probably well beyond what we currently see, is showing up. Did I get this right, or is there anything I missed or got wrong?
I think you got the right evolution of AI systems. Now, after several years of supervised AI systems being on the market, everybody has gotten used to the fact that they can analyze data with some sort of model behind them; that is already there in the market. Now it is time to break through to the next stage of data, which, as you said, is unstructured. There you need to work with basic principles, not with a data model, which only gives you an idea of what is going on within the limits of the dataset; with that approach you are limited by the dataset.
Working with principles, as we do in our company, I may say in brackets, is something much more challenging, because you no longer adapt yourself to a certain data model; you need to understand the nature of your problem.
So are we there, or when will we be there, or will we never be there?
I'm old enough to have worked on expert systems at the university, decades ago, in the first AI wave, which, as we all know, disappeared rather quickly. So from history there is a little bit of skepticism on the one hand; on the other hand, obviously, there is a lot of potential. So do you believe we are really on the verge of these new fields? Or do we dream of them but have reached a plateau which we will not leave that fast?
I can tell from my own experience that for some usages, and of course there are multiple usages that need to be explored, if you explore the market now, there are a few unsupervised learning companies which have already approached the market, and it is no longer just a dream.
Are all of you positive, or is someone a little bit more skeptical?
Good question. It is a question which can be answered only in retrospect.
Okay.
A little bit. I'm sorry, go,
Go first.
I'm a little bit skeptical. We will get there, but slower, because unfortunately the market is abusing the term AI and there is a lot of confusion around it.
Therefore there is a lot of skepticism, and people are adopting it very slowly, although there is a lot of promise behind it. I know there is a lot of potential behind it.
I would also say I'm skeptical. As long as we talk about structured data, yes, I would agree that is no longer an issue. But the moment we get to unstructured data, and I think in the security sector we have to deal with a lot of unstructured data if we take into account all the different kinds of data sources, then I would say: at the moment, yes, it works in some very specific use cases, but not yet in general.
Okay.
I think there is still a lot to gain with the older generation of AI algorithms, too. So yeah, I'm hopeful, but I agree that AI is a buzzword these days and we should be very careful about that.
I could imagine what would really help us, since you said there are a lot of marketing phrases behind it, is telling the truth about what it means to make these things work. AI is not something where you click a button and it works.
As I said, machine learning means learning. Learning is a process; it is not a one-time task. And I think this is what is frequently missing from the message of many people talking about AI: it sets expectations wrongly, in the sense of, yes, you click the button and the system will do everything.
No, it is something where you need enough data, you need to train on the data, you need to review things. Obviously there are approaches which are more supervised and approaches which are less supervised.
There is also, I think, what we discussed a lot yesterday and in other sessions about trustworthiness: can I really rely on the results? Can I prove the results? Obviously that is also one of the challenges. So we need to get better at understanding and verifying that the outcome fits, which usually probably only means that enough of it fits, that the few failures get identified, and that we feel fine with that. But we probably need to be a little more honest in telling and selling this stuff, to avoid a too-deep frustration phase before we enter the next stage of AI. And I think that is an important point here.
Okay.
I should now defend my position here a bit, of course.
No, I think, you know,
That's fine. I like that. Yeah. Yeah.
There is a challenge with AI marketing, there definitely is, because, as I stated here, there is disappointment in the market from the previous generation of, let's call it that way, AI. The limits are usually not stated boldly enough to customers, and the results follow from that. Now the challenge is basically to put the system in production for some time and see whether it works for you, yes or no. That is the only tool I am familiar with in this market. If that turns out okay, you will overcome this difficulty.
Okay.
Richard, you brought up before the perspective not only of the defender, which we are, or believe we are, but also of the attacker. And that is a topic, I think, for all of us. Obviously, we are looking very much at how we can use AI to improve our cyber attack resilience, at how we get better at our end. But obviously there are business models behind cybercrime, and there are political interests, depending on the type of attacker. So not only the good guys will use AI in cybersecurity, but the bad guys as well.
Will we end up in a situation where we are, again, always behind the bad ones, or is this instead something where AI helps? I'm always very reluctant regarding a lot of the claims that are brought up. We had this "when we have quantum computing, our encryption will not work anymore."
I'm quite confident that we will find other ways, because if there is quantum decryption, there might also be quantum encryption, or things like that. So I think we always need to be a little bit careful. But obviously we are in a situation where attackers can use AI just as well as we can; maybe they do not even need it. What is your perspective on that?
My perspective is actually the national security perspective, and there I am optimistic, as long as I know that access to AI is going to be selective. It offers tremendous solutions to classical problems such as deterrence.
In deterrence, for instance, there is no such thing as bad or good guys. It is your ability to convince the other side that you have the capability to punish them, or to retaliate, if they dare to change the status quo between you and them. It is a very rational situation. So it makes a difference whether they know you have cybersecurity capabilities or that you have AI cybersecurity capabilities; it is a different capability.
And the way I see strategy, where I come from, with the tremendous problems in national conflicts of one side understanding the other, I think it is going to help. I am optimistic.
And on this, Martin?
I'm not really sure, because in the end we have to face the fact that cybercrime is a kind of business. As long as you can earn money with this kind of business, they will use whatever techniques secure their profit.
And if AI does it, they will use AI. So I think if we cannot find a way to disrupt this kind of business, we will again end up in the situation of AI fighting AI on different sides.
Okay.
I will follow this line: we are there, we are already there. We are seeing now, with customers, after analyzing what has been done in attacks, very clear AI traces.
The dynamics of the attacks, the diversity of the vectors used, the methods: everything is telling you the same story. AI is already in the game. And if you do not get AI on your side fast enough, AI that can be just as accurate, because they are accurate in what they are doing, you are in a problem now.
Yeah, and I think that is something we can easily imagine. Think about an attacker who uses a lot of data to identify the anomalies in the defense: the weaknesses certain types of organizations might have, found just by having enough data.
That would be exactly anomaly detection the other way around. You use a little bit of shadow data, then you use some other data, you put it all together and say: okay, here are the common weaknesses. I know that if this weakness is here, then usually that one will be there as well. That is my attack vector, and no one knows.
The only thing that matters now is real time; that is what has been achieved.
All of this is a real-time process now, and that is the problem. You can no longer use an analyst; you cannot use the classical tools to get into this game. It is faster.
So it also makes the attacks even faster. AI helps us to be faster in defense, but it helps the attackers too.
This is the game. Yeah, this is the game.
I completely agree with what was said before me. But anyway, we saw already in 2011 and 2012 that attackers are using data just like everybody else. They are very advanced in analyzing data. The most advanced attacks change data on the customer's side to affect the customer's decision points, for instance short sales and so on; these are attacks that are already known and have already been carried out.
So think about it: deploying a quick analysis, a quick AI kind of tool, that changes the data has already been done.
Okay.
Yeah, I also agree with Martin that cyber criminals will do everything to protect their business, and if that means they have to apply AI, then they will use AI. But I think these criminals also know that certain enterprises use AI.
So maybe, instead of directly applying AI themselves, they will learn the weaknesses of the AI systems that enterprises use and exploit those weaknesses. I think it is a different angle, but it is still about AI.
So basically it is not ending the cyber war, but just opening the next stage, the next level. Like in a computer game, we are moving to level two for free. Not a good comparison, obviously, because it is not a computer game; it is not just a game, it is really war.
Well, we should be careful with the word war, but it is something worse than a game, obviously. That leaves me with, I would say, a mixed impression: on the one hand, there is apparently huge potential in AI for cybersecurity, but unfortunately there is also huge potential in AI for the attackers, and the more affordable we make it, the more it can be used by both parties.
So, as I said, that leaves us, or at least me, with some mixed feelings.
Anyway, I am convinced we need to work on these technologies to get better, because in the end we are in the situation that, as people tend to say, and I think it is just true, the attacker needs to know only one attack vector, while we need to defend against thousands or tens of thousands of attack vectors. So the attacker has the far better position today anyway, and maybe we can get closer to closing this gap, or at least narrow this unfair competition a little. So I thank you very much for participating in this panel.
I think it was a very interesting discussion, which we could probably continue easily for a couple of hours. We do not have the time, but we do have time for something different, which is coffee. If I have the right page here, we should be at a coffee break starting right now, at 4:10, and going for half an hour. After the coffee break we have a series of keynotes, and in between some short presentations, short pitches, case studies. Then, after that, comes the evening networking with food and drinks, the Cybersecurity Innovation Night. So there are still a couple of hours to go from here.
I would say, enjoy your coffee break. See you after the break, back in the room, for the keynotes. Thank you.