Oh, okay. Well, thank you very much to the panel. First of all, I'd like you to introduce yourselves and maybe give a one-minute introduction to your take on this issue, and then we'll start to have some questions. Perhaps if we start with you?
Can you hear me? Yeah. So my name is PLO, I'm VP of sales with Minor. We do data classification, unstructured data classification.
Our take on AI, machine learning, and automation is that when you're dealing with hundreds of millions of files that you need to cluster and make sense of in order to protect your data and your assets, you have to know what's in your data, what those documents are talking about. That's the only way you'll be able to protect your assets and your business: the contracts, the written documents, everything that makes up your business.
Okay.
Yeah. My name is Nicholas Selman.
I'm one of the managing directors of SoSafe. As I said, we are focusing on the awareness part, but keeping in mind that it's only one layer; we're strongly of the opinion that nowadays you have to have a multilayer approach to cybersecurity that involves the old-school antivirus tools, the kind of thing Darktrace does, but also the human side. Only then can you defend against those highly advanced AI attack forms that will be coming up at a certain point in time. And that's what we are doing:
keeping a close eye on those offensive applications of AI in order to develop awareness measures, to teach employees how to spot them and how to defend against them.
Max Sinai, I'm from Darktrace, technical director. I work with our big strategic clients and use our tool every day, so my view is very much based on practical work, having used an AI solution every day for three and a half years now. And I absolutely think AI attacks, in whatever flavor, form, shape, and impact, are either happening already or are going to happen.
And we need solid traditional solutions, like this human layer, to protect us, but also other, more advanced solutions that are properly built and secured to work against these novel threats arising from AI being used by the bad guys.
Hello, my name is Mariarosaria Taddeo. I work at the University of Oxford in a department called the Oxford Internet Institute, where I am also the deputy director of the Digital Ethics Lab. This is one of the leading research groups in the world working on ethical and governance issues related to digital technologies. I've been in this field for the past 20 years or so.
I'm also a Turing Fellow at the Alan Turing Institute in London, which is the national institute for AI and data science. When it comes to AI and cybersecurity, my position is this: AI has great potential for improving cybersecurity practices, but this potential is coupled with significant security risks, and some of those risks can be mitigated by appropriate governance and policy procedures.
My work, and that of my team, is mostly to help shape these governance and policy procedures so that we can foster innovation and mitigate the risks that it brings about.
Thank you. Hello. My name is ALA. I work with WEBPRO Technologies as practice director for cybersecurity. Our role at WEBPRO is to make security programs successful for our enterprise customers, looking at not just technology but things like policy, procedures, awareness, basic security hygiene, and definitely AI and advanced technologies, which are becoming more and more relevant given the complexity of attacks.
So that's our view, and we'd be happy to discuss it with everybody. Yeah.
Thank you. So we'll get the best out of this if the audience, the participants, ask their questions. So if you have any questions, please put your hand up; in the absence of questions, I will ask some. Having listened to the sessions this morning, I don't know whether to go home and cut my throat, really, because there was a UK government survey in 2018 which said that most organizations consider cybersecurity an important and essential area, and that 80% of them are spending money on prevention technology.
And yet one in four organizations in the UK was the victim of a cyber attack during that year. Now, is AI going to make this better, or is it going to make it worse? What is the panel's opinion?
If I might take that away: I had the very same fear. Being very technically minded, I used to think all this AI talk was rubbish, right? You can just hack it, and whatever. But I think we're finally turning the tables on the bad guys, because it always used to be that we, the defenders, only had to make one mistake, one misconfiguration. Now the pressure is on them, depending on what area you look at.
If you can turn it around, like with anomaly detection, for example, it really puts pressure on them. But it's going to be an arms race. For now, I still sleep very nicely, better than I used to, because the AI can now do what human analysts can't do, or what we don't have enough human analysts to do. But I firmly believe the bad guys will catch up in the broad masses in a few years. So right now I'm quite satisfied, but thinking forward, we need to keep up this technology advantage. Thank you.
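To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest: train on what "normal" connections look like, then flag what doesn't fit. The features (bytes out, duration, distinct ports) and all the numbers are illustrative assumptions, not a description of any panelist's product.

```python
# Hypothetical sketch: flag unusual network connections with an unsupervised model.
# Feature choice (bytes_out_kb, duration_s, distinct_ports) is an assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" traffic the model learns from.
normal = rng.normal(loc=[50, 30, 3], scale=[10, 8, 1], size=(1000, 3))
# A few exfiltration-like outliers: large transfers touching many ports.
attacks = np.array([[900, 120, 40], [750, 5, 25]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(attacks))           # -1 marks anomalies
print(model.decision_function(attacks)) # lower score = more anomalous
```

The point mirrors the one made above: the defender no longer enumerates known attacks; the attacker now has to manage to look normal.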
If I may follow on: this is a question that needs to be put in context. Not to sound like the usual PhD in the room, but cyberspace is a place, or a contest, where attacking is going to be constantly more advantageous than defending. I'm not saying attacks are constantly or necessarily successful, but because of the ease with which you can attack and remain anonymous, the lower entry cost, and the fact that offense is much more convenient than defense, this arms-race dynamic, this vicious cycle of attacking, more attacking, and then acquiring offensive capabilities rather than defensive capabilities, is going to bring this game to another level over time. So yes, AI can help for the moment, until this same technology falls into the hands of the criminals, the terrorists, the attackers.
And then we are all back to square one. So we have to embrace and understand that this is the dynamic: there is no final winning technology; what matters is how technology is used.
Fancy technology can easily be repurposed, and it's only a matter of this kind of time advantage until the bad guys catch up. And when they have caught up, we need to move forward again. So it's a constant dynamic; it's an arms race, a real arms race. And the risk is that we get so caught up in the arms race that we forget where we're going, until at some point we have gone too far. So I would say the advice is: be mindful that it's an arms race, but also let's make sure that we know where we are going,
because if we really go too far, it might be very hard to come back.
Yeah. So will people be able to beat it?
Well, actually, currently, if we look at deep learning or convolutional neural networks, they are modeled after people, so what these kinds of models do, people can currently still do much better, except in those cases where we have huge amounts of data and information. I think that's the area where machine learning models are taking over and are probably going to be better and more resourceful than people. But after all, you can train people.
And that's a very simple thing to do: confront them with those attacks, confront them with the tactics, and they will learn. That is a very cost-efficient way of doing it, while also being supported by technology,
of course.
Okay. And unless you know what you've got, you can't defend it. So does AI help us to know? Is it a problem that we don't know what we've got?
Yeah, correct. Using AI to classify data: if you try to do it with humans, it's almost impossible when you're looking at 500 million documents and need to understand which documents include sensitive information that could harm your organization
if captured or stolen. AI definitely allows us to do these human-intensive tasks very quickly, so that we can understand which data we need to protect and which data we can archive or delete, and overall reduce the attack surface, so that even when attacks do happen, we know what kind of information or data about our business those hackers could retrieve.
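As a toy illustration of classifying documents at scale, the sketch below groups texts with TF-IDF and k-means so a reviewer can inspect clusters rather than individual files; the pipeline choice and the sample documents are assumptions for illustration, not the vendor's actual method.

```python
# Hypothetical sketch: cluster documents so a human reviews groups instead of
# millions of individual files.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "employment contract salary confidential",
    "contract terms salary compensation",
    "lunch menu for friday",
    "cafeteria menu and opening hours",
]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)  # review one sample per cluster, not every file
```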
And perhaps a question for you: how can we organize our cybersecurity better? How can we have our processes and so forth enhanced by the use of AI?
That's a good question, and one that organizations are struggling with every day, given that the complexity of their systems is way beyond what it was a few years back.
It's very difficult for them to know where their IT systems are, where their data is, and where the users are accessing that information from. So it has to be a combination. We definitely need the latest technology, but we should also look at increasing visibility. And when I say visibility, it's about not just knowing where our systems and data are, but also visibility for people to know what the risks are: where do we stand in terms of the impact of a cyber attack, and what do we stand to lose as an organization?
So employee awareness, their understanding of cyber attacks, needs to be augmented as well. And it should not be focused on a specific set of users or employees; the entire organization needs to know the essentials of cybersecurity. That's what I
believe.
Oh, okay. Thank you. So I'm still hoping that we're going to get some questions from the audience. Yes.
I'm going to go off on a tangent a little bit. There's a piece of statistics I can't get out of my head since I came across it earlier in the year. It's from research by MMC, the VC firm in London, which found that 40% of the startups classified as AI startups actually don't use AI. I don't know whether you've come across it; what's the view of the panel? I do apologize, I'm going off on a tangent a little
bit here.
Can I address that? This really irks me. Because, you know, we at Darktrace used to say "AI" and people were like, oh, that's interesting. Fast forward six years, we say it and people are like, oh, it's all just fake anyway. So it's really hard for those of us who actually do AI to see this kind of shift unfolding. The SNV, a German but internationally oriented think tank that deals with exactly these kinds of topics (what is an AI company, what is AI), just released a paper, which we contributed to, on the attack surface of ML systems.
And they classified what makes a level-one, level-two, and level-three AI company. It's a simple checklist: do they have their own machine learning PhDs, how many are there, did they acquire machine learning companies, that kind of thing. And just as a side note, we fall into the most mature category.
So if you wonder about this, it's a really easy thing to just take this list of 15 points, go to the AI startup, and ask them these questions.
Yeah, no, it's a very interesting point.
We ran a similar categorization on another project here in Oxford, which focuses on the use of AI for the SDGs, the UN Sustainable Development Goals. And you have the same kind of issue:
when does it start being AI, and when is it just statistics or just categorization? But I think there is another question to be asked there. Why do we bother checking whether it's AI or not? What is the difference, what is the implication? Does it matter whether Darktrace is or is not an AI company, and if it does matter, why does it matter?
So there is a further question aside from the marketing and the way companies might do their own PR. Because for me, whether it's an AI company or not comes down to: do they have an autonomous learning product? Is there any oversight, is there any auditing, how are we assessing accountability? Those are the questions that come after yours, and which I think we should keep in mind, because assessing whether a company is AI or not is, per se, of very limited value aside from the marketing.
Just really quickly:
that's also part of this classification, where it asks, does the AI contribute anything to the business model, or is it just AI for the sake of marketing?
Yeah. And I think that's the question you as a buyer should ask yourself: what kind of problem are you actually solving?
That's the key question. Does that type of problem need an AI application as part of the solution? Most problems don't need AI as part of the solution; it's a marketing game, because most buyers see AI as a magical black box that solves any kind of problem for them. And that's certainly the wrong way to see it.
Sorry for hogging so much speaking time,
but one point I want to address is the visibility aspect, which I clearly see. Attackers these days don't just hack you with a phishing email anymore; it might come in over OT, over your smart thermometer or something. So you need to cover your whole digital estate. And, you know, most people don't even know how to secure their servers and laptops, so how would they know how to secure their smart fish-tank thermometers, for example?
And again, that's a case where I think AI and clustering can really help with that visibility: you don't need to tell the system that something is a smart IP phone; the system can tell you what it is and give you that visibility.
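A hedged sketch of that "the system tells you what the device is" idea: a classifier guesses a device type from how it behaves on the network, rather than from a manually maintained inventory. The feature names, classes, and tiny training set are all made up for illustration.

```python
# Hypothetical sketch: infer what a device is from its traffic behavior,
# instead of asking an admin to label every asset by hand.
from sklearn.ensemble import RandomForestClassifier

# Features per device: [avg_packet_size, flows_per_min, share_of_voip_traffic]
X_train = [[120, 4, 0.90], [130, 5, 0.85], [900, 60, 0.00], [850, 55, 0.01]]
y_train = ["ip_phone", "ip_phone", "laptop", "laptop"]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
unknown_device = [[125, 6, 0.80]]
print(clf.predict(unknown_device))  # -> ['ip_phone'], no manual tagging needed
```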
So, it's an interesting point.
We've got some vendors on the stage: does AI help to sell, or does it in fact prevent sales?
Can I reply quickly...
What about the services side? Does AI help services?
Yeah, definitely.
I mean, it's not about whether it helps services or product vendors. The question comes down to what problem statement we are trying to solve. And given, as Max mentioned, the complexity that organizations have to deal with, I don't think the way they have been managing cybersecurity can continue. They will have to rely on more intelligent systems, whether we term it AI level one, two, three, or whatever.
These systems also have to be intelligent in terms of knowing what they're trying to protect and knowing what the attacks are, because the attacks are evolving too. So you need more intelligence, and obviously it helps everybody.
It's not that it helps services companies but not technology companies.
It does help us, but the point is that we will have to elevate the intelligence within our systems going forward.
And I actually think the whole view of the buyers is changing.
When you were walking around it-sa last year, there were all these AI stickers, deep learning and whatnot. And I think buyers quickly realized that a lot of this is buzzword bingo.
So from our own experience, we completely avoid using the term AI when it's not an application where we really use it.
For example, we do phishing simulations, and we know there are vendors who say, okay, we have AI-based phishing simulations, which is simply not true, and not necessary either. So I think buyers are currently changing their minds about that, and for applications where AI is not at the core of the problem being solved, claiming it is
actually more likely to prevent a sale; it's a harmful claim in marketing.
Okay. Have you got that?
Yeah.
If we look at it from our perspective of protecting and classifying data: with older technologies, most organizations were relying on labeling files or documents and then maybe using a DLP to stop those documents from moving around. Those tools were built on regular expressions, so you had to build a huge number of rules and keep maintaining them, and you would never cover all the different scenarios.
With AI or machine learning, you now have technologies where the system trains itself to recognize what type of document or data is sensitive. And it's dynamic: it keeps building those clusters and growing them. You don't have to write rules; through different technologies and different approaches to classification, the system tells you where the sensitive data is and what type of sensitive data you have.
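The contrast being described can be sketched in a few lines, assuming a toy corpus: a hand-maintained regex rule misses a sensitive message containing none of its keywords, while a classifier trained on examples generalizes to it. Both the keyword list and the training texts are invented for illustration.

```python
# Hypothetical contrast: a DLP-style regex rule versus a trained classifier.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

text = "Please wire the settlement amount per the attached agreement."

# Old way: rules you must write and maintain forever. This one misses the text.
rule = re.compile(r"\b(confidential|ssn|credit card)\b", re.I)
print(bool(rule.search(text)))  # False: no listed keyword, sensitivity missed

# ML way: the model generalizes from examples instead of enumerated keywords.
train = ["settlement agreement payment terms", "wire transfer invoice amount",
         "team offsite agenda", "birthday cake in the kitchen"]
labels = ["sensitive", "sensitive", "benign", "benign"]
model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train, labels)
print(model.predict([text]))  # -> ['sensitive'] (on this toy data)
```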
Lovely. Yes.
On your question of whether using AI as a term helps or prevents a sale: it really depends on the maturity of the prospect.
Sometimes I love it, because it's nice and fuzzy in one word; other people have been disillusioned by it. What's important, and what we learned, is that you have to back it up. If you just say it's all AI, it's a black box, it's a secret sauce, nobody's going to trust you. That's why we go in with a free trial for a month and say, okay, look, I can talk all I want; I'm biased, I work for a vendor. I can come in tomorrow and plug my box in; it learns for five days by itself, no configuration.
And then we can see the results. Being able to bridge that gap from jazz hands and marketing stuff to actually showing value after a short period of time, that's the key. And that's why we've grown so much as well.
Lovely. Thank you. Any more questions? Yes, a question over there. Can we pinch one of your mics so that we can send it across?
I was wondering whether there are any areas in which AI may come into conflict with privacy issues, and if the answer is yes, how do you get around such conflicts?
Yeah.
I mean, it's hard to think of an area where it doesn't, when it comes to cybersecurity. For example, when AI is used for resilience, for threat and anomaly detection, that means monitoring all traffic on a system: deep inspection and analysis of any email, any message, any content, any packet of information sent out. That's a huge potential breach of privacy.
There is a company in Israel, I can't remember the name, that uses AI to learn users' movements on the trackpad as a kind of biometric for accessing a system, to make sure that you are the legitimate user accessing the right system. But that also means collecting a lot of personal data. And then: where is that data held, how and by whom is it accessed? It raises a lot of privacy-relevant issues. So there are several risks there, which can be mitigated only by appropriate rules, policies, and transparency.
If you give consent for that kind of data to be accessed by the company for that purpose and only that purpose, and the data are managed accordingly, that might all be fine. But if those data are sold to third parties, we start having issues and problems there, I guess.
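The trackpad-biometrics idea can be sketched as one-class learning: fit a model on one user's sessions and score new sessions against it. The features (mean pointer speed, pause length, click interval) are assumptions, and the privacy point stands either way, since the training data is exactly the kind of personal data being discussed.

```python
# Hypothetical sketch: model one user's pointer rhythm, flag sessions that
# don't match. Feature choice and thresholds are illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Sessions for the legitimate user: [mean_speed, pause_s, click_interval_s]
user_sessions = rng.normal(loc=[1.2, 0.4, 0.8], scale=0.1, size=(200, 3))

model = OneClassSVM(nu=0.05).fit(user_sessions)
legit = [[1.25, 0.38, 0.82]]
imposter = [[2.50, 0.10, 0.30]]
print(model.predict(legit), model.predict(imposter))  # 1 = same user, -1 = not
```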
There are two examples that come to mind. One is in the non-cybersecurity area: AI being used in HR to automate the initial interview process. You do a Skype interview, and the AI system uses facial recognition, whether you move your mouth a lot, whether you sweat, that kind of stuff, and based on voice recognition tries to assess whether you're saying sensible things.
For me, that's very scary; now we're talking about the surveillance state, 1984, and all these cyberpunk things. So that's definitely a huge area that needs the thinking of clever people like yourselves to make sure it all stays ethical and within the right boundaries. If I think more about cybersecurity, it's not just about AI; there is always a tension. The best cybersecurity can be done
if you have full visibility, like you said:
you know what happened, you know why it happened, you can track it down, you can do investigations within seconds. But then you have people like the works council in Germany, the privacy folks, having huge problems with it for privacy reasons, right? So you need to manage that, and that's not inherent to AI; it's inherent to any security. It's always this field of tension. We counter it by providing that visibility and all the good stuff while also having a pseudonymization mode, which you can flip on so that all the personal data is pseudonymized.
Then we can't see it anymore. Processes are in place, there are watchers for the watchers, every step is logged, and you can ship the logs off-system,
so you know the actions are tracked. And it all stays local, so GDPR, or DSGVO over here, is not an issue in most cases, because it's all local, not in the cloud. So it's a real challenge, but we have customers in the legal space and big government entities, and it works for them. So it's definitely solvable, but it definitely needs you to think about it. It's a very good point.
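A minimal sketch of what a pseudonymization mode like the one described might do, assuming keyed hashing: analysts see stable tokens they can correlate across events, while re-identification requires a separately guarded key. The field names and key handling are illustrative, not any vendor's implementation.

```python
# Hypothetical sketch of a pseudonymization mode: personal fields are replaced
# by keyed hashes; only a separately controlled key holder can re-identify.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-watchers-of-the-watchers"  # illustrative only

def pseudonymize(value: str) -> str:
    # Same input always maps to the same token, so correlation still works.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

event = {"user": "alice@example.com", "device": "alice-laptop", "bytes_out": 91234}
masked = {k: pseudonymize(v) if k in ("user", "device") else v
          for k, v in event.items()}
print(masked)  # analysts correlate tokens; identities stay hidden
```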
Yeah.
I mean, as people have mentioned here, surveillance is increasing, whether in the physical world or the digital world; the overall level of surveillance is increasing. What's important is for organizations and for states to let people know why they are doing it,
because they are of course doing it, and what the consequences are.
What do people get out of it? What are the positive sides? If we are monitoring emails, we are trying to protect against phishing attacks. So the employees, the users, need to be made aware of the reasons for doing this and of the consequences both of doing it and of not doing it. So there is an impact on privacy, but it's also being done for good.
Yeah, if I might just follow up: everybody's doing it, and yes, there is an impact. One of the criteria for assessing this impact is proportionality, or even necessity.
You don't need massive surveillance if all you're trying to do is fish out and identify the one phishing email, and proportionality and necessity are criteria that we have to learn how to embed into this process. Otherwise we risk using the proverbial nuclear bomb to kill an ant, right? That's not really where we want to go. There is room to learn how to do that in this area.
So, in a sense, the problem with social acceptance of a technology is that governments can regulate it away and say, we are not going to use it, but the criminals will continue to use it. So do we believe that the level of social acceptance of this technology is there? Is there going to be, shall we say, a backlash that says we're frightened of AI as a technology and we're going to stop it? I'd be interested in
opinions.
I think that's a very valid thought, because basically with AI models, we see them as a black box; we don't know what happens inside.
And most people, regular employees, don't know how these models have been trained. So I think the first step is to explain a lot about this technology,
so people can actually see what happens and what doesn't happen. Because otherwise we can't make use of this really helpful tool. The first step is really to explain what can and cannot be done by AI in cybersecurity.
I think there are two core issues. There are more, but one is often called the transparency gap, or problem, or challenge. I'm talking about cybersecurity now and what we've experienced. If we put a system in place that says, we're going to look at everything now, then people want to know: how does it work, and why did it come to these conclusions? And while you can't always reverse-engineer every decision, we do everything to show how the system came to that decision: showing context, data transfers, even packet captures,
all the access to the raw data, to make sure you can follow up and understand, somehow, how it came to that decision. The second point is trust. What you describe is about trust, right? Do people trust these systems to make decisions?
Again, it depends on the decision. If it's a facial recognition system for surveillance, there's going to be more of a backlash.
If it's a solution to secure you against phishing, maybe less so. But we've seen there's a trust journey we had to take our customers through.
And again, sorry for talking so much about us, but it's a practical example I think is quite relevant. When we put the autonomous response in place, which can stop threats by itself, people said, my God, it's going to kill our business. So first we put it in passive mode, a recommendation mode, where it says, we would like to do these things to enforce normal behavior. People say, okay, that looks quite good. Then there's a human confirmation mode, where the human is still in the loop and can take action.
And then, for certain parts of the business, they can put it into autonomous mode. This trust journey is really what I want to highlight; we saw it was necessary. When WannaCry came around and we had a solution, people were hesitant to put it in place because they didn't trust the solution yet. So we had to take them through that journey.
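The trust journey described here maps naturally onto escalating response modes. A hedged sketch, with invented names, of how the same detection could be handled in passive, human-confirmation, and autonomous modes:

```python
# Hypothetical sketch of escalating response modes: the same detection flows
# through recommend-only, human-approved, or fully autonomous handling.
from enum import Enum

class Mode(Enum):
    PASSIVE = "recommend only"
    CONFIRM = "human approves each action"
    AUTONOMOUS = "act immediately"

def respond(threat: str, action: str, mode: Mode) -> str:
    if mode is Mode.PASSIVE:
        return f"RECOMMEND: {action} for {threat} (no action taken)"
    if mode is Mode.CONFIRM:
        approved = input(f"Apply '{action}' for '{threat}'? [y/n] ") == "y"
        return f"{'APPLIED' if approved else 'SKIPPED'}: {action}"
    return f"APPLIED AUTONOMOUSLY: {action} for {threat}"

print(respond("unusual SMB spread", "block lateral connections", Mode.PASSIVE))
```

The design point is that trust is earned per business unit: the same engine runs throughout, and only the mode changes as confidence grows.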
Yeah. Oh yeah.
So, to answer your question: governments and civil society will continue to play by the rule book (they will have to have a rule book), while the criminals may not; they can do whatever they like with technology. So it's important,
as a service provider, for us to make sure that organizations understand why we are doing it, that people understand why this is being done and why it is important, and we should help in shaping policies accordingly.
Lovely.
Thank you. Maria, as we close?
No, no, just a final point. You had two questions.
Yes, indeed, criminals are going to do whatever they want to do, but that should by no means set the measure for companies, for governments, or for those who stand on the right side of things. Liberal democracies, which is where we live, work because all stakeholders behave according to the rules we define; criminals are those who don't. So let's keep that divide quite clear. I would hate to be in a context where we say, this is what we do in cyber national defense:
we don't follow rules because our enemies don't follow rules. That is by no means a wise way of reasoning. And the other part, about social acceptability, is a key point. We see that when technologies push the boundaries of social acceptability, what happens is a backlash, which then prompts very hard regulation, which stops innovation.
So we have to make sure that technology and social acceptability dance this fine, delicate tango, where we try to understand each other and move the bar forward together.
Otherwise it becomes very problematic. And doing so means that technological innovators cannot go forward without thinking about the values of the societies where they operate. The example I'll give you is care.data in the UK. Care.data was a programme the UK launched to put together a national database of healthcare data for residents and citizens. They didn't have one; it used to be quite a big mess,
and it is still a big mess, because after launching this system, which would have allowed GPs and doctors throughout the country to access relevant healthcare data for their patients, they forgot to tell people that they had launched it, to inform citizens and residents about the privacy risks, about trust and transparency. The backlash was huge. The programme has been paused indefinitely, which, if you live in the UK, means not in this life, not in the next life, never. Fifty million pounds out the window because of these concerns,
when it would have cost nothing to involve stakeholders, citizens, in the process so that the innovation would have been accepted. So there has to be this awareness; otherwise we might be stopped in our tracks.
Yes. Lovely. Thank you. Perhaps a final point?
Just very briefly: I've seen it many times already that companies hold back on their use of technology and AI in protection and cybersecurity,
because many times you'll be able to pick up on something, since you have a very high-resolution view of user behavior and all sorts of cyber aspects; but then you approach the end user, and he'll say, how do you know this about me? How did you find this information about me? How do you know that my computer has been breached? And companies cannot cope with fixing those issues. So a lot of companies just do what is mandatory, or what is absolutely necessary, to secure their business and their customers.
And I think that's the smart approach: not just using technology because you can, but making sure you're using it in a way that is effective for your business.
Thank you.
Well, unfortunately we've now run out of time. The panelists will be available over lunch and during the break. So please will you thank the panel for their participation. Thank you.