Good morning. Thank you so much. I hope you all had coffee; I didn't, so I apologize if I sound a little bit slow this morning. The talk is about the limits, or the risks, that we face when it comes to trust in AI, especially when we focus on the uses of AI in cybersecurity. I should say, as a warning, that I'm a philosopher, so at some point you will hear some philosophy. If I do it well, it will be like a doctor giving you an injection: just a little bit of pain, but not too much. Don't run out of the room, resist the temptation. It won't be too strong a pain.
So let me start with the definition of AI. There is a lot of hype about what AI is. We hear science-fiction arguments made every day: the Terminator, the paperclip-maximizing machine enslaving us all.
Well, that's all good sci-fi; there is no science in there. So this is an article we published with my research group last year, where we defined AI as a growing resource of interactive, autonomous and self-learning agency, which can be used to perform tasks that would otherwise require human intelligence to be executed successfully. Two points about this definition. The first one is about the intelligence of AI: AI is not intelligent in the way you or I would be intelligent. No intuitions, no gut feelings, no emotions, no ideas.
AI is about performing tasks that, if I were to perform them (adjusting the temperature of this room on the basis of how we feel, whether cold or hot), you would think I had some intelligence to do; it is performing a task. The other part, which is relevant for this talk, is the nature of AI: this interactive, autonomous and self-learning agency. This is where the deal starts. It's the first time in the history of humanity that we have machines which are autonomous, not automatic.
And by autonomous I mean machines that can interact, learn and improve their interaction with the environment without the direct intervention of the designer, the engineer, the programmer, you name it. This is where all the theoretical issues that we have with AI, and the potential of AI, begin. So there are a bunch of challenges that come with AI. I told you, I'm a philosopher. We identified five of those last year. I'll just name them, and then I'll delve into the three which are more relevant.
When we think about AI and cybersecurity, the challenges go from enabling human wrongdoing, reducing human control, removing human responsibility, devaluing human skills, to eroding human self-determination. The first challenge has to do with forms of control. We trust AI these days with a lot of sensitive tasks, whether it is reading an X-ray to identify a cancer or a disease, deciding whom to appoint in the next recruitment exercise, or getting me from my hotel to here (and getting me lost twice). There is a lot of trust, but most forms of AI, the most refined forms, remain black boxes.
We cannot explain, given the input and given the architecture, how the output has been determined; we cannot predict it. How do we deal with machines which are this complex and yet perform tasks so sensitive to us? The other challenge has to do with Rachael. I'm sure I don't have to introduce Rachael to you.
We have all watched Blade Runner, and with it comes the idea of anthropomorphizing AI, looking at AI as if it were another form of humanity. This is wrong on many levels, and the most dangerous one is the idea that if we start to conceptualize AI as a form of humanity, then we start thinking that AI is responsible for its own behavior. We forget that humans remain responsible for the successes and the failures of AI.
And this risk is not run just in remote universities somewhere in the world: the European Parliament last year suggested we should grant legal personhood to AI systems so that we can hold them responsible when they make mistakes. There would be nothing more wrong than that. The other point that is worth considering when it comes to AI and cybersecurity is the use of AI to perform tasks that we no longer want to perform. When we delegate tasks, we should make sure that in doing so we don't give up the skills.
We still want pilots to be able to land a plane and doctors to read an X-ray, because otherwise we don't know when AI fails, or we don't know what to do when AI just doesn't work. So keep in mind these three challenges, because they come back later in the talk. Many of you might be familiar with this picture; it went viral.
It is a 2014 snapshot of the Norse website, a dynamic map of internet traffic. At some point the map showed more than 4,000 cyber attacks going on in the span of one minute, and that was 2014. So we might be tempted to think: well, that was five years ago. We didn't have the AI, the technology or the funding that we have these days in this market. Surely things are going better.
Sorry, guys, not really. In 2019 the World Economic Forum's Global Risks Report stresses that cyber attacks are among the top five sources of global risk. Gemalto published a report early last year saying that in the first six months of 2018 there were 4.5 billion records compromised, a number that basically doubles the amount of records compromised throughout 2017. And Microsoft showed us that in 2018, 60% of attacks lasted less than an hour and relied on new forms of malware. So things are not going well in that context.
And this is where I can actually connect to the previous talk: AI can help, it is true. It can help with respect to system robustness. We can use AI to verify and validate our code, to make sure we find bugs much earlier and much more precisely than we often do. It can help with system response: there are companies (I come from the UK, so think of Darktrace) that offer systems to respond to attacks in an autonomous way, and the 2016 Cyber Grand Challenge run by DARPA showed that we can actually use AI to make systems fight each other without any human control or intervention.
And it can help with system resilience: monitoring a system and improving threat and anomaly detection, taking the dwell time down from weeks to hours. These are three areas where we know that AI can help. But be mindful: in none of these three areas does the use of AI come without ethical consequences. In red you find the consequences; you remember the three challenges from before. So, using AI for system robustness:
well, it means that we might be giving away a little bit more control than we want, because we are delegating very important tasks to a machine which we don't really understand and cannot explain. If we get two machines fighting each other, at each interaction escalating or refining their attacks, and something goes wrong, how do we ascribe responsibilities? That's a key question, especially when we think about the use of AI for system response and for national defense. And as for system resilience, it is true that analyst teams are overwhelmed by work.
But if we start relying on AI for threat and anomaly detection all the time, where do the human skills go? How do we retain them?
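Just to make concrete the kind of triage that gets delegated here, a minimal, purely illustrative sketch follows; the flow features, the contamination rate and the library choice are my own assumptions, not anything prescribed in the talk.

```python
# Minimal sketch: flagging anomalous network-flow records with an off-the-shelf model,
# the sort of triage that would otherwise land on an analyst's desk.
# The feature set and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend features per flow: [bytes sent, packet count, distinct destination ports].
normal_flows = rng.normal(loc=[500, 20, 3], scale=[50, 5, 1], size=(1000, 3))
odd_flows = rng.normal(loc=[5000, 200, 40], scale=[500, 20, 5], size=(10, 3))
flows = np.vstack([normal_flows, odd_flows])

detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = detector.predict(flows)  # -1 marks suspected anomalies
print(f"flagged {np.sum(labels == -1)} of {len(flows)} flows for human review")
```

The last line is exactly the point raised above: someone still has to review what the model flags, and keep the skill to do so.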
So far that's, let's say, half of the story: this is how AI can help cybersecurity. But there is another point that is worth stressing here, which is that AI itself (okay, there go my slides) is not a synonym for robust systems. AI is actually very vulnerable, as you might know better than I do. There are several kinds of attack that can undermine the very use of AI in a specific way. Data poisoning: there was a recent study that showed that you only need to poison 8% of the data used by a system.
In this specific case it was a system to manage drug administration in a hospital in the US, and by poisoning 8% of the data, 78% of the patients got the wrong dosage of their treatment.
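A small, purely illustrative sketch of the mechanism; the toy dosing rule, the features and the error threshold are invented for the example and do not reproduce the cited study.

```python
# Toy illustration of data poisoning: corrupt a small fraction of training targets
# and see how many "patients" end up with a materially wrong predicted dose.
# The features, dosing rule and thresholds are invented for this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))              # pretend patient features
true_dose = X @ np.array([2.0, 1.0, 0.5]) + 1.0   # pretend ground-truth dosing rule

poisoned_dose = true_dose.copy()
idx = rng.choice(len(X), size=int(0.08 * len(X)), replace=False)  # poison 8% of records
poisoned_dose[idx] += 10.0                         # attacker inflates those doses

model = LinearRegression().fit(X, poisoned_dose)   # trained on the tainted data
error = np.abs(model.predict(X) - true_dose)
print(f"patients with a clearly wrong dose: {np.mean(error > 0.5):.0%}")
```

The numbers that come out are meaningless in themselves; what the sketch shows is how a small, targeted corruption of the training data shifts predictions for everyone.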
Backdoors: you insert a backdoor into a neural network. There is no source code, so you might never know that the system has been tampered with. And then there is playing with categorization models. I think it was a very famous study where a bunch of computer scientists designed a 3D turtle and then used it to mess with a neural network model so that the model would identify turtles as rifles. Had that model been used for national defense, we would have had a real issue.
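The turtle belongs to the family of adversarial examples. As a minimal, self-contained sketch of the idea (a linear classifier and a fast-gradient-sign-style perturbation on toy data; nothing here reflects the original study's setup):

```python
# Minimal sketch of an adversarial (evasion) perturbation against a linear classifier.
# Toy data and the epsilon budget are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-0.3, 0.5, (300, 100)), rng.normal(0.3, 0.5, (300, 100))])
y = np.array([0] * 300 + [1] * 300)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                          # a correctly classified class-0 sample
w = clf.coef_[0]                  # gradient of the decision score w.r.t. the input
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)  # small per-feature step towards the other class

print("original prediction:", clf.predict([x])[0])
print("adversarial prediction:", clf.predict([x_adv])[0])
```

A perturbation that is small in every single feature is enough to flip the label, which is the same logic that lets a textured 3D object flip an image classifier.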
So the problem with cyber attacks and AI is that attacks have gone from extracting data from systems, to disrupting systems, to acquiring control of systems. Because if you can do that, and if you can do it below a certain threshold, you can control the systems to which you have delegated, for example, the security of your infrastructure. So how do we deal with that? Now, I'm not sure what happened with my slides, so I'm going to venture here and see what happens.
But yes, there is a lot of discussion these days about how to trust AI so that we can deploy AI more often for cybersecurity purposes. This is a list of ongoing initiatives. As you can see: the US executive order, the EU Cybersecurity Act, the guidelines of the European Commission, the work on standards for AI in cybersecurity. They all rely on this idea of trust: we should trust AI notwithstanding the vulnerabilities, the fragility, of these systems. The point with trust is that, when you think of it as a philosopher, trust is a specific thing.
It's a way of framing a relation, a relational delegation: delegation with no control. When you trust something, you trust it to do a task; you delegate the task and you don't supervise, you don't control, the way that task is executed. Trust rests on the assessment of the trustworthiness of the trustee, in this case the technology in front of us, and trustworthiness is nothing but the measure of the predictability of the behavior of the trustee and of the risk the trustor runs if the trustee behaves differently.
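Purely as an illustration of that definition (this shorthand is my own and not a formula from the talk or the initiatives cited above), one can read it as

\[
\text{trustworthiness}(T,\tau)\;\approx\;\Pr\big[\,T \text{ performs task } \tau \text{ as expected}\,\big],
\qquad
\text{risk}\;=\;\big(1-\text{trustworthiness}\big)\times\text{cost of deviation},
\]

so that delegating without supervision is reasonable only when that residual risk is acceptable for the task in question.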
Now we can easily understand what our risks are when it comes to AI and cybersecurity if AI misbehaves, or behaves differently from what we expect. It is very hard to predict the behavior of AI in the future, and the reason is that robust AI is something that is very complicated and very hard to assess, possibly unfeasible. There are a lot of initiatives going on in the world to work on developing robust AI: ISO is defining a standard on this, DARPA has a research project working on AI that cannot be deceived,
the US executive order is pushing work in this area, and even China is promoting standardization to develop robust AI. So what's the point?
Well, the point is that when it comes to robustness and AI, we are talking about two things that don't really go hand in hand. As you know, robustness is the measure of the divergence of the behavior of a system once the system is fed with perturbed data or the model is perturbed. So assessing robustness requires understanding all possible sources of perturbation and calculating that delta. When it comes to AI, we have two kinds of problems. On the one side, an attack on AI can be deceptive.
You can have a backdoored neural network and you might not know it until the backdoor is triggered; and once the backdoor is triggered, if the divergence of the behavior is not really huge, if it stays under a certain threshold, you might never know that your system has been tampered with. So there is deception of AI, which is quite likely, and there is a lack of transparency: we cannot explain how a certain output is produced by a given input. So how do we know whether a certain output is the consequence of the right or the wrong input?
And finally, assessing robustness requires understanding all possible perturbations, but when it comes to AI the number of possible perturbations is astronomically large. Think about image recognition: it goes down to the pixel level. So if you work in AI, like I do, you will say that this is a computationally intractable problem. It means that, yes, in theory you can enumerate all possible sources of perturbation to AI systems; in practice, this is impossible. So we cannot know, we cannot know, how robust our AI systems are.
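To give that claim a rough, illustrative shape (the notation and the example image size are my own assumptions, not the speaker's), robustness can be read as the worst-case divergence over a set of admissible perturbations:

\[
\mathrm{Rob}(f)\;=\;\sup_{\delta\in\Delta} D\big(f(x),\,f(x+\delta)\big),
\]

where \(\Delta\) is the perturbation set and \(D\) a divergence measure on outputs. Even for a single \(224\times224\) RGB image with 256 intensity levels per channel, the input space contains

\[
256^{\,224\times224\times3}\;=\;256^{150528}\;\approx\;10^{362000}
\]

candidate inputs, so enumerating \(\Delta\) exhaustively is hopeless; this is the intractability the talk points to.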
And if we cannot know how robust our AI systems are, well, we don't know whether these systems are trustworthy or not. And you remember: trust requires trustworthiness. So if we cannot determine the trustworthiness of AI systems, we should not trust AI in cybersecurity. It's not only conceptually misleading; it is operationally very, very dangerous, because what we are facing are systems which can be deceived without us ever knowing it, while in the meantime we are delegating to these systems the security of national critical infrastructures, for example. So how do we deal with that?
And this is my last slide, apparently. The following are three suggestions that I offered in a paper that was published in Nature this week. The idea is that the fact that AI is not trustworthy, especially when it comes to cybersecurity tasks, does not mean that we should not use AI for cybersecurity purposes. There are evident advantages that come from its deployment in cybersecurity. It's just a matter of understanding that we cannot afford to forego control; it's a matter of understanding that, because we cannot calculate and assess the trustworthiness of AI,
we can delegate tasks to AI, but we need to have in place some forms of control of these systems. So we came up with three suggestions, starting with fostering in-house development. Now, these suggestions, as I mentioned, are really targeting uses of AI for cybersecurity that support national critical infrastructures, infrastructures running in megacities like the one in the picture. When it comes to that, we cannot really take the risk of delegating and not controlling, because the consequences and the risks would be very high.
So at that level, the suggestion would be: make sure that you resist the temptation of machine-learning-as-a-service cloud systems, of buying models or datasets, because that's a way attacks come in. Start by developing in house the dataset, the model and the training procedure. The second suggestion is adversarial training. We know that AI systems improve their functioning through feedback loops, which allow them to change their variables and coefficients; they improve the more refined the adversarial models used in training are. So the idea is that we should start defining standards which mandate the level of refinement for adversarial training.
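As a minimal sketch of what adversarial training means in practice (a toy linear model retrained on fast-gradient-sign-style perturbed copies of its own data; the data, the epsilon and the number of rounds are illustrative assumptions, not a prescription from the paper):

```python
# Toy adversarial training loop: at each round, craft FGSM-style perturbed copies of the
# training data against the current model, then retrain on clean plus perturbed data.
# Data, epsilon and the number of rounds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-0.3, 0.5, (300, 100)), rng.normal(0.3, 0.5, (300, 100))])
y = np.array([0] * 300 + [1] * 300)
epsilon = 0.2

model = LogisticRegression(max_iter=1000).fit(X, y)
for _ in range(3):                                       # a few adversarial-training rounds
    w = model.coef_[0]
    step = epsilon * np.sign(w)
    X_adv = X + np.where(y[:, None] == 0, step, -step)   # push samples toward the other class
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X, X_adv]), np.concatenate([y, y])
    )

step = epsilon * np.sign(model.coef_[0])
X_test_adv = X + np.where(y[:, None] == 0, step, -step)
print("accuracy on adversarially perturbed data:", round(model.score(X_test_adv, y), 3))
```

The refinement the talk calls for corresponds to how strong and how realistic the perturbations used in this loop are; a standard would pin that down rather than leave it to each vendor.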
And finally, parallel and dynamic control, which is perhaps the most important suggestion. AI systems, as we said at the beginning, learn, and they change their behavior throughout their deployment. So if we are thinking of maintaining control, well, control cannot happen just at the beginning of the process; it has to go hand in hand with the whole deployment of the system itself.
So what we suggested was that, when it comes to this high level of security risk, with AI supporting national critical infrastructure, we should have a twin of the model that we deploy in the real world. The twin model is not a simulation; it is not something that runs in the abstract. It is something that is kept in a controlled environment and works as a benchmark with respect to the deployed model, so that when there is too much of a divergence between the original and the deployed one, we know that something is going wrong and we can intervene.
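A minimal sketch of what that parallel, dynamic control could look like operationally; the models, the divergence metric and the threshold are all assumptions invented for the illustration.

```python
# Sketch of "parallel and dynamic control": compare a deployed model against a twin kept
# in a controlled environment, and raise an alert when their outputs diverge too much.
# The models, inputs, divergence metric and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
weights_twin = rng.normal(size=10)                                 # benchmark twin's parameters
weights_deployed = weights_twin + rng.normal(scale=0.3, size=10)   # pretend drift or tampering

def twin_model(x):
    return 1 / (1 + np.exp(-(x @ weights_twin)))

def deployed_model(x):
    return 1 / (1 + np.exp(-(x @ weights_deployed)))

DIVERGENCE_THRESHOLD = 0.1   # assumed tolerance on mean absolute output difference

batch = rng.normal(size=(256, 10))          # a batch of live inputs, mirrored to the twin
divergence = np.mean(np.abs(deployed_model(batch) - twin_model(batch)))
if divergence > DIVERGENCE_THRESHOLD:
    print(f"ALERT: deployed model diverges from twin (mean diff {divergence:.3f}); intervene")
else:
    print(f"OK: divergence {divergence:.3f} within tolerance")
```

The check runs continuously on live traffic, which is what makes the control dynamic rather than a one-off certification at deployment time.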
So, you remember the beginning of this talk: there was this definition of AI as a growing resource of interactive, autonomous and self-learning agency. This is where the potential begins, and this is where the risks begin. We can use AI to improve the way we perform, or it performs, tasks which are very sensitive, but this also requires fundamental, constant forms of control and monitoring. To use AI for these three tasks, system robustness, resilience and response, we need to forgo the idea of trusting AI and support what we call reliable AI.
So, an AI that can perform tasks, but which is also controlled while performing these tasks. And it's not just my word: you might be familiar with the OECD principles for ethical AI. One of those principles stresses that AI systems must function in a robust, secure and safe way throughout their life cycles, and that potential risks should be continually assessed and managed. This is a way of implementing that principle. This is the work that I do, and that my lab in Oxford, the Digital Ethics Lab, does. And with this, I thank you.