Thank you so much. As you can tell, the title of the talk is "Trusting AI in Cybersecurity: Why This Is a Double-Edged Sword". I will start with a definition of AI. There is a lot of hype, a lot of focus on what AI is, what the dangers are, what potential AI has. And there is also a tendency, especially among some philosophers, to get distracted and think about AI in terms of science fiction.
So you may have heard words like singularity or existential threat, or the idea that AI is going to come and kill us all or enslave us all. This will not be the focus of this presentation, and it is not the focus of my work. We are going to focus on a very specific definition of AI that takes science fiction out of the question.
For the rest of this talk, we will refer to AI as a growing resource of interactive, autonomous and self-learning agency, which can be used to perform tasks that would otherwise require human intelligence to be executed successfully. With respect to this definition, there are two aspects that I think are very important. One is the nature of the intelligence we are considering. When we talk about AI, we are not talking about machines which have human intelligence, which have intuitions, feelings, memories or ideas.
We are talking about machines that behave as if they were intelligent. They perform tasks that, if I were to perform them, you would think intelligence is required; but the machine does it without it. The other aspect which is important to consider is the nature of this technology.
AI is the first technology in human history which is able to learn from the environment, to interact with it, and to do these things in an autonomous way; not in an automatic way, but autonomously, that is, without the direct intervention of a programmer or of a user. Most of the ethical and trust problems that we are going to discuss come from this quality of the technology.
So what are the problems that we face when it comes to AI, and which particular angle are we going to focus on today? Focus just on the blue circles.
There we have questions which have to do with enabling human wrongdoing, and you might have heard about bias; with reducing human control; removing human responsibility; devaluing human skills; and eroding human self-determination. And then there are threats to security: the risk that we use AI, for example, to respond to attacks, and that this might prompt escalation, with issues related to lack of control.
I want to delve into some of these challenges, which are relevant when it comes to the use of AI for security and national security purposes. The first one has to do with control. We are all familiar by now, I guess, with the AI black box, and with the fact that AI is a technology whose behavior we cannot explain, or cannot completely explain: given an input and given an output,
we cannot really work out how the output has been determined.
Yet we trust AI with a lot of important tasks, whether it is protecting the security of national infrastructures, as we do in the UK, the US, Australia, Japan and China, or supporting healthcare. This is a trust which is not matched by forms of control and predictability, and I will delve into this in the second part of this talk. The other question has to do with responsibility. I started this talk by referring to AI and science fiction, and noting that this is not the appropriate focus for addressing questions about trust in AI.
Here I am showing a picture of Rachael from Blade Runner, as another way of stressing the need to avoid the science fiction temptation. I put Rachael here because there is a tendency, not just among philosophers but also among policy makers, to look at AI and anthropomorphize it,
to think about AI as if it were another form of humanity. And if you think about AI as another form of humanity, you might be tempted to think that AI is morally responsible for its own actions. This would be a catastrophe, a very big mistake. We have to make sure that we understand,
and that we implement policies according to which, the successes and the failures of AI remain the responsibility of human beings. The other challenge I referred to as relevant when it comes to AI and cybersecurity is the way in which we use this technology. We use AI to delegate tasks which are sometimes tedious or take a long time to perform, and it is important that we can actually do that. We have to make sure, however, that while we delegate the task we do not give up the expertise and the skills: we want pilots to be able to land the plane,
even if AI can do it, so that pilots can step in when AI fails, because AI does fail, as we will see. This is also important when we think about cybersecurity and when we start delegating to AI tasks like verification and validation or anomaly detection. This takes me to the cybersecurity part. This is a famous picture: a dynamic map that Norse shows on its website, which became quite well known when it was published in 2014.
The picture shows cyber attacks occurring around the world within the span of one minute in June 2014, and it was famous because at the time it showed 4,000 cyber attacks occurring in less than one minute. We might think, well, that was the situation in 2014, but now, in 2020, almost 2021, surely we have made progress and are in a better situation. Not really. The World Economic Forum,
in its Global Risks Report last year, told us that cyber attacks are among the top five sources of global risk.
And we have learned that during the pandemic too. Gemalto provided a report stating that in the first half of 2018 cyber attacks compromised 4.5 billion records, almost twice the number of records that had been compromised the year before. And Microsoft, and this is particularly worrying, tells us that 60% of the cyber attacks occurring in 2018 not only were much quicker, lasting less than one hour, but relied on new forms of malware, which means that existing defenses and responses were not necessarily up to date.
As we were mentioning before I started the talk, AI can play, and is going to play, a key role in addressing these threats. It is going to be crucial for system robustness, for instance by improving self-healing processes: software that can identify its own bugs and assess them.
It can improve system response: the idea that AI can not only identify its own bugs, but also identify bugs in adversarial systems and target those vulnerabilities when responding to attacks.
And AI is also used for improving system resilience: you can use AI to monitor another system and identify incoming attacks, and this takes dwell times down from days to hours, if not minutes.
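To make the monitoring idea more concrete, here is a minimal sketch, not taken from the talk, of how an anomaly detector might flag suspicious traffic so that analysts can react in minutes rather than days. It uses scikit-learn's IsolationForest; the traffic features, the simulated data and the contamination rate are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): AI-based monitoring
# that flags anomalous traffic so defenders can intervene quickly.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: [bytes transferred, connection duration (s)]
normal_traffic = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))

# A few simulated exfiltration-like events: large transfers, long sessions
attacks = rng.normal(loc=[5000, 30.0], scale=[500, 5.0], size=(5, 2))

# Train the detector on historical traffic assumed to be clean
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new observations: -1 means "anomalous", 1 means "normal"
new_events = np.vstack([normal_traffic[:5], attacks])
labels = detector.predict(new_events)
for event, label in zip(new_events, labels):
    if label == -1:
        print(f"ALERT: possible attack, features={event.round(1)}")
```

In practice the detector would be trained on curated historical telemetry and its alerts fed into an incident-response workflow, but the principle is the same: automated detection shortens the window between compromise and response.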
This is why AI is going to be central for cybersecurity in the years to come: the value of AI in the cybersecurity market is expected to reach almost 35 billion dollars by 2025, up considerably from around 1 billion in 2016. All these uses of AI, however, pose important technical problems which are key to address, because we have learned from the past, in other contexts, that if we fail to address the ethical challenges, AI will not be adopted and we will miss the opportunity to use this technology.
When it comes to using AI for software verification and validation, the issue is one of skills: cybersecurity experts may lose the ability to perform those tasks themselves.
When we think about AI used for response, given how quickly cyber attacks unfold, control and responsibility become essential. And when we think about AI used for system resilience, the monitoring of another system raises questions with respect to the protection of privacy and to surveillance. So this is the first part of the problem I want to tackle today: it is a space where using AI for cybersecurity on one side promises to be very effective and efficient,
while on the other side there are important policy and governance aspects to consider and address so that innovation can occur. We know that if we address these challenges, AI can really offer a strong element in the fight against cyber crime and cyber attacks.
And this is why there is a strong push throughout the world, or shall we say the Western world, to use AI for cyber defense and cybersecurity. It is mentioned in the US executive order and in the EU Cybersecurity Act, and even the European Commission's guidelines
on trustworthy AI refer to the use of this technology for cybersecurity purposes. These documents, when considered together, reveal another element: they all link the use of AI for cybersecurity with the need to trust AI for cybersecurity. And this is where some very important problems start to emerge, and emerge quite clearly.
When we start thinking about what trust is, trust seems a very simple concept at first glance, but when you delve into it, it becomes much more complex to define, also because it depends on the domain.
Trust in cybersecurity differs from what philosophers or psychologists or sociologists believe trust is, but at a very fundamental level trust can be defined as a way of delegating a task to someone else without controlling how that task is performed.
And trust is based on an assessment of the trustworthiness of the trustee, the agent to whom we delegate. The mistake at this point is to believe that trustworthiness is just an assessment of the predictability or the reputation of the trustee; that trustworthiness is just a way of saying, well, if that agent or that person has behaved this way so far, they will do so in the future, so we can trust them.
This is not entirely correct, because trustworthiness is on the one side a measure of the predictability of the behavior of the trustee, but on the other side it is also a measure of the risks that we take once we trust someone else to perform a given task.
And these risks vary depending on the context and on the moment. So there are two measures to consider when we think about the use of AI to perform cybersecurity tasks at the national level.
When we use AI, for example, to protect key national infrastructure, the level of risk becomes quite high, and that is already something that makes trust in AI very problematic. But trust in AI is also very problematic when we think about the predictability of this technology. When we look at AI in and of itself, we see that while it is very versatile and very efficient, AI is also very much a fragile technology; it is very vulnerable. There are a few kinds of attacks on AI which have become very well known, and I will mention them here.
First, data poisoning: you can inject a fractionally small amount of malicious data into a system and radically change the way the system behaves.
There is the case of an AI system used to distribute drugs in a hospital. An article published a year ago showed that by injecting 8% of malicious data, the attackers managed to ensure that the system would distribute the wrong amount of drugs to more than 75% of the patients. Then you have tampering with the categorization model, confusing a system into categorizing things in a different way:
the typical example given in this context is a system which confuses turtles for rifles. And then there are backdoors, where you make a system behave differently when some particular input is perceived. Here the famous example is autonomous cars which had been backdoored, so that the car would recognize as speed-limit signs stop signs which had a particular sticker on them. You can imagine the consequences for security purposes.
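To illustrate the first of these attacks, here is a minimal data-poisoning sketch, my own illustration rather than the hospital system just mentioned: it flips the labels of roughly 8% of a synthetic training set, a simple form of poisoning, and compares the resulting model against one trained on clean data. The dataset, the model and the 8% figure are assumptions used only for the demonstration.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions only):
# a small poisoned fraction of the training data degrades the learned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model trained on poisoned data: attacker flips labels on ~8% of the set
y_poisoned = y_train.copy()
n_poison = int(0.08 * len(y_poisoned))
poison_idx = np.random.default_rng(1).choice(len(y_poisoned), n_poison, replace=False)
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```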
Aside from the technicalities, this list of attacks is relevant because it tells us that attacks on AI have changed their goal.
Where cyber attacks used to focus on stealing information or disrupting systems and services, attacks on AI have the goal of acquiring control of the system. Sometimes that control can be minimal, just the amount necessary for the attackers to do what they have to do, and the attack might go unnoticed by the owner of the system until it is too late. This is why it is very problematic to trust AI: AI is a very fragile technology.
Its robustness is something that is very problematic to develop and ensure, so much so that you find robustness as a central point of key international initiatives, whether it is ISO standards or the research branches of the ministries of defense in the UK and the US: DARPA has a program to develop robust AI, and another push to develop robust AI came from the US executive order,
and even China is working on this. But the robustness of AI is something very complicated to achieve. On the one side, attacks are deceptive, as we saw, for example, with backdoors; on the other side, AI is not transparent.
As we mentioned at the beginning of this talk, given an input you cannot determine or understand how the output has been produced. But more importantly, the number of points of failure, the ways in which AI can be manipulated or attacked, is extremely large. If you take, for example, a system for image recognition, it takes one pixel to confuse the system and make it behave in a different way. So the sources of perturbation are extremely numerous; the number of possible perturbations is, technically speaking, astronomically large.
And so computer scientists will say that ensuring robust AI is a computationally intractable problem.
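A back-of-the-envelope calculation makes the point. The image size and pixel value range below are illustrative assumptions, but they show how quickly the space of possible perturbations explodes.

```python
# Rough sketch of why exhaustive robustness checking is intractable:
# counting possible perturbations of a single input image (assumed sizes).
import math

height, width, channels = 224, 224, 3   # a typical classifier input
values_per_channel = 256                # 8-bit pixel values

# Single-pixel perturbations alone: choose a pixel, then any new colour
single_pixel = height * width * (values_per_channel ** channels - 1)
print(f"single-pixel perturbations: ~{single_pixel:.2e}")

# All possible images of this size (upper bound on the input space),
# expressed as a power of ten to avoid constructing the huge integer
digits = height * width * channels * math.log10(values_per_channel)
print(f"distinct inputs: ~10^{int(digits)}")
```

Even at these rough scales, checking every possible perturbation is far beyond reach.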
It is not impossible, but it is unfeasible. And if AI is not robust, then its behavior is not predictable, and then trust in AI is not warranted; it is something misleading and also very dangerous. Why is this a double-edged sword?
Because on the one side we think of trust as an element to foster the adoption of this technology, while on the other side we have just seen that forgoing control of AI might open up rather big risks and problems. So, coming to the conclusion: what we suggest in a paper that I published in Nature Machine Intelligence last year with some colleagues is that it is not true that to adopt AI we need to make it trustworthy. If you think about the nature of AI as an interactive, autonomous, self-learning agency, trust is not what we should be looking for.
When thinking about how to rule AI and deploy AI, we should be much more concerned with developing AI which is reliable, where we can combine delegation with some form of feasible control: feasible in terms of computational feasibility, but also in terms of cost and input. This is something very much stressed in the OECD principles for AI. The OECD defined five principles for the governance and deployment of this technology,
and in one of them they stress that AI systems must function in a robust, secure and safe way throughout their life cycles, and that potential risks should be continually assessed and managed. I offer three recommendations to implement these principles when it comes to developing, acquiring (in terms of procurement) and deploying AI for the cybersecurity of national key infrastructures. First, a way of having reliable AI is to make sure that providers do not rely on third parties, that there is no such thing as reliance on machine learning as a service: models should be developed in house and trained in house, with resources which cannot easily be accessed by malicious parties.
Second, we should define standards for adversarial training. Adversarial training to improve robustness is already a practice, but what we lack are standards to define the level of refinement of this training, how demanding it should be; a sketch of what such training looks like follows.
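As a rough indication of what such a standard would have to pin down, here is a minimal sketch of one common approach, adversarial training with the fast gradient sign method (FGSM); the talk does not prescribe this particular method, and the model, data and perturbation budget epsilon are all assumptions.

```python
# Minimal FGSM adversarial-training sketch (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training data: 256 samples, 20 features, 2 classes
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))
epsilon = 0.1  # perturbation budget; a standard would pin this down

for epoch in range(10):
    # 1. Craft adversarial examples with the fast gradient sign method
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    loss.backward()
    X_adv = (X_adv + epsilon * X_adv.grad.sign()).detach()

    # 2. Train on a mix of clean and adversarial examples
    optimizer.zero_grad()
    loss = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
    loss.backward()
    optimizer.step()
```

A standard would have to specify, among other things, the perturbation budget, the attack strength and the proportion of adversarial examples used in training.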
Third, in the most important cases we should require some form of parallel and dynamic control, which could be established, for example, by having a clone of the deployed system, not a twin but a clone, running under controlled circumstances, so that it can act as a benchmark or baseline for the one deployed in the wild. When any divergence between the two appears, we can take that as a flag for some kind of misbehavior of the deployed system and intervene.
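As a sketch of how that comparison might work in practice, assuming both systems expose comparable output scores, the function below flags divergence between the deployed model and its controlled clone; the names, the threshold and the score values are illustrative assumptions.

```python
# Minimal sketch of the "clone as baseline" idea: compare outputs of the
# deployed model and a clone kept under controlled conditions, and flag
# divergence as a sign of possible tampering. Values are illustrative.
import numpy as np

def divergence_alert(deployed_scores: np.ndarray,
                     clone_scores: np.ndarray,
                     threshold: float = 0.15) -> bool:
    """Return True if the two models disagree more than the tolerance."""
    gap = np.mean(np.abs(deployed_scores - clone_scores))
    return gap > threshold

# Example: scores (e.g. predicted probabilities) for the same batch of inputs
clone_scores = np.array([0.91, 0.12, 0.88, 0.05, 0.76])
deployed_scores = np.array([0.40, 0.55, 0.35, 0.60, 0.30])  # suspicious drift

if divergence_alert(deployed_scores, clone_scores):
    print("Divergence detected: investigate possible compromise of the deployed model.")
```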
And I guess the last point I want to mention, before perhaps leaving some time for questions, is that the key aspect of all this is trying to change the narrative.
It is not correct to say that innovation, and AI in particular, relies on trust to be adopted. Trust-and-forget approaches, which seem to be where we are heading, can be very problematic. The issue is much more about understanding what kind of control can be established so that we can rely on this technology, while making it feasible, in terms of computational time and economic cost, to maintain some form of control, so that we know when things go wrong in a timely way and can guarantee intervention. With that I say thank you, and if you have any questions, I will be happy to address them.