And then I invite Lars and Sounil to the stage. We need to speed up, and we will use our additional five minutes as well. We should have a third speaker joining online. Is he there?
Yeah, that's great. So, I will do a round of introductions very soon, and I will sit down with you as well. I don't need a microphone.
So, here we go. I'll give a few more seconds for changing over here, and now we can start. We have 20 minutes for a panel, which is short, it is. But nevertheless, we have the panel that's called Building Trust in AI: Ensuring Security and Reliability. And for this panel, I have with me Lars Faustmann, Sounil Yu and Chris Sullivan, who is up there on the screen. And the three of them cover different angles of this topic. And since we only have 20 minutes, I want to touch upon all these dimensions in general. And first of all, trust in AI. We have people in here, and I know it's after lunch and the carbs are kicking in.
So, I know it's difficult, but nevertheless. Just a show of hands, three quick questions, really just to get an idea. How many of you feel confident in the reliability of AI systems today?
A, in healthcare. Do you feel confident in healthcare? I do. If I have that black spot on my skin, I would like to have the AI double check.
So, maybe. In cybersecurity? Sure.
So, we need Agent Smith again. A different way to ask the same question is: would you trust AI to make the diagnosis and act on the diagnosis without human intervention? If you ask that question, I will put my hand down.
Exactly, exactly. But for the diagnosis, yes. Second one: do you trust AI in cybersecurity? And in your daily work? Okay.
So, first of all, a quick round of introductions, maybe starting with Chris, since I have not yet talked to him. Maybe you could do a quick introduction, 30 seconds: who you are, and what's your stance towards trust in AI? Sure.
So, Chris Sullivan, I'm the GM of OSEC. That's a management consulting firm focused largely on emerging tech in large enterprises.
Today, we spend most of our time on AI, mostly doing AI governance work. I am also a member of InfraGard, which is a US FBI private-sector partnership for the protection of critical infrastructure. I serve on the Atlanta Executive Board, I'm the Financial Services Sector Chief, and I'm on the National AI Cross-Sector Council. With respect to trust in AI, I think specifically on this panel it's about trust in AI security systems. And frankly, to the earlier question, it's been done a lot already, right?
We do fingerprint recognition. We do all kinds of threat detection analytics that work well together. People are now talking about different ways to apply that. But for me, it's confidentiality, integrity, availability. Confidentiality: obviously, people need to think about keeping our systems and our tools, techniques, and practices private, so that adversaries can't figure out how to get around them. Availability: people need to think about DDoS attacks against those systems, especially if they're AI systems in the cloud.
So that adversaries can't get around those controls, your detective controls. Yeah, so. Great. Thank you. Moving on to Lars. I would say I look at it from different perspectives, right? So first of all, it depends on how good the human oversight is. It depends on what security strategy we have for it. It depends on to what extent privacy regulation is being adhered to, to what extent transparency is there, to what extent bias has been avoided. And obviously, social impact and accountability.
So if we have these kinds of measures, it helps me to understand and to trust AI, the more I evaluate an application against them. And obviously, depending on the use case, I make the effort of trusting it or not. Great. Thank you. And I already don't have to ask the first question, because I wanted to ask: what do you think, what is trust? It's already answered. If you could do the same, then introduce yourself, and what is trust? Sure. So I'm Sounil Yu. I'm a founder of a startup called Knostic, where I'm the chief AI officer.
I used to be the chief scientist at Bank of America as well. In terms of the answer to your original question about whether I trust AI, of course, the answer is: it depends, okay? But it really depends on what type of AI system you're talking about. There are three types of AI systems, for the most part. One is what's called first-wave AI. These are expert systems. For example, TurboTax, which is what we use in the US for filing our taxes, is an AI-based system. And I trust it implicitly, because I use it to file my taxes and it works pretty well, right?
But it is an AI system, and it's handcrafted. A chess engine is an AI system too, right? And I trust it implicitly because of the way it was designed. The second type of AI system is machine-learning-based or statistical. Those are statistically impressive, but the results are individually unreliable: statistically impressive in aggregate, individually unreliable. Do I trust those systems? Not exactly, okay? Then there's what's called third wave. We don't have third-wave systems yet, but the main criterion for third-wave AI systems is that they're explainable.
And when they're explainable, you start building that trust again. When TurboTax fails, it's explainable: I know exactly why it fails, or the people who designed it know why it fails. Third-wave AI systems, which don't exist today but which we're trying to develop, do have that explainability. And when you have explainability, you build trust.
Great, thank you. So there's already so much to digest in there that we cannot cover it in 20 minutes anyway. But going back to what Chris said: you focused on the traditional CIA paradigm. How can we make sure, for example through architecture decisions, that this is actually taken care of? How do your architecture choices and monitoring strategies make sure that this really works? Is there a methodology to control that and to make sure that, for example, AI-based cybersecurity stays secure?
Yeah, so I think there are layers, right? There are good practices that we already follow. Now, once we think about all these AI subsystems, the threat surfaces get really complicated. I think the last speaker did a great job of talking about that and about shifting left. There are controls that you can put in. He said, basically, we haven't shifted left at all. That's actually not true: there are controls emerging, and there are controls people are using in production, like serializing a model, right?
So models are half software, half data. It's not really a database, but you can serialize the model, scan it, and pick up different types of malware and threats and that kind of thing. And you can digitally sign it.
Maybe you grab a model off of Hugging Face, or maybe you built your own model early in the ML pipeline, and now you want to make sure that that model is actually the model, that it hasn't been infected through prompt injection or poisoning or anything like that, right? So you can digitally sign that artifact and then track it through the process. So there's a lot you can do in the ML or AI pipeline. A great resource for that is what's called MITRE ATLAS.
MITRE has been mentioned, ATT&CK has been mentioned earlier in this conference, and that's a publicly available framework for looking at the controls and threat vectors, how adversaries come at you, and then specifically what controls you can put in place to mitigate that. ATLAS is something that they've laid on top of that: specifically for AI, here are all the things you can put in place that will mitigate known threats. And these are threats that have either been demonstrated in the wild or demonstrated by a white hat.
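To make the artifact-integrity idea above a bit more concrete, here is a minimal Python sketch; the file names, digest value, and helper names are hypothetical, not anything described on the panel. It simply refuses to use a model pulled from a registry such as Hugging Face if its digest no longer matches the one recorded when the artifact entered the pipeline.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use an artifact whose digest no longer matches the recorded one."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(f"Model digest mismatch for {path.name}: got {actual}")


# Hypothetical usage: the path and digest would come from your own pipeline records.
# verify_model_artifact(Path("models/detector.safetensors"), expected_sha256="ab12...")
```

In practice a check like this would sit alongside, not replace, malware scanning of the serialized model and proper signature verification in the pipeline.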
The last thing I would say, though, is that people think about AI as bigger is better: the bigger the model, how huge is this model, how huge is the next version? Actually, breaking it up and putting smaller models closer to the source of the problem, maybe on a mobile device, could be a much better idea. Number one, you don't have to worry as much about who has access to that model, but number two, going back to the previous comment, you can do attribution a lot better.
If it's just your data, and I think the previous speaker was talking about RAG architectures, if you're using something like that, you can augment both the query and the response and attribute them, so you can get a much clearer idea. You still have human oversight, but you get a much clearer idea of: is this a good decision?
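As a rough illustration of augmenting both the query and the response with attribution in a RAG-style setup, here is a minimal Python sketch; the Chunk, build_prompt, and attach_attribution names are invented for illustration, and no particular retriever or model API is assumed.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Chunk:
    text: str
    source: str  # e.g. a document ID or URL in your own corpus


def build_prompt(question: str, chunks: List[Chunk]) -> str:
    """Augment the query with numbered, source-tagged context."""
    context = "\n".join(f"[{i}] ({c.source}) {c.text}" for i, c in enumerate(chunks))
    return f"Answer using only the numbered context below.\n{context}\n\nQ: {question}"


def attach_attribution(answer: str, chunks: List[Chunk]) -> dict:
    """Augment the response with the sources it was conditioned on."""
    return {"answer": answer, "sources": [c.source for c in chunks]}


# Hypothetical usage: the answer string would come from whatever model you call.
# result = attach_attribution("...", [Chunk("Policy text...", "doc-42")])
```

The point is only that every retrieved chunk keeps its source, so a human reviewer can trace the answer back and judge whether it was a good decision.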
Right, so those are the technological aspects of ensuring that AI models act securely, as far as you can go; MITRE ATLAS, you've mentioned that. When I reached out to these three different, very diverse people, Sounil answered that trust is closely related to safety slash security, which is both "Sicherheit" in German. Maybe you can elaborate on that in the context of trust as well. So safety versus security in the context of AI, how does that come into play there?
Yeah, so safety and security in German are the same word, Sicherheit, right? In English, we have two words, and the distinction is important. The way to understand the distinction is to add other words in front of them.
Take food: food safety versus food security. Food safety is about hygiene, compliance, best practices, personal responsibility. Food security is usually the government's job; it's somebody else's job. And when we think about safety and security, there are distinctions we want to make when it comes to AI as well, in terms of how we think about AI safety versus AI security. But moreover, the bigger challenge that's out there for us to consider is: what would you be more comfortable with, an AI system that is very secure but unsafe, or an AI system that's safe but insecure?
And I would much, much rather have the latter. Going back to Sergei's presentation earlier: if The Matrix was unhackable, it wouldn't be much of a movie, okay? The fact is that The Matrix was insecure. It was unsafe, but thankfully it was insecure, and so they were able to make an interesting movie out of it. Imagine if you had an unsafe but very secure system: we can't break it, because it's secure. We can't break into it.
Obviously, I'm creating a dichotomy here, but the point is that I think we should be much more interested in safety, in having safe AI systems, and spend 99% of our time and effort on making safe AI systems before we even think about making them secure. Great. Thank you. I would love to follow up on that, but we don't have the time. So we have the technological aspect, we have the more conceptual and even philosophical aspect. Lars replied with an aspect that struck me as well, sorry for that: the societal implications and the bias.
You've spoken about transparency and societal impact as critical to AI trust. So how can we deal with that? We heard earlier today that we could use AI to clean up data to remove bias; I'm not quite sure, but maybe your thoughts here. So we at HP offer a system which we call the Workforce Experience Platform. What it does is collect telemetry data from our PCs and printers, and from this data it creates recommendations to improve security, recommendations to improve productivity, et cetera.
So the question that I often get, and we're here in Germany, right, is that there are some parties, say a German workers' council, who say: problem, because you may take personal information and then draw a conclusion from it, and based on these conclusions, it may have an impact on the workforce. So people are very much concerned, and this is the ethical and social impact that we are very sensitive to, so there is a capability to simply switch certain information on and off.
And we often hear this from our customers: they say, yes, we trust it, we want to see it, but first we want to learn, right, and we want to have the capability to enable or disable certain functionalities. And that's what we mean by transparency: we enable our model to be simply switched on and off, capability by capability, in order to create trust at the various levels. And this is obviously very personal; an organization has very individual feelings, regulations, or policies about that.
And at the moment, we can answer it this way: there is the possibility to switch certain things on and off. People start to trust the product, trust the results. And even if it takes more manual effort and human oversight in the first step, it creates trust. And I think that's my answer to that.
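As a rough sketch of that switch-on/switch-off idea in Python, with hypothetical category names and structure (this is not HP's actual platform): telemetry categories that may touch personal information stay off until the organization explicitly enables them, and disabled categories are never collected.

```python
from typing import Optional

# Hypothetical telemetry categories; which ones exist, and their defaults, would
# be decided together with the organization (for example, its workers' council).
TELEMETRY_SETTINGS = {
    "device_health": True,      # battery, disk, firmware status
    "security_posture": True,   # patch level, threat detections
    "user_activity": False,     # off by default: may contain personal information
}


def collect(category: str, payload: dict) -> Optional[dict]:
    """Return the record only if its category is enabled; otherwise drop it."""
    if TELEMETRY_SETTINGS.get(category, False):
        return {"category": category, "data": payload}
    return None  # disabled categories are never collected or transmitted


# Hypothetical usage: this record is dropped until "user_activity" is switched on.
# collect("user_activity", {"app_usage_minutes": 42})
```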
Okay, great. Thank you. So it's both: it's a kind of technology, but it's also people, involving people in the process.
Again, we're running out of time. We have five minutes left, and we haven't even gotten to a really good discussion yet. Maybe the last question can spark one.
Again, starting with Chris. Of course, this is the final question. So what is your final recommendation? Maybe we can cross over from technology to the conceptual to ethics and bias: what would be one key takeaway that all the people in this room should take home when it comes to ensuring, maintaining, and improving AI trust? So, start with me? Sure.
So yes, I would say this is about a process, not a project, right? It's never going to end. You need to think about your strategy: what's my vision for this stuff? What's my strategy? How do I get there? And what are the tactics? But along the way, you need to measure every step, and you need to communicate constantly, which means you need good KPIs to measure what you're doing. Is the efficacy of a specific AI subsystem improving over time? You need to communicate that out.
You need to do pen testing, and you need to expect that we're not going to implement this and all of a sudden have a lot more free resources, because as we improve, the adversaries are going to improve. So it's a process, and it's about measuring your maturity, picking your fights, and improving over time. Right. Thank you. So, I mentioned this in yesterday's keynote, around the different stages of brain development: where we are with most AI systems today, they are inherently, by design, not going to be trustworthy, OK? And they will never be trustworthy. That's maybe the key point here.
So let's not waste too much time trying to make them trustworthy. Rather, we can wrap other sorts of processes around how we deal with whatever outputs come out. So it's not about making the system trustworthy, but about how we react to it. That's what I would say. And to give you an example: the point is that there is a limit to what we can do technologically to solve that problem. A quick example of that limitation is the problem of deepfakes. I've seen all these technology solutions emerge to help us defeat deepfakes.
But they are one step away from being defeated themselves. So any day now, they're not going to work. So what do I need to do? I need to have process controls. At any point in time, when they break tomorrow, I'm going to rely upon process controls. And if someone says, oh, I have a new deepfake detector today, I'm still going to need process controls to avert whatever may be circumventing it. So there's a limit to what we can do from a technological standpoint.
I think we should rely upon other types of controls to build trustworthiness, on the processes that we have, and not necessarily on the technology. So, back to multilayered security?
Well, if by multilayered you mean including process as a control, then yes, absolutely. OK, your final thoughts: what should they take home? So, I think you both put it perfectly, right? It's the process. It's the additional layer of human oversight and transparency, for me, where we can have it or where we should. We need that final level of oversight to create trustworthy AI systems. And I think that's what we do at HP: create that transparency for our systems. So, nothing more to be added.
Right, and maybe at the end, some statistics. There are tons of those around; I took one from KPMG, from 2020, from Austria. 85% of people believe AI results in a range of benefits. Three in five are wary about trusting AI systems. 39% are willing to trust AI systems. And 67% report low to moderate acceptance of AI. So there's quite some way to go. Thank you very much, Chris. Thank you very much, Sounil. Thank you very much, Lars. Always too short, really too short. Thank you very much for your comments.
And thank you back for being my, yeah, my partners here today and for sharing our thoughts. Thank you very much. Thank you. And I have to apologize... well, first of all, thank you.
Thank you, Lars. And I cannot shake hands with Chris, but thank you.
Okay, he's frozen anyway. Okay.