We are still on identity security as the main topic of this track. We had a great discussion earlier with Adam Price and with Justin Richer regarding inclusivity and digital vulnerability, and now we want to cover another topic which is a bit more forward-looking, not yet fully covered, and, I think, not yet fully understood. We want to talk about the topic called Autonomous yet Accountable: Do We Need Identities for AI? But first of all I would like the two of you to quickly introduce yourselves. I don't think it's really necessary, but let's do it anyway. Maybe starting with Jacoba.
Well, Jacoba Sieders. I've run identity management in the financial sector for the past 22 years. I've been there, seen that, seen the whole topic grow, and been audited all the time, so that helps. And I'm Martin Kuppinger, Principal Analyst and one of the founders of KuppingerCole Analysts, in the identity space for, I think, more than three decades now.
Right. And the same holds true as for the last panel: if there are questions in the room, just give me a sign and I'll try to weave them into the discussion as well. But as a starting point, whoever wants to pick this up: do we see a need for individual identities for AI entities, and what do you see as the primary factors driving this need? Do we need it, and if so, why? Maybe starting with you, Jacoba.
My statement would be: well, there are dependencies, but in principle, yes. And if not because we have an opinion, then because it's in the new AI regulation: you have to assess the types of AI, you have to report on them, you have to classify them as high risk or not, and even for auditing alone you would need to identify your AI. So I think that's one of the reasons. Yeah, and I would give it a bit of a different perspective. I'd like to talk about AI identity, because I think there's a lot of AI towards identity and a lot of identity towards AI.
So let's call it AI identity. And I think there's another area: we have, and will have, metaverses. Not the metaverse; there will not be the metaverse. There are already many metaverses out in organizations, around augmented reality used in manufacturing, et cetera. So it's not the metaverse, there will be many. But what will increasingly happen is that there will be a sort of digital double of us. I would love to use the term digital twin, but it's already used in some areas, so let's call it a digital double.
There will be, whatever, an avatar of Martin, or many avatars, acting somewhere on behalf of Martin: things that are powered by some sort of AI and that do things on behalf of me.
They are, so to speak, digital representations of Martin, and in that sense they should have some sort of an identity. Okay, we could get philosophical and ask ourselves whether identity is the right term, because identity, when we look at the term itself, would require consciousness. So maybe it's not quite the right word. It's very philosophical, I believe. I mean, you can go back to Plato and all that.
Yeah, we'd better refrain from that part. We can identify them, for sure. But commonly we use the term identity, and so in that sense I think we should have an identity for everything that is acting on behalf of someone, of something, of some organisation, or whatever.
Well, I would rather compare it to something I've done in one of the banks, implementing ABAC, attribute-based access control, meaning you have a set of rules that could calculate new rules or do some learning based on data. After a rule is played out, there's a request, there are the rules, there's the data set, and there's an outcome: you can or you cannot withdraw your 10 euro. And when the auditor comes, they want to know whether there was the right to take those euros out.
I have to prove the whole calculation, reverse-engineer the whole rule set or algorithm or whatever it is, to prove that yes, this was correct. Now, I use this as a parallel. That was in 2015; it was not AI, it was rule-based, but I learned a lot from it: you can have algorithms and you can have rules, but beyond that, how do the rules play together when one rule overrules another? That's becoming more complex all the time.
So I would want to know which algorithm, with which capacities, talking to which attributes and which data set, gave me the outcome: yes, you have access, or no, or whatever you were searching for. But of course there are many types of AI. So if it's a simple rule set with a clear purpose and it's a critical thing, I want to identify which exact algorithm was at play, and I want to freeze it or store it or get some trail. That's what I'm talking about. And right now we're talking about that, so to speak, on steroids. Yeah.
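To make that parallel concrete, here is a minimal sketch of such an auditable rule evaluation, assuming a simple single-rule engine; the rule, the field names, and the version tag are illustrative assumptions, not the actual bank system described.

```python
# A minimal sketch of auditable rule evaluation: each decision stores
# the request, the exact rule-set version, the attributes consulted,
# and the outcome, so an auditor can replay it later.
# The rule and all names are illustrative assumptions.
import json
from datetime import datetime, timezone

RULESET_VERSION = "withdrawal-rules-v3"  # pinned version of the rules in force

def may_withdraw(request: dict, attributes: dict) -> bool:
    # the rule itself: the balance must cover the requested amount
    return attributes["balance"] >= request["amount"]

def decide(request: dict, attributes: dict) -> dict:
    """Evaluate the rule and return a replayable audit record."""
    return {
        "ruleset": RULESET_VERSION,
        "request": request,
        "attributes": attributes,
        "allowed": may_withdraw(request, attributes),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

audit_record = decide({"action": "withdraw", "amount": 10}, {"balance": 25})
print(json.dumps(audit_record, indent=2))  # what the auditor gets to see
```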
Isn't it, in a sense, that there's something which is way more powerful, sort of saying: okay, I'm the avatar of Martin, let's stick to this, and I assume I know what Martin wants. Maybe there are some defined rules that are set, and this thing evolves, so to speak; it learns, it goes further. We may end up with the still very common explainability, and I would say traceability, challenge in AI: where do these things come from, et cetera.
And then, going back to identity, we have something which in some sense has an identity, which is a representation of someone or is totally autonomous. Take connected smart traffic, where we have a ton of autonomously acting systems which should learn.
Yes, they should learn from what is happening in the traffic and get better at it. But then, all of them have an identity, and we end up with control, with liability, with a ton of, I think, interesting questions. Right, and when I asked the initial question, do we need identities for AI, one could ask back: hey Matthias, what do you mean by AI in that case? You've mentioned the avatar that acts on behalf of a person, so there's a relationship. And we have mentioned the totally autonomous entities that act, give you an outcome, and stand for themselves, actually as an actor, let's say.
Exactly. So the first thing would be some kind of identity relationship management, to say: okay, this is something that acts on behalf of Martin, but hey, it's Martin who's responsible, maybe. Or, on the other hand, this is really fully autonomous, as you've mentioned. Still a lot of relationships. Absolutely, absolutely, but between each other, and no individual responsibility, maybe. Is it?
Yeah, that's the question. Responsibility and accountability are two different things. I think that the process itself could have the responsibility, like an actor doing stuff on behalf of someone, but you are the accountable, the end-responsible person. I recently thought about something, and I don't have an answer to it, but I think it relates a bit to that: you have a vehicle that allows for autonomous driving, and then you're speeding. So the first question is, could this happen at all?
If so, it would at least require that this vehicle is able to ignore the speed limit, that the manufacturer has built it, or programmed it, in a way that it can ignore the speed limit. And then the question is, does it do that by itself, or are you able to configure the system, your vehicle, so that you say: oh, you can easily drive 10% or 20% faster. And if that's not allowed, what about a situation where you need to speed a bit to avoid an accident?
And this, I think, is a very interesting point, because it involves a lot of things: the identity of the car, your identity, the organization, a complex relationship, and a lot of questions around it, responsibility, accountability, liability. Exactly, and that's why I want every acting autonomous process, or algorithm, or whatever you call it, to be identifiable: this was the one running in your car at this moment, with these parameters, and this is the reverse-engineered outcome, and that's why you crashed.
And then the next step is the legal liability: was it your fault, was it the car's mistake, or the producer's, or whatever. But first you need to know which rules were at play, and that should be identifiable by some unique identifier, and that's what I call the identity. Fully agreed.
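A rough sketch of what such a unique identifier could bind together at incident time, under the assumption of a simple content-addressed scheme; the names and the record layout are invented for illustration, not a real standard.

```python
# A toy sketch of forensic identification: an incident record that pins
# down exactly which algorithm version, with which parameters, was
# running in which vehicle when the crash happened.
# The scheme and all names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def algorithm_identity(model_bytes: bytes, parameters: dict) -> str:
    """Content-addressed identifier: the same code and parameters always
    yield the same identity, so the exact run can be pinned down later."""
    payload = model_bytes + json.dumps(parameters, sort_keys=True).encode()
    return "algo:" + hashlib.sha256(payload).hexdigest()[:16]

def incident_record(vehicle_id: str, model_bytes: bytes, parameters: dict) -> dict:
    return {
        "vehicle": vehicle_id,
        "algorithm": algorithm_identity(model_bytes, parameters),
        "parameters": parameters,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = incident_record("vehicle:WDB-123", b"<compiled driving policy>",
                         {"speed_tolerance_pct": 0})
print(json.dumps(record, indent=2))  # what to hold to account, and when
```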
And I think the vehicle, by the way, everyone understands vehicles a bit, probably, is a very interesting thing, because when we think about the vehicle as an identity, then I think we are way, way too coarse-grained, because the vehicle is, in fact, a conglomerate of connected entities, call them entities, with very complex relationships, not only within the vehicle but with the outside. Take the black box for recording accidents, et cetera. This black box is one element in this, which gets data from other systems and which has very sophisticated access rules, because in some countries the police may be allowed to access it at any point.
In some countries, insurance companies may read a lot of data out of it; in other countries, only certain groups, and only in the case of an accident. And then around this vehicle there are many, many different entities: the driver, the other people sitting in the car, the leasing company, the manufacturer, the garage, the police, the insurance company, and the people working there as well, plus all the other vehicles and the traffic control systems that are connected. Without an identity, it will not work. And everything which is autonomous brings us back to AI, because when we talk about autonomous, there is always a bit of AI in it, whatever the proportion may be. So without this identity concept, we can't get in control, neither for tracking it, so you come from the governance side, but it's also about the control side: who is allowed to change what under which circumstances. So we need identities and, so to speak, sophisticated access control concepts to make all these things work, I believe.
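As a toy illustration of those context-dependent black-box access rules, here is a sketch assuming a simple policy table; the countries and rules are invented for illustration, not actual legislation.

```python
# A toy sketch of context-dependent access rules for a vehicle's black
# box: who may read it depends on the party, the country, and whether
# an accident occurred. The policy table is an invented illustration.
def may_read_black_box(party: str, country: str, accident: bool) -> bool:
    policy = {
        # (party, country) -> predicate over the situation
        ("police", "A"): lambda acc: True,    # at any point
        ("police", "B"): lambda acc: acc,     # only after an accident
        ("insurer", "A"): lambda acc: acc,    # only after an accident
        ("insurer", "B"): lambda acc: False,  # never
    }
    rule = policy.get((party, country))
    return rule(accident) if rule else False  # default: deny

print(may_read_black_box("police", "B", accident=False))  # False
print(may_read_black_box("insurer", "A", accident=True))  # True
```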
We have a question. Sorry, just a second, I have to bring the microphone over; as usual, it keeps me moving.
Yeah, to come back to the question: you were talking about speeding and what would happen, but isn't there already something happening now with AI systems, especially the image-generating ones? They say: if you use our image-generating system and you get sued for copyright because of something that was in the training data, then we as manufacturers will take limited liability, we will take on, let's say, all the court costs up to a certain point. Isn't that more or less the same thing?
So you get a disclaimer culture, because everyone is trying to get away from liability, that's what you mean. Oh, sorry.
No, no, that's good. Yeah, but I think the AI is just some element in your car, like your engine: if you don't know anything about cars, your engine block is also a black box for you. And there are elements, and they are numbered, there are serial numbers, and you have to put in the oil, and that's it, and you know how to drive. But if something's wrong in your car, yeah, that could also be the AI, and it should be traceable. I believe there's a fundamental difference between your example and my example.
In your example, it's always about, so to speak, conscious use, or abuse, of some sort: you say, create something that looks like Dali, or stuff like that. So it's something where you say, I use this actively to create something.
When you have a vehicle, you say: I want to have something that drives me autonomously. And when that thing by itself, or that conglomerate of things, ignores the speed limit, or when you say, I'll ignore the speed limit, then you have a different scenario, because it's a bit more complex: in the one case, you just get a tool that you do something with.
In the case of the speed limit, as I've said, the vendor could allow you to change settings to ignore it, or the thing could ignore it automatically by itself, and then you have very different scenarios. So I think it's more complex, but yes, there's the risk of a liability culture. On the other hand, we also see that there is some level of liability for autonomous driving at level three: as a vendor, you need to take on a certain level of liability, for instance. If I may jump in there, it's a bit different. But we are running out of time. We are running out of time.
Maybe we should take one step back and look into use cases. You've mentioned identifiability of an AI, to know who did what. If I read the act about artificial intelligence, it classifies AIs into three levels: unacceptable risk, threats to people's lives or whatever; then high risk; and then minimal or lower risk. And there's the impact on the process being executed: if it's driving a car, of course, that could be high risk but acceptable, because you should manage it properly. I've been thinking about that.
If it's just drawing a cow with a doll's head, that's not a high-risk AI, and then it's less important to know exactly which algorithm did it. But the high-risk AIs, as mentioned in the act, are the ones in hospitals, cars, aviation. There's a lot of legislation in those industries themselves, addressing exactly the composite risk that now emerges from AI combined with the existing, old-school industry risks, which are all safety risks. But at least, with all the things we touched on, I think we agree there needs to be identity for AI.
And we need to identify the right use cases. So it's authentication... In some cases it's more important than in others.
Yeah, exactly. But also for forensics, as you've mentioned: why did this happen? So you need auditability and... Auditability, but it's also about transparency and human intervention. If someone claims that some decision or whatever was wrong, there should be a kill switch, there should be manual intervention, and all these things. So if you don't know where to intervene, you need at least an identifier for the process.
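A minimal sketch of that last point, assuming a simple registry design: manual intervention needs an identifier to aim at, so each AI process registers a kill switch under its identity. All names here are illustrative assumptions.

```python
# A minimal sketch: a registry maps each AI process identity to a kill
# switch, so human intervention has something concrete to aim at.
# Everything below is an illustrative assumption, not a real product.
from typing import Callable

class AgentRegistry:
    def __init__(self) -> None:
        self._kill_switches: dict[str, Callable[[], None]] = {}

    def register(self, agent_id: str, kill: Callable[[], None]) -> None:
        """Every deployed AI process registers under its identifier."""
        self._kill_switches[agent_id] = kill

    def intervene(self, agent_id: str) -> None:
        """Manual intervention: halt the identified process."""
        self._kill_switches[agent_id]()  # raises KeyError if unidentified

registry = AgentRegistry()
registry.register("agent:loan-scoring-v7", lambda: print("halted"))
registry.intervene("agent:loan-scoring-v7")  # works: we know whom to stop
```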
OK, so we are now at the same point as in the last discussion: we have just touched the surface of the topic. But I think this is an important discussion to start. Also with AIs spawning AIs, and clouds of AIs communicating with each other, those will be topics much more complex than what we just touched upon. But nevertheless, we agree upon at least... Or is there somebody in the room who disagrees? That we should have identifiable AI instances acting within our systems? In critical systems, yeah. And in critical systems? I'll just grab the mic.
I would say if something goes out into the world, it should be identifiable. But if it's something I run in my garage as a hobby project, should it be identifiable?
Yeah, minimal risk, low risk, low impact. When it runs over your cat?
Well, when it escapes my garage, then I do think, once they track it back to me, that there will be something said. But as long as... Okay, thank you very much for the discussion. Maybe we can ask AI to help us with this question.
Right, ask your ChatGPTs. Thank you very much. You're welcome.