So as Martin was saying, how do we trust AI decisions, right? That's my topic for today.
Trust, transparency, and user experience in AI-driven IAM systems. And how can we achieve that? Through explainable AI. So let's understand. As we all know, AI is getting more and more involved in making critical decisions in different areas of IAM, like policy enforcement, risk assessments, even automated provisioning and deprovisioning, and dynamic authorization as well. So how can you trust a decision made by AI without knowing why and how it was made?
How can you trust something, and would you adopt a technology that affects security but does not explain its actions? That is the problem we are facing today. And what is the solution? To build trust and transparency through explainable AI.
Now, what is explainable AI? It bridges the gap between the decisions AI is making and human-understandable explanations for them. So let's understand explainable AI in detail.
It refers to methods and techniques you can build on top of ML models to provide explanations that are understandable for humans. Let's take a quick example: AI is recommending that Ally be placed in the Finance Admin role. Now, how does it do that? What is the reasoning behind recommending a certain role? That is where XAI comes in; it can provide an explanation saying that Ally's recent activities, past user history, and past access requests all align well with the role he should be in.
That is how XAI explains why Ally should be in Finance Admin. Now, why is it important?
As we all know, it promotes transparency and enhances trust. For compliance with regulations like GDPR and others, if we provide clear explanations of why a certain decision was made, we can demonstrate compliance. Fairness and accountability is another one: it can act as a feedback loop. The systems are sometimes biased, right?
And with the reasoning behind certain decisions, you can find out why the AI made a certain decision and improve the system further, making the models fairer and smarter as well. So let's look at some of the explainable AI techniques. The two most popular ones are LIME and SHAP. LIME stands for Local Interpretable Model-agnostic Explanations.
The thing with LIME is that it provides local explanations for each individual prediction, for the role it has recommended for a user.
It does not provide global explanations. And it is model-agnostic, so you can apply LIME against any ML model. The limitation, as we see, is that it only provides local explanations; it may not tell you which features are influencing the model's decisions overall. SHAP, on the other hand, provides both local and global explanations, and like LIME it is also model-agnostic, so you can apply it to any model. The limitation for SHAP is that because it calculates feature contributions across all possible combinations, it is computationally expensive, especially for large models with a large number of features. So those are the two main techniques.
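To make the local-versus-global distinction concrete, a minimal sketch of how the two libraries are typically called against the same scikit-learn model might look like this. The toy data, labels, and feature names are illustrative, not taken from the talk or the case study.

```python
# Minimal sketch: LIME (local) vs. SHAP (local + global) on the same model.
# Toy data and feature names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

rng = np.random.default_rng(42)
X = rng.random((200, 4))                        # 200 users, 4 numeric features
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)       # synthetic "grant Finance Admin" label
feature_names = ["dept_finance", "seniority", "past_requests", "login_recency"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME: explain ONE prediction at a time (local, model-agnostic).
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "grant"], mode="classification"
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())    # per-feature weights for this single prediction

# SHAP: per-prediction contributions that can also be aggregated into a GLOBAL view.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X)     # contributions for every sample and feature
# shap.summary_plot(shap_values, X, feature_names=feature_names)  # global importance plot
```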
And this is a case study I have done with an AI-driven role recommendation system. If you look at the left-hand side, there is a data collection layer, which gathers data from different sources. The next layer is the feature engineering layer, for which I have used a MultiLabelBinarizer, because as we all know a user can have multiple roles, and the mapping is not one-to-one.
Then I have the machine learning layer, where I have used a random forest classifier together with a multi-output classifier so it can recommend multiple roles for a particular user based on user attributes like job title, department, location, company, et cetera. Based on those attributes, the system can recommend certain roles. And the explainable AI layer is the one where I have integrated LIME.
And once LIME provides me the explanations for those roles, I have further integrated ChatGPT and the Assistants API to provide even more natural-language explanations, along with some bar charts and pie charts.
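Roughly, the pipeline just described could be sketched as below. This is a simplified reconstruction, assuming scikit-learn; the user attributes, role names, and hyperparameters are placeholders, not the actual case-study data.

```python
# Simplified sketch of the described role recommendation pipeline.
# User attributes, roles, and settings are placeholders for illustration.
from sklearn.preprocessing import OneHotEncoder, MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# 1) Data collection layer (stubbed): user attributes and currently assigned roles.
users = [
    {"job_title": "Accountant", "department": "Finance", "location": "NYC"},
    {"job_title": "Accountant", "department": "Finance", "location": "London"},
    {"job_title": "Developer",  "department": "IT",      "location": "NYC"},
    {"job_title": "Developer",  "department": "IT",      "location": "Berlin"},
    {"job_title": "HR Partner", "department": "HR",      "location": "NYC"},
]
roles = [
    {"Finance Admin", "Expense Approver"},
    {"Finance Admin"},
    {"Git Access", "VPN Access"},
    {"Git Access"},
    {"HR Tools"},
]

# 2) Feature engineering layer: one-hot encode attributes, binarize the multi-label roles.
encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(
    [[u["job_title"], u["department"], u["location"]] for u in users]
).toarray()
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(roles)      # one column per role; a user can have several roles

# 3) ML layer: a random forest wrapped in a multi-output classifier (one forest per role).
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=50, random_state=0)).fit(X, Y)

# Recommend roles for a new user based on their attributes.
new_user = [["Accountant", "Finance", "Berlin"]]
x_new = encoder.transform(new_user).toarray()
predicted = mlb.inverse_transform(clf.predict(x_new))
print("Recommended roles:", predicted[0])
```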
That's the next slide: how it has predicted roles for a certain user. It has predicted three roles, and as you can see, I have two bar charts here.
One shows the prediction probability for the three roles it has recommended, and the other shows the feature contribution for each role. Because of space I only showed one role here, but there are bars for all the other roles as well, showing which features contribute the most. It also provides insights into whether certain user attributes have a positive or negative impact. That is XAI and how it is implemented in the role recommendation system. So it is not only about trust and transparency, right?
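For the natural-language layer mentioned earlier, the hand-off from LIME output to ChatGPT could look roughly like this. The sketch uses the plain chat completions call from the openai Python client as a stand-in for the Assistants API the speaker mentions; the model name, prompt, and contribution values are illustrative.

```python
# Sketch: turning LIME-style feature contributions into a plain-language explanation.
# Uses the openai chat completions API as a stand-in for the Assistants API;
# model name, prompt, and the contribution values below are illustrative only.
from openai import OpenAI

# Example shape of explainer.explain_instance(...).as_list() for one recommended role.
contributions = [
    ("department = Finance", +0.34),
    ("job_title = Accountant", +0.21),
    ("location = Berlin", -0.05),
]
role = "Finance Admin"
probability = 0.87

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You explain role recommendations from an IAM system to IT administrators "
                    "in two or three plain sentences."},
        {"role": "user",
         "content": f"Role: {role}\nPredicted probability: {probability}\n"
                    f"Feature contributions (positive supports the role): {contributions}"},
    ],
)
print(response.choices[0].message.content)
```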
Using XAI, you can enhance the user experience as well. Envision a personalized dashboard that automatically adjusts its layout based on the roles and permissions the user has.
And it can have a simple "why" button. When you click on that button, it can provide clear, straightforward explanations of why certain predictions were made, why some tools are prioritized over others, and how certain information is prioritized. How will users benefit from this? They can see the explanations easily on the UI.
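As a rough idea of what could sit behind such a "why" button, the sketch below exposes a hypothetical endpoint that returns a stored explanation for a given user and role. The route, payload shape, and in-memory store are invented for illustration, not part of the described system.

```python
# Sketch of a hypothetical backend for a dashboard "Why?" button.
# Route, payload shape, and in-memory store are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

# In practice this would come from the XAI layer (e.g. cached LIME output plus an LLM summary).
EXPLANATIONS = {
    ("u123", "Finance Admin"): {
        "summary": "Recommended because the user works in Finance as an Accountant "
                   "and has a history of approved finance access requests.",
        "top_factors": [
            {"feature": "department = Finance", "weight": 0.34},
            {"feature": "job_title = Accountant", "weight": 0.21},
        ],
    }
}

@app.route("/api/why/<user_id>/<role>")
def why(user_id, role):
    explanation = EXPLANATIONS.get((user_id, role))
    if explanation is None:
        return jsonify({"error": "no explanation available"}), 404
    return jsonify(explanation)

if __name__ == "__main__":
    app.run(debug=True)
```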
And because of that, you get a reduction in support calls: users no longer have to create an incident asking, "Why can't I see this tool I should have access to?" They can simply use the explain button and get the explanation. You also get increased adoption, and users will feel more comfortable with the user experience of the tools. Now for some more real-world scenarios. In large corporate environments with large IAM systems,
users already have certain permissions, but what they are actually using differs greatly; there are a lot of discrepancies in the permissions a user has. Now AI can play a role here: it can say, these are the permissions the user does not even need and is not even using, although they have the access. That can reduce unused permissions. So I have a diagram here.
If you see, without XAI the recommendations are shown to IT administrators, but without any explanations, and there is no basis for administrators to trust the decisions the AI has made. But with explainable AI, IT administrators can make informed decisions because they already have the explanations, and it will obviously reduce unused permissions, which also improves the security posture.
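A very reduced illustration of the unused-permission idea, with made-up permission names and usage logs: flag what is granted but never or rarely used, and attach a short reason an administrator can act on.

```python
# Reduced illustration of flagging unused permissions, with made-up data.
from datetime import date

granted = {"alice": {"Finance Admin", "Expense Approver", "Legacy ERP"}}
# Last time each permission was actually exercised, taken from activity logs.
last_used = {"alice": {"Finance Admin": date(2024, 5, 2), "Expense Approver": date(2023, 1, 10)}}

STALE_AFTER_DAYS = 180

def review(user: str, today: date = date(2024, 6, 1)):
    """Return (permission, reason) pairs for permissions that look unused."""
    findings = []
    for perm in sorted(granted.get(user, set())):
        used = last_used.get(user, {}).get(perm)
        if used is None:
            findings.append((perm, "granted but never used; candidate for removal"))
        elif (today - used).days > STALE_AFTER_DAYS:
            findings.append((perm, f"not used for {(today - used).days} days; candidate for removal"))
    return findings

for perm, reason in review("alice"):
    print(f"alice / {perm}: {reason}")
# alice / Expense Approver: not used for 508 days; candidate for removal
# alice / Legacy ERP: granted but never used; candidate for removal
```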
And for the future of IAM, AI and XAI are at the forefront, as everyone has been saying in all the talks.
Here I have looked at how IAM can be integrated with other technologies and how XAI can help there. The three I picked are blockchain technology, quantum computing, and IoT and biometrics.
Now, think about a blockchain-based IAM system where AI and XAI can provide dynamic access controls and also transparent audit trails; XAI will enhance security and trust in those decentralized networks. And for IoT and biometric technologies, as they advance, AI's role in managing complex networks of devices and identities will grow at a faster speed than we can imagine.
How will XAI help these integrations again? By providing more explanations and enhancing trust for users. And the same goes for quantum computing: quantum computing is promising to revolutionize data processing.
So how can AI and XAI leverage that power to manage identity verification and real-time threat detection using the unprecedented speed and efficiency that quantum computing is claiming? Thank you all for listening.
If you have any questions, please feel free to ask.
Okay, that was a very fast one, probably because you felt you were the only one standing between the audience and the lunch.
You know, it's a bit like in school: the one who raises the hand just before the bell rings. Yeah. But anyway, questions from the audience. I think we had two very interesting AI talks taking very different perspectives.
For me, the thought was: join these two, and then something, not that this isn't great, but something even greater, to phrase it correctly, would probably evolve. So I think there are some extremely interesting ideas, from what I see as an analyst, around what we can do with AI. I think last year we probably just scratched the surface of what AI can do for identity management, and maybe a few years ago as well, where we said, okay, let's strive to come up with some nice role recommendations.
But now we are really going to the next level, where we think about revolutionizing. So not just doing some things a bit nicer, but doing things very differently and much better. That is something I really like.
Anyway, questions? Come on.
That's good.
It's very, very good to have someone like you in the audience.
So thank you very much. The question is: it's all great, we love it. Have you ever tried it on actual guinea pigs, I'm sorry, customers? Have you tried it?
Like, you know, anybody? Like, have you seen the prototype, is anybody trying it? Thank you.
No, I did build the prototype, but that is internal to Empire ID. I just did a POC; it has not gone to any customer yet.
Okay.
So the proof of the pudding is still missing. Okay, another question. Mr. Lap Lopez, here we go.
In your model, who were you explaining it to? So your target audience and how did you measure success? Because I understand we're all different, we understand things in different ways, we use different languages. So what have you learned and what can you tell us about that?
Sorry, I couldn't completely understand the question.
So you were trying to measure explainability. I guess the simple question is: how do you measure the success of explainability?
How can we measure the success of explainability, right?
Yeah, partly through the predictions of the ML model itself. But the logical explanations it provides, like the ones based on departments, you still have to verify, for example through IT administrators. I do not know exactly, at this point, how we can measure the explainability aspect itself.
I guess they can keep chatting with it until they understand.
So yes, they keep chatting until it is okay.
Yeah, I think there are a lot of interesting thoughts in this. It really helps with understanding and with thinking about the future.
Patrick, a very short question for you, otherwise you will be the one standing between the crowd and the lunch, sorry, but
Maybe just one curveball to think about when we get back in the office: explainability in a multi-agent system where the agents are optimizing their interactions beyond what humans can understand. Maybe we need some type of logging where we can look and see what they were chatting about, and they explain it to us, the less intelligent carbon-based life forms.
But something to think about when you get into explainability where there is no human involved in the middle.
Without the human, yeah.
Okay. So you see, things are evolving very quickly, as you can see here. So thank you very much for this great talk, Anma.
Thank you.