Welcome to our KuppingerCole Analysts webinar, EmpowerNow AI: Modernizing Identity Workflows for the AI-Powered Future. This webinar is supported by EmpowerID. The speakers today are Dr. Michael Amanfi, who is Chief Architect at EmpowerID, and me, Martin Kuppinger. I'm Principal Analyst at KuppingerCole Analysts. Just a quick note: for technical reasons, Michael will not be on camera, but you will be able to hear him. The audio is working fine. That's just for your information.
As I said, a little bit of housekeeping. The first thing is, you're muted centrally, so there's nothing you need to do around that. We run two polls during the webinar and, if time allows, we'll look at your results during the Q&A. So hopefully a lot of you will participate in these polls. There will be a Q&A session at the end of the webinar. And last but not least, we are recording the webinar. Both the recording and the slide deck used will be made available for download soon after the webinar, and you'll be informed about them. So without further ado, let's get started.
The first thing I'd like to do is look at the first poll. We are talking today about the intersection of AI and IAM, something I tend to call AI identity. And the question is: do you already have something productive where you rely on AI technologies? Are you in an evaluation, proof of concept, or deployment phase where you work on that? Or are you not yet there? I'm curious about your perspectives on that. The poll should be online now.
I think there's a fourth option displayed, which you can ignore; for some reason it just repeats the question. So focus on the three: productive, evaluation/proof of concept phase, or no. Okay. Next, let's look at the agenda. It is split into three parts. The first part is about AI use cases in IAM, so I'll talk a bit about the use cases I can envision. The second then goes very concretely into using AI for identity orchestration; I think this will be very insightful, as I've already seen some parts of it. Last but not least, we will do our Q&A.
So to get started: AI and IAM use cases. There's a lot we can do with AI in IAM, with AI identity. And by the way, also the other way around: we need IAM very urgently for AI, for handling it properly. Just take all the discussions currently going on around DeepSeek; this highlights that we also must have concerns about how data is used, how data is accessed, et cetera. So we need the other way around as well, and there will be quite some research provided by KuppingerCole Analysts on that other side of it.
Today, we are looking at how AI can foster what we do in IAM. And there are many use cases. One very typical use case is identity analytics: really using ML algorithms to analyze data and identify anomalies, to help prevent security threats, improve compliance, and identify gaps. Another very prominent area is risk-based authentication, again looking at how we can use AI to deal with anomalies.
The identity analytics part is, to a certain extent, more about static patterns and what someone does with access. Risk-based authentication is really about the context factors, the signals, as we call them today, that we have around an authentication, and how we use these signals.
And again, it's about anomaly detection; it's about reacting flexibly to things that are changing.
And that, again, is an area where AI, I would dare to say, is very established, maybe the most established area, also in its facet of adaptive access, where the result is basically to react on that. In some sense, I would say risk-based authentication and adaptive access are very close to each other and significantly overlapping. These are areas we all are aware of. But we can use AI in many other areas, so let's look at predictive identity, as we called it a while back.
Predictive identity basically goes beyond behavior analytics to look at potential security threats; it also looks at what the next expected behavior could be, and if that doesn't occur, it may raise signals to a certain extent.
So again, this overlaps, in fact, with behavioral analytics. So nowadays, I think that the main term we use for that is a different one, which is ITDR.
It's, in that sense, the new user behavior analytics. It employs ML algorithms to identify patterns and anomalies in user behavior and enables organizations to react better to security threats. By the way, I also like the term ITDR much more than the term user behavior analytics, because with user behavior analytics, I think the typical perception is more on the negative side.
Oh, we're monitoring the user and the user's behavior. And if you tell this to a works council in one country or another, that might raise red flags. With identity threat detection and response, you could argue it's not that different.
Basically, it goes beyond that in today's capabilities, but there's a lot of overlap. It's basically the positive side: we protect against threats, we detect them, we respond to them. Very positive. So sometimes there's also a lot of psychology in the naming, and I think in this case there is. Then we have identity verification. This happens before authentication: we verify someone, ideally when the account is created, and afterwards they come back with authentication.
For this verification piece, we also need AI, especially in an age of deepfake attacks, where we really need deepfake detection to understand what is true, what is not true, what has been faked and what not. And again, algorithms help us here, from simple things like improved OCR, optical character recognition, to bring up the old abbreviation, to really analyzing whether someone is live and whether there's any indicator of a deepfake.
Again, this is one of those areas where I would say it's a very important use case. And then there's clearly also password management, looking at weak or potentially compromised passwords, et cetera. But I think the better choice is this one: I would not care much about password management in an ideal world. In reality, where a lot of legacy is still around, we probably need to do it. But put your emphasis on going passwordless, because if you don't have passwords, you don't need to care about the challenges that come with passwords.
We can argue similarly when we look at non-human identities or machine identities, like workload identities, and all the secrets they use instead of passwords, or, when we look at technical accounts, SSH certificates, et cetera. There we run into problems that are not that far away from some of the challenges we have in password management, like regularly rotating these. So the world doesn't stand still. AI doesn't solve everything, but AI helps us. And I think this is very important for the entire context of what I'm telling you: it's not the Holy Grail.
It doesn't solve everything, but it augments us in doing certain things better. I definitely like the interpretation of AI as augmenting intelligence way more than as artificial intelligence, because this is what we really get today. Then we have roles, role assignments, recertification.
When we look at IGA tools, Identity Governance and Administration, very early on, more or less the first things that happened around AI and ML usage in that space were about better role mining: identifying potential role candidates, finding anomalies, et cetera, and helping to simplify this process of defining, managing, and changing roles. That is clearly a bit of using a very powerful technology for a very old problem, but I think there's a logic in doing so.
Even though I personally believe the better question would be: how can we get rid of roles? How can we get rid of standing privileges, of static entitlements? But we have them. They are around. We won't get rid of them easily everywhere. So these technologies help us at least to cure symptoms a bit, even though they don't fix the cause. Then there are peer recommendations and verification; I am always extremely reluctant regarding any type of peer recommendation, because peer recommendations to me are a bit, how should I phrase it?
They at least carry the risk of repeating and conserving the mistakes of the past. If the peer is already over-entitled, you recommend that someone else becomes over-entitled as well. Only if you combine that with analysis around the least privilege principle does it make sense. And I think this is one of the strengths when we go back to entitlement management and user behavior analytics.
AI can help us track much better which entitlements are never ever used by someone, or only used in very rare but, I would say, tangible situations, so we can say: okay, these entitlements are used in that scenario, in that context. In the first case, we can remove them because they are not needed; an entitlement that has not been used for a time span of at least 12 months is probably over-entitlement. And sometimes you have entitlements that are only used during certain maintenance periods or during the year-end closing of the books or whatever.
So we could think about how to assign them just for that period. That would help us achieve the least privilege principle. Also for access recertification, identifying what can be recertified automatically, providing more information, et cetera, can help. Better still would be finding ways to get rid of recertification, because no one likes recertification. And by the way, just a quick reminder: if you have any questions for the Q&A, you can already enter them into the Q&A section at the right-hand side of your screen.
The more questions we have, the more lively the Q&A will be. So what else? Supporting support. We have seen this popping up with generative and conversational AI, with all these copilots, et cetera, whichever name they have; you will find quite a number of different names for all these copilots, from very artificial names to very human names. They can potentially help you with quick and hopefully accurate responses to solve a certain job better, to do something more efficiently. This is something where we will definitely see quite a bit from Michael later on in his part.
Again, we should be cautious and conscious here about how good the result really is. Not everything that comes back is really good. Sometimes generative AI tends to hallucinate.
Sometimes, as I think the current DeepSeek discussions showed us, it just delivers factually wrong information about certain things. So don't trust it blindly. Maybe think about zero trust as a principle you apply to AI: don't trust, always verify. Then there's intelligent onboarding: how can we improve these processes based on prompt books, et cetera, and help in the configuration process, identifying and proposing the right entitlements, and all of that? And what else? What are your ideas? Think about it. What all can you do?
You will see a lot of things presented by Michael later on. One of the things that is clearly increasingly hot these days is natural language user interfaces, where you just say: I need a report about orphaned accounts in Active Directory. And that's it. Nothing else to do, and you get the report. Or: I need to grant access for these people in that project for that purpose, et cetera, and the right access comes. This is what we can do, and where we really have a huge potential. So let's look at these new options.
These are just some thoughts about what we can do. Before I hand over to Michael, the second poll. This one is about the area in which you expect the biggest impact of AI on identity and access management. Is it improving the privileged identity security posture, so dealing with privileged identities like administrative access, et cetera? Is it advanced behavioral analytics for session management, so when someone has a session, be it a privileged user or not, do you continuously analyze it and detect potential misbehavior?
Is it more down-to-earth access and role analytics and management, like role mining, recertification, et cetera? Or is it AI-based ITDR, identity threat detection and response?
As always, we will leave the poll open for a while; the more responses, the more interesting and valid the results. And with that, I come back to the agenda slide and hand over to Michael for his part, which is about using AI for identity orchestration. All right.
Hi, everyone. Thanks for joining this webinar. Unfortunately, for some technical reason, we couldn't get my camera up and running, but we decided to move forward using audio for the presentation. There's not a lot of time here, and I want to spend more time digging into some of the technical aspects, so I'm going to skip through my usual presentation and get to the meat of the webinar.
If we look at each of the core components of identity and access management, we see long-standing challenges, from rigid predefined processes to static rules that can miss new risky user behaviors. But generative AI, powered by large language models, can help us address these challenges in traditional IAM systems.
Here, I'm talking about generative AI as a reasoning engine in identity workflows, helping us simplify these workflows with human-like conversational interfaces. But I think the best way to think about generative AI in the context of identity management is as intelligence as a service: essentially giving you on-demand, scalable access to AI capabilities to handle user requests better, make self-service workflows faster and easier, and also enable workflows that can learn and improve themselves over time.
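To make that "reasoning engine" idea concrete, here is a minimal sketch of an LLM-backed decision step inside an access-request workflow, assuming an OpenAI-style client; the model name and the AccessRequest fields are illustrative assumptions, not the EmpowerNow implementation.

```python
# Minimal sketch: an LLM as the reasoning step in an access-request workflow.
# The OpenAI-style client, model name, and AccessRequest fields are
# illustrative assumptions, not the EmpowerNow implementation.
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


@dataclass
class AccessRequest:
    requester: str
    resource: str
    justification: str


def triage(request: AccessRequest) -> str:
    """Ask the model to recommend a next step; a human still approves."""
    prompt = (
        "You are an identity workflow assistant. Given this access request, "
        "answer with exactly one word: APPROVE, ESCALATE, or DENY.\n"
        f"Requester: {request.requester}\n"
        f"Resource: {request.resource}\n"
        f"Justification: {request.justification}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the routing recommendation as stable as possible
    )
    return reply.choices[0].message.content.strip()
```

The pattern is what matters here: the model recommends, the workflow routes, and a human or policy engine keeps the final say.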
The big challenge that I see for most organizations is really figuring out how to update traditional identity processes with AI conversational flows. In the demo, I will show how the EmpowerNow AI solution tries to address this kind of challenge. The EmpowerNow AI solution is a Python-based agentic workflow system that uses AI to manage and simplify complex identity workflows. The solution automates the entire process and works easily with existing systems as well. It enables workflows to understand and respond to natural language, obviously, because it's powered by LLMs.
And it integrates with APIs or custom tools. It's flexible enough that you can use it both for small tasks and with your large enterprise systems. The solution is built on four key components: a tool to design workflows, a system to run them, a chatbot interface, and a service, what we call the CRUD service, to manage data in external systems.
Now, the CRUD service is pretty critical here. Its role in modernizing identity workflows is that it manages all interactions with external systems, separate from your AI-powered workflows. And building on our experience with user-friendly identity management tools, we now make it easy to upgrade your identity workflows using the same visual models EmpowerID clients already know and use. But we're actually going further this time by offering a new web-based agent development studio. So now let me get into the demo use case here.
In the demo I'm going to show you, you will see how EmpowerNow AI makes managing access requests simple by using an AI conversational interface. There's no need to deal with complicated web portals or log into an application. In the demo, you will see AI agents work together to make managing access requests seamless.
You can think of an AI agent in this case like your typical ChatGPT, but instead of just answering questions or responding to inputs, agents can observe the environment, figure out what to do, and take action to try to reach a specific goal that you define for them. So in EmpowerNow AI, agents participate in identity workflows, making those workflows agentic in nature. In the demo, we have basically three agents.
One of them is the main supervisor, which is itself an agent, but it works with sub-agents to address user queries. Of the sub-agents, one simply handles access requests, or business requests, and the other works with the JIRA issue tracking software. So with that, let's dive into my development machine and take a look at the EmpowerNow AI solution. Let me bring up my desktop here.
Let's imagine that I'm your typical access request manager. I get up in the morning and I'm interested in seeing what is going on in my environment with regard to access requests. So the first thing I'm going to do is simply jump into Microsoft Teams, go to my AI conversational interface, and kick off one of the AI agents. In this case, I'm going to ask my bot for help.
Okay, so it brings up the things it can do for me. Now I'm interested in working with this access request supervisor, the supervisor agent. There are multiple agents here that I have deployed: one agent for business requests and another for managing JIRA issues. But I'm going to kick off this agent here.
All right, so the first thing it does is tell me about its capabilities, what it can do for me: essentially, it can retrieve access requests and perform CRUD operations against those access requests, and it also has the ability to handle JIRA-related tasks. But what I'm actually going to do, because I'm in this meeting, is delegate my task to one of my colleagues here in my team.
So folks, what you're seeing here is that within my team we have an AI agent, which is actually an identity workflow, that is a member of our team. I invoke the agent, and then I tell one of my colleagues that I'm going to be away, since I'm finishing up a meeting, and ask him to check on the most recent access request for Patrick for me. That was the message. My colleague responded, but he didn't go to some portal or some application to check on the most recent access requests for this individual.
Right here, in place, in context, he asked the agent: hey, can you give me the two most recent access requests for this individual? And the bot responded with that information. Having received the information, he sends me a message that he has some concerns with the second access request and wants me to review it. So, being aware of his concern, what I'm going to do is ask the agent to create a JIRA issue for the second request, as my colleague indicated.
Folks, you have not seen this anywhere. This is new. This is people and AI agents working together in real time. If I go to my JIRA dashboard, you can see I have no JIRA issue created here. We're going to wait for this guy to respond. And now, if we look at the response, it's saying that it has created the issue for me. Awesome. So if I go back into JIRA and refresh the view, I now have an issue generated here. Let's go back to Teams.
Here in Teams, we have one of my colleagues, Hamad, who has been following our conversation. And he thinks we need to add a comment to the issue related to the security team's meeting tomorrow. So I'm going to respond to Hamad and ask him to add the comment.
Now here, instead of Hamad once again logging into some external application or navigating some complicated web portal, he is right here, in context, part of the conversation, with real-time information, asking our agent to add the comment. And behind the scenes, the agent is able to act on behalf of each user, not using some service account or service identity, but using the individual requester's identity to go into the backend system and do that work. So here we have the comment.
Now, when we set this up, we used a specific identity from JIRA: we used Patrick's identity. So you may see Patrick pop up here with a comment, or his name might show up in the issue. But here we see that this is the comment that Hamad indicated should be added to the JIRA issue.
Now, something important is taking place here. If you look at the conversation, it's obviously a natural conversation we're having. When my colleague Joel expressed concern with the second access request, I did not give the agent any specific details about that request when asking it to create the JIRA issue; I simply said it should use the details from the second request, and it knew exactly what I was referring to. Once again, when my colleague Hamad asked it to add a comment to the issue, he did not specify any details about that issue.
He simply said: hey, add this comment to the JIRA issue. And the agent figured out which issue we were talking about, the issue that it created, right? It has that contextual awareness to know what our team is dealing with.
Now, let me jump back into the Workflow Studio environment, where I actually built these three agents. This is the supervisor agent, and when I set it up, I simply gave it access to two sub-agents.
So, each workflow is actually an AI agent in itself. And it has a unique ID and a description.
So, I gave the supervisor agent these two sub-agents it can work with. What is really happening is that the supervisor, when it receives a request from me or any of my colleagues, is planning against the capabilities these two sub-agents have, making decisions, interacting with the sub-agents, reflecting on the outcome, and of course using the feedback to improve itself. If we look at the business request agent, one of the sub-agents, we see that the agent has access to a bunch of tools.
Now, with the EmpowerNow solution, we ship a bunch of tools out of the box. So if you want to retrieve, let's say, audit information from your IGA system, or interact with some custom database, or make a call into some external system via our CRUD service, you have the tools to do that.
Here, I have a whole bunch of tools that I have added, including, of course, the one that actually pulls the access request data. You can see that I'm pulling a bunch of data from the backend system and surfacing it to the chatbot. This is the JIRA agent.
Again, it's a single agent. It's independent. It can work alone.
And here, it has access to a bunch of tools. So, these agents have tools, and the supervisor has access to these agents.
So the supervisor is able to coordinate the conversation between these agents to address whatever issue it is set to address. And, of course, the code is Python; much of this code is really generated.
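To give a feel for the pattern, here is a simplified sketch of that supervisor/sub-agent layering. The class, the agent IDs, and the keyword-routing stub are invented for illustration; in the real system, the LLM does the planning and picks the sub-agent or tool, and the generated code is not shown here.

```python
# Simplified sketch of the supervisor/sub-agent layering described above.
# The class, IDs, and keyword-routing stub are invented for illustration;
# a real agent would let the LLM plan and pick the sub-agent or tool.
from typing import Callable


class Agent:
    def __init__(self, agent_id: str, description: str,
                 tools: dict[str, Callable[..., str]] | None = None,
                 sub_agents: list["Agent"] | None = None):
        self.agent_id = agent_id
        self.description = description  # the supervisor plans against this
        self.tools = tools or {}
        self.sub_agents = sub_agents or []

    def handle(self, query: str) -> str:
        # Stand-in for LLM-driven planning: route on description keywords.
        for sub in self.sub_agents:
            if any(word in query.lower() for word in sub.description.split()):
                return sub.handle(query)
        return f"[{self.agent_id}] handling directly: {query}"


business = Agent("business-requests", "access request operations")
jira = Agent("jira", "jira issue tracking")
supervisor = Agent("supervisor", "routes user queries",
                   sub_agents=[business, jira])

print(supervisor.handle("show the two most recent access requests"))
```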
So much of the time you're not really writing code, but within our development environment you are able to come in and make changes to the code. If we go to the JIRA agent, we can see one of its tools. If we look at this tool, we can see that we're taking the user's access token, the user here in Teams, and passing it down.
So it's the user's identity that is actually being used to interact with the external system. And, of course, the central CRUD service is responsible for managing authorizations and all of that.
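As a rough illustration of that token pass-through pattern, here is a hedged sketch: the tool simply forwards the calling user's token, and a CRUD-service endpoint enforces authorization centrally. The endpoint URL, payload shape, and function name are assumptions for illustration, not EmpowerNow's actual API.

```python
# Hedged sketch of the token pass-through pattern: the tool forwards the
# calling user's token and a (hypothetical) CRUD service endpoint enforces
# authorization centrally. Endpoint and payload shape are assumptions.
import requests


def add_jira_comment(issue_key: str, comment: str, user_token: str) -> dict:
    """Act as the requesting user, not as a shared service account."""
    response = requests.post(
        "https://crud-service.internal/jira/comments",  # hypothetical endpoint
        json={"issue_key": issue_key, "comment": comment},
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=30,
    )
    response.raise_for_status()  # the CRUD service rejects unauthorized calls
    return response.json()
```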
So we don't have to code any of that within our agent workflow here. This is really the future of identity workflows. People are not going to be logging into applications; they will be right here, in real time, collaborating with the agentic workflows and solving problems this way.
Much, much more efficient. So, before I run out of time here, there are a couple of things I wanted to talk about.
So, I'm going to jump back into the presentation and try to finish that in time. So, one thing I wanted to talk about is a quick look at the architecture of the demo.
So, in the demo, the user interacts with the workflows through Teams via the bot service. This is a service that we at EmpowerID actually developed about a year or so ago.
It's responsible for managing communication and connecting to your backend systems. In this case, we're connecting to the EmpowerID BotFlow technology.
The BotFlow is really something we built before large language models became a thing, and now we've integrated large language models into it.
The BotFlow is our bot technology with which we can visually model chatbots, basically. That's what it really is.
That integrates with the agentic orchestration service and the CRUD service. The agent orchestration service is responsible for running the AI agents.
And the CRUD service is responsible for coordinating calls into the backend systems, so that we're not doing system calls or system integrations within the agentic workflows themselves; we have a dedicated service responsible for that. I'm just going to jump ahead to some of the challenges that we see here.
So, even with generative AI, there are challenges, like poor data quality in old or difficult systems, and the need to run a traditional system alongside AI during your modernization journey.
Obviously, the fact that you're modernizing your identity workflows doesn't mean you're going to cease operations until you complete that; you're going to be running your traditional systems alongside your AI. The two biggest challenges, in my experience, when you're doing this kind of AI modernization are, first, the limitations of our own imagination: what can we really do with generative AI?
And then, the next one is really how can we do it? That is really the prompt engineering, right? Modernizing identity workflows obviously needs creative thinking.
Of course, we need to imagine new solutions, and that is really important. And then, good prompt design. At best, a bad prompt can stop your AI from working well.
And in the worst case, it can be downright dangerous. One thing I wanted to point out quickly, while we're in the middle here, is that for one of these agents, if I go here and show you the system prompt, talking about being careful and AI being dangerous: I actually had to restrict what the agent can do by specifically saying that it's only going to interact with this particular JIRA project.
Within the system prompt for this agent, I specifically stated the project key that it could use, restricting its ability to go wild and somehow get into JIRA projects that I don't want it to touch.
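A hypothetical example of that kind of guardrail in a system prompt; the project key "IAM" and the wording are made up for illustration, not the prompt from the demo.

```python
# Hypothetical example of that kind of guardrail in a system prompt;
# the project key "IAM" is made up for illustration.
SYSTEM_PROMPT = """\
You are a JIRA assistant for the identity team.
Rules you must never break:
- Only read or modify issues in the project with key "IAM".
- If a request refers to any other project, refuse and explain why.
- Never delete issues; only create, comment on, or update them.
"""
```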
That's just something I wanted to point out: when you're doing your prompt engineering, you really have to be careful there. So, some actionable strategies before I wrap up. When you're working with generative AI, modernizing your identity workflows, and introducing AI conversational flows into your processes, you really want to take a proactive approach. You can start by analyzing your current identity workflows to find problems and areas to improve.
Another recommendation would be understanding the capabilities and limitations of the AI models you're using. And of course, you want to clean up your data, since LLMs are only as good as the data you give them. And make full use of the API features provided by the model vendors.
Now, it's always a good idea to be careful when working with large language models. As I said earlier, the term "agent" in AI agent actually implies a level of agency, right?
So, like you and I, they get things wrong, and they really do often, depending on the model that you're using. You really have to be careful: you may tell it to do A, B, and C, and it may end up doing X, Y, and Z. But when you get it right, it's an amazing system. You also want to test your data very early on in the process; that's pretty important. Keep things simple; there's no need to add complexity. And use models that are really suitable for your particular needs. There are models for just about everything nowadays.
So you really want to do your homework and use models that are suitable for your needs. And above all, you want to protect sensitive data.
So you may implement some kind of pre-processing to ensure that you are de-identifying data before it reaches your LLM, particularly if you're not hosting the model yourself.
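Here is a minimal sketch of what such a pre-processing step could look like. The regexes are illustrative only; a real deployment would use a proper PII-detection library rather than two hand-rolled patterns.

```python
# Minimal sketch of pre-processing that redacts obvious identifiers before
# text reaches an externally hosted model; a real deployment would use a
# proper PII-detection library rather than these illustrative regexes.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def deidentify(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text


print(deidentify("Contact patrick@example.com or +1 555 123 4567"))
# -> Contact <EMAIL> or <PHONE>
```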
All right. So, the EmpowerNow AI solution works with any AI model; it's really model agnostic. And it's framework agnostic as well. If your framework is, say, LangChain, feel free to use it. The way we've put the system together, you can use any agent framework that you want.
LangChain, LlamaIndex, CrewAI, whatever framework you want to use, you are able to do so. It's really flexible. You can customize the system prompts, and it also integrates easily with identity systems through the CRUD service. The solution also ensures reliable fallback options and obviously emphasizes responsible AI use.
Now, when I talk about fallback options, I mean that when you're modeling your workflow, you have the ability to put in some code that makes it more deterministic, because you don't always want something to be completely autonomous, with the agent making decisions entirely on its own. Agent decision-making is really on a spectrum: you can make it completely agentic, where it makes decisions on its own, based on your system prompt defining its constraints.
Or you can be much more deterministic, bringing in some of your legacy code to make specific decisions rather than delegating those decisions to the agent. And with that, thank you for your time, and I will turn it back to you, Martin.
Thank you, Michael. Very insightful, very helpful, and very impressive. We're now in the Q&A part, which also means everyone has the option to add further questions and to vote for questions. We will pick the highest-voted questions first. So this is what we will do right now during the Q&A part. And I think the first question goes to Michael.
It is: how do you handle situations where older systems lack the modern APIs or flexible configurations necessary for real-time AI-driven decisions? So, what about the older stuff?
Yeah, that's a really good question, and we often have to deal with that kind of question. That kind of question was actually the reason for the CRUD service. The CRUD service acts as a wrapper around your legacy system to enable communication; it sits between the agentic workflow system and your legacy systems. And within the CRUD service, your system administrators can use the YAML data format to define the capabilities of that legacy system. The CRUD service then surfaces those capabilities to the agentic workflow system.
So it's not that you're trying to integrate with some old legacy system from within the agentic workflow system; we let the CRUD service handle that. That was one of the key reasons for the CRUD service. And along with that, you can deploy the CRUD service and connect it to basically any legacy system. Whether your system has a database behind it, is file-based, uses queues, or whatever have you, the CRUD service will do that integration and then surface your data and operations to the agentic system.
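Since those capabilities are declared in YAML, here is an illustrative sketch of what such a declaration might look like for a CRUD-style wrapper service. The field names and the legacy system are invented, since the actual EmpowerNow schema isn't shown in the webinar.

```python
# Illustrative sketch of declaring a legacy system's capabilities in YAML
# for a CRUD-style wrapper service; the field names are invented, since the
# actual EmpowerNow schema isn't shown in the webinar.
import yaml  # PyYAML

CAPABILITY_YAML = """
system: legacy-hr-db
capabilities:
  - name: get_employee
    operation: read
    backend: "SELECT * FROM employees WHERE id = :id"
  - name: disable_account
    operation: update
    backend: "UPDATE accounts SET active = 0 WHERE id = :id"
"""

spec = yaml.safe_load(CAPABILITY_YAML)
for cap in spec["capabilities"]:
    # These entries are what the wrapper would surface to the agents.
    print(f'{spec["system"]}: exposing {cap["name"]} ({cap["operation"]})')
```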
Okay, Michael, thank you. Then let's move directly to the next question. The next question is about what auditing capabilities exist to keep track of AI recommendations, chatbot interactions, and final outcomes in a heavily regulated environment.
Yeah, good question. In the EmpowerNow AI solution, we log every AI interaction and final outcome in a consistent and structured format, not just for audit but for traceability. The logs are machine readable and easily searchable. We actually track how AI-generated decisions are made by also storing the relevant context: here I'm talking about the LLM prompts, the responses, and metadata, essentially the inputs that go into the agent as well.
In fact, the data stored is sufficient to reconstruct an entire conversation. And of course, we use session IDs to group all interactions.
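A sketch of what that kind of structured, machine-readable audit logging could look like, with interactions grouped by session ID; the exact field names are assumptions, not EmpowerNow's log schema.

```python
# Sketch of structured, machine-readable audit logging for LLM interactions,
# grouped by session ID as described; exact field names are assumptions.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")


def log_interaction(session_id: str, user: str, prompt: str,
                    response: str, metadata: dict) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,  # groups all turns of one conversation
        "user": user,
        "prompt": prompt,
        "response": response,
        "metadata": metadata,  # model, agent ID, tool calls, and so on
    }
    logger.info(json.dumps(record))  # one searchable JSON object per line


log_interaction(str(uuid.uuid4()), "hamad",
                "Add this comment to the JIRA issue",
                "Comment added to IAM-123",
                {"model": "gpt-4o-mini", "agent": "jira"})
```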
Okay, thank you. Then the next question: if LLMs are responsible for authorization and decision-making in EmpowerNow AI, how do you make sure they don't break the rules of zero trust? Mm-hmm.
Yeah, another great question. So in EmpowerNow AI, the LLM does not access resources, or even your systems, directly.
Instead, it works through the CRUD service. And the CRUD service, of course, checks roles and permissions before allowing any action.
Now, the PDP, the policy decision point component of the CRUD service, uses contextual data: here I'm talking about your identity, device, location, and time. So the zero trust architecture of the CRUD service ensures that agent calls into external systems never break zero trust rules. And in your own deployment, you would also want to make sure that the agents only work with cleaned and anonymized data, obviously, to prevent exposing sensitive information and, in this case, breaking your zero trust rules as well.
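A hedged sketch of a policy-decision-point check over those contextual signals (identity, device, location, time); the specific rules and thresholds are invented for illustration, not the actual PDP policy.

```python
# Hedged sketch of a policy decision point check over the contextual signals
# mentioned above (identity, device, location, time); the rules are invented.
from datetime import datetime


def pdp_allows(user: str, action: str, device_managed: bool,
               country: str, now: datetime) -> bool:
    """Return True only if every contextual signal passes."""
    if not device_managed:
        return False  # unmanaged device: deny
    if country not in {"DE", "US", "GB"}:
        return False  # outside allowed locations: deny
    if not 6 <= now.hour < 22:
        return False  # outside working hours: deny
    return action in {"read", "comment"}  # writes would need extra review


print(pdp_allows("hamad", "comment", True, "DE", datetime(2025, 1, 30, 10, 0)))
```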
And we do a lot of detailed logging and auditing, as I mentioned earlier: we store all your LLM interactions, the inputs, outputs, and actions. But of course, when we work with our clients, we also recommend that they review those logs to spot any errors or unauthorized behavior. And finally, I would say that for sensitive tasks, you want a human to review and approve the agent's outputs, just so it's safe. But the way we've approached this is that agents and people can work together in real time.
So these are not some abstract events happening in the background somewhere, where a human only gets involved if there's a problem. You're really working in real time, collaborating in real time. So that also addresses the human-in-the-loop issue there.
Okay, thank you, Michael. Let's look for further questions. We have quite a number of questions; we probably won't be able to respond to all of them, but maybe this one: would it be possible to integrate a customer's own agents, so to speak, as sub-agents in that concept? Correct. You can create any number of agents, and with our BotFlow technology, we expect that customers will deploy hundreds of agents. These agents can act independently; they have their own agency. And they can be configured as sub-agents of other agents.
In fact, agents with sub-agents can themselves be configured as sub-agents of other agents. So the architecture is really incredible in how you can layer your agents to solve a particular business problem. Once again, it depends on the problem you're trying to solve and the number of agents you need to bring to that problem.
Okay, thank you. The next one would be, could anybody in the Scrum channel invoke the bot for all actions or how is this handled?
Well, absolutely not. Only a few of the people in the team are authorized to interact with the bot itself. That's one level. But within the bot itself, for each agent, the individuals interacting with it must have direct authorization to do so. The only place where the user's access is not checked directly is when an authorized agent interacts with its sub-agents; there, it's the supervisor that is interacting with the sub-agents.
But if a sub-agent tries to perform an action that the user requesting it is not allowed to perform, then the sub-agent will not be able to perform that action. The individual does not necessarily have to have access to the sub-agents themselves for the supervisor's interaction with them to take place.
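A sketch of that enforcement point: the supervisor may delegate to a sub-agent freely, but the action itself is still checked against the requesting user's own permissions. The user names, actions, and the permission store are invented for illustration.

```python
# Sketch of that enforcement point: the supervisor may delegate to a
# sub-agent freely, but the action is still checked against the requesting
# user's own permissions. Names and the permission store are invented.
USER_PERMISSIONS = {
    "joel": {"read_access_requests"},
    "hamad": {"read_access_requests", "comment_jira_issue"},
}


def perform_action(requesting_user: str, action: str) -> str:
    if action not in USER_PERMISSIONS.get(requesting_user, set()):
        # The sub-agent refuses even though the supervisor delegated to it.
        return f"Denied: {requesting_user} is not allowed to {action}"
    return f"OK: {action} executed as {requesting_user}"


print(perform_action("joel", "comment_jira_issue"))   # Denied
print(perform_action("hamad", "comment_jira_issue"))  # OK
```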
Okay, great. So maybe let's have a quick look at the poll results. The first question was the one about whether you are already deploying AI-supported technologies.
16%, so roughly one out of six, responded "already productive", while 41% say they are in an evaluation or proof of concept phase, and another 43% responded "no". So it looks like there's a trend, but it's also still quite a journey until we are there. When we look at the biggest impacts, it was pretty much a tie between three of the answers.
The lowest score was for improving the privileged identity security posture, so AI specifically used for privileged access management use cases, while the other three options, supporting access and role analytics and management, AI-based ITDR, and advanced behavioral analytics for session management, all scored relatively high. So I think what this tells us is that there's not one use case; there will be many use cases. AI, generative AI, analytical AI are impacting many of the different scenarios and use cases we are working on.
So I would say this is not surprising, as it also confirms a bit the state of the market. In the interest of time, we may take one final question with a relatively short answer. The question is about RLHF, reinforcement learning from human feedback.
Yes, that's the correct one. With these systems and human oversight, most AI errors can be caught and corrected. How do you see the industry balancing automation with human involvement, et cetera? It's a long question; maybe you can provide a short answer.
Yeah, as I showed in the demo, it's always going to be people and agents working together. It's always going to be like that. It's not that AI does something, something goes wrong, and then we need a person to get involved. People are always going to be involved in the process. One way may be in real time, as I showed; the other may be through other notification interactions.
But again, that interaction would also be real time: when you respond, you are responding directly into that workflow. The human is part of that workflow. The human is not out of the loop; the human is in the loop. It's just a question of when the person participates within whatever process is going on. But my belief is that it's really going to be real-time collaboration with agents; you're not really going to be able to differentiate between your colleagues and the AI agent.
The only difference is that the AI agent is able to actually execute actions in your backend systems and really get some transactions to take place. Okay, thank you very much.
With that, we are almost done with our webinar. There are a couple of questions left, and we will probably answer them soon via a blog post or on social media. Thank you very much, EmpowerID, for supporting this webinar; Michael, for all the insights provided; and all the attendees for listening to this webinar and participating in the Q&A and the polls. I hope to have you back soon in one of our other upcoming webinars. Thank you.
Thank you, everyone.