There we go. They jumped ahead a little bit. So thanks, everyone. I'm the founder of EmpowerID. Happy to be here again, to see a lot of familiar faces, and it's grown, so some new faces as well. What I'm here to talk about is this: we're all talking about AI and LLMs and the magical things they can do, but in most of those discussions it feels like a black box, and magical things come out of it, good stuff and bad stuff.
But I want to take you a little bit inside, until you understand enough about LLMs and agents to have those aha moments where it can factor into your design thinking for whatever you do, so you understand how they work, how they can be applied, and what the limits might be. Unbeknownst to many of us, we are already living in an AI agent world, meaning the cat's out of the bag, Pandora's out of the box, there's no going back.
AI agents are here.
The technology has reached a point of utility where, beneficial or sometimes not, it's a different landscape for any new software being developed today than anything before it. So we're all living in an AI agent world, and it's transforming everything around us, behind closed doors, in different countries around the world. And most of us haven't even realized the extent to which the ground beneath our feet is being reformed.
So a lot of you are gonna say, well, yeah, we've seen this all before. It's all overhyped. It's just another way for these big companies to make money.
You know, we had Clippy. Wasn't Clippy AI? And then poor Bill Gates having to defend it.
It's like, no, no.
He's probably so tired of hearing about Clippy. But Clippy is not an agent. Clippy did not intelligently think and learn on its own. It wasn't trying to solve a problem; it was just reactive. You did something, it popped up, and that's it. It wasn't thinking, improving, learning. It didn't have context.
So it's not an agent. The question is: what is an agent?
In 1986, Marvin Minsky published The Society of Mind, a thought model of how the brain might actually function in order to solve problems. And he came up with the word agent: the idea that your brain forms these little independent agents with different functions that work together in a collaborative fashion to solve complex problems. Because when you break a task down into different experts that can work together, you end up with a better solution.
So the word agent really came from his book. The agent basics are: you have a user who can ask a question, the user prompt or user query, as they call it. The large language model is the brain; it behaves much like a human brain. It tries to figure out, what did they mean, what was the intent, what did they ask me? And then it has memory at its disposal. So if I'm asking a question about something and then later I ask another question, I don't have to give it all the information every time.
If it knows we're talking about Bob's access, then ten questions later I don't have to keep mentioning Bob when I'm asking the question. So it has short-term memory, and it has long-term memory that it uses for learning. It has a planner, which is probably the key thing we're going to see, which tells it how to break down a problem into a form it can solve.
How do I create tasks and subtasks? How do I do my chain of thought, my knowledge base? And then the most interesting thing for the future: tools.
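The anatomy described so far, a core prompt plus short-term memory, long-term memory, and registered tools, could be sketched in code roughly like this. All the names here are illustrative, not from any specific agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal sketch of agent anatomy: a core prompt, two kinds of memory, and tools."""
    core_prompt: str                                # who it is, how it should think
    short_term: list = field(default_factory=list)  # conversation context (e.g. "Bob")
    long_term: dict = field(default_factory=dict)   # learned facts across sessions
    tools: dict = field(default_factory=dict)       # tool name -> callable

    def remember(self, fact):
        self.short_term.append(fact)    # e.g. "we are talking about Bob's access"

    def context(self):
        # An LLM call would prepend this, so the user need not repeat themselves.
        return "\n".join(self.short_term[-10:])

agent = Agent(core_prompt="You are an expert in identity governance.")
agent.remember("Topic: Bob's access")
print(agent.context())   # -> Topic: Bob's access
```

The planner is the missing piece here; it shows up as a loop driving the LLM, which is what the thought/action/observation example later in the talk describes.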
And tools are what allow it to actually perform actions instead of just answering questions. I'm not going to get too technical, but when you create an agent, you have your core prompt. Users are prompting it with their queries, but the thing that's hard to wrap your brain around, the thing that has to click, is that most of the programming for an AI agent is you writing human language to tell it how to think: who it is, what its expertise is, what it knows, how it behaves. So in the core prompt you're actually telling it how to solve a problem.
That way, when it takes the user query, it knows it has tools, and it says: break it down, think about what you should do.
Think about the action, take the action, observe what happens, and then rethink. So you're literally writing code, but you're not writing code. It's just human language that behind the scenes would previously have been a ton of code. And the way that works is, say a user asks: how many Entra ID workload identities have not been assigned an owner in the IGA system? That is a really complicated question, but it can break it down.
And it says: okay, thought, this sounds like a question where I need to query both Entra ID and the IGA system. Let me look at my list of registered tools and see if I have tools that say they're for this type of purpose. It finds a tool and says: okay, observation, I have a tool, I can call the Graph API, and I'm going to call this endpoint because it has access to that information.
This endpoint's going to give me back the service identities. That's the action; it makes the query. Then, observation: it gets the data back and it thinks about it.
Did I answer the question, or what's the next step? I got some data back, but now I need to call the IGA system. Do I have a tool for that? How do I call the IGA system, pass in the information, and get the result back? And then how do I give you a final answer? So it does this thought, action, observation, what do I do next, action, observation, until it reaches what it thinks is the end goal. The genius of it is that if you give it the right tools and a good core prompt, it could theoretically do almost anything.
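That thought, action, observation loop is commonly known as the ReAct pattern, and it can be sketched in a few lines. The `fake_llm`, tool functions, and the exact step dictionaries below are stand-in stubs replaying the Entra ID / IGA example, not a real model or API:

```python
def react_loop(llm, tools, question, max_steps=8):
    """Thought -> Action -> Observation until the model emits a final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)                      # the model decides the next step
        transcript += f"\nThought: {step['thought']}"
        if "final_answer" in step:                  # goal reached, stop looping
            return step["final_answer"]
        observation = tools[step["action"]](step["input"])   # act, then observe
        transcript += f"\nAction: {step['action']}\nObservation: {observation}"
    return "Step budget exhausted."

# Scripted stand-in for the LLM, replaying the walk-through from the talk.
scripted = iter([
    {"thought": "I need the Entra ID workload identities first.",
     "action": "graph_api", "input": "servicePrincipals"},
    {"thought": "Now check ownership in the IGA system.",
     "action": "iga_api", "input": "owners"},
    {"thought": "I can answer now.", "final_answer": "3 identities have no owner"},
])
fake_llm = lambda transcript: next(scripted)
tools = {"graph_api": lambda q: "42 service principals",
         "iga_api": lambda q: "39 of 42 have owners"}

answer = react_loop(fake_llm, tools,
                    "How many Entra ID workload identities lack an owner?")
print(answer)   # -> 3 identities have no owner
```

In a real agent the `llm` callable would be an actual model call, and the loop would keep feeding the growing transcript back in so the model can rethink after each observation.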
So AI will do what you can explain to it.
So it's almost like LLMs were invented by a cabal of angry high school English teachers. Because now you can't just say, make me a cool image. You have to actually learn how to use your language, and it doesn't have to be English, to really explain things, because it will do what you explain to it. But you have to be good at explaining it, or you're not going to get a good result. So the new skill is: how well can you explain things? Did you sleep through high school English?
Do you use, you know, cool and groovy, make me a rad this? It's never going to figure anything out if you can't actually explain it, because it'll do what you tell it to do. So you have to become part Jedi master, because in prompting I say: you are an expert in SQL commands and SQL syntax.
So you're doing a Jedi mind trick where you're telling it what and who it is. And then you also have to be kind of a dungeon master.
Because the dungeon master was always the nerdy kid who had the best vocabulary and the most creativity, who could paint this amazing scene with dragons and wizards and tell everybody what was happening step by step. If you combine those two skills, that's now the new rockstar programmer in an LLM agent world, which is weird. So I think the English teachers are probably pretty happy about that.
So just a quick example. We had a workshop this morning about authorization, and I had a call with some of the coordinators last week, and I thought, I wonder, I just want to do a prototype. It's not a product, just a prototype.
Could you make a policy decision point, an authorization endpoint that makes decisions, using just no code, basically just an LLM? And it was interesting. So I took the AuthZEN interop website content, fed it in as a Word doc into an OpenAI assistant, and it vectorized it. And then I created this. This took a long time.
So this is the new programming: I taught it that it was an expert in ALFA, an expert in Amazon Cedar, an expert in RBAC, an expert in OPA Rego. So it understood authorization and it could receive a query. It was supposed to look at the information from their website about Rick and Morty, who had what access and what roles, and give a decision if you asked it a question.
So Morty's trying to view somebody's to-do list, and it had to give an answer.
Now, it was very difficult to get that to work, because in the beginning it was like trying to have a good cop and a bad cop in one person; you can't have one person be good cop, bad cop, good cop. So I had to split it into two agents. Even in one prompt, you can say: okay, one of you is going to be the gatekeeper. You're the bad guy. You're going to validate and never let this guy have any information unless it has passed all these tests, only valid responses. And the other guy is the authorizer, the nice guy; he's going to look at the request and give back the decision.
And he's an expert in all these languages. So he can take a request as JSON, or he can take a human language request, and he can output back a human language response or a JSON response, whatever you want.
He basically knows everything. In the end it actually worked, which I had never thought possible; you can actually make one that functions and returns valid responses. So then that gets you thinking: what other software are we designing today that, if you pause and think it through with this new mindset, could be done completely differently?
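The good cop / bad cop split is really just two prompts run in sequence: a gatekeeper agent that validates, chained in front of a decision agent. A minimal sketch, where the prompt text and the scripted `ask` function are purely illustrative stand-ins for real LLM calls:

```python
GATEKEEPER_PROMPT = ("You are the gatekeeper, the bad cop. Reject anything that is not "
                     "a well-formed authorization question with a known subject, action, "
                     "and resource. Reply VALID or INVALID with a reason.")
DECIDER_PROMPT = ("You are the authorizer, the nice guy, an expert in ALFA, Cedar, and "
                  "Rego. Given a valid request, answer PERMIT or DENY from the policy "
                  "knowledge base.")

def pdp(ask, request):
    """Two-agent policy decision point: gatekeeper validates, authorizer decides."""
    verdict = ask(GATEKEEPER_PROMPT, request)
    if not verdict.startswith("VALID"):
        return "DENY (rejected by gatekeeper)"    # the bad cop never lets it through
    return ask(DECIDER_PROMPT, request)           # the nice guy renders the decision

# Scripted stand-in for the real LLM call, for illustration only.
def ask(system_prompt, request):
    if system_prompt is GATEKEEPER_PROMPT:
        return "VALID" if "Morty" in request else "INVALID: unknown subject"
    return "PERMIT"

print(pdp(ask, "Can Morty view Rick's to-do list?"))   # -> PERMIT
print(pdp(ask, "asdfgh"))                              # -> DENY (rejected by gatekeeper)
```

The point of the split is that one prompt trying to be both strict validator and helpful answerer tends to drift; two narrow roles chained together are far easier to keep on the rails.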
So you need to think: what do you have ongoing that you might want to pause and rethink, to see, is this a futuristic solution in three years, given what's going on today? Maybe not. Then comes the cool part. The cool part is not just answering questions; it's automating things in reality. Tools are the arms and the legs of your LLM AI world. They're what make it able to interact with the world, interact with APIs, do the magical stuff.
Now the interesting thing, this took me a while to get my brain around.
When you're creating and registering these tools, you actually write a human description: what is it, when would it be used, what's it used for? You don't just programmatically define properties in JSON; you literally write a document that says, hey, this tool is for getting the weather, it expects this type of input and this type of output, and this is when you would use it. So you describe everything in natural language. Every tool essentially hangs on the wall with a descriptive manual, and the agent can read it instantly to say: is this the right tool, and how do I use it? And then it can use the tool to perform a task. Now the interesting thing is, anything with an API can be a tool.
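Concretely, this is what a tool registration looks like in the OpenAI-style function-calling schema: the description and the parameter docs are plain English written for the model, which reads them to decide when to call the tool and how to fill in the arguments. The weather tool here is the classic illustration, not a real endpoint:

```python
# A registered tool is mostly prose: the model reads the description to decide
# when to call it and how to fill in the parameters.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": ("Gets the current weather for a city. Use this whenever the "
                        "user asks about temperature, rain, wind, or forecasts."),
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string",
                         "description": "The city to look up, e.g. 'Oslo'"},
            },
            "required": ["city"],
        },
    },
}
```

The only machine-enforced parts are the JSON Schema types; everything that decides behavior, when to use it, what it's for, lives in the natural-language descriptions, which is exactly the "manual hanging on the wall" idea.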
So you have to wrap your brain around that. You have this brain that can plan out a process, figure out how to do anything using the tools you give it access to, assemble them in any order, and use them to accomplish a goal. So something like onboarding an employee used to be a static process. Now it can be completely dynamic, on the fly, every time.
So when I say anything can be a tool: an IBM chemistry lab robot can be a tool. The interesting thing is, there's a project called ChemCrow where they hooked this up with 18 tools, a couple of them driven by the robot, and they told it: we want you to synthesize an insecticide. And it used those tools to autonomously plan and produce an insect repellent and three organocatalysts.
And it helped them discover a novel compound.
So basically it had the tools, it had the knowledge to search all the literature, and it could generate actual working chemical products with no human intervention in most cases. So literally anything can be a tool. If you're like me, kind of a science nerd, at this point my mind was just blown. If it can literally run a robot, what are the limits? That's the scary part: what are the limits? So what does this mean for software development?
For what we do, making software, we have to think about how we make software today. And if we wanted to declare something dead: let's declare the way we make software today officially dead. That is over.
You continue down that path at your own peril. Because software today, and I'm not picking on anybody, this is great software.
We love it, we use it. But the idea behind software today is that you have the page: the invoice page, the opportunity page, the contact page, the button you click to create a new opportunity. It is what it is. It's literally a tangible thing that exists. The moment they cut the software, that's what it is, and that's what it's going to be until they update it.
And then, oh well, let's get fancy: let's have wizard processes, maybe some low-code workflows where somebody drags and drops the forms and designs the workflow. I designed this one and I thought it was super cool, but I'm not cool, because low-code workflows are so last week. Not cool anymore. Because, coming back to this point, anything can be a tool that your LLM brain can orchestrate into an automated process. So what could be a tool? Well, it's kind of like Spartacus: I'm a tool, I'm a tool.
Any API can be wrapped as a tool. Any low-code workflow that can be called via an API, an entire complicated process, can be wrapped as a tool. A whole agent can be a tool. I made a fake one, my horoscope manager, alongside what I always call the travel planner. The travel planner would coordinate with my horoscope manager agent to make sure the stars were aligned for me to travel that day. Anything can be an agent, anything you want. And the other thing is: humans are tools.
And we all know many tools, but we all could be tools.
Because humans can be subscribed to via APIs and hired out. As soon as you have a project manager that is an AI agent that needs to hire a resource, that can draft a specification and hire them through a service, humans are tools that might be driven with no human in the middle. So we are tools, and I know many tools; I won't point at anybody but Carlos, no pun intended. But everything can be a tool, so the possibilities are limitless. And then come the challenges.
If everything can be a tool, then you have all these identity challenges, because everyone's thinking of the scenario where an agent is your personal agent, your advisor that acts on your behalf. But there are many scenarios within that scenario.
I would imagine something like verifiable credentials: it's like your child, you give them access, you have some, not ownership, but control over what they can access or use on your behalf. And probably verifiable credentials are the best way to do that. But you have many other scenarios.
You'll have completely silicon employees, whole departments where maybe there's one human and the rest are all agents. You'll have agent swarms out there searching your network, looking for things, optimizing access and converting it to zero standing privilege where possible. So how do these things have identities? And independent of the individual identities, how does the hive have an identity when it's spawning agents by the thousands that disappear a second later? And how do they have access, and also the ability to spend money?
So really you're going to have to have fine-grained control. The first level is coarse-grained control: if you're a user and I say I want to terminate an employee, it's going to have to look in the toolbox and see if I have access to the terminate tool. If I have access to terminate, that's the coarse grain, and then I'm going to tell it who I want to terminate.
Well, if I tell it I want to terminate the CEO, it had better do fine-grained authorization to make sure I can actually do that before it executes. So you can have service accounts, tools, and then the whole idea of bring-your-own-cred. Maybe it's going to be this looser model where I have authority over these creds, and I can grant my tools, my agents, access to use them, but only on my behalf, permissive or non-permissive. So there are going to be a lot of challenges.
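The two levels can be sketched as two distinct checks: coarse-grained (is the tool even in this user's toolbox?) and fine-grained (may this user perform this action on this specific target?). The names and the sample policy below are illustrative only:

```python
def can_use_tool(toolbox, tool):
    """Coarse-grained: is the tool even in this user's toolbox?"""
    return tool in toolbox

def authorize(user, action, target, policy):
    """Fine-grained: check this specific subject/action/resource before executing."""
    return policy(user, action, target)

# Illustrative policy: only the board may terminate the CEO.
policy = lambda user, action, target: not (action == "terminate"
                                           and target == "CEO"
                                           and user != "board")

toolbox = {"terminate", "lookup"}
if can_use_tool(toolbox, "terminate") and authorize("patrick", "terminate", "CEO", policy):
    print("executing termination")
else:
    print("denied")   # coarse check passes, fine-grained check blocks it
```

The key design point is that the coarse check happens when the agent selects a tool, but the fine-grained check must happen again at execution time, with the actual target filled in, before anything irreversible runs.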
And also, how many of you, if you go look at the tools in your garage or your basement, how many of you only buy tools from one vendor?
No one. Same thing here. We're all going to be deconstructing our products as rapidly as possible and saying: hey, I have this great tool that lets you talk to Entra ID. I have this great tool that does risk detection. I have this great tool. So you're going to have these cross-organizational boundaries where you piece together the best tools for your agent orchestrator across the entire landscape.
So you're going to have to have billing, metering, authentication, and authorization in a very dynamic environment. Lots of challenges, and of course lots of opportunities. There's always a downside, but we forget the good side. The good side is democratization of access to information. We're all going to have access to these agents that can provide us personalized training, personalized services that typically only the mega wealthy have today. They'll be able to help us with our personal convenience, ordering, doing other things.
That's going to be kind of democratized, because everyone's going to have access to this type of thing. Just like the internet, relatively; not everyone, but a lot of people. And then these new challenges, where we want cross-organizational tools, or to rent the best tool from any provider, are going to hasten the progression of certain standards, I would imagine verifiable credentials, digital wallets, and the whole idea of an identity and service fabric, because you're going to need the best tools to complete the job.
And that could come and go at any point in time, because there's no vendor lock-in; you can pop in a new tool tomorrow, it's self-describing, and replace it with another if it's cheaper or better. So lots of opportunities to push forward new technology standards that might have been languishing because they didn't have the killer use case. And I think this provides a lot of killer use cases that we need to push forward some good standards. And that is it.
Well, as usual, it was great, full of energy. Thank you. And full of stripes, man.
I've got a little bit of suit envy here.
I'm doing like a Talking Heads, David Byrne type of thing.
Yeah, yeah. I mean, okay.
The whole ensemble. I have one question for you: how could you measure the effectiveness and efficacy of LLM AI agents in identity governance tasks? What benchmarks or KPIs could you use?
So here's the thing, and I don't want to go too deep into this, but one of our early prototypes was a risk optimizer. It's just a prototype, not ready. But the idea is that since it's an AI agent, you give it a mission, you give it how to think about things and instructions, and you give it its tools.
So the risk optimizer can use the risk engine to find the riskiest people and the source of their risky access. And then it can reach out and communicate with them through Teams and ask, beg, cajole, or threaten their manager to get them to try to convert permanent risky standing privilege into just-in-time access.
Hey, you can activate it when you need it, if you need it. And then the agent can record out to a dashboard data source that reports to management how good a job it did today, how many conversions it did. So the agent will be responsible, through a tool, for reporting its own KPIs to populate that data. So you wrap it all in one.
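That self-reporting idea, where the agent uses one of its own registered tools to push its KPIs into a dashboard data source, could be sketched like this. The metric names and the `record_kpi` tool are hypothetical, not from any shipping product:

```python
from datetime import date

dashboard = []   # stand-in for the management dashboard's data source

def record_kpi(metric, value):
    """The reporting tool: the agent calls this to publish its own effectiveness."""
    dashboard.append({"date": str(date.today()), "metric": metric, "value": value})
    return "recorded"

# At the end of its run, the agent reports what it accomplished today:
record_kpi("standing_privileges_converted_to_jit", 7)
record_kpi("risky_users_contacted", 12)
print(len(dashboard))   # -> 2
```

Because the reporting is itself just a tool call, the KPIs management sees are produced by the same loop that did the work, which is what "you wrap it all in one" means here.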
That's great. Thank you very much Patrick.