So first of all, I think you are known in this group, but I obviously still want to ask you to briefly introduce yourself. Much more importantly, what is your opinion, your first takeaway to raise within 30 seconds, on your view of AI and machine learning within IGA? And I am maybe starting... oh sorry, starting with Eve, of course.
Well, thanks. I was told yesterday I was gonna be on this panel, so you win for just-in-time. Eve Maler, CTO of ForgeRock. Just a couple of comments, I guess. Couldn't agree more with explainability. It's a really important principle.
Trustability, really. And I would add an emphasis on consumer, in addition to workforce, for benefits, particularly at runtime.
Okay, thank you. Patrick?
Hello, Patrick Parker, CEO and co-founder of EmpowerID. I think I'm on the other side of the paradigm shift, so I may sound like a lunatic in some of my responses, so I apologize in advance. But I think it's gonna cause us to recreate and rethink everything. Basically, it's gonna be the reactive model, where you're talking to something that's artificial general intelligence that takes your intent and figures out the steps it needs to cobble together the end result.
Those actions or steps will be actions presented by your IGA system or external systems. And the important thing is that IGA is gonna devolve into policy engines. So if it knows you need to onboard a person, then it's gonna analyze what type of person it is and call some policy engine to know, for that type of person, what they are authorized to have. So we're gonna be guiding the AI, but we're gonna kind of disappear into the woodwork, into maybe a fabric, you might say.
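The policy-engine idea described here, an agent asking "for this type of person, what are they authorized to have?", can be sketched minimally as follows. The person types, entitlement names, and function names are invented for illustration, not from any real product:

```python
# Toy sketch of IGA "devolving into a policy engine": given a person type,
# return what that person is authorized to have. An orchestrating agent,
# handed the intent "onboard a contractor", would call this and then invoke
# the IGA system's provisioning actions for each entitlement returned.
BIRTHRIGHT_POLICY = {
    "employee":   {"email", "vpn", "hr-portal"},
    "contractor": {"email", "jira"},
}

def authorized_entitlements(person_type):
    """What an onboarding agent would ask the policy engine for."""
    return BIRTHRIGHT_POLICY.get(person_type, set())

print(sorted(authorized_entitlements("contractor")))  # ['email', 'jira']
```

The point of the sketch is the division of labor: the agent decides *that* onboarding is needed; the policy engine decides *what* is granted.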
Fabric. My name's Mike Kiser, I'm the director of strategy and standards at SailPoint.
I would say that AI and machine learning are fantastic marketing terms, and I would put them all over my booths. Secondarily, I think that what will happen long-term is that it will be an accelerant to the best of the things that we can achieve and also the worst of the things that we can achieve.
Great, thank you. We have two perspectives on this: use cases now and future use cases. So if we start with now: you've mentioned consumer use cases. I've been speaking explicitly about enterprise use cases, because I think that we are still weak there. What would be use cases that you would highlight, that really make sense? Starting again with maybe you, Eve, just from a consumer perspective, but maybe beyond.
Since I went there, I mean we already know the ways in which AI influences and kind of makes possible a lot of biometric methods.
So that's already been going on for some time. But it's important when we say, hey, you know, AI would be really good for doing silent checks in addition to these other methods, to help them be a better experience. At the same time, it's critical to have a diversity of risk signals and to bring in multiple sources. And I actually don't like calling them risk signals, cuz honestly, signals are just signals. I like saying risk and delight signals, because they can be used for both: impacting experience, security, and the rest.
And so I think it's really important to look at that diversity of, yes, true sources, good-quality sources. That's something we call autonomous access at ForgeRock: leveraging orchestration of all of those signals, generating the insights you need, and then deciding how important they are to you, whether you can automate, or whether you can treat them as recommendations for next steps.
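The "orchestrate signals, then automate or recommend" pattern can be illustrated with a toy score combiner. The signal names, weights, and thresholds below are invented assumptions for the sketch, not ForgeRock's actual model:

```python
# Illustrative sketch: blend multiple (risk and "delight") signals into one
# decision. Each signal is (value, weight); negative values lower risk.
signals = {
    "new_device":        (0.8,  0.5),
    "impossible_travel": (0.0,  1.0),
    "known_good_ip":     (-0.6, 0.4),   # a "delight" signal reduces friction
}

def decide(signals, step_up_at=0.5, deny_at=0.9):
    """Map a weighted signal blend to allow / step-up / deny."""
    score = sum(value * weight for value, weight in signals.values())
    total = sum(weight for _, weight in signals.values())
    risk = max(0.0, score / total) if total else 0.0
    if risk >= deny_at:
        return "deny"
    if risk >= step_up_at:
        return "step-up"   # e.g. ask for another authentication factor
    return "allow"

print(decide(signals))  # → allow
```

A real system would learn the weights and thresholds rather than hard-code them; the sketch only shows the shape of the orchestration step.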
Okay,
Great. Want to add to that?
Separate, same topic, separate point?
No, I'd say one thing that clicked. I was thinking about Scott's keynote speech about dashboards, and how the dashboard was from the horse-and-buggy days, a board to keep the manure from coming up at you, that translated into the car. And my brain clicked on: that's pre-LLM technology. Post-LLM, the dashboard is really your home phone landline. If you think about it, the dashboard exists in a place that you aren't; most of the time you don't know what's going on until you go look at it.
Like the landline, it has a lot of things on it: you might pick up the call and it's not really for you, because it's not of interest to you. That'll disappear. A lot of effort goes there, because the dashboard has statistics and metrics that, if they're fed in as context to the LLM, let it know when something actually means something, and who cares about it.
Okay.
Yeah, I mean, I think there are use cases in the here and now that can be addressed with the technology. I know I just said it was a marketing term; what that means is you just have to ask somebody what they really mean and how they're actually using it, as Eve alludes to. I think that, you know, the speed and the amount of identities that we're gonna have out there is going to necessitate some kind of automated system. Oftentimes that's gonna be machine learning. Even if you're looking just for norms, right? Looking for things that are not normal and then dealing with them narrowly, that's fine.
It just means that along the way you have to be able to make sure that everyone knows how you're doing it. LLMs are a great example, right? I have people using them in our company, and then we saw the headlines with LLMs, right? And what did the headlines say? Reporters put in stuff and they said, this LLM tried to kill me; it said I was a bad person. What they're doing is reacting emotionally, because they're projecting onto the AI and ML, in part because they don't necessarily understand how it fundamentally works.
So I think, to your point, it will revolutionize things, but we need to keep an understanding of what it means along the way.
Right. Just a short mention: if you have questions in the room or online, please type them into the Q&A section within the app. It will reach Naish, he will tell me, and I will weave it in as well. So this is really important. If you have questions, please throw things at me and I will pick them up. And yeah, thank you.
And the other part is, apart from product and technology, what would be use cases where you would be hesitant to apply machine learning and AI? Just to put it the other way around. Maybe starting with Patrick. Sorry: hesitant to apply ML and AI.
I'd say where you need the human touch. So maybe onboarding: most of it is automated, but at some point they have to hear a non-synthetic human voice welcoming them to the company and imparting the organizational values, that type of thing. I think you can take away everything else, but you can't take away that. Right?
Somebody wants to add?
I would be hesitant to use AI and ML in any situation that fundamentally affected some human's life. Is that broad and sweeping? Yeah. But I think we rush to use automated systems without realizing some of the implications. For example, there was a study where an algorithm was trying to decide whether or not you qualified for additional healthcare, to get a liver transplant, right? And they said, oh, look at these two different populations: when you get this sick, you're no longer allowed to have a liver transplant.
They went back and looked at the data and realized that if you are African American in the United States, because you got less healthcare earlier on, you were treated much differently than the white population. So I'm really hesitant about anything that has major stakes involved, because I'm wary of unknown bias, because the systems learn from us, right? And from the existing data. There are lots of issues; I can go on for a long time, but I'm gonna be quiet now.
Okay. Anything you want to add?
I'll add something here. I noticed some people, maybe not in this room, using LLM as, like, a substitute for AI. And I would hesitate to use LLMs or gen AI specifically when what you're looking for is some modicum of truth, cuz it's good at plausibility. Sometimes that can actually be a good-enough substitute for low-assurance truth, but it's not the same thing, and it's not designed for the same thing. So there's different kinds of models, right?
So I think that's one caution: to not lump everything into the same bucket. And in that regard, you make really good points, Mike, about when to apply cautions. Humans are still more important than the machines. And I will believe that for a long time. For now.
Oh, for, for now. Until Skynet obviously becomes self-aware.
But there's some work that's been done, and there are some answers out there, to not have, ironically, a sort of one-versus-zero view of this, right? Like, NIST has a really good framework that talks about the risk of bias and the way to mitigate all these things. And actually a colleague of mine wrote a pretty comprehensive, kind of technical-ish post about it that you might be interested in, which we don't have time to go into now, on our community site, called Trustworthy AI.
And it runs down the state of the art of these frameworks that could help us make a path through this craziness.
And there's also a whole set of open-source tools that you can use. Yeah, people like IBM and others have thrown those out there, so you can test your own systems.
I would add a counterpoint to that. I think that LLMs can bring greater transparency and greater truth in some cases, if used properly.
I did a little thought experiment today. Take someone, say a politician, claiming that they're not racist, or that they are changing their views over time and becoming more liberal. You can ask an LLM, which knows all the publicly available information about how they voted or what they said. So you can ask it questions like: according to today's standards, what are the instances where this person might have said something racist, and is it declining?
Or Steve Ballmer: has he been more right on technology predictions than he has been wrong? It knows all that, gives you a trend, and then you make the decision.
I would not trust that. We know they have stop lists of things they can't say, so already, you know, the corpus is corrupted.
Okay, it's really too bad that we just have 20 minutes; they're just throwing ideas for new use cases at me. I'll take one: could an AI impersonate a user, with that user's access, to avoid the AI divulging info the user should not see? So that the AI goes havoc on behalf of the user, with the access that they have.
This is a use case where you could imagine this happening: it acts on behalf of the user with the access they have, and then let's try to find out what they can actually do wrong.
I think that's a scary one, and that's what's had me thinking the most. Because when you have, like, God mode and AutoGPT and all of that, you're giving it a mission, and then it's going to try to plan out a series of actions that it may do, or it may spawn other sub-bots.
And it's kinda like a thing acting on behalf of, but then it's acting on behalf of acting on behalf of. How do you implement authorization for that, plus fine-grained control, like spending limits? Maybe you ask it to find you the best pizza in Berlin, and it has access to your Twilio account, and then maybe it's gonna call every pizza shop in Berlin. You know, I mean, it'll need to be controlled.
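One way to picture the fine-grained control mentioned here is a spending-limit guard wrapped around delegated agent actions. This is a hypothetical sketch; the class and method names are invented for illustration:

```python
# Sketch of a budget guard for an agent acting on a user's behalf.
# Every action the agent wants to take must pass through authorize() first.

class BudgetExceeded(Exception):
    pass

class AgentBudget:
    """Tracks cumulative spend for one delegated agent."""
    def __init__(self, limit_cents):
        self.limit_cents = limit_cents
        self.spent_cents = 0

    def authorize(self, cost_cents):
        """Raise if this action would push total spend past the cap."""
        if self.spent_cents + cost_cents > self.limit_cents:
            raise BudgetExceeded(
                f"action costs {cost_cents}, only "
                f"{self.limit_cents - self.spent_cents} remaining"
            )
        self.spent_cents += cost_cents

budget = AgentBudget(limit_cents=500)   # agent may spend at most $5.00
budget.authorize(300)                   # first call: allowed
try:
    budget.authorize(300)               # second call would exceed the cap
except BudgetExceeded as e:
    print("blocked:", e)
```

The "acting on behalf of acting on behalf of" chain would mean each spawned sub-bot gets a budget carved out of its parent's, so the user's overall cap still holds.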
Did anybody see the HustleGPT thing?
A guy told GPT-4, I think: you are HustleGPT, and I'm gonna give you a hundred sort-of-theoretical bucks. You're gonna help me start a business and make money, and you can't do anything that requires humans and you can't do anything illegal. Go. And within 24 hours the guy was, like, cash-flowing. He made a website with, you know, affiliate fees and all kinds of cool stuff. And I mean, it's making me wonder why I have this job instead of doing that.
It's not too late. It's not too late.
Okay.
Also, I think what you're going to face is a world of processes that are acting as your agents, and you already have this, right? So my agent will talk to your agent; it'll discuss something, it'll negotiate something, and it'll send the result back to me. Right?
You've already seen the cartoons where it's like, ChatGPT writes a letter, and then ChatGPT reads the letter and summarizes it. So there's gonna be an interesting scenario where I have my agents go out and do things for me, and you kind of already do, with some of the home process automation stuff we have.
And then the question is how close they can get to while not exactly being your fiduciary for doing things.
You know, for years I've talked about consent intelligence, where on a personal basis you start to abstract your consents to the level of the number of smart things you wanna control, without having to make a policy for each one. Well, you wanna make sure it's acting on your behalf in some meaningful way.
It's important for people like me. My dog ate my homework a lot growing up, and so I'm kind of thrilled by the idea of saying, no, I didn't authorize my bot to withdraw that money from your account.
I don't know how that, you know, I need some kind of plausible
Deniability.
Explainability. Explainability.
I mentioned that we have five minutes left. First, one question again; I have to pick that up because I like it. Then I will go to the final round for recommendations. But first of all, this question is really good because it addresses a topic that everybody in the room has: data quality. Just a quick statement from the three of you: can machine learning help in cleaning up bad data quality in identity data?
Identity resolution? It's already doing it. Yeah.
Yes.
I mean, I think it could. I don't think it'll be the best at that, but I think it could.
Okay, Mike?
In the sense it knows what's normal from what exists, somewhat.
Like, again, it goes back to what your base truth is and how you quantify that. So maybe, maybe.
Okay.
Yeah, that would really solve a problem. It really would. I'm gonna ask Eve about it later on.
Right,
Four minutes left. Product aside, technology aside: for the people in the room, where would you recommend they start using AI and machine learning within their IGA solution, without knowing what their issues are? Mike?
Really, I would say just two ways.
One: just straight-up using it to detect norms in data. It's the easiest way. Almost every vendor, almost every person developing anything, has that; that's usually what they mean. In other words: this is what normal is, hoping that things are kind of normal to begin with, and this is what the outlier is. The second way I might suggest, on a base level, is using some of the new hotness, the generative AI, to do really table-stakes stuff: descriptions of entitlements, those kinds of things. Things that don't matter that much, so that you're not injecting falsehoods into your program as a result.
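The "detect norms, flag outliers" starting point can be illustrated with a toy peer-group rarity check over entitlement data. The users, entitlements, and threshold below are made up for the example:

```python
from collections import Counter

# Toy baselining of "what's normal": flag entitlements held by fewer than a
# threshold share of the peer group, the simplest form of outlier detection
# on identity data. Real systems would segment peers by role, department, etc.
assignments = {
    "alice": {"vpn", "email", "crm"},
    "bob":   {"vpn", "email", "crm"},
    "carol": {"vpn", "email", "crm", "prod-db-admin"},  # unusual grant
    "dave":  {"vpn", "email"},
}

def outlier_entitlements(assignments, threshold=0.3):
    """Return, per user, entitlements held by less than `threshold` of peers."""
    counts = Counter(e for ents in assignments.values() for e in ents)
    n = len(assignments)
    flags = {}
    for user, ents in assignments.items():
        rare = {e for e in ents if counts[e] / n < threshold}
        if rare:
            flags[user] = rare
    return flags

print(outlier_entitlements(assignments))  # {'carol': {'prod-db-admin'}}
```

A flagged entitlement is then a candidate for review, not an automatic revocation, matching the "recommendations for next steps" framing earlier in the panel.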
Great. Thank you.
Yeah. Ready, Eve?
Any one enterprise has so many entitlements and roles and attributes and new UIs. It is a big-data problem that is amenable to solutions for improving things, and most of your initial use cases were about that. I think you can start there, and remember that you probably have multiple sources of entitlements if you're a big enough company: four or five different vendors or whatever. So that's one definite thing you could do today. And the other thing is, I'll just pose a sort of hypothetical: what comes after RPA? This. What comes after no-code? This.
Okay, thank you, Patrick. Final words.
I'd say two sides. For your practical day-to-day, to help you get your job done, I'd start using LLMs for meeting notes. You know, our developers write in developer-speak, and they have to hand it off to someone that needs to make sense of it. They're now using it to have it converted into human language, which is helping our requirements process. And organizationally, I'd say setting up your own semantic hybrid search, where you have your data being analyzed and it's searchable by something like ChatGPT.
Like, if I train a model on my knowledge about the product, our documentation about the product, and even our knowledge of the competitors, then when I'm traveling, people can talk to the bot and it's as if Patrick was answering their question, which makes things easier.
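A semantic hybrid search like the one described blends keyword matching with vector similarity. The sketch below is a deliberately simplified stand-in: real systems use learned embeddings, whereas here a bag-of-words vector plays that role so the example stays self-contained, and the documents and weights are invented:

```python
import math
from collections import Counter

# Toy hybrid retrieval: score each document as a blend of keyword overlap
# and vector (cosine) similarity, then rank.
docs = {
    "doc1": "how to configure role based access in the product",
    "doc2": "competitor comparison pricing and licensing",
    "doc3": "troubleshooting login errors in the admin console",
}

def bow(text):
    """Bag-of-words vector; a stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, alpha=0.5):
    """Rank doc ids by alpha * keyword score + (1 - alpha) * vector score."""
    q = bow(query)
    scored = []
    for doc_id, text in docs.items():
        d = bow(text)
        keyword = len(set(q) & set(d)) / len(q)  # fraction of query terms hit
        vector = cosine(q, d)                    # embedding-similarity stand-in
        scored.append((alpha * keyword + (1 - alpha) * vector, doc_id))
    return [doc_id for score, doc_id in sorted(scored, reverse=True)]

print(hybrid_search("configure access roles", docs)[0])  # doc1 ranks first
```

In the real setup Patrick describes, the retrieved passages would then be handed to the LLM as context so its answers stay grounded in the company's own documentation.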
Mike.
Okay,
One last high-level crazy thought. Just like every vendor is selling you machine learning and AI, I would turn it around: I would find really small, concrete ways to do some of these things, and then I would sell that to my constituency. I would sell it to my board, because they're aware of it and they're paying attention to it. Just like Log4j captured their imagination for a whole cycle, AI and ML is capturing their attention. So capitalize on it. Don't be deceptive, but say: yeah, we're doing it, and here's how we're doing it.
Get the funding and then go, go solve some problems.
Last chance to answer that
One thing is, AI, I mean the new AI as of 2021, is basically... Forbes said it's a good time to get into AI because there's no one in AI, because everything that came before it is really not the same at all. They made such major leaps. So the old ML and all of that is like a stone tool compared to, you know, a laser gun now. So they're very, very different things.
You can't, you can't even classify them together anymore.
Great. Thank you very much. Put your hands together for Patrick Parker and Mike Kiser.