So, good morning, everyone.
Hey, welcome. There we go. So my name is Jason Kenahan, and I am the head of product management for IAM at Thales. For those of you keeping track of the various buzzwords in the identity space this week, I am happy to inform you that with my presentation this morning you can now check off two more of them, AI and trust, and maybe a few others as we go along as well. As we said, AI is a major theme of this conference. And just because I say that AI is a buzzword, that's not necessarily a bad thing, right?
Saying that something is a buzzword is really just an indication of the growth in popularity of the subject. And if you look at the evolution of AI over the last 60 to 70 years, it has really mirrored the growth of computer technology overall.
Why do I say that?
So, if you think back to the original computers, the large mainframes that took up entire rooms inside data centers, they were initially just for the largest enterprises, governments, or learning institutions. But over time, as they took on smaller form factors and became more cost-effective, they started to move into every business and then slowly into every home. And now we're at a point where computers are in every pocket or on every wrist. AI has been very similar, right?
The history of artificial intelligence dates back to those early computers in the 1950s, and the progression has taken time. It took 30 years to start to advance, into the early eighties, with machine learning algorithms, and then another 30 years before we got into deep learning and neural networks.
But then, just in the last three to four years, we've seen exponential growth in the adoption of artificial intelligence and in the technologies themselves, looking specifically at large language models and generative AI.
And with this growth, we've actually seen that the technology is not just for enterprises anymore; it's for everyday individuals like you and me in our everyday lives, thanks to services like ChatGPT, as an example, right? So I looked into what the main things are that individuals are engaging with services like ChatGPT for. You see it being asked questions about everyday things. What's the forecast for the weather?
What are the different weather patterns we're seeing? Can you give me travel recommendations for a vacation I'm planning? Maybe explain different forms of science and technology, or engage in philosophical debates.
But where the power of AI really struck me personally was when I found out that you can use AI to do more important things, like generating memes, right? So now you can be the envy of all of your followers on TikTok or Instagram or LinkedIn by using AI-powered meme generators.
So in preparation for this event, I went to a site called Supermeme.ai and put in a phrase about showing the potential of AI at the European Identity Conference. And in less than 30 seconds, I was shown 12 different options for memes that I could use.
This one is a particular favorite of mine: why is AI the Spider-Man of technology? If I had planned better, I would've worn my Spider-Man costume to this event.
But maybe next year. This is an identity conference, after all.
So really, we want to see how AI can be applied in the area of IAM. Earlier this year, the Identity Defined Security Alliance, or IDSA, conducted various surveys to identify some of the trends that we're seeing in the world of identity-based security. One of the questions they asked was: what types of identity-related use cases would benefit from AI or ML?
And what you see on this screen is things like identifying outlier behaviors, evaluating alert severity in the security operations center, being able to do more efficient onboarding and offboarding experiences, or helping with access reviews. Now, the one thing that really jumps out at me is all the way to the right: less than 5% of respondents said that AI and ML will not be beneficial to identity.
So basically what you see is that there's a lot of optimism that AI can help reshape how we as an industry do identity and access management. But it doesn't come without questions and concerns as well. If you look at the different stakeholders that interact in the identity ecosystem, everybody has different concerns when it comes to AI. Take end users, for example; these could be consumers, or they could be your workforce. They're asking themselves questions like: is AI being used responsibly? How do I know that my data is actually being protected?
How is my data actually being used as part of these AI algorithms? From the organization's perspective, as they start adopting these new technologies, they wonder: do we have the skills? How can I trust the output that is coming from this AI? Is the AI itself actually secured properly? And then there are larger societal questions, some of which were being discussed on the panel on this stage yesterday afternoon or evening.
What are the legal implications? What about liability for the outcomes of AI? Can AI decisions actually be audited for regulatory compliance reasons? These are all legitimate concerns.
Going back to the hesitations, the intrinsic fears or uncertainties from the end-user perspective: earlier this year, Thales conducted what we call the Digital Trust Index survey.
So we surveyed over 12,000 consumers globally to understand what impacts a user's or a citizen's trust in the different organizations they interact with across digital channels. One of the questions focused on where you see the most potential for technology to have a positive impact on your digital interactions. Not surprisingly, AI was called out as the item with the highest potential to have a positive impact on digital interactions. But at the same time, there were a lot of concerns.
57% of the respondents said, I'm worried about what this will do to my data privacy. And that's a legitimate concern. As I said, organizations are worried about this as well, right? Because you, in delivering services to your customers, your business partners, and your employees, are custodians of that private information. So ultimately you want to be able to protect that as well. And there are lots of threats; there are new security risks that emerge as you start to adopt AI.
So building the proper data security and authorization controls into your practices, knowing who has access to this information, is a critical part of adopting AI as well.
And this idea of using AI responsibly and securely was one of the key tenets of the Artificial Intelligence Act, or AI Act, that was passed by the Council of the EU just a couple of weeks back.
And again, I think this started to be discussed yesterday in the panel discussion as well. Now, the purpose of the AI Act is not to dissuade organizations from using AI.
In fact, it's quite the opposite. The intent is to drive innovation, but to do it in a way that is very responsible, right? And I think the passing of the AI Act is another indication, much like GDPR, of how the EU is leading the way, and we're going to start to see other countries and regions come out with legislation in this area as well.
So yes, admittedly, there are lots of concerns. There is a question of trust with AI.
However, going back to that IDSA survey, the potential, the optimism around AI, specifically in the identity and access management space, is real. And if we look at the use cases, I would group them into one of three categories. The first one is around using AI to enhance security. This is not a new topic. AI has been used in identity-based security and fraud protection for years, for more than a decade. Just look at what we do with biometric authentication, identity verification, document verification, behavioral biometrics, and continuous authentication.
Underpinning all of these capabilities is some form of AI, whether it's image recognition, natural language processing, or machine learning. These provide the foundation of these identity-based security technologies. The thing that has changed in recent years is the growing adoption of AI by adversaries as well.
We're at a point now where the term deepfakes has made it into our everyday vernacular, right? It's not just for security people; everybody knows the concept of deepfakes. AI is also being used to commercialize fraud.
So if you go on the dark web, there are services like WormGPT and FraudGPT, which leverage AI technologies to improve the efficiency of fraudsters. Take FraudGPT as an example. It is a whole suite of capabilities you can use to generate fraudulent sites, create really compelling phishing emails, check organizations for vulnerabilities, and a number of other things. And that service can be yours for just 200 euros a month, or about 1,700 euros for an annual subscription.
So as we see adversaries adopting AI, we have to up our game as well and combat AI with AI in the identity space.
The second major category of where AI can be applied in identity and access management is process optimization. Here I'm going to focus specifically on the area of identity governance and administration.
And again, just like in the security area, AI and machine learning in this area of process optimization are not new; they've been in use for a while. Machine learning is available in most IGA vendor products in the form of identity analytics tools, right? And what do they get used for? They get used to identify outliers, to do things like peer group analysis, to do risk assessments of individuals, and to make recommendations on whether an access request should be approved or denied, or, in certification campaigns, whether entitlements should be continued or revoked. But what's the problem with the technology as we have it today? Those recommendations, those analytics, are based on a data set that is inherently not clean. And what do I mean by that? Just think of how we go through access requests or certification campaigns today. It's a big rubber-stamping process, right?
You're asking managers, who don't do this as a normal, everyday task, to go ahead and approve access for their users.
And typically, over decades, we've just been rubber-stamping that. So our database of entitlements is made up of information whose accuracy we simply don't know. And providing recommendations based on that data is not necessarily best practice.
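To make the peer-group idea concrete, here is a minimal sketch of how an analytics engine might flag outlier entitlements by comparing each user's access against their department peers. The data shape, the rarity threshold, and the sample data are illustrative assumptions, not any vendor's implementation, and the caveat about dirty data applies directly to its output.

```python
from collections import Counter, defaultdict

# Illustrative peer-group analysis: flag entitlements that are rare within a
# user's peer group (here, their department). The data shape and the 25%
# rarity threshold are assumptions for the example, not product behavior.

def find_outlier_entitlements(users, rarity_threshold=0.25):
    """users: list of dicts like {"id": ..., "dept": ..., "entitlements": set(...)}."""
    by_dept = defaultdict(list)
    for u in users:
        by_dept[u["dept"]].append(u)

    outliers = {}
    for dept, members in by_dept.items():
        # How common is each entitlement within this peer group?
        counts = Counter(e for m in members for e in m["entitlements"])
        group_size = len(members)
        for m in members:
            rare = {e for e in m["entitlements"]
                    if counts[e] / group_size <= rarity_threshold}
            if rare:
                # This is a recommendation, not a decision: its confidence is
                # only as good as the entitlement data feeding it.
                outliers[m["id"]] = rare
    return outliers

users = [
    {"id": "u1", "dept": "finance", "entitlements": {"sap_gl", "expense_app"}},
    {"id": "u2", "dept": "finance", "entitlements": {"sap_gl", "expense_app"}},
    {"id": "u3", "dept": "finance", "entitlements": {"sap_gl", "prod_db_admin"}},
    {"id": "u4", "dept": "finance", "entitlements": {"sap_gl", "expense_app"}},
]
print(find_outlier_entitlements(users))  # {'u3': {'prod_db_admin'}}
```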
But as we look ahead to the future and the evolution of the AI technologies now available, like large language models, we can start to consume additional information as well: bring in auditors and their policies, look at legislation and regulations in natural language and process those, look at a much larger consortium of data that is anonymized across multiple organizations and industries.
Now, feed that into your analytical models as well, and when you identify outliers you can have a higher level of confidence that the recommendations being made are the right ones. Okay. The third area of focus for leveraging AI within IAM is what I refer to as improving the experience.
Now, there are multiple constituencies that interact with an identity environment or an identity solution, and I'm going to talk about two of them here in this idea of improving the experience, but it can really apply to multiple different groups. The first one is end users. Think of how end users, us, our families, our relatives, our friends, all engage with identity systems on a daily basis.
Whether we're logging into digital services or onboarding to new services, it's a part of our everyday life. Now just think about how frustrated you get, for example, if you get locked out of your account. Your first attempt might be to try to resolve it yourself, but many people don't.
They call the help desk, which, in your organizations, puts a high burden and a high cost on your IT team, right? They have to manage that help desk.
What if instead you could change that form of engagement and use a chatbot or an assistant, so your end users can just say, I forgot my password, or I'm locked out of my account? And now, through the power of gen AI and the conversational dialogue that can happen there, you can start to deliver a better experience.
You can coach the end user through the process of getting their password reset, and that's good. But what if you could take it a step further? What if you could use this to educate those end users: by the way, maybe you shouldn't be using usernames and passwords anyway. Did you know that you could use passwordless forms of authentication? Are you familiar with passkeys? And if they say no, how do I go and set up a passkey?
So again, this can all be done spoken over voice or typed into a chat window.
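As a rough sketch of what that chatbot-style engagement could look like, assuming a generic LLM completion call and your own IAM self-service APIs behind the placeholder functions (llm_complete, send_reset_link, and start_passkey_enrollment are all hypothetical names, not a real product API):

```python
# Minimal sketch of a gen-AI self-service assistant. The placeholder functions
# stand in for your own LLM provider and IAM APIs and are purely illustrative.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    return "password_reset"  # canned response so the sketch runs end to end

def send_reset_link(user_id: str) -> None:
    print(f"[IAM] reset link sent to {user_id}")

def start_passkey_enrollment(user_id: str) -> None:
    print(f"[IAM] passkey (FIDO2/WebAuthn) enrollment started for {user_id}")

INTENTS = ["password_reset", "account_locked", "passkey_enrollment", "other"]

def classify_intent(message: str) -> str:
    """Ask the model to map a free-text request onto a known IAM intent."""
    prompt = (
        f"Classify the user's request as one of {INTENTS}. "
        f"Reply with the label only.\n\nUser: {message}"
    )
    label = llm_complete(prompt).strip()
    return label if label in INTENTS else "other"

def handle(message: str, user_id: str) -> str:
    intent = classify_intent(message)
    if intent in ("password_reset", "account_locked"):
        send_reset_link(user_id)
        # Nudge the user toward passwordless instead of stopping at the reset.
        return ("I've sent you a reset link. By the way, you could skip "
                "passwords entirely. Want me to help you set up a passkey?")
    if intent == "passkey_enrollment":
        start_passkey_enrollment(user_id)
        return "Great. Follow the prompt on your device to register a passkey."
    # Fall back to a conversational answer for anything else.
    return llm_complete(f"Help this identity-portal user: {message}")

print(handle("I'm locked out of my account", "user-123"))
```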
But there are other concerns that end users might have as well. Maybe your consumers think they're getting too many marketing emails from you and want a way to customize that, but they can't navigate the self-service portal you have for changing their consent preferences. Or maybe they want to understand what data you're collecting about them and how it's actually being used.
Well, what if they could just ask those questions, and you could use generative AI in the background to help serve them?
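One way to do that, sketched below under the assumption of a hypothetical consent-records API and LLM call, is to ground the model's answer strictly in the consent data you actually hold for that user:

```python
# Sketch of answering "what data do you hold about me and how is it used?"
# by grounding the model in the user's actual consent records. The record
# shape, get_consent_records(), and llm_complete() are illustrative assumptions.

def get_consent_records(user_id: str) -> list[dict]:
    # Placeholder for your consent-management API
    return [
        {"attribute": "email", "purpose": "account notifications", "granted": True},
        {"attribute": "email", "purpose": "marketing newsletter", "granted": False},
        {"attribute": "phone", "purpose": "SMS one-time passcodes", "granted": True},
    ]

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for your LLM provider's completion call
    return "(model-generated answer grounded in the records above)"

def answer_privacy_question(user_id: str, question: str) -> str:
    records = get_consent_records(user_id)
    context = "\n".join(
        f"- {r['attribute']}: used for {r['purpose']}, consent granted: {r['granted']}"
        for r in records
    )
    prompt = (
        "Answer the user's question using ONLY the records below. If the records "
        "don't cover it, say so and point them to the self-service privacy portal.\n\n"
        f"Records:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)

print(answer_privacy_question("user-123", "Why am I getting marketing emails?"))
```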
But it's not just for end users, and this is just a sampling of some of the things we could do. Look at other users internal to your organization. The second group I want to talk about here, in terms of changing the experience, is business users. Maybe you have product managers or business owners of a digital service inside your company, and they want insights into how that service is performing.
On demand, they'd like to ask questions like: how many new customers have I onboarded this week? How does that compare to last week, or over the last couple of months? What kind of trends are we seeing? Where are people falling off in the onboarding experience? And maybe I actually want to do some A/B testing and change my onboarding flow.
Well, as a business user, wouldn't it be great if I could just speak my intentions and say: instead of taking the phone number as a first step, I want to start with an email address, validate that email address, and then move on to whatever the next steps are in the process flow? But again, doing this in natural language, not necessarily having to understand the tools behind it and all of the individual steps, but just speaking your intentions, speaking the policies that you want.
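A minimal sketch of what that could look like, assuming a hypothetical LLM call and an illustrative set of flow steps: the model proposes a new flow in a structured form, and a guardrail validates it against the steps the platform actually supports before anything changes.

```python
import json

# Sketch of "speak your intent, get a new onboarding flow": the model proposes
# a reordered flow as JSON, and we validate it against the steps the platform
# actually supports before anything is applied. Step names and llm_complete()
# are illustrative assumptions, not a real orchestration API.

ALLOWED_STEPS = {"collect_email", "verify_email", "collect_phone",
                 "verify_phone_otp", "identity_verification", "create_account"}

CURRENT_FLOW = ["collect_phone", "verify_phone_otp", "collect_email", "create_account"]

def llm_complete(prompt: str) -> str:
    # Hypothetical LLM call; canned output so the sketch runs end to end.
    return '["collect_email", "verify_email", "collect_phone", "create_account"]'

def propose_flow(intent: str) -> list[str]:
    prompt = (
        f"Current onboarding flow: {json.dumps(CURRENT_FLOW)}\n"
        f"Allowed steps: {sorted(ALLOWED_STEPS)}\n"
        f"Business user's request: {intent}\n"
        "Return the new flow as a JSON array of step names only."
    )
    proposed = json.loads(llm_complete(prompt))
    # Guardrail: never apply steps the platform doesn't recognize.
    unknown = [s for s in proposed if s not in ALLOWED_STEPS]
    if unknown:
        raise ValueError(f"Proposed flow contains unknown steps: {unknown}")
    return proposed

new_flow = propose_flow(
    "Start with the email address and validate it before asking for the phone number."
)
print(new_flow)  # a human (or an A/B test) still approves this before it goes live
```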
So yes, admittedly, there are risks with AI. There are a lot of unknowns and uncertainties. But ultimately the upside of AI is way too great for us as an industry to ignore.
So I'll just leave you with a couple of parting thoughts.
So, tackling AI can be daunting; it is a big task. My recommendation to you is to select your use cases wisely. It's going to take time to train AI models to a high level of accuracy, so don't pick the hardest use cases first. Don't start with the use cases that require near-100% accuracy, but focus on things where you can see a quick return on your investment and start to build trust. Leverage gen AI as a way to reimagine some of those end-user experiences that you have. This will build trust with your end users.
It will build trust within your organization as well, that you can actually get reliable output from AI, and then you can move on to other things. Move on to the internal stakeholders in your organization. Use copilots, use assistants to aid in that internal IAM decision-making process. And then lastly, partner with vendors, service providers, and consultants that are experienced and have high competence in AI to help guide you through the process as well. And with that, I just want to thank you for your attention and wish you the best for the rest of the conference.
Surprise, surprise, this topic has generated quite a few questions. I don't know how many we'll be able to get through; maybe some quick ones. Okay. Should organizations rely on the software they buy for leveraging AI, or should they build it themselves?
Build or buy?
Build or buy? So there are a number of tools out there that I would say are commercially available. I think that's always a good place to start. Different organizations have different levels of competency, and you may have a build preference, but I would say, from a cost and time perspective, getting started with commercially available tools is the way to go.
Okay, great. How do you convince the DPO or the HR responsible that AI within IAM isn't work profiling? So I guess in Germany that's a big question. Yeah.
Yeah, that's a tough one. It's going to depend on the DPO and what their own internal concerns are as well. But I would say, ultimately, you have to start small, as I said in closing. Don't start with the biggest projects that are going to take in lots of private information from your different users. Start with things that use anonymized data, and build up momentum and build up trust in that fashion.
Okay, thanks. I think we'll have to leave it there. Thanks very much. All right. Excellent. Thanks, Jason.
Thank you all.