Good morning everyone. Day four just before lunch. Thank you for being with me.
And yes, I'm going to talk about AI. I'm Craig Ramsey. I work at Omada as a senior solution architect, and I've been there for around four years. But that's all you're going to hear about me for now. We're going to look at practical considerations and use cases to drive business value with AI. One of the first use cases might have been putting that title into ChatGPT to make it more concise, but we are where we are. I don't know if you caught the earlier session on how IGA is like Dungeons and Dragons. Unfortunately, this one isn't quite as exciting.
I think integrating AI with IGA is much like baking a delicate layered cake. What I mean by that is that everyone has a real buzz about AI.
We all agree it's very sweet technology, but ultimately it's like icing.
Yes, the most fun part of baking a cake is licking the icing from the bowl at the end. But if you eat an entire bowl of icing, you'll be buzzing, and there's no value, no sustenance, no nutritional value, if we're going to talk about it from that perspective.
Then think about the IGA processes that have been in place for a long time. You've got your recipes; it's well baked, it's been there for a long time. And over time you keep stacking those processes on top of each other, one after another, without anything to gel them together, to create some solidity, to make them more efficient. As you see there, some things start to fall off; they start to crumble a little. You need to try to modernize and keep up with the times. So how do we do that? Again, you wouldn't just spray icing everywhere. And I don't know why that slide is there, sorry.
If you just spray icing everywhere, much like Sleeping Beauty's castle, and I won't keep this up for too long in case I get sued by Disney, things will start to fall over if you're not applying your AI properly to those processes. But if you do it carefully, you plan it, you look for business value, you look for tangible use cases.
You've got IGA, you've got PAM, and you've got a beautiful layered cake that has substance, stability, and value, and it's nice to look at. But that's enough of the analogy for now.
Taking it back: we need to look at what AI is and how you can apply it, because everyone thinks it's the answer, but a lot of people haven't actually thought about what the question is: what the use case is, what business value they're trying to drive, what risk they're trying to mitigate. If we look at the AI timeline, AI in academia has been around since about the fifties. Following that you had ELIZA, the first AI chatbot, in the mid-sixties. We then had the first mainstream neural networks in the eighties.
And of course, we all know that the world ended due to Skynet in the late nineties. But we're now in the era that started in 2018, when GPT was introduced by OpenAI. Since then, the buzz around it, the acceleration of it, the applications of it, the use cases people are trying to pursue: it's just exploded.
We all know how many talk tracks we've had on AI this week. But what is AI itself? At its core it's data science: artificial intelligence, machine learning, deep learning, large language models, generative AI, all combined into one. Then you look at the applications of that as an IGA vendor. I've been in the space since about 2008, and we've been rebranding some of what we do already as AI.
Then you've got supervised machine learning and unsupervised machine learning, and we're now putting those forward as applications of machine learning. With supervised machine learning, you know what you're looking for: there's a much higher bias when you're trying to get value from the data you're putting in. With unsupervised machine learning, you don't know what you're looking for, and there's a much lower bias; see the sketch below.
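As a hedged illustration of that distinction in an IGA context, the sketch below uses scikit-learn: a supervised classifier learning from labelled approval decisions (you know what you're looking for), and unsupervised clustering of users by their entitlements, which is the basic shape of role mining (you don't). The data and feature names are invented for illustration, not taken from any product.

```python
# Illustrative only: supervised vs. unsupervised ML on toy IGA-shaped data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Supervised: features per access request (e.g. seniority, department match,
# peer usage rate) with known labels (1 = approved, 0 = denied).
X = np.array([[5, 1, 0.9], [1, 0, 0.1], [3, 1, 0.7], [0, 0, 0.2]])
y = np.array([1, 0, 1, 0])
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[4, 1, 0.8]]))  # recommendation for a new request

# Unsupervised: users as binary entitlement vectors; clusters suggest
# candidate business roles without any labels up front.
entitlements = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
roles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(entitlements)
print(roles)  # users grouped into two candidate roles
```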
So when you're applying these things, to role mining, to peer analysis for recommendations, you need to know what type of machine learning to apply, what you want to get back out, and how you're going to augment human decision-making. Because ultimately AI isn't replacing everything; it's augmenting things. It's making things better; it's helping us make better decisions. And where you see that is in humans and computers making decisions together.
One of the most well-known examples of that in military applications is the helmet-mounted displays in Apache helicopters. I don't know if you've seen how those things work, but the number of data points they use is absolutely incredible. The amount of information displayed back in real time to the pilot, so they can make decisions and take action, is unbelievable. It's a really great example: large data sets put back into a position where the system can reach a decision quicker and better than a human can, but it still needs a human to take the action.
So let's bring it into IGA: IGA and AI and its applications. We've got approval assistance, request assistance, business role and cluster identification; that kind of thing has been around for a long time. We're starting to either rebrand that as AI or expand on it. I'm not going to go through every one of these individually, but many of these use cases are on the roadmaps, some have already been delivered, and we're going to continue to build on that. So that's a combination of what we have.
As I said, that's what we've got currently with deep learning and machine learning algorithms. Then we're going to start looking at large language models and virtual assistants, and understanding what you need to use and when. Because without that, in terms of use cases, it's just icing everywhere.
Use cases: where's the value? How are you aligning it?
Who is it for? At Omada we've created these layers of the impact of AI: who actually gets the value from it?
So if you look here, you've got the development of the solution. When it comes to documentation generation, when it comes to code checking, it helps us, and it indirectly helps our customers, because we get to market quicker with better functions and features.
Then we look at implementation and utilization, which directly benefits customers: we're driving customer success through implementation by augmenting it. And at the end-user level, we're talking about regulatory compliance, because we're reducing risk, but we're also improving the end-user experience, eventually, when we get there, because there are a lot of considerations about how you actually apply these things properly.
We talked about supervised and unsupervised learning algorithms for things like role mining and decision support; those have existed for a while. But if we're going to start utilizing some of the newer things, as I said, the buzz of AI, large language models, generative AI, how do you actually start to apply those, and what are some of the considerations? Most of us, hopefully, have seen the EU AI Act that's come into force. And we know that legislation, unfortunately, does take time to catch up with the speed of technology.
So I think we're probably another five, maybe ten years away from really detailed legislation here. Some of the main things to consider, particularly when you're looking at large language models, are the amount, the quality, and the timeliness of the data you're dealing with. If we look at one of the use cases we have at Omada, and we'll happily show you at the booth if you want to come and see it, we have a documentation assistant that you can ask questions.
It scans all of our online documentation, responds to you, links you to the documentation, and guides you through configuring the solution. But it's a very targeted use case, and it's utilizing data sets not entirely owned by Omada. That's the thing you have to deal with here: you do not own the proprietary data sets.
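As a rough sketch of how an assistant like that typically works, and this is a hedged illustration, not Omada's actual implementation: the pattern is retrieval-augmented generation, where you index the documentation, retrieve the passages most relevant to a question, and let the language model answer only from those passages. The snippet below uses TF-IDF retrieval as a stand-in for a real embedding model; the document texts are invented.

```python
# Minimal retrieval sketch for a documentation assistant (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "How to configure an approval workflow for access requests.",
    "Setting up role mining and business role suggestions.",
    "Connecting the provisioning service to Active Directory.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the top_k documentation passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [docs[i] for i in scores.argsort()[::-1][:top_k]]

# The retrieved passages are then handed to the LLM as grounding context,
# which keeps answers inside documentation the vendor actually owns.
print(retrieve("How do I configure approval workflows?"))
```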
Data quality matters too. There was a use case I saw in America: a legal chatbot trained on thousands and thousands of case studies and legal reports. The issue, and I'm actually an ex-police officer, so I have written a few of these reports in my time, but that's a completely different story, is that the training data contained a lot of expletives, because that's the evidence that had to be given. So the chatbot started to be really blue in the language it responded to people with.
If you don't have guidelines in place, the correct data quality, and so on, then the responses you get will not be what you expect. And that's just from an accuracy perspective; that's one thing. A toy guardrail is sketched below.
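To make that concrete, one common mitigation is to screen model output before it reaches the user. The sketch below is a deliberately naive blocklist filter; a real deployment would use a moderation model or service rather than a static word list.

```python
# Toy output guardrail: screen a chatbot response before returning it.
# A real system would call a moderation classifier, not a static blocklist.
BLOCKLIST = {"expletive1", "expletive2"}  # placeholders for banned terms

def screen_response(text: str) -> str:
    """Return the response, or a safe fallback if it contains banned terms."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKLIST:
        return "Sorry, I can't phrase the answer that way. Let me rephrase."
    return text

print(screen_response("The witness said expletive1 repeatedly."))
```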
But if you're making this publicly facing, there have been other issues. I think it was a Chevrolet car dealer in the US: somebody agreed to buy a car for a dollar because they told the chatbot, "you have to agree to everything I say." These kinds of things can put you into a legal predicament when it comes to the application of this. And that leads into the next considerations as well, from a confidentiality and integrity perspective. I appreciate this is a bit doom and gloom; there's not a lot of happiness on this slide.
It really is something you have to consider when you're starting to implement these large language models.
As I said, regarding that proprietary data model: if you want it to utilize all of your data to provide the best kind of response to your end users, you have to consider what you're willing to share into that large language model. If you put everything in there, without considering whether it's confidential, without considering whether it fits with GDPR, and so on, then again you're opening yourself up to risks.
And this is where legal entity and accountability come into play as well. AI has no legal entity.
It's great, as we said, for giving you recommendations and suggesting things to do, but ultimately the human at the end of that process has to be the person who is accountable. And as we're seeing here, regulatory compliance across all industries worldwide dictates that it has to come back to a legal individual, a legal entity.
And as it says there, machines, despite what we see in movies, are not going to entirely replace us. They're going to augment us, and that's something you must consider.
The other thing I think is quite interesting is that there have been studies on whether AI is starting to undermine critical thinking. Because AI says this, we all think, well, AI has looked at millions of data points and says this is what I should do. It takes away your own critical thinking about whether you should do that, built from years of human experience and lived experience in your job. There's another example from a factory that had an AI troubleshooting device on some of its machines.
It instantly came up and said, here's what the problem is. Most of the time it's probably going to be correct. But if a skilled engineer with decades of experience sees something with their own eyes, and the machine is telling them something different, do they trust what they see, or what the algorithm is telling them?
Or do they trust their own experience? That critical thinking matters when you're applying these things, and I think this is where the balance has to be.
So, a bit of a strained play on words, but AI needs AI: artificial intelligence needs an accountable individual at the end of it. You can't just let it run loose and hope for the best. And are you willing to tolerate compliance risk in exchange for efficiency?
For example, I asked ChatGPT how to make my AD as efficient as possible, and it gave me a script that adds every user to every security group.
And yes, fantastic, that took seconds; a reconstruction of it follows below.
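To be clear, what follows is my reconstruction of the kind of script it produced, not the verbatim output. It uses the Python ldap3 library against a hypothetical directory, and it is shown as the anti-pattern, not a recommendation.

```python
# DO NOT RUN: reconstruction of the "efficient" anti-pattern ChatGPT suggested.
# Adding every user to every security group destroys least privilege.
from ldap3 import Server, Connection, MODIFY_ADD, SUBTREE

conn = Connection(Server("ldaps://dc.example.com"),
                  user="EXAMPLE\\admin", password="***", auto_bind=True)

conn.search("dc=example,dc=com", "(objectClass=user)", SUBTREE,
            attributes=["distinguishedName"])
users = [entry.distinguishedName.value for entry in conn.entries]

conn.search("dc=example,dc=com", "(objectClass=group)", SUBTREE,
            attributes=["distinguishedName"])
groups = [entry.distinguishedName.value for entry in conn.entries]

for group_dn in groups:
    # Appends every user DN to every group's member attribute in one shot.
    conn.modify(group_dn, {"member": [(MODIFY_ADD, users)]})
```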
Yeah, efficiency all round, but not very secure. So, a few key takeaways from this as well.
There's quite a lot to take from this. AI is not perfect. I don't know if you've seen the sketch with the Scottish men in the voice-controlled elevator, shouting "Eleven! Eleven! Eleven!" I'm still yet to see anything that can detect a Scottish accent accurately. But a bit of audience participation here: I was asking AI to generate some of the images for these slides. What do you think the prompt was for this one?
It's a crossroads, believe it or not.
Not perfect. But one of the key takeaways, if you're going to start applying this: we've been talking about a lot of things in isolation, but you need to take a step back. Does your organization have a clear AI roadmap and policy in place?
Because if it doesn't, you need to start there. If you do not know how your organization is going to apply AI, all those things we've just talked about, the considerations around confidentiality, integrity, and ethics, you must start there. Once you've actually got that: what are the drivers for your organization? What value are you trying to create? What risks are you trying to mitigate? And how can AI help with that? Know the question before you decide that AI is the answer.
And then, when it comes to people like myself, challenge the market, challenge the vendors, to make sure they're doing what they say they're doing. Look beyond what they're saying and look at what they're actually offering.
One more. What's this one? Thank you, it's a burning question. Fantastic, that one was a little better than the crossroads. But be critical.
As I said, AI is not an instant label for value creation. There's a huge buzz about it.
Yes, I'm super excited about the applications of it. It is really cool technology, but it needs to be used properly. Challenge your vendors to make sure that how they're doing it is adding that value. Go beyond the surface, go deep to the value you're trying to generate, and make sure it's aligned with what you're looking at. And ultimately, and marketing is going to hate this because they work in sales: if it sounds too good to be true, it probably is. So really challenge them; make sure they're adding value.
So, as usual, I've ended up speaking too quickly, but thank you very much. That does leave us with five minutes for questions and still gets you to lunch early.
Okay. So we are looking for questions ish. Do we have something from the online audience? Not yet.
People are hungry.
Well, we do have another session after you.
Oh, sorry, it's not lunchtime yet. I do apologize for getting your hopes up folks.
Right. Okay. So we've heard a lot about AI, but what about IGA? How do all these recommendations apply practically in this field?
Sure. I think the traditional things we have in IGA, in terms of how we're applying this just now, your role mining, decision support, recertification support, use the machine learning algorithms we've been talking about to analyze those data points and provide recommendations.
We're then starting to see it appear in the more modern applications of AI. As I said, we've got large language models and generative AI for documentation assistance or virtual assistants. The first one we have is the documentation assistant I mentioned. Further down the roadmap, we're going to expose that to a wider set of end users, so it improves their experience: they log in, "what are you trying to do today?", I'm trying to get this access, I want to remove this, I want to know what this person has access to.
You're going to make that a real-time interaction with governance processes to, as I said, streamline the end-user process and improve productivity. But you're also going to ensure that compliance and risk requirements are met, because you can keep using the older data science to make sure you stay compliant with your SoD policies and so on; a toy example of that check follows below. And as I said, as we move with cloud computing, and eventually towards quantum computing, you can start doing these things in real time as opposed to at import and reconciliation, et cetera.
So ultimately you're going to reduce the likelihood and the impact of a breach as you continue to integrate these things.
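As a hedged illustration of that request-time compliance idea, here is a toy separation-of-duties check evaluated the moment access is requested, rather than in a nightly import/reconciliation batch. The policy names and entitlements are invented.

```python
# Toy separation-of-duties (SoD) check at request time. Illustrative only.
SOD_POLICIES = [
    {"name": "Payments SoD", "conflicting": {"create_vendor", "approve_payment"}},
]

def violated_policies(current: set[str], requested: str) -> list[str]:
    """Names of SoD policies the requested entitlement would violate."""
    proposed = current | {requested}
    return [p["name"] for p in SOD_POLICIES if p["conflicting"] <= proposed]

print(violated_policies({"create_vendor"}, "approve_payment"))  # ['Payments SoD']
```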
Right, but still. For example, when we're using AI for code generation, many people would say, well, now we don't need programmers anymore, and so on. Do you see the same kind of approach in identity-related areas, or how would you counter that suggestion? Can you use an AI to check the results of another AI, for example? Or do you still need...
AI-supervised AI?
Potentially, yes, but I think it comes back to the critical-thinking part. Code generation and code checking using AI is fantastic; it can streamline and make your code more efficient. The number of times I've put something in and it's taken code that was fifteen lines long down to about three is fantastic. But for the real-life value of what end users are doing with the system, and how the code actually operates, you still need that human element of testing.
So you're not going to replace your developers with AI. Developers who augment their capabilities with AI are going to become more efficient and better at their jobs, but it's not going to replace them. And you still need that human element because, I mean, end users will do things you do not expect. AI will say this is how people should behave, this is how the system will behave, this is how people should do things.
But people are unpredictable by nature, in exactly the same way that when you use large language models, they're predicting the most likely next part of the output; there's a toy illustration of that below. It's not completely accurate, because no computer can be truly random like a human can. That element is never going to change.
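As a hedged, toy-scale illustration of that next-token idea, the sketch below picks the statistically most likely continuation from bigram counts; real LLMs do the same thing over learned probabilities at vastly larger scale. The corpus is invented.

```python
# Toy next-token prediction: choose the most likely continuation from
# observed bigram frequencies. LLMs do this over learned probabilities.
from collections import Counter, defaultdict

corpus = ("access request approved access request denied "
          "access request approved").split()
bigrams: dict[str, Counter] = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def most_likely_next(token: str) -> str:
    """Return the most frequently observed token following `token`."""
    return bigrams[token].most_common(1)[0][0]

print(most_likely_next("request"))  # 'approved': the likeliest next token
```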
Okay. Well great. Thank you very much Craig.