So my talk today, while the slides are coming up, is about complexity in IT. And interestingly enough, it seems that we're all being overwhelmed with the idea of complexity. Martin's talk was very illuminating. I wish I'd been able to hear it while I was writing my talk, because there were a lot of very good nuggets of information in there.
But I think we've all reached a kind of consilience. Consilience is when lots of different sources come to the same conclusion, and because different sources converge on it, even without an overwhelming amount of evidence from any one of them, it points to a truth in science. So I think we're at a consilience: we've reached a critical tipping point where the complexity is becoming too much in this multi-cloud, multi-hybrid world.
And we'll see if we get the monitor in front of you working when it starts, so I can keep talking. Sorry.
Okay, I'll keep rolling. So the basic idea is that in the multi-cloud, multi-hybrid world we're now in, we're adopting systems in the cloud, in multiple clouds, at a much greater rate. And from an identity and access management perspective, I don't want to give away the ending, but this has spawned whole new areas and whole new product segments, like CIEM, cloud infrastructure entitlement management.
And that is really a reaction to the new complexity, where we're dealing with platforms that each have very sophisticated security models. And we in IT are being tasked with the responsibility of understanding how they work, managing them, ensuring compliance, and ensuring that we're minimizing risk.
And really, at this point, it has become an overwhelming, too-daunting task, and it's trickling down to affect even the business users: the managers that have to certify the access and say that this access is appropriate.
The end users have to request the access. Pretty much everyone is being exposed to this increase in complexity, including help desk users, where we're outsourcing the support, maintenance, and administration of these platforms. So at this point, everyone is becoming overwhelmed, and we're unable to make rational decisions. Any slides?
No, no. We'll see how it goes.
If you want to switch into the talk with me at a certain point, just let me know. One of the first to theorize about this was Alvin Toffler, in his 1970 book about the information explosion. Basically, the idea is that human beings can only think rationally when they're presented with predictable, steady streams of information, and that when they're presented with overwhelming amounts of information in a constantly changing environment, they're unable to make rational decisions. He coined the term information overload.
I think that's what we're all seeing here today: we're overwhelmed with the amount of information that we have to process. So, if you were viewing the slide: how have we handled this overwhelming complexity?
Well, if you look at the number of job titles in the US Census over the last three decades, it has dramatically expanded. The idea is that we're micro-segmenting, hopefully with more and more specialization, so that within a job title you're presented with a smaller slice of the complexity. But this only helps so much. There we go, here we go. So in the book Future Shock, Toffler posited that information overload would cause us to basically lose our minds, lose about 10 IQ points in our decision-making process, and be unable to make rational decisions.
So we try to segment that by dividing it up into an ever larger number of specializations.
But the challenge is that this doesn't really insulate the business users from the complexity, because they have to use the IT systems; they have to request access. We're outsourcing the management of these systems overseas, to where the typical help desk person is managing access across 150 applications, which makes it unreasonable that they would ever be trained properly to understand it all. And during the last couple of years, we've adopted SaaS applications at a dramatic rate.
Work from home really pushed this: if we're all working from home, we're going to find the best system, get it online quickly, with less infrastructure and less to worry about back in the office. COVID has dramatically increased this trend. So we're finding the best application out there, not on-prem, dramatically exploding the number of applications that we have to manage. And that also pushed us into the hybrid multi-cloud world, where almost all organizations are now multi-cloud.
So we're all dealing with very complex systems that were designed to have automatic infrastructure capabilities. But along with the benefit of this expanding and contracting infrastructure, which includes databases, functions as a service, and Kubernetes, comes the complexity of how we secure that underlying infrastructure. These systems have ballooned. So from an IT perspective, what it feels like is that the business is ordering at a restaurant, and they're saying, basically, I'll have one of everything on the menu, and give me a side of everything.
And we really are having to deal with the consequences of that to keep things going. When I hear CISOs talk about this, when one CISO talks to another CISO, like in the presentation earlier this morning, they say, welcome to the club. It's not anything unique to any one company. It's just a new reality that we are dealing with: ever-increasing complexity and large numbers of applications.
So besides the obvious effect we've discussed, of complexity in IT triggering us to be unable to make rational decisions, what else does it do?
What is the actual price, or cost, of this complexity? It definitely drives up costs, because the more pieces there are, the more you have to manage, and the more people you need who know all of these various systems and their intricacies and idiosyncrasies. It drives down reliability: we all know that the more moving parts and the more handoffs between applications, the more likely you are to have a fault, and the more difficult it is to troubleshoot, both from a technical perspective and from a vendor perspective.
And it also drives up vulnerabilities, because complexity is the enemy of security. If you have very few things, it's easy to monitor them, to audit them, to ensure that they're secure.
But once that explodes into a million non-human identities, applications, container-based functions, and things that pop up and disappear quickly, it becomes a very, very daunting task. So I've talked about the complexity. It's great to identify the problems, but some of you are thinking, okay, that's great, anybody can do that. I hope there's something coming.
Some of you are probably saying, okay, I knew it, this is going nowhere. Everybody can talk about problems, but is there a solution? Because it seems like a very challenging and daunting problem. So what can we do about it from a security professional's, an identity professional's, perspective? We really have to break it down to the core. What do we need at a minimum, a minimum viable subset? What are we trying to do? It's very simple. We're trying to control and see what people can do: look in the environment and understand the access.
You can't understand the risk unless you understand the access. Then: what are people doing? One of the big areas of CIEM, and a big area that a lot of vendors are pushing into, is usage of privilege, because it's one thing to give you a bunch of privileges, but if you're not really using them, that's more attack surface out there than is necessary.
So we're really pushing toward some type of autonomous, self-correcting, self-minimizing functionality. Then: given what people can do and what they are doing, is it appropriate? Does it fit within the risk tolerance and policies of your organization? Do you have the proper mitigating controls to allow this risk in the course of your business, while having a handle on it and feeling like you're doing everything you can? And lastly: how do we maintain people with the appropriate access without hiring an army of human beings banging away, entering data, manually managing these permissions, so that we don't overwhelm the budget?
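As a rough sketch of the "what can they do versus what are they doing" idea, here is what flagging unused privileges could look like. All the data and names here are invented for illustration; real entitlements and usage signals would come from the platforms' logs.

```python
# Hypothetical sketch: flag granted privileges with no observed usage,
# so they become candidates for review and revocation (least privilege).

def unused_privileges(granted, usage_events):
    """granted: dict mapping user -> set of granted privileges.
    usage_events: iterable of (user, privilege) pairs observed in logs."""
    used = {}
    for user, privilege in usage_events:
        used.setdefault(user, set()).add(privilege)
    # Whatever was granted but never seen in use is excess attack surface.
    return {
        user: privileges - used.get(user, set())
        for user, privileges in granted.items()
    }

# Invented example data: Dave holds three privileges but only uses one.
granted = {"dave": {"manage_vms", "manage_app_passwords", "read_logs"}}
events = [("dave", "read_logs"), ("dave", "read_logs")]
print(unused_privileges(granted, events))
```

The output shows the privileges Dave holds but never exercises, which is exactly the "self-minimizing" input an autonomous system would act on.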
So the first thing is that you cannot manage what you don't understand. There's a traditional quote, attributed to, oh, what's his name, I forget: you can't manage what you can't measure. It's actually not his quote. But more importantly, in IT and security, you really cannot manage what you do not understand. And these systems have become so complicated that traditionally trained IT folks are not really ready for them. They're not ready for Kubernetes permissions, ingress rules, and all of these different things they have to deal with. So we've been leveraging more and more AI. Everybody says, okay, the future is AI.
There was a very famous article in Wired, where a reporter got on cargo ships carrying containers around the world. He watched what everyone was doing, and the captain would slow the boat down or speed it up, and the reporter would ask, why did you just do that?
And the captain said, I have no idea; the AI told me to. The AI might have spotted a storm over Indonesia, might have seen a backup of container ships. The people on the boat are moving containers, the captain is controlling the rudder, and none of them actually understand why, or what they're doing.
So it's almost like we are the ants: we might be doing something that looks really complicated, and we feel good about it, but at some point we don't really understand what we're doing. We're ceding some control, or even the ability to understand, over to machines, which is a scary thought. It makes you feel very insignificant, like an ant. So all of this is coming about, and CIEM is really a reaction to the multi-cloud complexity. Customers realize that, hey, my traditional IGA is not covering the complexity. IGA traditionally looks at the users.
It looks at the groups, very coarse-grained. But with the explosion of permissions in these cloud systems, if you're only looking at the user and group layer, you have no understanding of what you're allowing people to do, the risks associated with it, and whether or not it is truly appropriate. There are lots of statistics; I saw Jackson gave one of the scariest ones, the last one: that 99% of cloud security failures through 2023 will be the customer's fault. And will it really be their fault? Really, the complexity has become so overwhelming that it leads to human error.
And the consequences can be relatively grave, because these systems have 40,000 unique permissions. We're over-permissioning, because we don't understand which permissions people actually need; none of them actually make sense to us. The best we are doing right now is trying to make it temporary.
So you're only super-permissioned when you need it and request it; but we're not really doing least privilege. And if you look at the different platforms, they all do their fine-grained permissions very differently. Azure has its RBAC model; it has the Azure AD directory permission side and the Azure RBAC side, and those have different permissions. AWS is very focused on JSON policy documents, and even it has had a couple of different permission models. And then Google GCP, which is probably more like Azure, is more role-based and fine-grained-permission-based. So from an IT perspective, or from a business user's perspective, no one really knows that, hey, this bundle of permissions means this person can manage virtual machines in our cloud infrastructure.
And really, "managing virtual machines" is the level we need to get to, so that people can understand the business risk and the appropriateness.
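To make that concrete, here is a minimal sketch of normalizing per-cloud fine-grained permissions into one business-level function. The permission strings are a few representative examples; a real mapping would be far larger and maintained per platform.

```python
# Sketch: a semantic map from business functions to the raw, per-cloud
# permissions that imply them. The permission lists are illustrative
# samples, not complete mappings.

SEMANTIC_MAP = {
    "Manage Virtual Machines": {
        "azure": {"Microsoft.Compute/virtualMachines/write",
                  "Microsoft.Compute/virtualMachines/delete"},
        "aws": {"ec2:RunInstances", "ec2:TerminateInstances"},
        "gcp": {"compute.instances.create", "compute.instances.delete"},
    },
}

def business_functions(cloud, permissions):
    """Translate raw cloud permissions into business functions
    that a manager or reviewer can actually reason about."""
    return {
        function for function, per_cloud in SEMANTIC_MAP.items()
        if per_cloud.get(cloud, set()) & set(permissions)
    }

# A reviewer sees "Manage Virtual Machines", not a raw entitlement string.
print(business_functions("aws", ["ec2:RunInstances"]))
```

The same call works for any platform in the map, which is the whole point: one vocabulary regardless of where the permission lives.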
If I look at my salesperson and it pops up and says they can manage virtual machines, I instantly and intuitively know that's probably wrong. Whereas if I look at a series of very technical entitlements, I'm just going to rubber-stamp it and hope to get back to my day. We try to accommodate this with role names, but you can't trust a role name. A role name is just a description that someone entered at a point in time; it may or may not have been accurate even then, and you can't really put any faith in it. A role could say it's something innocuous and really be granting very troubling permissions. So how do we handle this?
Well, there's Tesler's law, which is often quoted by user experience designers: you can never decrease the complexity in a system.
The complexity is only ever increasing; all you can do is decide who will bear the brunt of dealing with it. So look at the history of coffee machines. With traditional Italian espresso machines, the person operating the machine was faced with all the complexity. They had to grind the beans, tamp the grounds, adjust the pressure, everything.
They had to be an expert to produce something that people were willing to drink. Then coffee machines evolved.
They went up the engineering scale: they became more complex from an engineering perspective, but from a user's perspective they got simpler. You were still grinding beans, maybe, or buying ground beans; you were tamping; you were putting the water in. But you weren't faced with all the complexity, because the manufacturers did the engineering. And today, with Nespresso, hundreds of millions of dollars of engineering went into the capsules, and it's a no-brainer: you pop in the capsule and you hit the button.
If you want milk, you froth the milk. So huge amounts of engineering went in to handle the increase in complexity of the system and take it away from the users. The idea is that if the few of us, the engineers, can deal with the complexity in a way that insulates the larger user population from it, then you'll have increased security and increased user satisfaction. People will feel more in control, and you won't have everyone facing information overload, feeling like they're losing their minds.
So how have we traditionally dealt with this type of problem in computer science? How do we engineer our way out of it?
The key idea has always been abstraction: moving from machine code and punch cards up to higher-level languages. Abstraction is really the idea that before, you might have been dealing with the Bunsen burner, moving the parts yourself, and now you're abstracted away from that. We give you a button, the "make it happen" button; we give you the lever; and you're disconnected a few layers from the complexity within.
So abstraction is really the solution. And if you look at CIEM, when people first talked about CIEM, it called out for something, and this is where I feel the key is. Martin said exactly the same thing: a unifying concept that stretches across multi-cloud. And I would say, why stop at multi-cloud? You're still dealing with on-premise legacy systems. So, a unifying semantic layer, where you normalize the business user's view of permissions and entitlements into something that makes human sense, that we can quantify and discuss.
"Semantic layer" is a term often used in BI, business intelligence, but the idea is this: you have all this different data, different systems, different concepts, but they're all talking about the same things. So why not use the same terminology, so that when you talk about these things, you know you're actually talking about the same things?
This is also big for AI, because AI needs to communicate to you, translating the raw data it understands into something you'll understand, a common semantics; and you need to be able to communicate to the AI, to tell it your intentions and what you want to do. So this is a huge area for AI. The idea is that you need a shared understanding, where the people in IGA like us, the people in BI, and the people in SIEM can all talk to each other, and we're not using different words that mean the same things. We're communicating, and we're moving things forward.
We're taking complexity and we're simplifying it. So let's take a real example.
How would this work? This example is from the Elastic SIEM documentation. They produce these packs where you can identify things you might want to watch out for. This one is Azure application credential modification: a query of the Azure logs for someone updating an application's certificates or credentials.
We would say that's the same thing as someone in Google GCP doing a service account key creation. Those are two different cloud platforms, but from an organizational perspective, we don't want to know the individual details of each. We would like to say, okay, that is managing application credentials: who needs to do it, how risky is it, in which systems do you have access to it, and how do you get access to it?
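A sketch of that normalization step could look like the following. The event names here are illustrative placeholders, not the actual Elastic rule identifiers.

```python
# Sketch: mapping cloud-specific SIEM detections onto one shared
# business function. Event names are invented placeholders, not
# real Elastic rule IDs.

DETECTION_SEMANTICS = {
    ("azure", "application_credential_modification"): "Manage Application Credentials",
    ("gcp", "service_account_key_creation"): "Manage Application Credentials",
}

def normalize_detection(cloud, event):
    """Return the business function a detection implies,
    or 'Unmapped' if the semantic layer doesn't know it yet."""
    return DETECTION_SEMANTICS.get((cloud, event), "Unmapped")

# Two different platforms, one organizational concept.
print(normalize_detection("azure", "application_credential_modification"))
print(normalize_detection("gcp", "service_account_key_creation"))
```

Both detections collapse to the single concept the business can discuss, assign risk to, and govern.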
Think of 2001: A Space Odyssey. You have HAL, the computer, interacting with Dave; HAL, the artificial intelligence that kind of went a little crazy, though people debate about that. So first off, and I'll use Dave in my example: Dave, what are you doing? That was one of our basic questions, knowing what people are doing. So now, the SIEM example: you have these inputs, this evidence, these signals being fed in from other systems.
And as long as you have this unified semantic layer, you can say: okay, this signal means you're managing Azure app passwords. That's one local semantic. Another local semantic means: if we detect this, that user has been managing Google passwords.
Now, from a business perspective, we can unify this up into one activity, or function, a business function, which is managing app passwords.
We can assign a risk factor to it, and we can talk about it, because we can easily see who can manage app passwords, who in that system is actually managing app passwords, who's doing it, who can do it.
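One way to sketch that business function as a first-class object, with a risk factor and the who-can versus who-does distinction, is below. The names and risk value are made up for illustration.

```python
# Sketch: a business function as the unit of risk discussion.
# Users, function name, and risk value are all invented.
from dataclasses import dataclass, field

@dataclass
class BusinessFunction:
    name: str
    risk: int                                   # e.g. 1 (low) to 10 (high)
    can: set = field(default_factory=set)       # who is entitled to do it
    observed: set = field(default_factory=set)  # who we've seen doing it

    def dormant_holders(self):
        """Entitled users with no observed activity: review candidates."""
        return self.can - self.observed

fn = BusinessFunction("Manage App Passwords", risk=8,
                      can={"dave", "frank"}, observed={"dave"})
print(fn.dormant_holders())   # Frank can, but hasn't been seen doing it
```

The dormant-holder set is what a manager would review against policy: should Frank keep an entitlement he never uses?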
And then, based on our policies, who should be doing it. The next question you could answer with the semantic model, unifying everything into these business functional concepts, is: well, where did you get this access, Dave? I see that you're doing it, but where did this access come from? Mapping this out, we can see the fine-grained permissions within Azure, and we can map them to say, okay, these fine-grained permissions actually mean "manage app credentials" in Azure.
And then the system will know which roles grant that, or how someone could be granted the ability to manage Azure app passwords. If we look at it, we know: if you're a company administrator, a directory writer, an app admin, or a cloud admin. So we're tying it together; we can say, okay, if someone has these permissions, this is how they would have gotten them.
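Answering "where could this access come from?" is essentially a reverse lookup from roles to the fine-grained permissions they grant. The role and permission names below echo the talk's Azure example but are simplified, illustrative stand-ins.

```python
# Sketch: which roles would grant a given business function?
# Role and permission names are illustrative, not real Azure identifiers.

ROLE_PERMISSIONS = {
    "Company Administrator": {"app.credentials.update", "user.write"},
    "Directory Writer":      {"app.credentials.update"},
    "Application Admin":     {"app.credentials.update", "app.write"},
    "Helpdesk Operator":     {"user.password.reset"},
}

# The semantic layer defines what a business function means in raw terms.
FUNCTION_PERMISSIONS = {"Manage App Credentials": {"app.credentials.update"}}

def granting_roles(function):
    """Return every role whose permissions cover the function,
    i.e. every path by which someone could obtain this access."""
    needed = FUNCTION_PERMISSIONS[function]
    return {role for role, perms in ROLE_PERMISSIONS.items()
            if needed <= perms}

print(granting_roles("Manage App Credentials"))
```

Given a finding like "Dave is managing app credentials," this answers which role memberships could explain it.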
The next step is putting it all together to say: okay, Dave has the functional access; he's managing app passwords. We can track it and ask, how frequently is Dave managing app passwords? When did he do it last?
How many times has he done it? Because maybe he doesn't need the right to do it, even though he can. How did Dave get the ability to manage app passwords, and through which path? And then the feedback loop is that you can detect signals of this functional usage to tie it all together: who has it, what can they do? The manager can say, is this correct? Does it make sense? Does it fit with our risk policies? And are they actually doing it? So you pull it all together. And the last question is, why does Dave have this access?
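The usage-tracking side of that feedback loop, how often and how recently Dave exercised a function, could be sketched like this. The ledger, users, and dates are all invented for illustration.

```python
# Sketch: record detected signals of functional usage and answer
# "how often, and when last?" All data is invented.
from datetime import date

class UsageLedger:
    def __init__(self):
        self.events = {}   # (user, function) -> list of dates

    def record(self, user, function, when):
        """Append one detected usage signal from the SIEM feed."""
        self.events.setdefault((user, function), []).append(when)

    def summary(self, user, function):
        """Frequency and recency, the inputs to a certification review."""
        dates = self.events.get((user, function), [])
        return {"count": len(dates),
                "last": max(dates) if dates else None}

ledger = UsageLedger()
ledger.record("dave", "Manage App Passwords", date(2021, 3, 1))
ledger.record("dave", "Manage App Passwords", date(2021, 6, 9))
print(ledger.summary("dave", "Manage App Passwords"))
```

A zero count with a standing entitlement is the signal that the access may be removable.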
And this is where we in IGA can help.
We add not just the "what" (you have the access) and the "how" (you were added to this group), but the "why," tying it all together with meaning and automation. HR says you do this job; the IGA system puts you in this role, which by policy authorizes these people to do these types of activities, which then fulfills and allows you to have this access, or allows you to request it. That ties it all together. And the last piece is that when users want to request access, they don't understand technical entitlements.
So a bot or an AI can let them interact and say, hey, I'm trying to manage Azure application passwords, or, I want to create a purchase order, and let the AI use this shared semantic meaning to help me get the least risky access I'm allowed to request that lets me do my tasks.
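That "least risky access that still does the job" selection could be sketched as below. The roles, risk scores, and function names are hypothetical.

```python
# Sketch: given a user's stated intent, suggest the least-risky role
# that grants the needed business function. All data is hypothetical.

ROLES = {
    "Company Administrator": {
        "risk": 10,
        "functions": {"Manage App Passwords", "Manage Users"},
    },
    "App Credential Operator": {
        "risk": 3,
        "functions": {"Manage App Passwords"},
    },
}

def least_risky_role(needed_function):
    """Among roles granting the function, pick the lowest-risk one,
    so a request fulfills the task with minimal extra attack surface."""
    candidates = [(info["risk"], name) for name, info in ROLES.items()
                  if needed_function in info["functions"]]
    return min(candidates)[1] if candidates else None

print(least_risky_role("Manage App Passwords"))   # the narrow role wins
```

The broad admin role would also work, but the semantic layer lets the system steer the request to the narrower, lower-risk option.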
And if we can do all that, then for IGA people it would be a mic-drop moment: we did it all. We understand everything, we can automate everything, and we've reduced the complexity into human terms.
And the key to all of this is the function, which is really the common semantic glue that ties together the signals, the technical access, the risk policies, and the job appropriateness, so that we all understand, we're all on the same page, and we're not dealing with all this complexity. Let the engineers deal with the complexity, and give us something where we can make forward progress and not be overwhelmed. And of course the benefits are huge. It's going to lower your costs.
It's going to increase reliability and make everything more secure, because it's easier to secure things when you understand the impact and the risk. Thank you, Patrick. My pleasure.