Hello, everybody. First of all, I want to thank the previous panel for keeping us awake and keeping us busy; it's the first fight I've seen on a panel in a long time. Okay? While people are still coming in, I'll give my introduction. My name is Vlad Shapiro. I'm originally from, as you can see, Ukraine. I'm a mathematician by education, and I'm living proof that a mathematician can do human factors, because we are humans too.
You know, some people are not so sure about that. I've been in the identity space for a long time.
As I said, I'm from Ukraine and I live in the United States. But there is a little "but" here, right at this moment in Berlin, one identity place.
Okay, I'll switch to English for the people who didn't understand that. I lived in Berlin, and I had a couple of professional addresses here.
My second professional address is definitely KuppingerCole, if you saw the name at the top. So my talk today is about the human factor and access governance, and I'd like to follow up on the previous conversation about policies; we will talk about that. The basic agenda is very simple. We have very little time, so what I prefer to do is start with the slides, skip something in the middle, and go straight to the end, the most important part. We're going to discuss the human factor and IGA, and a little bit about cybersecurity culture.
Have you heard that term before? Yeah. We're going to discuss human factor assessment in governance: how can you measure that?
And the most important thing: how can you convince your boss to spend money on fixing the problems? Because I, as a mathematician, will introduce a mathematical model, extremely complex, but at the end you will see the money slide. Some of the people who were at the Gartner event before have seen it. Anyway, here's an interesting statistic. I'm not going to dwell on it, you all know it; that one is for your boss: 85% of data breaches involve the human element.
Now let's look at this. Given what the statistics tell us, what do you think we actually spend, percentage-wise? What do you think?
Well, it's very interesting; everybody knows where this research comes from. By the way, have you heard of a company called KnowBe4?
That's an interesting one, because this is the company that trains you on cybersecurity culture and runs phishing training. And there was a certain guy at this company who was one of the first hackers to be imprisoned in the United States; maybe some of you know the name. Anyway, this is interesting research, and I'll talk more about it. So what do we know about the human factor? Anything? Can you say anything positive about the human factor? Can you tell me anything positive? No? Anything? No. Usually what we know is this.
Automation will reduce the influence of the human factor, right? Do you think so? Do you believe this? Really? Who builds the automation, robots or people? People, okay? Strong access control policies will lower cybersecurity risk.
Yes, but with a detail: who writes the policies? People, right? And the human factor is treated as a big unknown: unpredictable and non-quantifiable. Guess what, I'm going to dispute all of that. So here's my view. First, cybersecurity culture is the real source of risk. It's not the people who are the risk, it's the culture. If your boss avoids using MFA from his house while pushing you to use it, what's going to happen?
Right? Nothing, nothing good. Next: automation will increase risk when IGA maturity is low. Does anybody agree with that? Why?
Because it's done by whom? People. And if they don't have a good culture and don't have IGA maturity, your automation will go really wrong. You know plenty of examples of that, right? Next one: strong access control policies can increase spending and risk if they prevent people from doing their jobs. Would you agree with that? How many of you were unable to do your job because you did not get the right access? Raise your hands.
Okay, and we're talking about IAM professionals here; just think about normal people. The human factor can be quantified and mitigated, especially in access governance. I'll show you how. Isn't that interesting? Get ready.
Okay, now we'll go through some slides which you can share with your management. The goal is to move from a big unknown to something mitigated and measured. Measurement is based on security culture: fulfilling business needs and receiving services while staying within regulation, in correspondence with the culture and the people involved.
Again, this slide is for your management, because we all know it's a great message that is very hard to achieve. Agreed? Moving along.
So, oops, sorry, too many slides. Basically, there is a direct link between security culture and IGA program success. If you have a very low culture, trying to explain to someone why we need an IGA program is extremely hard, because people say: you know what, we lived without this program forever. It didn't work before, it doesn't work now, and when you build this program, guess what, it won't work again. Why are we spending time and money? Let's just fix things one by one, right? Second thing: the policy-making and policy-implementation process reflects the security culture.
Yes, it does. I strongly recommend you take a look at this report, because KnowBe4 is the first company to introduce human factor measurement in the security space.
This is the model, coming straight from them. I'm not going to spend a lot of time on it; if you have any questions, ask me. Their competitor was a certain analyst who worked with some of the people in this room at another company, which I'm not going to name.
So basically they introduced these measurements. I'm going to go through this really quickly; if you want to spend time on it, we can discuss it later. Can anybody find a technical measurement here? Like knowledge of what an ACL is, or how MFA works?
You know what this is all about? It's about how humans respond to cybersecurity requirements. If humans don't know why we're doing something, they're not going to do it. Makes sense? Okay.
Again, does anybody have a question about that? I want to move really fast because we're short on time. So, we did a survey.
Question number one: the top IGA human-factor topic. Anybody want to guess the answer? By the way, I have some Ukrainian stickers as a prize if you guess right, and I have some candies from Ukraine.
Seriously, anybody who would like one, come up after the lecture and I'll give it to you; they're from President Poroshenko's company. All right. So, the top human-factor topic: any guesses? All right, I won't keep you guessing: getting access. That's the number one problem.
Okay, next one: the top access control risk. Come on, you have to be able to answer this one. Enforcement, right? Do you want rules with no enforcement, where everybody knows there's no enforcement? Would anybody want to drive on a street where there are laws but no cops and no traffic lights at all? I'd better buy a tank, right?
Well, in that case it would be a nice idea. Next one: what's the top human-factor cost? What do you think? Time? Time?
Well, let's see. No, it's deviations, because as soon as you create bad policies, what happens? People will, what, deviate. They will find another way to do the job. For example, a long time ago color printing was very, very expensive. Remember those days? Remember when, in an HP uniform, you could walk into any building, say you were there to fix the printers, and they'd take you straight to the printer, and you could do with it whatever you wanted? Remember those days, eh? So you know what policy the company had: you can print on the color printer only if you're a sales guy or a big boss.
I worked as a pre-sales guy, my son had just created a fantastic presentation for his school, and I didn't have a color printer at home. So what did I do? I found my friend who is a sales guy and asked him: do you mind printing this presentation for me? And he said: oh sure, why not? You're helping me so much in the sales process.
Yes, I will do that. What is that? A deviation resulting from the policy. What did the policy achieve? Nothing. And finally, the top reason for deviations. Everybody can say it: to do the job.
How many people deviate for a bad reason? Nobody in this room?
No, no. Everybody does it to do the job. This is really the truth: 90% of the time we have a problem because people need to do their job and they can't.
All right, there are three types of people in this world. First, the policy owners. The people who were just talking about policies, Gal, where are you guys?
PlainID? I wanted to see them; I don't know where they are. Policy owners. This is how policy owners are seen by everybody else.
Sorry, I'm using politically incorrect cartoons; sorry to all my American friends. Disclaimer: it's politically incorrect, but it was created before it became politically incorrect, and I can prove it. Okay, does anybody agree with this? Yes, sir. Are there any policy owners in the room? No?
No, not really. You? Okay, you agree that's how people see you. Who are the policy enforcers?
Everybody, because everybody in IAM is a policy enforcer. Do you realize that we are enforcing policies created by somebody else? Do you know what people think about us?
That's us. Okay? Not just security people; that's us, the policy enforcers. If you care about the policy, that is, because if you don't, what happens? You ignore everything, right? And finally, the policy constituents. That's how we see them. By the way, sometimes we are the policy constituents too, right? Agreed? Makes sense. Now let's talk about it. All right.
If the policy owner doesn't know how the policy enforcer is going to enforce the policy, and doesn't know how the policy constituent will react, we get what I call the cost of curiosity and stupidity. I coined that word myself, as a Ukrainian guy; I'm not going to pronounce it outside of this room, complete secret, don't tell anybody, eh? But what I'm saying is that there is a cost to the enterprise attributable to the human factor. And this cost can be quantified and measured, it exposes the hidden cost of policy, and it can be predicted.
Human stupidity cannot be predicted.
But if we know what people are after, we can predict it. For example: if there is a room with a closed door, a big camera, and a sign that says "do not enter", what happens? Nobody enters. But if there is a room with the door ajar and a sign that says, I don't know, "movie sessions", "model sessions", whatever, "do not peek", and no camera, what happens? Everybody looks at what's in there, right? They're hiding something from me. So can we predict that? Yes, we can.
Okay, so for the identity governance process, I'm telling you right now that most of the human-factor costs are related to these things: deviations and exceptions, and how they get recorded. That's what's going on.
All right, so as a mathematician, I created a three-dimensional model, and you can use it today.
Here is the data that goes in. I collect every access request ever made and split them into two categories, blue and red. Guess what they are? Blue is the ones that were approved, and the red ones are the ones that were, what? Denied. All right. Then I take the attestations, and all of those are blue, right, because I have the access. I check them against the policy, and finally I check them against the business need: do I really need this policy, do I really need this entitlement?
Do I? I don't know. So by looking at the data that already exists today between business and technology, you can sometimes find the connections and sometimes you cannot, which is actually good news for you. Because now you can place every dot into one of, how many octants do we have here? Anybody?
How many? How many of these little cubes, called octants, do we have here? "Octa" means what? Oh, come on, it's the biggest access control company in the world. Come on.
Eight. What am I talking about? There are eight of them, right? So there are eight octants, and I want everything to land in one of these:
"Yes, yes, yes" or "no, no, no", right? That's where I want them. But in reality, if you take the data from your company, or your client, or your partner, whatever, and you put it into this cube, and there is a program we created that can do that, you will find a lot of dots in the wrong place. How would you like to see an entitlement with no business need, against the policy, but you have it? Isn't that great? How about this situation: you need it, it's against the policy, and somehow you got it anyway.
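To make the cube concrete, here is a minimal sketch of how each attested access record could be scored on the three axes and dropped into one of the eight octants. The field names, octant labels, and sample records are illustrative assumptions; the speaker's actual program is not shown in the talk.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class AccessRecord:
    """One attested entitlement or access request, scored on the three axes."""
    granted: bool           # approved (blue) or denied (red)
    policy_compliant: bool  # does holding it comply with the written policy?
    business_need: bool     # is there a documented business need for it?

def octant(r: AccessRecord) -> str:
    """Map a record to one of the eight octants of the cube."""
    return "/".join([
        "granted" if r.granted else "denied",
        "policy-ok" if r.policy_compliant else "policy-violation",
        "needed" if r.business_need else "no-known-need",
    ])

# Illustrative data only; in practice this comes from your IGA attestations.
records = [
    AccessRecord(True, True, True),    # the "yes, yes, yes" octant
    AccessRecord(True, False, False),  # granted, against policy, no need
    AccessRecord(True, True, False),   # granted, policy-ok, no known need
    AccessRecord(False, True, True),   # denied although allowed and needed
]

print(Counter(octant(r) for r in records))
```

Every record that does not land in "yes, yes, yes" or "no, no, no" is one of the misplaced dots discussed next.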
So here is my statement, my theorem: mathematically, every misplaced dot on your diagram will create a problem for you, which you will have to fix either today or tomorrow. You cannot avoid it; it's going to be found. Either it's an alert in your SOC system, or it's the audit people who come in, or some consultant who comes in and says: what is this? I have no idea.
So you have to figure this out. Now, interestingly enough, we did this work with a small university in the United States, and here's what we found. First, the sample contained 1,820 positive resolutions and only a hundred negative ones. Now here's my question: why the hell are we approving things manually if they are effectively approved all the time anyway, right? Another thing we found: only 57% of what we have is really good, and 43% is marginal.
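To put that first finding in perspective with the numbers just quoted: 1,820 approvals out of 1,820 + 100 = 1,920 decisions is an approval rate of about 95%, so roughly nineteen out of every twenty requests are simply waved through, which is exactly what makes the manual approval step look redundant.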
And the biggest one is what?
Yellow, which is "no known business need, but policy-adherent." What does that mean? It means we have no idea. It doesn't mean the access isn't needed; it means no idea. And finally, the money slide. That is what you show your boss. If you want to take all of those dots that are in the wrong place and fix them, spending one hour of work on each dot, it is going to cost you $92,000, and that's a very small sample. Go to your own organization and you'll probably see these numbers balloon.
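Here is a back-of-the-envelope version of the money slide. The $100-per-hour blended rate and the count of roughly 920 misplaced dots are assumptions chosen only to reproduce the order of magnitude of the $92,000 figure; the real inputs from the university study are not given in the talk.

```python
def remediation_cost(misplaced_dots: int,
                     hours_per_dot: float = 1.0,
                     hourly_rate: float = 100.0) -> float:
    """Hidden cost of access records sitting in the wrong octants:
    every misplaced dot eventually costs at least one investigation."""
    return misplaced_dots * hours_per_dot * hourly_rate

# Assumed example: ~920 misplaced entitlements at one hour each and an
# assumed blended rate of $100/hour lands at the $92,000 order of magnitude.
print(f"${remediation_cost(920):,.0f}")
```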
But if you start working on the policies and defining the entitlements as part of the task, you can eliminate thousands of these in one sweep. Does that make sense? So this is what you can show. By the way, my son and his friend created a little program for this, and we're going to write a paper about it.
So the most important conclusion I'd like to leave you with, because we're almost out of time: the starting point for creating policies should not be technology, should not be entitlements; it should be business tasks.
Ideally, I see the beautiful world of the future that several of our colleagues described earlier working like this: in Jira, or whatever project management tool you have, you get an assignment. Whoever assigned you the task has done their due diligence, found out what prerequisites you need, and added those prerequisites to the list. A technical person takes a look, translates the business prerequisites into technology, like AD group membership, and says yes. As soon as the task is assigned to you, you get the access automatically. As soon as you finish the task, guess what happens?
You're deprovisioned automatically, and that saves your company tons of money. When you talk to business people, do not talk about technology, do not talk about how beautiful your model is or which approach is better; talk about money, talk about efficiency. And one more announcement: if somebody wants to hear the continuation of this beautiful story, I'll be at another event next month in the United States, where I'll be talking about the next step, which shows you an even better way of dealing with this. So these are my recommendations, and here is some reading you can do.
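A minimal sketch of the task-driven provision and deprovision loop just described. The Task class, the prerequisite-to-entitlement mapping, and the grant/revoke hooks are hypothetical placeholders, not a real Jira or IGA API.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A project-management task whose prerequisites are business-level needs."""
    name: str
    assignee: str
    prerequisites: list[str]                                # e.g. "read customer invoices"
    entitlements: list[str] = field(default_factory=list)   # technical translation, e.g. AD groups

# Hypothetical mapping maintained by the technical owner: business need -> entitlement.
PREREQ_TO_ENTITLEMENT = {
    "read customer invoices": "AD-group:Finance-Invoices-RO",
    "deploy to staging": "AD-group:CI-Staging-Deploy",
}

def assign(task: Task, grant) -> None:
    """On assignment, translate prerequisites and grant entitlements automatically."""
    task.entitlements = [PREREQ_TO_ENTITLEMENT[p] for p in task.prerequisites]
    for e in task.entitlements:
        grant(task.assignee, e)

def close(task: Task, revoke) -> None:
    """On completion, deprovision everything the task granted."""
    for e in task.entitlements:
        revoke(task.assignee, e)

# Usage with stand-in grant/revoke hooks:
t = Task("Q3 invoice audit", "alice", ["read customer invoices"])
assign(t, grant=lambda user, ent: print(f"grant {ent} to {user}"))
close(t, revoke=lambda user, ent: print(f"revoke {ent} from {user}"))
```

The point of the sketch is the lifecycle coupling: access is granted because a task needs it and revoked the moment the task closes, so nothing lingers to become a misplaced dot.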
You can get all the slides if you want to spend more time on this. Any questions? Thank you.