So let's jump right into it. As we anticipated in that last panel, we're moving now into a discussion of metrics. We have two discussions jammed into this time slot. First, we want to talk about metrics generally in this context and get folks' ideas about what kinds of metrics might be appropriate and helpful to individuals, institutions, management capacity, audit capacity, and so on, to lead to greater reliability in these systems and other desirable qualities.

In the second panel, we're going to talk specifically about access risk, which is a subset of the risk issues, but obviously a critical one. We'll do the same thing as before: ask each of the presenters to introduce themselves and give a couple of brief discussion points (we don't have to be that brief), and then we'll move right into discussion. So if you don't mind starting off, that would be wonderful.
Certainly, thank you. My name is Nathan Wesler. I'm the senior technology evangelist for Thycotic Software.
We're best known for our privileged account management solution, but I've been in the security industry for the last 15 years now in a variety of different roles, in both the private and public sector, focusing mostly on program design and management, that kind of thing. And the metrics question is always the big one.

At the end of the day, as the topic here suggests, what gets measured gets done. And before I even jump into suggestions, I think the question of metrics in identity is one of the biggest challenges in the sector, in the sense that you can talk about when someone accesses a resource, or how long they access it.
You can do it in time based sessions.
You can do it on how often something happens, the frequency, and those can be triggers; but they can also generate a lot of noise and false positives. And in speaking with other folks on the security side, I don't know if anyone has actually gotten it right yet.

That, I think, is the challenge, and part of what this discussion might help bring about. There are lots of ways to measure, and lots of ways to automate the measurement, I would argue. But how do you make it something valuable and interpretable that isn't just "when Nathan logs in at three o'clock in the morning, we should shut his access off", except that I was actually trying to get the Exchange servers back up and running because they went down and that's my job?
So usually my stance, when I talk with customers, or in previous consulting roles, is to step back a little from the metrics side and really talk about the plan: have the organization really understand what they want to accomplish. If you want your system administrators to work only 8:00 AM to 5:00 PM, great; you've drawn a boundary line and you can now measure against it. If you want your admins to be available 24/7 when things go bad, that's okay too.

But then you need a different way to measure that, or you need to not have an issue when they do log in at three o'clock in the morning; it's an acceptance of that risk. I know that dances around the metrics question a little, but I think that's the starting consideration I wanted to bring to the panel.
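The boundary line described here (agree a working window, then measure against it) could be sketched roughly as follows; the policy window and function names are illustrative assumptions, not any product's actual API:

```python
from datetime import datetime, time

# Illustrative policy: admins are expected to work 08:00-17:00.
WORK_START = time(8, 0)
WORK_END = time(17, 0)

def outside_work_hours(login: datetime) -> bool:
    """Flag a login that falls outside the agreed boundary line."""
    return not (WORK_START <= login.time() <= WORK_END)

# A 3 a.m. login gets flagged for review rather than auto-blocked:
# whether it is an incident or an on-call admin restoring a server
# is a question for the organization's plan, not for the metric.
alert = outside_work_hours(datetime(2015, 5, 6, 3, 0))
```

The point of the sketch is that the metric only becomes meaningful once the organization has drawn the line; the code itself is trivial either way.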
We can talk about those kinds of things all day, but how do we solve that false-positive noise problem around the use of those kinds of identities?

And it's interesting, because if you talk about dynamic measurement, it's almost like dynamic context, right? Recognizing that the measurement may start one way, and the metric evolves as the session, or the lifecycle, goes on. We'll get back to that one. That's a great point, thank you.
Luca, could you share some initial thoughts?
Yeah, thank you very much. Good afternoon. Luca Martelli; I'm director of identity management for Oracle in Europe, Middle East, and Africa. I started my career in the 2000s, and at that time I was delivering the kinds of projects we're talking about today; back then we were talking about something like secure identity management or basic single sign-on capabilities.
Now my favorite topic is digital, and identity management is really part of the digital transformation. Getting to the point of measuring risk in the digital arena: we're in a situation where we have to face social integration, and we have to face devices.

So there are different ways of accessing data and information, by different kinds of audiences. At the beginning it was maybe just employees.

That was the initial problem for companies in the 2000s. Now it's mainly, though not only, related to new digital initiatives around consumers. This is changing the scale of the problem and creating more complexity.
In the digital arena you're seeing many other things. Big data, for example, is an opportunity: we're talking about identity management because we're enriching the amount of data that a marketing team inside the organization can then use. And there's another complexity to be added, which is the number of people we're talking to.

It's not just the IT guys anymore. The people really interested in the metrics, I think, are two kinds of people. First, the business team, who want to be sure the processes and business initiatives they're driving are on the right track.

Maybe for them a red or green light is good enough.

Maybe I'm joking a little bit about that. But then there's another kind of person that we, as professionals (not we as Oracle), invented recently: the CISO. The role of the CISO was invented because we humans are typically optimistic, so no one in the organization was really taking care of that. Getting back to the point: in this wider context, I would say there's a real benefit in doing risk measurement.
There is the problem that every single company has to define the threshold that's relevant for them, and it really depends on the kind of project they're embracing, whether it's an employee project or a digital project. In any case, there are a few things common to all these initiatives, in my opinion.

You were mentioning the topic of the system administrator.

And I agree; it's something you have to do even more now in the digital context, even more in a big data environment where you're enriching content, and you need some kind of help with that, some automation, not just a red or green light. The second thing is about the end user, the consumer, ourselves: we access systems, and we want to see some kind of risk comprehension from the services we're accessing.
So it's also a benefit for the consumer, from a communication perspective, to communicate that you're doing that kind of activity. Talking about identity management in the past, we were also seen as the bad guys, keeping the others outside the boundaries.

Now we can be seen, I think, as a service provider, including a provider of identity risk evaluation.

So, for the introductory discussion, I'd say this opens another item beyond just the system administrator, though it certainly doesn't complete the picture. But bottom line, I see that all the identity management initiatives we're starting, especially the access management ones, are really to be seen as a service to the whole organization. So the risk metrics must be a service to the organization too, from day zero, because they can become a benefit.
Thank you.
Luca, we have that paradox of scale that's kind of intrinsic in there. Well, let's get back to that one.
Robert, do you have some initial thoughts?
Oh, my name is Robert Lakes.

Can you hear me okay? Okay. So I'm Robert Lakes from Capgemini; I head up the advisory unit in the identity practice in the UK. My take on this problem, to borrow an analogy from something we're familiar with, is software testing: you can always test for the presence of errors, but you can't test for the absence of errors. And you can apply that to risk.
You can look for risk, but you can't always look for the absence of risk, because the problem space we have is in two halves: the stuff we know about, and the stuff we don't yet know about. Most of our policy, rules, ways of doing things, and belief systems about how we deal with this are based in case history and law; they've evolved over time.
And this is what our normal is.
So we have to split it into two halves. We have to deal with the lessons we know about, and with the lessons we've forgotten, the lessons unlearned, to make sure that what we do today is okay and remains okay. So you're looking for familiar things. But then we've also got to deal with the things we have yet to know about.

Within your organizational boundary, you need to make sure you know the semantics and the ontology of what you're dealing with. Do you know what it is you're dealing with, and do you know the right words to use to communicate effectively within your organization? And that's just within your organization; then there's the stuff you're going to have to learn from outside.
That's right. And there are two things.
There are your known unknowns, the stuff you know you've got to go and learn about; that's innovation, going and getting information that probably exists outside your organization. So you have to have different sorts of people: your, I don't know, farmers and pioneers, whatever the words are, those special people who are happy working outside your boundary, who are comfortable with the uncomfortable, as opposed to the people who are comfortable with the baseline. They're your innovators and your inventors.

Your inventors go out and find the unknown unknowns. That's part of the challenge: we've got to have different sorts of people for these things, and you have to know that you need these people on your teams, otherwise you're not going to fix the problem. You can't expect your operational staff to predict the unknown; they're dealing with the known, with the current normal. Then you need another set of people to deal with the new normal.
You know, that's interesting.

That seems to go back to Nathan's point about dynamic metrics and dynamic context, because your "normal" seems like a system: you have the system defined, and what's inside it is normal. And it was interesting, Robert, when you said you've got to get the people onto your team.

By getting more people onto a team from outside that usual system, you're essentially expanding context; it's a very similar idea. Ultimately, it sounds like it's the entire earth you need to recruit onto your team, in a sense, and maybe that's a joke and maybe it's not. One of the questions, I guess, is: how can we borrow metrics from those contiguous areas? Because you can't, I would think, get someone onto the team who has no outside knowledge of anything.

And are we looking for things that are unknown to anyone, or are there ways to identify things in the triage of the unknown? Where do we go first? Do you have any thoughts on that, in terms of contiguous metrics that might be useful, things like that?
Well,
From my take on it, I think it still goes back to understanding what your organization is willing to accept. Even when you have that line between the known and the unknown, at some point you must draw a line and say: this is what I know is my normal, and there are going to be cases that exist outside of that. But what kinds of thresholds outside those bounds am I willing to deal with?
And again, I feel like we could talk about types of metrics all day: time based, period based, all these kinds of things. But it almost doesn't matter.

It really comes down to the organization's tolerance for risk, what they're willing to accept. So if I come back and say, these are the knowns, this is my operational environment, then we're dealing with the pieces that we do know.

I expect it to work this way. Great. You've defined a line, you've built a box, and now you can automate tools around that box to make sure you're alerted when something happens.

And then the unknowns are the harder part, granted, and you must tap those resources to help you understand them. But it's still going to be a matter of: can we prepare for a meteor crashing into my data center, the disaster-recovery kind of scenario? Sure, but how likely is it? Maybe that's not where you focus your efforts; you focus on the pathway in between. Getting into those unknown areas of metrics is obviously the most complicated part, but I think it's a matter of how much you're willing to accept.
If we want to try to plan for anything unusual, then we have to deal with the volume of notices and alerts; that's where the numbers start to come in, and how you analyze and break them down.

It's very resource intensive, and you have to be prepared to deal with that. You can address, say, geographic areas.

My unknowns are that I don't know where my admins might be in the world when they log in for an emergency, but I know it will never be from North Korea, Iran, or Iraq. So now I'm slowly starting to put another box around the unknowns, and you can flag for that. It's still very, very hard at that point, but I think it comes down to the plan. It comes down to having a process and understanding what your program is going to do; once you have that, you can find and design the tools and automated solutions around it.
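That "box around the unknowns" could be sketched as follows; the country codes and the return labels are illustrative assumptions, not a recommendation for any particular list:

```python
# Illustrative sketch of "putting a box around the unknowns":
# we don't know where an admin will log in from, but we can name
# origins that should never occur. The codes are examples only.
NEVER_EXPECTED = {"KP", "IR", "IQ"}  # ISO 3166-1 alpha-2 codes

def flag_login(country_code: str) -> str:
    """Classify an emergency admin login by origin country."""
    if country_code in NEVER_EXPECTED:
        return "block-and-alert"   # outside the agreed boundary
    return "review"                # unknown but tolerated; log it

decision = flag_login("KP")
```

Everything outside the blocklist is still an unknown; the box just shrinks the space that needs human analysis.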
Yeah.
Just to extend what Nathan is sharing: technology, I think, could help reduce the number of unknowns. To give an example in the consumer context: maybe you want to be sure that the user accessing your e-commerce site is really that user, and you don't want to be disruptive. You have, for example, some kind of behavioral pattern for that user that you analyze over time, so you understand how the user usually behaves.

So it could be an unknown that the user has somehow moved to Romania, or to Italy, or wherever. You could get some help from the technology there, for example by asking that user, at that moment, to do a step-up authentication. You're trying to get some extra help from your end user in order to reduce the number of unknown activities and thereby reduce the risk.
Then for sure there will be something that will be completely unknown.
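The behavioral step-up idea above could be sketched as a small scoring function; the weights, threshold, and profile fields here are made-up illustrations of the approach, not any real product's model:

```python
# Assumed per-user profile of usual countries and known devices;
# the weights and threshold are arbitrary illustrations.
def risk_score(profile: dict, login: dict) -> int:
    score = 0
    if login["country"] not in profile["usual_countries"]:
        score += 2   # e.g. the user suddenly appears in Romania
    if login["device"] not in profile["known_devices"]:
        score += 1
    return score

def next_action(profile: dict, login: dict, threshold: int = 2) -> str:
    """Ask the user for help (step-up) only when the login looks unusual."""
    if risk_score(profile, login) >= threshold:
        return "step_up_authentication"
    return "allow"

profile = {"usual_countries": {"IT"}, "known_devices": {"laptop-1"}}
action = next_action(profile, {"country": "RO", "device": "laptop-1"})
```

The design point is the one made above: the unusual login is converted from an unknown into a question the end user can answer, instead of being blocked outright.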
Just another example coming to mind: in the access management area you could have some social integration with Google, Facebook, or whatever other identity provider you're working with in order to deliver your service. And sometimes it happens that the provider is attacked and has an issue. That's an unknown on an external side.

It's not something you can really predict inside your company; it's outside your company, which is even worse, but you're dealing with it. For managing this kind of situation you must be able to move very fast on the next action, which could be, for example, to unlink the federation you have with those guys for a limited time period, until you understand what the real issue is. So it's not just about the technology, as you were discussing; just another example coming to my mind.
Robert, do you have a thought?
Well, yeah. One of the things you clearly have to do is improve your signal-to-noise ratio.

It's probably not sustainable to just collect everything, just in case; nobody has an infinite budget for these things. So you need to focus on the events of interest and the metrics that are good diagnostics for the risks you want to manage. This is where you need to understand the value of the systems in your business, what it is that you value, and look at the threat landscape and your attack surface. Because we know things get faster and quicker and more complicated.
If you don't invest continually in improving your risk management strategy, then risk slowly creeps up.

Not because you don't have the best thing you bought last year or the year before; it's just that the outside threat has increased, so that you're now, either knowingly or unknowingly, above your risk appetite, or perhaps your risk tolerance. You get to a point of inflection where you either choose to do something about it or not; if you don't do anything, it gets worse.

You have to take some action and try to bring the risk back within your tolerance and appetite. Part of that has to come from the way the organization governs risk: you've got to have a team that says, we need to measure things and take appropriate action when things go outside normality. And a lot of this has been done before.
If you go into the automotive business, there's statistical process control. And we've seen far more of that at this conference than at conferences I've been to elsewhere, in the UK; there's a big influence of OT here, a lot of manufacturers who are interested in process control.

It's well understood in those areas. We could probably take a lot from the engineering sectors and bring it into the computing sector: statistical process control, are we in control of what we're doing?
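The statistical-process-control point can be illustrated with a classic Shewhart-style check: a metric is "in control" while it stays within roughly three standard deviations of its baseline mean. A minimal sketch, with an assumed privileged-login metric as the example:

```python
import statistics

def control_limits(baseline: list) -> tuple:
    """Compute mean +/- 3 sigma limits from a baseline period."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(baseline: list, observed: float) -> bool:
    lo, hi = control_limits(baseline)
    return not (lo <= observed <= hi)

# Baseline: daily counts of privileged logins over two stable weeks.
baseline = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22, 19, 21, 20, 22]
spike = out_of_control(baseline, 60)   # a sudden surge gets flagged
```

This is exactly the engineering-sector borrowing suggested above: the limits come from the process's own history, not from a fixed rulebook.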
Then you've got to look at the dynamics of the metrics you're collecting. Is this something that changes quite rapidly, or quite slowly? Have we got time to deal with it: seconds, minutes, days? With some of these things it's "oh my goodness, it's absolutely gone out of control." So I think that's one area where we can get some innovation. And while we've been talking, one thing that came to mind: has anybody read the Freakonomics books?

Those are the sorts of cases where the indicators of whether something is a risk or not, the typified stereotypes, are not always what you think they are. So you can be quite creative with some of these metrics and indicators.
You know, it's interesting, you talk about how we govern risk, and it occurred to me, as with Luca's earlier point, that this sounded like a paradox-of-scale issue. We have the possibility of governing, as we've been discussing, to incorporate input at the front end to evaluate risk.

And one of the things I was wondering is: do we need an Airbnb or an Uber for data, or for risk? Because you're sharing. A lot of trucking companies ship half-empty trucks all the time; they just won't share the trucks, and they're wasting a lot of money. Nobody does Uber-for-trucking commercially yet. But it also occurred to me that for governance of risk you could look at the insurance industry, the collective addressing of unknown risks; there it's not really the prevention but the result that you're sharing.

That's another form of governance. Do you have comments on those before-and-after forms of governance?
They're starting to do this. The insurance companies and some of the payments companies are now starting to pool some of their intelligence; it's a kind of shared intelligence as a service. You've talked quite a lot about communities of interest, and there's this dichotomy: do you share, or do you fear losing some theoretical advantage by giving something away?

This is where trust starts with some sort of leap of faith. To make things better, some people have to risk something to gain something in return, and that's where some of these organizations are perhaps taking a little more risk in order to gain. I know that in the payments industry in Switzerland there's an initiative going on to share collective intelligence, and in insurance in the UK they share; I don't know whether it's the same across Europe.

There were some people here from the insurance industry talking about access control, so we could ask them.
You know what I'd like to do, and this is the unknown unknowns: I'd like to ask our next three panelists to come up, and ask you guys to stay up. We're going to put a couple of extra chairs here and continue the conversation, and ask you to pitch in, if that's okay; a little unknown unknown for you.

So let's get another chair and then we can just expand the conversation; put one right over here. This is the flexibility we need to show sometimes. Thank you. There are enough chairs, and we have one over here.

Also, do you have a third person, or is it the two of you? Fantastic. So if you don't mind, introduce yourselves, give a couple of initial discussion points, and then we'll continue the conversation from where we left off.
Hello, I'm Neil, from Peter Systems. You introduced this second panel as being more on the access-rights side of the risk question, while your discussion so far, if I understood you right, was more about the unexpected risk coming from uncertainties and so on.
And what we see as a driver today is that there is a standard, incorporated risk. We're not speaking only about risk coming from extraordinary events or things you haven't considered; risk is incorporated in the regular use of systems. Some of it, of course, comes from misuse. You can call it criticality, you can call it risk, or you can call it the responsibility of the asset holder, in the sense that the stronger the access rights someone gets, the more risk is incorporated in that access right.

And this aspect is, on the one hand, something you must consider for efficiency reasons, because there are so many access rights; the world of identity management is exploding in numbers of objects, including the externals you mentioned. So in order to steer and control your access management, you have to find some driving attributes to take control of that process.

And the strongest driver for that, from our point of view, is risk.
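The idea that risk accumulates with the strength of access rights can be sketched as a simple aggregation over entitlements; the entitlement names, weights, and threshold below are hypothetical illustrations, not values from any real assessment:

```python
# Hypothetical risk weights per entitlement; real values would come
# from the organization's own risk assessment.
ENTITLEMENT_RISK = {
    "read_reports": 1,
    "approve_payments": 5,
    "domain_admin": 10,
}

def incorporated_risk(entitlements: set) -> int:
    """Aggregate the risk a user carries just by holding access rights."""
    return sum(ENTITLEMENT_RISK.get(e, 0) for e in entitlements)

def review_priority(entitlements: set, threshold: int = 8) -> bool:
    """High-risk holders get recertified first."""
    return incorporated_risk(entitlements) >= threshold

high = review_priority({"domain_admin", "read_reports"})  # 11 >= 8
```

Used this way, risk becomes the "driving attribute" mentioned above: it orders the exploding population of access rights so the review effort goes where the incorporated risk is highest.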
So it sounds like there's, and we'll get back to this, a changing nature to the boundary condition we were alluding to here; it's a specific type of boundary, and we'll come back to its changing nature. Do you have some initial thoughts?
Yeah. My name is Wolf, from Tay Systems, out of Munich actually, so it's kind of a home game for me.
It was an interesting discussion about the unknown and the known. I think if you want to measure risk efficiently, you need to know what you'll do with it and what conclusions you'll draw from what you measure. To give you quite a simple example: the Zurich police use an algorithm to predict and prevent burglary. They analyze typical patterns of housebreaking, identify areas of threat, and increase the police presence there.

So the number of housebreakings goes down. What conclusion do you draw from that? Was the prediction wrong, or was the estimated likelihood of burglary wrong, or what? Risk itself is not static; it morphs out of an ecosystem of events, drivers, and influencers. If you want to measure it efficiently and use it for management purposes, you really need to know how to interpret the data; from that, you can set the right measures and set KPIs, I think.
That's very interesting; it goes back again to when Nathan and I were talking about dynamic context. In a sense, the observation is that the context is dynamic not just because of an externality, but because of the behavior of the system itself, right? So it's constantly changing.
Yeah.
And, you know, I made a statement yesterday, I think in our joint panel as well: take this definition of risk as a level of uncertainty in a complex system.

Well, that's it. According to the uncertainty principle, when you start to measure it, it changes. This comes back to what you said about the known: simply by the act of measuring, you change the system itself.
So we've met the enemy and he is us. Yeah.
Well, let's talk about that in terms of the boundary idea we were alluding to a few minutes ago: the notion of the dynamic element of the user base, and that that dynamism is, in a sense, the source of the problems. It's a paradox, right?

And I always tell my kids: when you're seeing paradox, you're seeing reality, because if you're not seeing paradox, you're looking at a model of some sort and you're missing something. But let's talk about that in the management context, where you have something that's intrinsically dynamic, has an externality, and has the internal dynamics of its own structure.

What are some of the approaches available to people, both technically and from a policy perspective, to start to capture that in metrics? How can those metrics be fed, and how can they be dynamically reset, when you have such a truly, intrinsically dynamic setting? It's almost like an organism, it seems.
Well, I think this goes back to traditional risk management.
And this is one of the reasons why many of these risk management implementations are so poor: they simply aggregate the scale of risk and the likelihood of risk and then document it. They simply document it, and that's it; they don't do anything with it. They set some boundaries, but then they leave them there for a while.

Then maybe they reevaluate, coming back with a different likelihood, or a different potential damage, or whatever. We can really learn from insurance companies, for instance, who do this as their day-to-day business, operationally managing risk. And that means you need to understand the drivers and the influencers. Increased police presence, for instance, is an influencer, like a moat around your house; a driver is the attractiveness of a target, maybe, or how easy it is to get into a house.

So understanding the drivers and the influencers, and then managing those, might be much more effective than simply looking at the risk itself.
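The driver-and-influencer framing can be expressed as a toy model in which drivers push the likelihood of an event up and influencers (countermeasures such as police presence) push it down; every factor and number here is an assumed illustration, not real data:

```python
def burglary_risk(base_likelihood: float, impact: float,
                  drivers: list, influencers: list) -> float:
    """Toy model: risk = likelihood x impact, with multiplicative factors."""
    likelihood = base_likelihood
    for factor in drivers:      # e.g. attractive target, easy entry (> 1)
        likelihood *= factor
    for factor in influencers:  # e.g. increased police presence (< 1)
        likelihood *= factor
    return min(likelihood, 1.0) * impact

# Managing an influencer (assumed police-presence factor 0.5) halves the
# risk, while a static "documented" risk register would never notice.
before = burglary_risk(0.1, 100_000, drivers=[2.0], influencers=[])
after = burglary_risk(0.1, 100_000, drivers=[2.0], influencers=[0.5])
```

The point is the one made above: the levers worth managing are the drivers and influencers, not the documented risk number itself.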
Yeah, in addition to what you're saying: I think we have two aspects to the risk consideration in identity and access management. One is, of course, what you said: risk management as a discipline gives us some already-matured metrics for valuing risk, which is helpful; there's foundational work already done by traditional risk management.

And the second, which also helps with fast use of risk as a driver in the system, is the lifecycle experience already present in identity management. Over the years we learned that groups, roles, and users are not static objects but part of a lifecycle: somebody has to approve something, review it, certify it. These mechanisms, which are pretty well matured in identity management, can also be used to run the risk assessment and risk metric process, so that you're not making the same mistakes again, saying: okay, I ran an initial risk assessment project in my organization, and that's it, done for the next two or ten years. Most customers are already aware that we're talking about a dynamic, perpetual process: the risk I assessed yesterday is already outdated today, and I have to reevaluate it.

And that is a good thing, because it's new to the IAM world; on the other hand, we're reusing methodology that most customers have already experienced with other objects.
So it sounds like, in some ways, the source of the problems is also the source of the solutions, because you have the increasing attack surface, in a sense, but the system also has a thousand eyes at the same time. So we're one big happy family now. If anybody has any comments, please.
Yeah,
Actually, I want to go back to the interpretation comments about some of the risk pieces of this. Again, coming from a security background myself: you mentioned the insurance companies, and that's probably the only example where I would actually say regulation brought some security benefits. Compliance in a lot of cases is just a checkbox, right? It doesn't really provide security.

But here you have a group, the insurance industry, where regulations dictate boundaries: you need to do certain things with the data, you need to protect it in certain ways, and you can build metrics around that. Not every industry has that luxury. And that's where, when you're talking about the interpretation of risk management processes, it becomes really key. I can give a very specific example.
I come from the west coast of the United States, and I did a lot of work with movie studios.

All right. So here's an industry where the companies are all very similar.

They all make money the same way; they all build product the same way. Take someone like Disney. Here's a company where the only thing they care about is brand reputation, right? They're targeting people two, three, and four generations out, so anything that damages their brand is their number one concern.
So when they talk about measuring user activity — their system administrators, or even their clerical staff, that kind of thing — it's really all about: are you doing something that might go outbound that embarrasses us, that tarnishes the Disney name? So they have to look at their risk factors and interpret them that way. But you go across the street to Sony Pictures, who, as we all know, was recently compromised. They have a very different problem, and they have a very different interpretation of their risk levels.
They do care about brand, but not to the degree that Disney does. And maybe a lot of that comes from the Japanese corporate environment too. Their thing is very much about: we need to keep everything in house; everything must be inward focused.
We stay internal. And so their control for their user environment is much less about, let's see what they're doing outbound. It's much more about: what's everyone doing inbound? Who's accessing what, where? So here you have an industry that's the same kind of thing — they don't have any kind of boundaries or regulations around them — and they have two vastly different strategies and interpretations of the risk model. So one couldn't leverage the risk model of the other.
I mean, if those two ever sat down together, Disney could go to the Sony guys and say, this is what we're doing, and it would be useless to Sony, cuz they're like, well, that doesn't do what we want it to do.
So going back to those risk management processes — again, it comes back to a plan, you know, a basic understanding of what your business is, what your risk tolerances are, and how you will interpret that for the benefit of your organization. That, I think, really becomes the key. Once you do that, the metrics kind of explain themselves, and you can build the tools to measure what you need.
You know, it's interesting, cuz when you compare two companies in the same sector, one might say, well, then Sony might be more comparable to another Japanese company. So there may be a cultural dimension. And with communities of interest, there's no such thing as one community of interest, right? There are these intersecting communities, and selecting the metrics from the right community of interest is an interesting notion.
The prior panel was talking about software-driven management kinds of elements, and the assistance that can be offered by statistical analysis run across the multiple communities of interest. Pushing suggested metrics in a way that helps with that advice would be really interesting. Robert, did you have a point?
Yeah.
Well, a couple of comments that I wanted to pick up on. The compliance and regulation thing is a bit of a red herring, really, because that's a corrective control: people have been unethical and unprofessional, and kept being unethical and unprofessional, and had to have their wrists slapped and be told, come on, you'd better step up, because what you're doing isn't good enough.
So we should, as professionals, be setting a good example of what good practice is. But also, one of the problems we have is that many of our clients have quite differing maturity levels in the way in which they operate. So we have to try and find what's right for our clients, and some of them have riskier environments than others. So we have to try and find a set of different paths, which perhaps means you need to sort of profile your customers and help.
We were talking about this earlier: can we profile?
You're kind of this type — you probably fall into this category. You're a low-risk organization dealing with these simple things; you can have these things out of the box, and this will put you in a good position to start off with, cuz many of the clients lack the expertise to deal with this. They don't understand the language; they don't understand the concepts. So they need us to do it for them and put them in a reasonably comfortable position.
And then you've got the other people who really have a lot of risk that you have to deal with, where we are probably going to have to do some configuration and customization, especially where you need to understand things. But we can split the problem up and deal with it. You can take a binary split and look at it like this: we have the meta problem, which is the underlying service — the warranty side, the risks in the warranty that have got nothing to do with the business at all.
It's just the service that supports the business.
And that's one area of risk, where you have things like privileged identity management and so on. Then you have the utility side of it: the risks to the utility of the company, its usefulness, the way that it actually realizes its value chain, and the risks to the organization, the company itself, why it's in business.
And that's a broader scope that would be more difficult for us to help with, because it's not in the identity space per se, but there are probably things we may well be able to help with — diagnostics and indicators. But that's a broader problem that we're not necessarily equipped to deal with.
It's very interesting. I want to bridge from that back again to the access notion, and that question of — when we talk about access risk or access management, you know, the internet of things — it used to be that I had a thermostat.
Now I have a thermostat and a data stream issue. So access to the data stream is one issue; access to the thermostat is another issue. Now, a thermostat's not such a big deal, but if it's a pacemaker, then it gets to be a big deal, right? So when we're talking about the internet of things and access management, it seems like there's a physicality to the threat surface, there's a behavioral part to the threat surface of the user, and there's the data part — and they're all access issues.
When people are talking about access risk, first of all, are they usually talking about one of those three, or all of them, or is it all conflated and should it be unpacked? Cuz it feels like the structures, the incentives, the penalties to drive behaviors — both in terms of programming and use and hacking and whatever — are all mixed in there. What's your impression of that? Are people usually talking about accessing a data set? Or in the internet of things — is that a misimpression, cuz it could be harmful? Or operational technology, same thing — accessing a furnace

so it can't turn off, those kinds of things. Well,
from the discussions that I have, I think they were more focused on data access, really not seeing the threat from other things. And don't get me wrong — I'm glad that we have these boundaries, because in the past we were too easily giving out access rights at the request of someone, you know. So setting some boundaries is the right thing, and we need to consider the same thing here.
You made a comment before about some companies behaving more riskily and others less so. Typically in our industry, we talk about opportunities and risks, and this is not at all taken into consideration yet. So it's not only risk management and mitigating risk; it's also taking controlled risk to achieve something, you know. To be — yeah,
That's an entrepreneurial-risk kind of idea. Yeah.
Perhaps in answering your question, and giving a comment also to what Nathan said: I think the metric track — and we started with the metric track — it's all the same, and it was defined years ago by risk management. To quantify risk, it's probability times the damage. And you gave a good example: the damage is always a very individual score. So for a company like Disney, brand reputation damage is a completely different story than for a film studio like Ed Wood's, which was known to be of no good quality — his reputation damage would probably not have been on the same scale.
And that is probably also why today you are concentrating more on the quantifiable damage, why technical risks are more in focus: because they are easier to answer than intangibles like brand reputation, things like this. But nevertheless, they are there, and they are probably more business critical than the technical risks involved.
But it's of course easier for risk management to consider what the damage is.
If some transactional access right is used to transfer several millions of dollars, that is not disputable, in a way, compared to the image damage that occurs from hacking the Twitter account and sending out, via the corporate account, some statement that is not backed by the board. But nevertheless, it's the same metric: it is the probability times the damage that is created.
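The quantification described here — risk as probability times damage — can be sketched in a few lines. This is a hypothetical illustration, not any panelist's actual tool; every probability and dollar figure below is invented for the example, and the point is simply that the same formula applies once you force a number onto intangible damage.

```python
# Classic risk quantification: risk score = probability x damage.
# All figures below are invented for illustration.

def risk_score(probability: float, damage: float) -> float:
    """Expected loss for a single risk: probability (0..1) times damage (dollars)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * damage

# A tangible, transactional risk is easy to price...
wire_fraud = risk_score(probability=0.02, damage=5_000_000)

# ...while an intangible one (brand damage from a hijacked corporate
# Twitter account) forces you to put a dollar value on reputation
# before the very same formula applies.
brand_damage = risk_score(probability=0.10, damage=2_000_000)

# Rank risks by expected loss, largest first.
risks = {"wire fraud": wire_fraud, "brand damage": brand_damage}
for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected loss ${score:,.0f}")
```

The hard part, as the panel notes, is not the multiplication but agreeing on the damage figure, which is individual to each organization.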
And I think it is one more step where access management is getting onto the level of the business in a different way than in the past, where it was more about the business process — including access management in the business process. Now we are on the level of the business targets, in a way: securing them with access management, giving some assurance that the business targets are met and not endangered by things.
No, it's interesting, because in a way it gets back to the insurance point: risk appetite. Maybe you can't take care of it through prevention, but with insurance you can decide: do I want this much premium payment to have this much coverage? And sometimes you see industries where they say, you must have this much coverage. So there's a conversation that can happen about risk, even if it's an unknown risk, because you can insure against loss — and it doesn't matter if it's a meteor strike or an earthquake or a hack.
If your business is out of commission for a week, you're gonna be able to measure the loss associated with whatever the source was. And so it's interesting — that notion kind of goes back to risk appetite being something we do have precedent for, so that it's available and people can have those differences. Do other folks have comments on the comments we just made?

What I was just gonna say is, some of these things come down to, again, maturity, the type of industry, and the client that you have.
And so some of the things you can do are very simple. I mean, you talk about risk management, and it's about monetizing the risk and making a good business case for doing something about it. So if walking is your mode of transport, then your risk analysis is about crossing roads and things. If you ride a bicycle, you do something a little bit different: you might wear a helmet and a high-vis vest, or you might just cycle a little bit slower, take care, and not wear a set of headphones.
If you're driving a small cheap car which is not a significant proportion of your salary — Bangernomics, you know, driving beat-up cars — you just go, oh well, I'll just get another one, no problem. But if you're NASCAR or Formula One, you metric the hell out of the thing in real time, because it's a valuable commodity that you don't want to blow up. There's a race to win at stake here.
So we can probably — I don't know whether there's a good analogy you can think of — ask: where is my customer's risk profile? Are they the person who just walks around, or are they at the other end, where they're prepared to invest an awful lot to minimize the risk?
Well, see, I'd want to ask a question about this, and it feeds back off of what you just said about the risk — the probability-times-damage kind of thing. So I agree.
I mean, we need to help our customers interpret a lot of these things. A lot of these customers don't have the understanding of what risk management processes are like and that type of thing, but like in your analogy here of, you know, walking bicycle car, that kind of thing, who, it becomes a chicken or the egg thing who determines that damage value, do we, as the consultant,
I was talking about this with someone else: there are different viewpoints, and according to what your viewpoint is, you make a different decision — it comes down to decision theory.
In one of the other streams earlier today, we were looking at access management, and there were some very strong views. And a viewpoint is just a viewpoint: you change where you're standing and you'll see a different viewpoint. I chatted with one of the other people who had asked a question and didn't really get it answered, and to use the chicken-and-egg metaphor, I came up with this to try and explain it to him.
I said, you're familiar with the joke of why the chicken crossed the road? So if you turn that into an access management problem: how would you decide whether the chicken could cross the road?
If you're the chicken: I want to cross the road, that's what I want to do, so of course I'm gonna allow it. If you are the road — and especially a German road, you know, a very fast one — you would have to say, well, there isn't a crossing point that's safe for him to cross the road.
I can't let him cross until it's safe for him to do so. If he was in, I don't know, Bombay, the chicken would probably hear: you'd just have to find your way across through the traffic, I can't do anything about it, they're crazy, they go in every direction possible.
I can't help you here. And if you're the other side of the road: what do you mean? I don't want chickens over here; I've got nice grass.
You know, we have chickens at home. We have the bit of the garden where they can go, and the bit of the garden they can come out to for a treat.
But this is a good one, too — it's talking about risk measures and risk metrics, you know, and the mitigation. So this depends on the culture, on the risk culture. There is a joke that we in Germany wouldn't have needed the Wall, because some red lights would have been sufficient already.
You know, so it's not only the metrics, but also the measures that you take to mitigate risks. And they can differ from company to company, from department to department, from culture to culture.
So just on that, let's finish up here, because it's a great conversation, but I wanna make sure we stay on time. Robert, let's start with you with just a couple of concluding thoughts, if you don't mind. You can continue that portion, or just give some summary thoughts.
Well, I think it comes down to this: if we want continuous process improvement, no matter what the process is — so in this case, better risk management — we have to invest in continuous education. So we need to put in the research required to make this better and help teach the others.
So, you know, you need apprenticeships. You need to get the people and encourage everybody to step up a level.
So thank you.
Yeah, two comments, actually, in line. One is about monetizing the risk that you were mentioning — actually a great exercise, very difficult depending on where you are today, but interesting. Interestingly, from a couple of personal experiences, I'm seeing that when the team manager, or maybe the security manager, is trying to justify an investment —
in, you know, a project, a technology — sometimes I'm seeing, you know, they try to use Excel and the monetization of the risk to communicate to the CEO why there is a need for that kind of investment: not in the short term, but in the long term.
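The spreadsheet exercise described here is essentially an annualized-loss calculation. A minimal sketch, with every figure invented for illustration (one common framing, not necessarily the one the speaker's customers use, is ALE — annual rate of occurrence times single loss expectancy — and a return-on-security-investment ratio comparing the loss avoided by a control against its cost):

```python
# Hedged sketch of the "monetize the risk in Excel" exercise:
# annualized loss expectancy (ALE) before and after a control,
# compared with the control's cost. All numbers are invented.

def ale(annual_rate: float, single_loss: float) -> float:
    """Annualized loss expectancy: incidents per year times loss per incident."""
    return annual_rate * single_loss

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on security investment: loss avoided minus cost, over cost."""
    return (ale_before - ale_after - control_cost) / control_cost

before = ale(annual_rate=4, single_loss=250_000)        # no control: $1,000,000/yr
after = ale(annual_rate=4 * 0.25, single_loss=250_000)  # control cuts incident rate 75%
print(f"ROSI: {rosi(before, after, control_cost=300_000):.0%}")  # prints ROSI: 150%
```

A positive ratio over the investment's lifetime is exactly the long-term argument the speaker describes security managers making to the CEO.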
And this is something which is really changing the game, in my opinion: enabling that kind of conversation at C-level about, you know, the consciousness of risk. And a second item, actually, related to collective intelligence: we at Oracle have a community that we built, I think, five years ago. We call it Community for Security.
It was created initially in Italy and now it's Europe-wide. It was originally formed by partners only, and then we opened it to customers willing to participate.
And we were not expecting that, the first year — two years ago — when we opened that program to the customers, we'd have, you know, 50% of the attendees coming from customers willing to share. And the output of this — sorry, it's something like a six-month journey, so several appointments, several meetings, several discussions — the output is a joint document signed by all these guys. And, you know, just two years ago the topic was fraud prevention in relation to access management. So I think, you know, it's something that the market is ready to do.
Actually, that collective intelligence thing — we could stimulate that more in our role, too, actually.
Thank you, Nathan.
I think I'm gonna fall back almost to my earlier original comments, which is that, at the end of the day, there's much that everyone here can bring to help their customers. There are technology solutions that can help with accomplishing a lot of things. But at the end of the day, the people that are running these organizations need to take a little bit of ownership, to come to the table and be part of the conversation.
If you have a CISO or a CIO who just sits back and says, well, you should tell me — I shouldn't have to think about it, I shouldn't have to make the determination of how much damage this causes me, I want this software tool to do it for me, you should just do it — we've kind of already lost the fight then, right?
At that point, you can't start to work on that. So I think that the education piece is really so critical.
And part of that comes from the security side of life: bringing those people to the table and sitting down and having the conversation to say, help me understand what's really important to you. Cuz if you can do that, and they can be honest about the fact — like, say, at Disney, where they say, well, brand reputation is our number one thing — great. Now we can build everything else. Now we can build the products; now we can build the metric systems.
We can do all the things that we need to do around that. But it's getting those people to the table to have that conversation, of saying: you need to own your organization and your risk tolerances. Then the rest of the technical community that's here can help them be successful in getting there.
Thank you.
You made this comment about ownership — I couldn't agree more. Your employees are the first line of defense against any kind of risk, and it's not the CISO or the compliance office or whatever; it's the employees themselves. So I think we need to turn the perception around a little bit.
We are not hindering people from doing things; we are allowing them to do things.
So we give them a certain risk profile to allow them to be entrepreneurial, to make the business, you know. So if you can turn this around and make them aware that they are the ones in charge of not putting too much risk onto what they do, then I think we are a big step beyond where we are today.
Being last in the round, there's always a danger of repeating something from the other participants. But I think considering risk, especially again with a view to access management, is a journey we have just started. So there are many options we can gain in considering the risk.
What I think is the mistake people make, or expect, from risk management — since we also started with risk metrics — is that there is a mathematical way to help them manage the risk challenge. And that will definitely not happen, similar to how the role challenge was not solved by role mining processes and mathematical algorithms. Those can be tools that help you in analytics, but the answers, as we agreed in several statements, are given by the individual values of a company, by the individual targets of a company. So that is something that cannot be done in math and in technology.
It is something that must be considered on the business level. So the methodology taken from other areas, like role management, will be ported to the risk management level. And there will probably one day be a risk owner who is syncing the risk range he is responsible for with other risk owners, in order to come to comparable metrics, in order to say whose risk is the biggest. But we are not at this level yet.
I think it is now a process of education in the organizations: accepting that there is something like risk that must be put into consideration for the access process.
So it sounds like embrace your risk, right?
Well, thank you, panelists. Please join me in a warm thank-you and applause for the panelists, and for everyone's flexibility in coming up here this way. That was great. Thank you.