I am happy to introduce a model to quantify IT risks, and we are getting a lot of attention in this area. Not only because we want to justify the investments that we are making in cyber security, for which we need to know what our exposure is for different risks and different threat actors, but also because of cyber insurance. We need a strong model that is able to quantify potential losses when we are buying a cyber insurance policy, and when we need to measure the efficiency of the different activities that we have in information security.
And it's very clear that going with the wet finger in the air, talking about red, yellow and green risks, using adjectives and scores to quantify risk, is malpractice. We have plenty of scientific evidence that on a good day this type of analysis based on qualitative criteria is just a waste of time, and on a bad day this approach produces wrong decision making.
If we really want an objective, data-driven risk assessment, we need to have a model.
We need to talk about distributions and losses. Even when we are lacking data, we can use calibration exercises, we can use estimates, we can get data from different sources on threats and on the value of different assets, and we can estimate the cost of downtime with a better approach. This is a theme, and I hope that the presentation is able to open some eyes on the way we are deciding and planning how to protect IT assets, services and processes. My recommendation is to start from the ISO controls. We have a new set of controls coming from ISO 27001, and they are fully listed in ISO 27002.
And we can have a taxonomy based on organizational controls, people controls, physical controls and technological controls. We need to improve the way that we are collecting data on attacks. So we need to monitor and build a database in which we can tell how much a threat actor is affecting the assets and the services, whether there are losses, whether there are attacks, whether there are near misses.
So we need to improve that intelligence. We also need to improve requirements; there is a lot of attention in this area on defining performance requirements. When we are signing a contract and making commitments, saying: this is going to be the recovery time objective, this is going to be the performance, this is the accuracy of data that we want, we really need to compare that against the price we are getting for assuming those risks.
There is also a lot of attention on the way we do risk assessment to prepare alternative service providers in case we need to shift responsibilities, for instance replacing SaaS or outsourced services because of non-performance on the contract. There are now more discussions around exit plans from third parties: how we assess third-party risk, and whether we need to select a partner that is more expensive but, on the other hand, exposes the organization to a lower level of risk.
At the end of the day, we need a model to ensure that we are selecting the right partners and that we clearly understand the type of responsibility that we have in many of the key processes. When we are dealing with cloud service providers, we have shared responsibility models in which we need to know, with much better visibility, what they are going to do on their side. Who is going, for instance, to set the parameters for the configuration of networks?
Who is going to do backups? Who is going to encrypt information in transit? Are we going to have dedicated servers, or are they going to be shared? Many of these discussions are asking for a better risk assessment.
When it comes to IT risk, we need to quantify our targets in terms of assets, services and processes. For instance: we want 99.95% accuracy on the data, we want a response time at this level, we want 99% reliability on a particular service.
We can only quantify risk if we are able to quantify and make explicit what the objectives are; risk is the effect of uncertainty on objectives. We need to be very clear that when we start a risk assessment, we know in a quantified way what output we are expecting from a particular asset, service or process.
Those assets can be assessed in terms of confidentiality goals, integrity goals and availability goals. Assets are exposed to threats when they have vulnerabilities, and we can measure the vulnerabilities based on the ISO controls: how well those controls are implemented for that particular asset, that particular service or that particular process. And then, finally, if we have an asset with a goal that is vulnerable, that vulnerability can be exploited by different threats, so we also need a taxonomy and a profile for threat analysis. The key is that when we start assessing IT risk, we need to have quantified objectives.
We need to understand that a control is a response to a risk; it is not the cause of the risk.
And this is one of the main issues when we are assessing IT risks: we are not taking the time to quantify what output we expect those services, processes and assets to provide. That is why we have so many limitations coming from the lack of clarity in the goals. What are we expecting from a third party? What are we expecting from a service provider? What is a confidentiality breach that we cannot afford?
Another common malpractice in this area is the data cocktail: building a model in which we use generic data from vulnerability scans, penetration tests, threat profiles and assessments, and try to put everything together, linking the final result to different databases and different sources in which we don't have a clear scenario.
So in the model that I am going to share with you, you have a concrete scenario and a concrete goal, and for that particular threat you are able to foresee the distribution of losses, the number of events that you expect, and the confidence level of the information you have, which allows you to quantify the risk.
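To make that concrete, here is a minimal sketch in R of what one scenario record could look like. The field names and values are hypothetical, just to illustrate the idea of one asset, one goal, one threat, a loss range and a confidence level; they are not part of the speaker's spreadsheet.

```r
# A minimal, hypothetical scenario record: one asset, one goal, one threat.
scenario <- list(
  asset      = "HR mailbox",                    # what is at risk
  goal       = "confidentiality",               # CIA goal under assessment
  threat     = "accidental email disclosure",   # concrete threat vector
  low_loss   = 1000,                            # best-case record sets lost
  high_loss  = 4000,                            # worst-case record sets lost
  confidence = 0.90                             # chance outcome falls in range
)
str(scenario)
```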
When we are using methodologies that blend many of these data sources together, in a software tool or in the methodology itself, we are unable to extract a concrete scenario, and there is never going to be a concrete dialogue about the mitigation plan and what the organization needs to invest in for a particular threat vector. Only when we are very concrete on the scenarios are we able to understand the risk factors and think about what we are going to do. Is it a no-go decision?
Can we take insurance, do we need to outsource, do we need to invest in preventive controls, do we need to invest in contingency controls, or can we do nothing? Only when we sit down and run a facilitated assessment with the different owners of those assets can we come up with very concrete tasks.
Another tip that I want you to consider: avoid data cocktails, and avoid methodologies that do not produce a concrete scenario for the threat, where you cannot understand the frequency and the losses for that particular goal, that particular objective that you have for an asset, a process or a service. You also need to define how you want to structure the risk assessment. Are you going to start from the asset list? Are you going to start from the service list? Are you going to start from the different processes?
If you choose to start from assets, you should not also assess risks from the other angles; otherwise you will end up double counting. And this is a common mistake in organizations that don't have a clear approach to the granularity of the risk assessment. How do you want to start: asset-based, process-based or service-based? Once you choose one, all the other alternatives are no longer valid.
So in my model you are facilitating a dialogue about the best, the worst and the base scenarios.
You can establish how many records will be compromised, or will have to be restored or regenerated, in the best scenario and in the worst scenario, and how confident you are that the outcomes are going to fall between those limits. And then there is another very common problem in IT risk: we can easily assess confidentiality, integrity and availability, but many of the risks that materialize will also create a compliance breach, a GDPR breach, sorry, a privacy issue; they will create a compensation in the contract; they will create a loss in terms of fraud.
They will impact reputation, so we are going to have revenue losses because some clients are going to leave the organization. In the model that I am going to show in a minute, these analyses are built in: you have a first tier of impact on compliance, contracts, fraud and income, and you can add the different types of impact coming from confidentiality, integrity and availability. And then another tool that we are using to populate these models is calibration, which is becoming very popular when we are dealing with a new technology.
When we are dealing with a new service, we don't have historical data on threats and we don't have any loss database.
So in that case we use calibration, in which we involve a number of subject matter experts on a particular threat and a particular asset. We ask them to calibrate, and we assess the estimates they produce in a way that, according to scientific studies, is very accurate. Let me show you the model. Let me just go directly into the Excel spreadsheet. You should be able to see my screen now.
What I added here in the model, which you can ask me for, is two examples. One example is related to not being able to protect personal data: there will be an accidental disclosure of personal data in email attachments with employee records, from 1,000 to 4,000 employee, contractor and candidate record sets. So what do we need to describe in the cells? The best and the worst scenario for the number of records that are going to be compromised, and we can estimate how much it costs in terms of confidentiality, integrity and availability to lose this data.
And we can foresee, for instance, that forensics and investigation costs for the data breach will be $1 per record set.
You can establish, according to your own cost structure, how much it is going to cost to lose control of a particular asset. Another way to do this, which is very easy, is to consider how much time it takes IT operators to deal with the risk.
And then we can easily calculate the cost, because if we know that restoring or investigating 1,000 or 4,000 record sets will involve, for instance, somebody in IT working four hours or two days, then we can calculate how much it is going to cost based on the HR cost. Once you have the analysis on confidentiality, you establish how confident you are that the losses are going to fall between the minimum and maximum losses.
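As a rough sketch of that effort-based arithmetic in R, with the hours per thousand records and the hourly rate as illustrative assumptions rather than figures from the talk:

```r
# Hypothetical effort-based cost: records handled at an assumed pace,
# priced at an assumed internal HR cost per hour.
records        <- c(best = 1000, worst = 4000)  # record sets to investigate
hours_per_1000 <- 4       # assumed: ~4 hours of IT work per 1,000 records
hourly_cost    <- 60      # assumed internal cost, dollars per hour
effort_cost    <- records / 1000 * hours_per_1000 * hourly_cost
effort_cost               # best: $240, worst: $960
```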
When we say that, at 90% confidence, this particular scenario of losing confidentiality because somebody sends the wrong email will compromise from 1,000 to 4,000 record sets, it means that there are 10% of scenarios in which the loss will be less than 1,000 records or more than 4,000 records. And because the model uses a lognormal distribution, there is a long tail, so we are being conservative: there will be more cases in which the worst-case scenario materializes, above 4,000 records, than cases below 1,000.
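A sketch of how that 90% interval maps onto a lognormal: the standard trick is to treat the two endpoints as the 5th and 95th percentiles, which sit 1.645 standard deviations either side of the mean on the log scale. The R below is my illustration of that trick, not the spreadsheet's exact formulas.

```r
# Fit a lognormal so that 1,000 and 4,000 records are the 5th and 95th
# percentiles of the distribution (a 90% confidence interval).
lb <- 1000; ub <- 4000
z     <- qnorm(0.95)                     # ~1.645
mu    <- (log(lb) + log(ub)) / 2         # mean on the log scale
sigma <- (log(ub) - log(lb)) / (2 * z)   # sd on the log scale
qlnorm(c(0.05, 0.50, 0.95), meanlog = mu, sdlog = sigma)
# sanity check: returns 1000, 2000, 4000
```

Because the lognormal is right-skewed, the mean loss sits above the 2,000-record median, which is exactly the conservative long-tail behaviour just described.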
And then you can add non-IT consequences. You can say, for instance, how much a fine in terms of GDPR compliance is going to be, because you need to report, sorry, that there was a loss of confidentiality of employee records, and so on. So you may have a fine, and again you assess how probable the fine will be, also in the Monte Carlo simulation. Then you also describe the controls that you have in terms of the new ISO controls, and how efficient they are.
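On the fine just mentioned: one minimal way to encode it is as an event that happens with some probability given the breach and, when it does, has its own uncertain size. The probability and amounts below are purely illustrative assumptions.

```r
# Hypothetical regulatory fine: assumed 30% chance given the breach,
# with an uncertain amount drawn from its own lognormal.
set.seed(1)
n        <- 10000
fined    <- runif(n) < 0.30                               # does a fine occur?
fine_amt <- rlnorm(n, meanlog = log(20000), sdlog = 0.5)  # assumed size range
fine     <- ifelse(fined, fine_amt, 0)
mean(fine)                                                # expected fine per breach
```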
And then, once you have all this data, you establish how often you expect this threat to materialize. How often do you expect an employee to send the wrong email with, say, a dataset of personal information about employees attached? Just to give you a very simple scenario.
So in the best case, you establish that you are going to have one of these events every 10 years, which means that each year you are exposed to a 10% chance of having this risk materialize. And in the worst scenario you are having one of these events every five years.
So you populate the annual probability at 20%. Again, you assess the confidence. What is the confidence? If I am saying that 80% of the cases will materialize between every five and every ten years, then 20% of the cases will be more often than once every five years or less often than once every ten years. And because you are being pessimistic, you think it is going to be more frequent rather than less frequent. So you populate the confidence depending on the type of data that you have, and the model runs automatically; it is native Excel, so there are no add-ins, there is nothing here. It's free.
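As a sketch of one way to encode that frequency judgement in code: treat the annual probability itself as uncertain between 10% and 20% (an 80% interval), then draw whether the event happens in each simulated year. The choice of a lognormal for the uncertain probability is my assumption, not necessarily what the spreadsheet does.

```r
# Uncertain annual event probability: 10% (best) to 20% (worst), taken here
# as an 80% interval on a lognormal, then one Bernoulli draw per trial year.
set.seed(2)
n      <- 10000
z80    <- qnorm(0.90)                          # 80% interval: 10th-90th pct
mu_p   <- (log(0.10) + log(0.20)) / 2
sd_p   <- (log(0.20) - log(0.10)) / (2 * z80)
p_year <- pmin(rlnorm(n, mu_p, sd_p), 1)       # cap at probability 1
event  <- runif(n) < p_year                    # did the risk materialize?
mean(event)                                    # simulated annual frequency
```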
So then, if you are interested in programming these formulas in R, I am very happy to share the code if you want, and you are going to have the Monte Carlo simulations running inside the model so you can see the formulas. If you expand the grouped cells here, you get the different probabilities for the exposure. For instance, for this event, with the data that you are assessing here, in the 10% best scenarios you are exposed to $5,000 a year for losing control of personal data because an employee sends the wrong email. But in 80% of the cases, so on, let's say, the pessimistic side, it will trigger an exposure, a level of risk, of around $16,000 a year.
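Putting the pieces together, here is a compact sketch of the whole loop in R. The structure, a severity draw times an uncertain annual probability read off as percentiles, follows the general shape of what the spreadsheet does; the specific numbers are the illustrative ones from the first example, and the combination rule is my assumption, not the spreadsheet's exact formula.

```r
# Full loop sketch: uncertain severity times uncertain annual probability
# gives a distribution of annualized exposure.
set.seed(3)
n  <- 10000
ci <- function(lo, hi, level = 0.90) {        # lognormal from a CI, as above
  z <- qnorm(1 - (1 - level) / 2)
  rlnorm(n, (log(lo) + log(hi)) / 2, (log(hi) - log(lo)) / (2 * z))
}
records  <- ci(1000, 4000)                    # 90% CI on record sets lost
p_year   <- pmin(ci(0.10, 0.20, 0.80), 1)     # 80% CI on annual probability
exposure <- p_year * records * 1              # $1 forensics cost per record set
quantile(exposure, c(0.10, 0.80))             # read off P10 and P80
```

The dollar figures this toy example produces are far below the talk's $5,000 and $16,000 because it includes only the $1-per-record forensics cost, not the fines, HR effort and other consequence tiers discussed above.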
Usually what we take here, just to create reserves, to talk about insurance, to know if I need to involve somebody else and escalate the risk to the board, for instance, is P80: the worst 20% of the scenarios. We are not preparing the organization to deal with extreme tail risk like P99; that would be very extreme.
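In the simulation sketch above, that choice is just a quantile of the exposure vector; `exposure` here is the vector from the previous block.

```r
# P80 as the working tolerance; P99 would be the extreme tail.
quantile(exposure, c(0.80, 0.99))
```

If the P80 figure exceeds the tolerance agreed with the asset owner, that is the trigger to escalate, insure or mitigate.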
We are being very pessimistic, but as a normal practice in risk management, at least in my experience, what I want to recommend as a sensible guideline is to position yourself at the 80th percentile, saying: okay, this is the tolerance, and you need to have a dialogue around tolerance. Once you are there, you can say: okay, the exposure is not that much; good, I don't need any plan. But in the second example you may have a threat to personal data because you are not encrypting the backups and they are vulnerable.
And then you may have a threat vector that is encrypting client data, hitting the confidentiality and availability of client data, and you go through the same scenario. You describe your rationale and how much it is going to cost. In this scenario, because there is ransomware, there may be some confidentiality losses, but most of the impact is coming from availability.
Because of that, you also estimate the different types of non-IT consequences, such as the IT contract requirements that you need to compensate for, and the fine for privacy will be higher.
And then you can also talk about your controls and the efficiency of the controls, if you need to assess inherent risk. We have not really used inherent risk in practice for, I would say, the last 20 years, but in many industries we still need to report or assess inherent risk, so you can link it to the efficiency of these controls. And then, again, in this scenario, in the best case you are going to have one of these risks materialize every 33 years, so the annual probability is roughly 3%; in the worst case every 20 years, so it is going to be 5%.
The Monte Carlo in the model, where you can look at the formulas, with 10,000 different scenarios randomly calculated, shows that the exposure is much higher, and then you can say: okay, it makes sense for me to invest, it makes sense for me to improve the encryption protocols on the backups that we are keeping for the customer data in the CRM system. So that is the recommendation: have a model. I am very open to sharing the model and explaining how you can use this approach. You can of course program something in a solution if you need to.
But as a bottom line: assessing risk in colors, in scores, with adjectives, five by five, high, medium, low, wet finger in the air, is just malpractice. We are not building corporate defenses, we are unable to decide, and we are misleading the business and the IT asset owners if we go in that direction. If we need to quantify IT risk, we have to be clear about the particular scenario, the particular threat, and the very concrete way the risk is going to materialize.
We are also avoiding data cocktails in which we try to make an automatic assessment of the risk. Any questions, if we have time? I think that there is some time to...
Unfortunately not, because we had a bit of a late start and then we had the false start and so on. So before we move on, is there any kind of contact address that you can give people, so that they can follow up with you if they want to?
Absolutely.
You are able to connect with me on LinkedIn or Twitter, or to write to me through you guys, and I am very happy to share this model and the presentation. Okay, I wanted to put something very...
Okay, thanks very much. Thanks very much indeed.
Thank you.