Welcome to our KuppingerCole Analysts webinar, "Urgent: Find and Block Identity-Centric Security Threats Today." This webinar is supported by ShareLock, and the speakers are Andrea Rossi of ShareLock and me, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts. Before we start, some quick information about this webinar, and a first poll before we dive into the agenda. We are controlling audio centrally. We will run two polls during the webinar, and we have a Q&A session at the end of the webinar.
And the more questions you enter into the Q&A, the better and the more lively the Q&A will be. We are recording the webinar, and we will also provide the slide decks we are using for your download. So, all set. Before we really dive into the subject of today's webinar, I want to run a quick poll here first. And then at the end of my part of the webinar, we will run a second one.
We will talk about, on one hand, the identity threats and then how to detect them, how to respond to them. And detection also brings us to the field of AI. And that is also where sort of the focus of my first poll is. And the poll is around, are you already deploying AI-supported technologies for IGA and or access management? So is there some AI element in your identity management already? So answer options, no. Or you're evaluating, you're in a concept phase, or you have already implemented something. So looking forward to your responses.
And as usual, the more responses we get, the better it is. So just click the button. Don't be shy. Give you another five or 10 seconds, and then we close it.
Okay, thank you. And that brings us to the agenda of today's webinar. The first part of this webinar will be my presentation about, which is a bit about why cybersecurity starts with identity, the role of identity or how to look at identity threat detection response, and also a bit touching the role of AI and ML in this context. In the second part, then, Andrea will talk about making identity threat detection response a reality and how this works. So he will really look at it from a concrete angle and how to implement this and which types of indicators of behaviors to look at.
And then in the third part, we will do a Q&A session. I'd like to start with some numbers, and this is a bit more generic. It's about what are the most concerning attack vectors from a poll we recently have been running. And when we look a bit closer at these numbers, then it becomes obvious that many of these attacks are related to identity. Let me just start with ransomware. So ransomware usually starts as a phishing attack. Phishing is about phishing passwords, phishing credentials.
So there's an identity element in there. Business email compromise and CEO fraud we surely can discuss, but I'm sure there's an impersonation aspect behind those. Attacks on critical infrastructure, at least the more concerning ones, tend to be what we call advanced persistent threats. So this is where someone really gains access into the infrastructure and then tries to gain access to more accounts, to more powerful accounts. Also an important identity angle in there. Malicious insiders abusing their existing entitlements, and so on.
So at the end of the day, when we look behind cyber attacks, it's most commonly about identities. There are numbers that say, whatever, 85% of all cyber attacks are related to identity. We can discuss this back and forth. There's this part which just uses vulnerabilities in software, so not going through the identity but through the vulnerabilities, or using weaknesses of unpatched systems. But at the end of the day, a lot of this is really related to identity. And we have quite a number of threat vectors around enterprise identity threats. So what can happen?
And this is not necessarily complete; it's a list of aspects. We have privileged accounts, and privileged accounts are typical attack targets, so to speak. Attackers always try to gain access to highly privileged accounts because this gives them the biggest power when running their attacks. We also have accounts which sometimes aren't adequately protected on the password side, like service accounts or application accounts. And we still have many, many poorly managed accounts, frequently. Not in every organization, but in many organizations.
So accounts that are used by a group of users, non-personal accounts. And if multiple people use an account or know the password, you're in trouble. We have shadow IT accounts which are not managed centrally, which is the risk of shadow IT. Sometimes remote access accounts. Weak authentication policies, still allowing standard username and password. Over-entitlements, very common; a lot of access governance is looking at that. We also usually are not perfect in lifecycle management, so orphaned accounts, et cetera. We have exposed credentials. We have plain-text passwords.
We have certain types of information which can be exposed, sometimes in more brutal ways, sometimes just by attackers finding stuff in memory which has not been deleted again. So there are quite a number of these things. And also for contractors, for new employees, for bring your own device, and for partner access, our approach to managing accounts is not always perfectly good. So there are quite a number of areas where attackers can come in. And I would say, at the end of the day, one of the most relevant ones is still the password.
Passwords are very phishable, and they are used in quite a number of attacks. As I've already said, a lot of these attacks leverage identities: usernames, credentials, passwords, tokens, tickets, all that stuff. And what we have seen in the past couple of months is more and more MFA-targeting attacks. So attacks against MFA which try to overcome the inherent strengths of MFA. And there are various ways to do that.
And I think we always need to be very, very clear that the attackers, at least the ones who have a clearly defined target, who are really going for targeted attacks, are sophisticated, and they can use quite a number of types of access. They can work with insiders, or even be insiders. They can ask for information on the dark web. They can use every type of reconnaissance, phishing not to forget. They also sometimes just use brute force attacks, depending on what they are attacking.
And the older and weaker the technology is, the more likely it is that a brute force attack can succeed. So when we look at, for instance, the MITRE ATT&CK matrix, which probably all of you know, this is the matrix which looks at a range of attack techniques, how they can happen, and so on. It is a very good starting point to understand which different types of attacks there are, how they work, and what happens in these attacks. I don't want to go into detail; I think this framework is well known, easy to find, easy to access. So this is the starting point.
But what I want to do is look at where identity comes in when we look a bit closer into the MITRE ATT&CK framework. There we have the reconnaissance phase: phishing, probing against authentication services, gathering victim IDs, and so on. And then the attacker tries to establish accounts or compromise victim accounts to get into the systems. From there, the initial access again may go through phishing, through supply chain compromise, through exploiting trust relationships, et cetera.
And then it goes into the execution of code, malware in some way, on the system. Stay there, be persistent, edit accounts, create new accounts, depending on where you already are. The more powerful you are in that system, the more power you have as an attacker, the more you can do there: modifying policies, whatever is feasible, finding weak spots in the system. So staying persistent, then escalating privileges. Go from there and say, I try to get more. This is really for the targeted attacks: abuse weak spots in the software, unpatched software, privilege elevation controls.
Whatever is feasible, maybe even domain controllers and stuff like that, depending on how deep you as an attacker already are in the network. Then work against the defense: manipulate stuff, masquerade, all these things where you can try to hide who you are and that you are in the system, et cetera. Then there's credential access. There are a ton of options for doing it. I don't want to go into all the details, but just look at the pure list: adversary-in-the-middle, man-in-the-middle, stealing tokens, forging stuff.
There are so many different ways to access credentials and to do fraudulent activities around identities as part of attacks that it becomes clear it's not easy to defend against. Discovery: who's out there, et cetera. Lateral movement: moving into other systems, starting in one system, moving across the network.
This is, by the way, where Zero Trust started. It was born in the sense of: we need to avoid lateral movement. And the impact at the end: denial of service, accounts that are removed, services that stop, resources that are hijacked, and all the stuff that can happen with hijacked resources. Data leakage, blocking services, whatever. There are many, many things an attacker can do once they are in. And I think it's an interesting exercise; I just did it at a very high level.
But there are so many things within this MITRE ATT&CK framework that, if you look in more detail and map them to identity-related threats, it becomes very clear we must understand what is going wrong and whether identities are used the way they should be used. There are quite a number of things to do here. This is about monitoring real-time events. This is about detection, and this is really where ML comes into play; Andrea will give you a lot of insight in his part of the talk. At the end, we need to understand where the anomalies are.
Where are things happening that are not what we expect to happen? So only then we can respond. Only then we can do our deception work. This also requires that we do a better identity proofing.
This is, before everything, knowing who the identity is, better authentication, all that stuff. And we should use as many signals as we can from the devices that are used and from the behavior of the user. That also includes behavioral biometrics and many other things. So what we need to do is monitor, and apply a strong security posture from the very beginning: device binding, strong authentication, passive authentication, behavioral biometrics, whatever. And only then can we move to response and deception, because only then can we detect.
The detection, on the other hand, is the challenging part, because we are talking about very large numbers of users and commonly about a huge number of signals. And we need to respond relatively quickly. But we also need to understand what is changing in our systems. Is the state that we have in our systems still the to-be state, or did something change? And how can we mitigate this? How can we harden and reduce the risks from the very beginning? So that is identity threat detection response. This problem of identity-based attacks is getting worse.
But on the other hand, solutions are getting better. And a very important part in that is played by everything which is done well and based on AI and ML. Why? Because it helps us deal with a huge number of signals: we can apply it to understand where the outliers in our entitlements are, where the anomalies and outliers in our authentication are, et cetera. So we have this corrosive cybercrime. We need fraud reduction intelligence, which is really more about understanding where fraud is. And we need identity threat detection response.
And this, as I've said, requires a far more extensive use of ML detection models at various stages and various levels. So we need to bring in these technologies in addition to what we have in standard IGA solutions. This brings me to my second poll, and then I'll hand over to Andrea. The second poll asks: how would you expect intelligent, ML-based IGA solutions to impact your IAM workload and processes? So if you apply these solutions, where can they help you best? Is it in the daily routine work, for automation?
Is it in better role management and better access entitlement controls? Do you see it as part of Zero Trust, the continuous verification part? Or is it more an improvement in compliance with regulatory requirements? Looking forward to your responses. So come on, I know it requires a bit of reading and thinking. To phrase it simply: where do you think AI helps most in identity management? We'll leave it open for another 10 seconds.
Okay, and that brings us right now to Andrea. While I've been talking about the broader concept of identity threat detection response, Andrea will now go way deeper into detail and say, okay, how can this actually work? Good afternoon, everybody, or good morning for those connected from the U.S. Thanks for joining us. And thank you, Martin, for introducing this basic concept, which states that behind any cyber threat, there is an identity hack. That can be an insider threat, some malicious insider.
Or it's some hacked identity that someone else is abusing from the outside. So this is a great intro for explaining what we at ShareLock do.
So, first of all, my name is Andrea Rossi. I've been in this space of identity-centric something for a while. Before ShareLock, I co-founded a company called CrossIdeas, which we then sold to IBM. And ShareLock is, in a way, built on the core of the CrossIdeas team, continuing a journey of an identity-centric mentality.
So, if you look at what we do, we are essentially a phenomenal behavioral anomaly detection engine, built on the obsession that every security risk and threat should be detected using behavioral anomaly detection. And we have pushed that so much to the limit that today our engine can offer two macro use cases. One is identity threat detection response, which is the topic of today and works on top of the ingestion of human and application telemetry.
And on the other side, there's the newest toddler of the family: the ability to monitor security anomalies on Kubernetes clusters, which is a very important, unprotected new domain for cybersecurity. Before we get to the further slides: we'll be addressing essentially four questions today.
So, first of all, I want to explain a little more the way we do machine learning. I don't like the term AI; I'd rather use the term machine learning, and explain how it can help identity management and security operations. And then we'll continue with the other questions. But first I want to clarify how identity threat detection and AI/ML are connected. And I want to do that by clarifying the terms a bit, because I realize there is a bit of hype in the market, but there's also some disillusionment about using AI and machine learning too much.
So, I just want to bring us back to the same page. First of all, machine learning works in opposition to programmed systems.

So, in the case of machine learning, the system learns from data and reacts based on the data it has learned. A lot of security out there is still based on programming: you try to tell the computer what data points it should detect in order to find a threat. And as you can clearly see from Martin's description, the world is quite complex if you want to describe it in programmatic terms.
Now, when it comes to anomaly detection, we first need to find behavioral habits, which are technically called baselines. And that is typically done in opposition to the old way of doing it, which is basing the detection on thresholds. A very simple example: what is the risky threshold for failed logins?
Well, it's five for me. It's maybe one for Martin. It's probably three for any one of you.
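To make the failed-login example concrete, here is a minimal Python sketch of a per-user baseline. The mean-plus-three-sigma rule and the sample data are purely illustrative, not ShareLock's actual model:

```python
from statistics import mean, stdev

def personal_threshold(history, k=3.0):
    """Per-user anomaly threshold: mean of past daily failed-login
    counts plus k standard deviations (a simple baseline heuristic)."""
    if len(history) < 2:
        return float("inf")  # not enough data to judge yet
    return mean(history) + k * stdev(history)

def is_anomalous(history, today):
    """Compare today's count against this user's own baseline."""
    return today > personal_threshold(history)

# Two users with very different habits: the same absolute count of
# 5 failed logins is normal for one and a strong outlier for the other.
fat_fingered = [4, 6, 5, 7, 5, 6, 4]   # routinely mistypes the password
precise      = [0, 0, 1, 0, 0, 0, 0]   # almost never fails

print(is_anomalous(fat_fingered, 5))  # False: within personal baseline
print(is_anomalous(precise, 5))       # True: far above personal baseline
```

A single global threshold of five would have flagged the first user every day and never flagged the second, which is exactly the point about personal baselines.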
So, you need to look at personal baselines rather than generalized thresholds. When we talk about behavioral habits in security, there are two ways of doing that. One is building and monitoring behaviors using an IOB, or indicator of behavior, which is in opposition to, I would say, a more traditional way of finding threats using IOCs, indicators of compromise.
Now, what an indicator of behavior finds is basically a behavioral anomaly. That could be an anomaly in the way you navigate through your SAP transactions, or the way you access the SharePoint folder, or the way you use the browser.
And so, in the world of behavioral anomaly detection, we detect, as I said, anomalies over a baseline or a set of habits. And that's in opposition to what security typically does, which is finding footprints.
So, that's how we get to finding threats. But the very good side effect of what we do is that, for us, providing a recommendation, an identity-centric recommendation for adding, removing entitlement, comes out of the same ability to find anomalies.
Now, going back to behavioral habits, there are two ways in the market of doing that. By the way, the bold part is the way we do it at ShareLock.
So, there is an unsupervised way and a supervised way. In the supervised way, you basically need to massage and tag the data set on which you train the machine learning algorithm. That's in opposition to the unsupervised way, where the system can learn on the data as they are, without any tagging.
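As a toy illustration of the unsupervised idea, with invented event fields: the detector below is fitted on raw, untagged login events, with no labels at all, and learns a user's habitual activity hours.

```python
def fit_unsupervised(events):
    """Learn a baseline from raw, untagged events: here, simply the
    set of hours of day at which a user has historically been active."""
    return {e["hour"] for e in events}

def score(baseline, event):
    """Distance (in hours, wrapping around midnight) from the nearest
    habitual activity hour; 0 means the behavior was seen before."""
    return min(min(abs(event["hour"] - h), 24 - abs(event["hour"] - h))
               for h in baseline)

history = [{"hour": h} for h in (8, 9, 9, 10, 17, 18)]  # office hours
baseline = fit_unsupervised(history)

print(score(baseline, {"hour": 9}))   # 0: habitual
print(score(baseline, {"hour": 3}))   # 5: 3 a.m. is far from any habit
```

Nobody told the system which events were "good" or "bad"; the notion of normal emerges from the data itself, which is the essential difference from the supervised approach.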
And last, but not least important, it's very important, in our opinion, to have a model that is explainable. What I mean by explainable is that you can track down the set of anomalies, explain what was anomalous, and backtrace the origin of a threat as a composition of several anomalies. That's in sharp contrast with another machine learning technique, neural networks or deep learning, where you basically have an output, but you can't really trace back why that output, a threat, high-risk, or low-risk, was calculated.
And this explainable machine learning model is very important in light of the upcoming GDPR equivalent for AI and machine learning, the European AI regulation, which will try to put some basic principles of transparency and ethics, among other things, into the way you do machine learning. So, that was just to set the stage.
Now, you know, I am in security, and security is all about trust, right? And that's psychology; it has nothing to do with technology. There are two kinds of trust. One is granted trust, where you basically entitle somebody with a set of permissions and say, oh, I trust you, because I define now what you're trusted for. Versus gained trust. Gained trust is: well, I don't trust you that much yet, but behave well, and you'll gain more authority in my eyes.
So, if we look at how these two, say, psychological principles are translated into identity management today, we see that identity management today has a lot of prescriptions, meaning a very complex role model, super granular, super prescriptive, and very little ability to detect anomalies in what the actual identities, not just users, but also applications and other identities, are doing. We all know this is becoming slow and also expensive to maintain.
So, what we believe at ShareLock is that future IAM should come with an infusion of gained trust, where prescription remains important, but the ability to detect and react to anomalies resulting from people's behavior becomes far more important. What I mean by a smaller infusion of prescription could be a less sophisticated role model, roles that are broader than previously designed.
So, I think you understand the principle. That is basically cheaper to design, but it's also faster in terms of adapting to a sudden change of circumstances, which, as I think you well realize, is the defining feature of cybersecurity today: the threat landscape is constantly changing.
Now, there is another aspect that we are putting into the equation, which is the human firewall. Today, we completely forget about asking the end user's opinion on a number of things.
So, if we detect something which we consider an abnormal pattern, we ask the user: well, you know, in this case we're not totally sure; this might be a data exfiltration, or there might be an account takeover in this specific situation. So, what do you think, for example, dear manager of the person that account belongs to? And the manager might say: well, dear security, I'm the human firewall here.
No, it's not normal. I can tell you it's not normal. And by the way, thanks for asking.
So, we capture that feedback because there is no way security can do security without, you know, extending the perimeter of contribution. That's what we call a human firewall.
Now, let's explain a bit, and that's, again, our interpretation of identity threat detection response: how, where, and why it helps. So, first of all, who benefits from an implementation of whatever you might call ITDR?
Well, first of all, we think that the IAM team, which is typically a separate buying center within many organizations, should be able to protect themselves and augment their IAM infrastructure with a closed-loop detect-and-react capability. That is the ability to self-remediate, for example by locking an account, identity-centric threats, which are typically either account takeovers or insider threats, someone misbehaving from the inside.
And the other thing which is very important is to use some of the analysis that we are able to perform to reduce complexity: removing accounts, removing access to things that might not be needed. So you reduce the attack surface by trying to implement a least-privilege set of recommendations. But there is the other side of the equation that ITDR, you know, glues together, which is security operations.
If you look at the people dealing with the traditional security operations center, they are able to detect identity threats out of endpoints, infrastructure, and network, but in reality they can make little sense of any log file coming out of the identity world or the surrounding business applications. So, we strongly believe that ITDR is the identity probe, the translation layer, that can provide security operations people with identity-centric threats that they can correlate with other types of threats they might be detecting with more usual or traditional security approaches.
So, now, let's ask ourselves the question, where do we place ITDR in the large scheme of things? Well, we say ITDR, it's like a Rosetta Stone for companies trying to build a less silo-based approach to security. And as I said just a minute ago, this identity threat detection response is the identity probe from traditional security operation into identity, but it's also the way for identity people and systems to show a real contribution to a larger, and I will say more complete, security posture.
Now, talking a bit about the way we do it, ShareLock in action: at the end of the day, we do simple things, on the surface. We ingest user activity, and also application activity, out of what we call the business applications. That can be SAP, the traditional Microsoft 365 world, Salesforce, GitHub, imagine the whole world of applications that users are working with, and also the audit trail coming out of access management systems, IGA systems, or converged platforms. We ingest this bunch of data and we find habits.
Habits, and it's plural: habits for humans, habits for machines, habits for system credentials, and different types of habits that we can correlate, correlate the right way, to find a specific threat and then remediate. Now, it's very important to state here that in our interpretation of ITDR, looking just at the IAM audit trail is not enough.
If an account gets hacked, you might get some information out of the access layer, but it's also important to understand what that account, in the coming days or hours, could be doing on some surrounding application, because that's the only way to really detect a qualified threat and avoid false positives. So, it's very important to correlate data across multiple domains, and that's also a shared opinion across analysts: just monitoring AD or just the IAM endpoints is not enough.
Okay, if you just do that, you're wasting your time and money. Now, probably one of the simplest scenarios that we encounter is that a lot of people want to understand the risk of external threats or insider threats. Because it's important to state that with this approach, you can find the bad guys from the outside, but also the bad guys that are already inside, the so-called malicious insiders.
For us, it doesn't make too much of a difference, assuming that we can correlate anomalies across a set of applications. So, a very common scenario, especially fashionable nowadays, is to look at the complexity of user activities out of Microsoft 365 and combine that with the access and authentication anomalies that can be found while logging into Azure Active Directory itself, or Okta, and also combine that data with the VPN logs.
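A rough sketch of that cross-domain correlation, with invented account names, domains, and signal details; the point is simply that a threat is only raised when independent sources agree:

```python
def correlate(anomalies, min_domains=2):
    """Group per-domain anomaly flags by account and raise a threat
    only when independent domains agree, which suppresses the false
    positives a single noisy source would generate on its own."""
    by_account = {}
    for a in anomalies:
        by_account.setdefault(a["account"], set()).add(a["domain"])
    return {acct for acct, domains in by_account.items()
            if len(domains) >= min_domains}

# Hypothetical signals: an impossible-travel login alone is weak, but
# combined with an unusual bulk download it becomes worth escalating.
signals = [
    {"account": "a.rossi", "domain": "authn", "detail": "impossible travel"},
    {"account": "a.rossi", "domain": "m365",  "detail": "bulk download"},
    {"account": "j.doe",   "domain": "vpn",   "detail": "new egress country"},
]
print(correlate(signals))  # {'a.rossi'}: j.doe fired in only one domain
```

Lowering `min_domains` to 1 would surface every single-source signal, which is exactly the flood of false positives the correlation is meant to avoid.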
So, you couldn't believe how many bad things can be detected if you just look at even some basic anomalous behavior out of these data sources. Now, one very important thing in ITDR is that as we are finding an identity-centric threat, which is: that account of Andrea is, well, risky, you have options to remediate it in different ways, but still using the IAM system itself.
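As a sketch of what such an IAM-native remediation hook could look like: the endpoint URL, payload shape, and threat-type names below are invented for illustration; a real integration would use the IAM or IGA platform's own API.

```python
import json
from urllib import request

IAM_BASE = "https://iam.example.com/api"  # hypothetical IAM endpoint

def remediate(threat, dry_run=True):
    """Map a detected identity threat to an IAM-native remediation:
    suspected takeovers get the account locked, while over-entitlement
    findings trigger a recertification campaign instead."""
    if threat["type"] == "account_takeover":
        action = {"op": "lock_account", "account": threat["account"]}
    else:
        action = {"op": "start_recertification", "account": threat["account"]}
    if dry_run:  # in production this would POST to the IAM/IGA platform
        return action
    req = request.Request(f"{IAM_BASE}/actions",
                          data=json.dumps(action).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

print(remediate({"type": "account_takeover", "account": "a.rossi"}))
```

The same detection output could also be forwarded to a SOAR platform; the point is that the closing of the loop happens in the identity layer itself.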
So, a very simple example: you could log the user off in case of suspicious activity, or block the account, or you might trigger a recertification campaign on the IGA platform. So, the remediation takes place in the IAM system itself, besides maybe sending that information to a security orchestration and automation platform. All right. We think there are key requirements for ITDR, and again, this term is quite new, and in a way it's blurry: everybody puts their own boundaries on the definition of ITDR.
So, these are what we think are the key requirements for a successful one. Of course, it's biased.
I mean, that's what you expect from every vendor. So, we think that the first requirement, and it's already been said a bit, is that it must deliver value to both identity and security stakeholders.
I mean, the two churches are coming together, but in reality there are still two different teams inside the company, with two different mentalities, maybe reporting to the same CISO, but still with different mentalities. So, ITDR must contribute to security operations with external identity-centric threats in an alphabet that they understand, and they understand the world of: this is a threat, a situation I need to manage. If you look at the world of IAM people, they might also be interested in insider threats.
Some bad guy internally, and also in recommendations or IAM hygiene insights to reduce entitlement complexity and the attack surface. So, not necessarily something which is a threat, but just a set of recommendations to keep the house cleaner. Second requirement, and I will never get tired of explaining this, because this is the area where the fluffiness out there in the market is marvelous.
So, we spent a hell of a lot of time to give you the ability to monitor any baseline of behavior on humans and entity attributes, and to detect all the possible sets of anomalies that are required. That's why we develop, maintain, and design our algorithms for detecting anomalies across a spectrum of requirements, because the machine learning algorithm that allows you to understand an access anomaly is different from the one that detects time anomalies, occurrence anomalies, or risk.
Whatever we've built, it's built on the principle of flexibility, because the only way to detect threats is to be able to understand many anomalies, and just the correlation of many anomalies can filter out the real threat you should spend your time on. So, there are a number of technical details here, which I don't want to bother you with, but just to be informed, all this stuff got a patent for the behavioral baseline architecture, and there is a reason for that.
Again, it doesn't have to surface at first, but it does have to have a baseline. That's, in fact, what we do. We don't expose it, but it's important for you to know that this flexibility is important in the long run.
Now, last but not least: in order to be able to get to market quickly, and to be able to get external and internal threats out of this sophisticated IOB, indicator of behavior, engine, we built what we call a reference model, an ITDR reference model, which is a combination of integrations with the most common access management and IGA platforms and business applications. Most importantly, it's not the technical integration that matters; it's having ready to use the set of anomalies that you might want to detect.
Believe me, regardless of whether it's SharePoint, or Box, or a G Suite equivalent, Google Docs, it doesn't matter: the anomalies that you want to look for are always the same. An anomalous download, an anomalous folder that you've been accessing, and so forth. The same applies to SAP or transactional applications. The same applies to, for example, the access management platform. The attributes that they share, upon which we can detect anomalies, are always the same.
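The idea that the same anomalies apply regardless of the application can be sketched as a normalization layer. The raw field names below are made up, not real SharePoint or Google audit formats; the point is the shared schema on the output side:

```python
def normalize(source, raw):
    """Map application-specific audit events onto one shared schema so
    the same indicators of behavior (anomalous download, anomalous
    folder access, ...) can run unchanged against every source."""
    if source == "sharepoint":
        return {"actor": raw["UserId"], "verb": "download",
                "object": raw["SourceFileName"], "bytes": raw["Size"]}
    if source == "gdrive":
        return {"actor": raw["actor_email"], "verb": "download",
                "object": raw["doc_title"], "bytes": raw["size_bytes"]}
    raise ValueError(f"no connector for {source}")

# Two very different raw events collapse onto one comparable shape.
e1 = normalize("sharepoint", {"UserId": "a.rossi",
                              "SourceFileName": "plan.xlsx", "Size": 9000})
e2 = normalize("gdrive", {"actor_email": "a.rossi",
                          "doc_title": "plan", "size_bytes": 9000})
print(e1["verb"] == e2["verb"])  # True: one IOB covers both apps
```

Once events look alike, one anomalous-download detector covers every connected application, which is what makes a reference model reusable across platforms.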
So, this is probably, I mean, the best value that you guys can get out of this, because it brings you in at, I wouldn't say, day zero; it takes a bit of time to teach the system and to understand what the historical baselines are.
So, we typically use three months of historical data, if present; otherwise, you need to wait a bit to ingest and learn. Keep in mind, we have no rules, roles, thresholds, whitelists, blacklists, nothing. The system learns the typical behaviors and detects the anomalies.
So, the beauty in the long run is that you don't have to manage it, because it manages itself, in an autopilot way. Now, there is one product capability that I want to emphasize, because it's important to explain that recommendations result from the same behavioral engine, but they serve a different purpose.
So, one very cool thing that we do is that we compare, with peer-clustering techniques, the data that we get out of an IGA platform. It could be SailPoint, or One Identity, or even my former CrossIdeas, now the IBM IGA platform, and see how the users should be clustered. That's sort of mapping the role definitions that you have done; that's the theory from the books. But then you have, on the right, the as-is, which is based on the actual utilization of applications and entitlements, and there you do a different clustering. What's the purpose of it? The purpose is to measure a gap.
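A toy way to measure that gap between the as-designed role model and the as-used entitlements, here as a Jaccard distance over invented entitlement names; real peer clustering is of course far more sophisticated:

```python
def gap(assigned, used):
    """Per-identity gap between entitlements granted by the role model
    (as-designed) and entitlements actually exercised (as-is):
    one minus the Jaccard similarity, so 0 means a perfect fit."""
    union = assigned | used
    return 1 - len(assigned & used) / len(union) if union else 0.0

def recommend_removals(assigned, used):
    """Least-privilege suggestion: granted but never exercised."""
    return assigned - used

granted = {"sap:fi_post", "sap:fi_view", "crm:admin", "git:push"}
exercised = {"sap:fi_view", "git:push"}

print(gap(granted, exercised))                  # 0.5: half the grant is dead weight
print(sorted(recommend_removals(granted, exercised)))
```

The smaller the gap, the closer the role design sits to reality; the removal list is the concrete recommendation that shrinks the attack surface.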
And based on that gap, provide recommendations to close it. This gap will never be fully closed, but the smaller, the better. In the old days of compliance, we called it better compliance. In the days of cybersecurity, we say a much-reduced attack surface. We're now towards the end, and I want to connect you with another interesting term that you might or might not have heard about, which is CWP. That stands for Cloud Workload Protection. What is it? As I said at the beginning, imagine the same engine, but two different sets of data telemetry we connect to.
So Cloud Workload Protection is something we built out of the same behavioral engine on a different telemetry, which is Kubernetes telemetry. I will explain why we do that in a second. But for the time being: a year ago, we realized from a market signal that there was a huge demand for managing runtime security on Kubernetes workloads. Guess why? Because RBAC doesn't work too well there either. So there are a lot of open holes in Kubernetes-based architectures, which are running significant, business-rich workloads.
So what we do here, again, think about the same engine, different telemetry. The telemetry is about system calls and network calls, and there is a sophisticated way to collect that on Kubernetes. The longer-term projection, on the ITDR side, is that you are essentially monitoring the human, cloud application, and API side of the equation: we look at identities using applications or APIs, and we signal anomalies.
But imagine we are also able to look at what is happening in the basement, on that infrastructure. Imagine a world where everything runs in a cloud, containerized way; of course, that's a bit of a philosophical view, but it's where we're going. The combination of anomaly detection will put ShareLock in a new position to correlate anomalies from human application behavior with infrastructure behavior, network calls and syscalls, again in a Kubernetes environment, which is already becoming the standard for the containerized cloud application world. That's the last slide. I hope I'm on time.
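To give a feel for what "same engine, different telemetry" means at the Kubernetes layer, here is a toy sketch that flags syscalls a workload never made during its learning window. The syscall lists are invented, and real collection (eBPF-based tooling and similar) is far more involved than this:

```python
from collections import Counter

# Hypothetical telemetry: syscalls observed for one container image
# during the learning window. This frequency profile is the baseline.
baseline_window = ["read", "write", "openat", "recvfrom", "sendto"] * 200
baseline = Counter(baseline_window)

def novel_syscalls(observed):
    """Flag syscalls never seen in the learned baseline, e.g. a shell
    spawn (execve) or debugger attach (ptrace) inside a web container."""
    return {s for s in observed if s not in baseline}

runtime_batch = ["read", "write", "execve", "ptrace"]
print(sorted(novel_syscalls(runtime_batch)))  # ['execve', 'ptrace']
```

The same set-of-normal-behaviors idea applies whether the events are folder downloads by a person or syscalls by a pod, which is the point of reusing one behavioral engine across telemetries.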
So the punchline is very simple, as I told you briefly at the beginning. ShareLock is take two of a company that we sold under a different name. So it's a different company, of course, but it shares a bunch of the same people, including myself. It's privately owned and self-funded; we're quite an exception, as we were back in the CrossIdeas days, in managing the company ourselves without any VC. And our mission is very simple: we want to make behavioral anomaly detection the new normal for digital security.
With that, I think we can open it up to any questions you might have. In the meantime, thank you very much. And here are the website and the email in case you want to reach out to us.
Thank you, Andrea, for that insightful presentation. Let's go back to the agenda, and the next part is quite straightforward: the Q&A. We already received a couple of questions, so let me start with this one. Identity threat detection and response feels more like a tool for a security operations center rather than for the IAM team, given that it needs expertise in threat detection and response. Could you elaborate a bit more on that? I think it's probably somewhere in between; it requires both, I'd say.
Oh, you're right, and that's a fair question. I think ITDR is a discipline, a practice, beside the tooling. It's the first attempt to connect two worlds that today are separate. The mentality of IAM people is not really to detect and respond to things.
I mean, they program the system and sometimes they audit it, but that's no longer enough. So in a way, this detect-and-react ability is much closer to the way traditional security operations have always worked. But I think there is no way around IAM teams adding some of that component to their domain of management. The first part is that you don't have to manage a threat yourself; you might send it to someone else, and still be useful in the eyes of the company.
And so people are delivering value to the broader security posture of the company, which is not always the perception in many companies. And second, always keep in mind that the recommendations and the correlation of anomalies can give IAM people a number of beneficial insights for optimizing the platforms they are already running. That's what we call recommendations. So in a way, yes, it smells initially more like security operations. But in reality, I strongly believe that identity management people need to stretch a bit into that domain.
And as I exaggerate when joking with some friends: guys, you need to come up with some night shifts. Maybe not 24 by 7, but you need to look after your system, rather than assuming someone else is doing it.
Okay, great answer. And I think the other question is also a bit related to that, and it is: identity threat detection and response feels very similar to SIEM and UEBA, that is, user and entity behavior analytics. How are they different? So how is it different from SIEM, and how is it different from UEBA?
Yeah, that's a very common question. The only thing that SIEM and ITDR have in common is the data ingestion, but that's like sharing the first letter of the alphabet. The reality is that in the case of SIEM, it's all about keeping and storing the data, but there is no concept of an identity threat data model inside a SIEM system. And the little user behavioral patch that many SIEM vendors have added will not be sufficient for that. And I'm glad to see an already established position out there in the market saying: you can't find identity threats using a SIEM.
And in fact, we see more and more clients finally coming to us saying: yeah, you know, you're right, I can't do that with my SIEM system, whichever one it is. So the bottom line is that SIEM systems have no concept of an identity-centric data model inside them; that's why they can't do it. It's not specific blame on any vendor; it's the conceptual design that lacks the capability. Then there is user and entity behavior analytics.
Well, that's something that started in principle many years ago, and it had some nice ingredients, but it was too much in favor of just looking at analytics and alerts. Those were very coarse-grained and premature, and I would say primitive in terms of the machine learning techniques they were using ten years ago. So what we have done is nothing more than take the good stuff out of the old UEBA concepts and expand it to the requirements of what we need today, where, again, the only way forward to cope with this ever-changing landscape of threats is to look at anomalous behavior.
There is no way you can program for it. Okay, one more question here, and it's one where, I think, you're taking a bit of a different approach. Can the ShareLock solution also be integrated with access management systems, to detect whether an ongoing transaction might be a threat and then trigger further responses? Or is it really more on the IGA side of things?
No, we do integrate with pure access management platforms like OneLogin, Okta, Azure Active Directory, and the others. They all share the same kind of audit trail, where we add value.
Of course, we can say there is a risk on top of your access management platform. But I think most of those vendors are building something which is good enough for real-time threat detection confined to access management. The value we bring is the ability to correlate that anomaly with a lot of other surrounding anomalies, because an attacker's desire is not just to crack the Okta or OneLogin layer; it's to crack that and then do something else elsewhere.
So that's where we add most of the value: when you realize that just the real-time fraud detection of a changed browser or a changed location is no longer enough. That's just a tiny portion of what you need for identity threat detection. So we add value to any access management platform that might lack that capability, but the real value is really the correlation of those identity anomalies with the rest of the business applications. Okay, a final question, at least of the ones I have here: is there anything you should consider regarding GDPR?
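The correlation argument, that an access-layer anomaly matters most when it co-occurs with anomalies elsewhere for the same identity, can be sketched as a simple time-window join. The events, source names, and one-hour window below are illustrative assumptions, not a description of ShareLock's engine:

```python
from datetime import datetime, timedelta

# Hypothetical anomaly events from different telemetry sources,
# each tagged with the identity involved.
anomalies = [
    ("alice", "okta", datetime(2024, 1, 10, 9, 2)),   # impossible-travel login
    ("alice", "sap",  datetime(2024, 1, 10, 9, 20)),  # unusual transaction
    ("bob",   "okta", datetime(2024, 1, 10, 11, 0)),  # new browser only
]

def correlated(events, window=timedelta(hours=1)):
    """Group anomalies by identity. An access-layer anomaly alone is a
    weak signal; paired with a business-app anomaly for the same user
    inside the window, it escalates to an alert."""
    by_user = {}
    for user, source, ts in events:
        by_user.setdefault(user, []).append((source, ts))
    alerts = []
    for user, evts in by_user.items():
        sources = {s for s, _ in evts}
        times = [t for _, t in evts]
        if len(sources) > 1 and max(times) - min(times) <= window:
            alerts.append(user)
    return alerts

print(correlated(anomalies))  # ['alice']
```

Only "alice" triggers, because her login anomaly is followed by an SAP anomaly within the window; "bob"'s single changed-browser event stays a low-grade signal.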
GDPR, things like the German works council, etc.? Oh, that's one of my other favorite questions.
You know, that's the number one FUD topic. So let's face reality: most security people are just scared of talking to HR and legal. The sheer reality is that labor regulation in any country in Europe, and I'm talking about Europe here, and GDPR allow any company to protect itself against threats. So what we do is not look at what people are doing; we just highlight security situations, and the investigation comes afterwards. In practice, we don't do anything different from what you might already be doing with a SIEM system.
Now, the only caveat is that we sometimes use words like behavior or habit, and then someone links that to: okay, you're monitoring my productivity. No, no, no, we don't care about that. We take the data, we find the baselines that are meaningful for security, and we just highlight the security situation. And under GDPR, every works council says: you have the right to protect yourself, dear company, but only in very specific situations, which are highlighted as the result of very risky factors.
Okay, perfect. Thank you, Andrea. Thank you to ShareLock for supporting this KuppingerCole Analysts webinar, and thank you to all the attendees. I think this was a very interesting one again, and I hope to see you back soon at one of our other webinars, or at our European Identity Conference in May in Berlin. Thank you. All right. Ciao.