Well, welcome everyone to our Videocast today. I'm John Tolbert, Director of Cybersecurity Research here at KuppingerCole. And today I'm joined by Chris Hallum, who's the Director of Product Marketing of the Tanium Platform. Welcome, Chris.
Yeah, and thanks for having me. Really great to speak with you today.
Yep, looking forward to it. So Tanium is launching Autonomous Endpoint Management. Can you tell us how you're thinking about that and what's involved?
Autonomous endpoint management is the convergence of UEM, DEX [Digital Employee Experience], vulnerability management, and potentially other endpoint domains, where AI is used to automate the majority of endpoint tasks. AEM seeks to significantly reduce administration and enable the reallocation of resources towards employee enablement and innovation activities. And so that might sound fairly similar to where we are today. The thing that's different is that these types of products will be bringing in AI to increase the volume and the types of automation used in environments. And so it's really going to be expansive. Today, automation is used in maybe a fraction of the scenarios where it could be, and AEM hopes to broaden it into far more use cases.
Yeah, it sounds like it's a next generation of what we had previously known as unified endpoint management. Would you agree with that?
Yeah, I mean, I think if we look at a lot of these endpoint management products in domains like UEM, DEX, and vulnerability management, it was kind of an inevitable, natural evolution that they would converge, right? If you look at UEM, it's about putting a machine into a desired state, patching it, keeping it up to date, making sure the settings stay where they're supposed to. And then DEX, of course, is the other side of that. It's almost like server monitoring, except for the client: let's make sure that endpoint is actually as healthy and as productive for the end user as it can be. And of course, vulnerability management is about keeping endpoints as resilient against threats as possible by keeping them patched and putting them into the most secure state possible. Having these as separate products made sense at the time, because that's how the world emerged, but now is the time to bring them together. And AEM, as the new market category, is the thing that's going to bring those together under one unified roof. What's interesting is it's going to bring together teams that normally struggle to work together, like the security team and the IT ops team. Since these products will be unified, they'll bring those teams together, and they'll be able to get their jobs done where they intersect far quicker than they have in the past.
Maybe we could remind everyone what UEM contains in terms of functionality. You know, it's hard to protect what you don't know you have. So, as we like to say in our research, it starts with asset discovery and classification, doing an inventory of what's on all the affected endpoints. Then there's endpoint and patch management. Patch management has long been preached as something we need to do to decrease our risks: as soon as patches are available, get them in place as expeditiously as possible. But the endpoint management side also includes things like simple performance monitoring, CPU monitoring on all your endpoints. This is where you're most likely going with autonomous endpoint management. The incident response piece today is largely manual processes. You can do things like contain an endpoint, do isolation, and many products allow for things like remote wiping. But as for how widely used that is in customer environments, I think we're not quite there yet in terms of really fully automating it, both in terms of the capabilities the products have and maybe also the willingness on the part of management to use automation. What do you think about the role of automation, especially for endpoint management?
Yeah, you know, it's funny, if you look at every product out there, automation is probably a pillar of it, or it's at least mentioned. But the reality is, my perception is that most organizations and vendors out there are providing automation plumbing. They're not always providing the automation itself, or at least not to the degree that you'd like. And part of that is that automation can be inherently dangerous. The challenge with automation is that if you're going to roll out any sort of change, whether it's a patch or a configuration change, and you're going to apply that at scale, there's a risk of something going wrong. And the challenge with UEM products, with every endpoint management product for the most part, Tanium being a little different as we'll get to in a minute, is that if you don't know the real-time state of those endpoints, if you don't know whether they're healthy, or what configuration state they're truly in right now, because an hour ago they may have been in one state and in this moment they may be in a different state, then you roll that automation out and you may actually create more harm than good. That's why I believe the potential of automation has never been realized: everybody is inherently constraining the type of automation they'll do because they don't really know what state their environment is in. They have a general sense of what it is. For one endpoint, that data may be a minute old; for another endpoint, it may be days old. You just don't know, and it's everywhere in between. So I think the limiting factor on doing automation at scale has been the lack of true real-time data. There are a lot of organizations out there that talk about real-time, but to many of them, real-time is a gradient. To Tanium, real-time means now. We're able to gather data across a million endpoints, as we recently tested, harvest all of the data for a specific setting as an example, and bring that back in seconds. And we're the only system designed for that type of communication. Because of that, we're really excited about our potential to take advantage of AI, which would love to be fueled by real-time data, and then of course the automation that can be run, because you'll have the confidence that when you roll this patch out to this system, you know it's healthy right now, you know what state it's in right now, you know it's not rebooting right now. And we can deploy that and be successful in the delivery of whatever automation task needs to happen.
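[Editor's note: to make the pre-flight gating Chris describes concrete, here is a minimal Python sketch. The endpoint fields, freshness threshold, and health checks are illustrative assumptions, not Tanium's actual data model or API.]

```python
# Hypothetical sketch: gate an automated change on fresh, healthy endpoint state.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Endpoint:
    hostname: str
    last_seen: datetime   # when this endpoint last reported its state
    cpu_load: float       # 0.0 to 1.0
    rebooting: bool

MAX_STATE_AGE = timedelta(seconds=30)   # "real time" means seconds, not hours
MAX_CPU_LOAD = 0.85

def safe_to_change(ep: Endpoint, now: datetime) -> bool:
    """Only act on endpoints whose reported state is both fresh and healthy."""
    fresh = (now - ep.last_seen) <= MAX_STATE_AGE
    healthy = ep.cpu_load < MAX_CPU_LOAD and not ep.rebooting
    return fresh and healthy

def plan_rollout(endpoints: list[Endpoint], now: datetime):
    """Split the fleet into endpoints to act on now and endpoints to defer."""
    act = [e for e in endpoints if safe_to_change(e, now)]
    defer = [e for e in endpoints if not safe_to_change(e, now)]
    return act, defer
```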
Yeah, so would you say that the lack of real-time data, or the lack of really crisp automation, is the current drawback to UEM today, and that's what AEM hopes to address?
Yeah, absolutely. And I think all the vendors out there are going to move towards AEM. It's going to be a market category, maybe this year, maybe next. And we're going to be one of the first organizations out there with a solution. So we're excited about the real-time data that we have and our ability to automate those tasks that customers would maybe be a little bit nervous about. And one of the benefits of using an AI, which we'll be doing, because I believe AI is crucial in an autonomous endpoint management system, is our ability to feed the AI real-time state data on all of the endpoints it may be considering for a task. That will really be helpful, because it'll be able to generate a confidence score, right? Like, hey, here's how confident you can be in rolling this specific task across all of the endpoints the AI may be making a recommendation for. And so that's super, super exciting and super new. I don't know of a product today that quite does that. And that's what AEM is going to deliver to customers.
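[Editor's note: a confidence score like the one Chris mentions could blend fleet history with real-time signals. The sketch below is a simple Python illustration with made-up weights and inputs; it is not Tanium's actual scoring model.]

```python
# Hypothetical confidence score for one proposed automation task.
def confidence_score(historical_success_rate: float,
                     fraction_fresh_state: float,
                     fraction_healthy_now: float) -> float:
    """Blend past outcomes with the current, real-time state of the targets."""
    # Real-time signals together get most of the weight, since stale data is
    # the main reason automated changes fail at scale.
    score = (0.4 * historical_success_rate
             + 0.3 * fraction_fresh_state
             + 0.3 * fraction_healthy_now)
    return round(score, 2)

# Example: 97% historical success, 92% of targets reporting fresh state,
# 88% healthy right now -> about 0.93.
print(confidence_score(0.97, 0.92, 0.88))
```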
Yeah, machine learning needs data to be most effective. So I guess you'll be pulling data from across your customer environments, maybe aggregating it, and that will provide sufficient data to really increase the overall trust level for the recommendations that then come out.
Absolutely. Yeah, absolutely. One of the things we have access to at Tanium is over 35 million endpoints to learn from. So we have this incredible endpoint base across all of our customers. And we have customers in some of the most complex, and thus exciting, environments in the world. We have the Fortune 100, and Tanium is in most of those organizations; we have the United States Department of Defense and MODs around the world. And we also have customers in the emerging enterprise, so it's not just the giant shops; we have smaller organizations too. So we have a very diverse set of data and environments to learn from. And an AI like the one we intend to deliver will be able to reason over all of those and provide the right recommendations for the right customer at the right time.
So I mentioned, you know, containment, isolation, remote wiping, are there other kinds of actions that you expect to be able to do automatically with AEM?
Yeah, I mean, I think there are kind of endless opportunities. One that comes to mind for me is the ability to learn about maybe a new mitigation. There are endless controls in Windows, for example; I think there are several thousand security controls. Some of these are always turned on by default, but then there's a broad range of things that aren't and that are incredibly powerful in terms of making a system resilient against all sorts of threats. One of my favorites in Windows is something called attack surface reduction. Windows by default allows Office documents, for instance, to access the Win32 APIs. Those are very powerful APIs; you can take dramatic action with them, you can kind of do anything you want. And so a malicious document can do something you would never expect and can be quite dangerous; it could, in fact, install software, which is what they do sometimes. So what I think is exciting is the ability for an AEM solution to be aware of these types of optional controls, know where they're used, where they're turned on and where they're not, and provide recommendations: hey, we recommend you take advantage of this control or not, and we think that your environment, based on the way it's operating, can tolerate this particular control. And that's one of the reasons why some controls aren't on by default. I'll give you a personal experience. I worked at Microsoft a number of years ago, and with one of these ASR controls I just mentioned, we said, well, let's turn it on everywhere in the company. And the answer was, well, we can't, because we know the SLT [senior leadership team] has some really amazing Excel spreadsheets that use some of the most advanced features in Excel, so they're going to be using that Win32 API for whatever reason, and we couldn't turn it on corporate-wide. But with an AEM system, there's the potential to know what types of things are being used versus not, so we can maybe make those recommendations and offer them to our customers.
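[Editor's note: ASR rules of the kind Chris describes are configured through Microsoft Defender's standard PowerShell cmdlets. The Python wrapper below is only an illustrative sketch, and the rule GUID is a placeholder to be replaced with the ID from Microsoft's ASR rule documentation.]

```python
# Hypothetical sketch: put one ASR rule into audit mode so its impact can be
# observed before enforcing it, by shelling out to Defender's PowerShell cmdlets.
import subprocess

ASR_RULE_ID = "<rule-guid-from-microsoft-docs>"  # placeholder, not a real GUID

def set_asr_rule_audit(rule_id: str) -> None:
    """Switch a single ASR rule to audit mode (log matches, do not block)."""
    cmd = (f"Add-MpPreference "
           f"-AttackSurfaceReductionRules_Ids {rule_id} "
           f"-AttackSurfaceReductionRules_Actions AuditMode")
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

def current_asr_rules() -> str:
    """Read back which ASR rules are configured and what action each uses."""
    cmd = ("Get-MpPreference | Select-Object "
           "AttackSurfaceReductionRules_Ids, AttackSurfaceReductionRules_Actions")
    result = subprocess.run(["powershell", "-NoProfile", "-Command", cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout
```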
So this increased level of information that you'll be able to get through AEM should hopefully improve confidence in the trustworthiness of the solution itself. Do you think the use of ML, the use of AI, and then just experience with turning some of these things on in organizations and letting them take their course, doing on-the-fly remediations, is what's needed to really get AEM rolled out and delivering maximum benefit for those who deploy it?
Oh, 100%. I think that's one of the most exciting things about AEM. Because the scenario I just described, where a solution could go out there and come up with a list of interesting things you could do, you could do that today with today's technology. But to provide that list of recommendations based on some sort of intelligence that understands the state of the environment, and then to attach to each recommendation something like a confidence score, which is what we're calling ours: it says, hey, based on our observation of your environment, and based on our observation across all of our other customers and all the other endpoints, we believe that you can be successful in deploying this patch, this new control, this mitigation that might be important against the latest emerging threat, and we believe you can be this confident that you can do it without causing harm to your environment. That confidence level is what's new, and that's what an AI can bring to the table: knowledge about the likelihood that you're going to end up in a good state versus a bad state. Whereas before it was just a list of things, and you had to do a bunch of work to figure out how safe it would be. That was a manual act. Today and in the future, with AI, it will be an automated act. And that's much of what AEM is all about.
That live, real-time, accurate data, the ability to understand what the dependencies are on the endpoints that you have in your inventory, I think will begin to give the proper amount of confidence and build trust that allows organizations who choose to deploy these kinds of tools to get more comfortable with allowing these automated responses. Seeing that they can be successful, I think, is probably the best means for improving that trust. So case studies, other things that we can get out there to demonstrate to people that this autonomy can work. Any extra thoughts on that?
Yeah, I think that we're talking very broadly and generally about the potential of AEM, and we're not really talking about specifically what Tanium is doing in 2024. What I can tell you is there are really two journeys happening in parallel. One is the vendor journey, and Tanium is on the vendor journey. We're going to be delivering as much automation as we can over time, and it's going to increasingly move from a more manual, or what I would call a trust-but-verify, stage, where recommendations are provided, the customer looks at them, considers them, gets the confidence score, and then decides whether to run them or not. It may run in what I would call a semi-autonomous state. So it's not just, okay, let's run the automation. The automation may be a sequence of steps, say 10 separate procedures, and the customer wants to run it in semi-automated mode, meaning they execute the first few steps, then verify and confirm those steps were successful before proceeding to the next set. So they may take it step by step, with multiple points of authorization and verification: semi-autonomous. And then later on, there will be a transition where a subset of the automation recommendations become fully automated, because there's a lot of trust with those tasks, while other ones remain semi-autonomous. And then, way out in the future, maybe these systems increasingly run themselves by default, but that's way out in the future. So that's the journey we're going to be on: helping people get through those stages. One of the key principles of the solution we're building is to allow people to go through that journey. Autonomous is not a binary; it's something you can do, and you get to decide how autonomous you want it to be. Central to our solution is the ability to ensure that every customer remains in complete control of the automation, all the way from the selection of the automation they choose to its execution. You can either let it run on its own or control it each and every step of the way. That's central to a solution in our mind. So that's our journey: providing those types of capabilities. The journey our customers are on is to transition themselves to a level where they become confident and trust an AEM solution. And it's going to be up to the vendor to do a great job delivering the right recommendations and having the right concepts, like confidence scores or whatever other concepts other vendors may create, so that we can build that trust with customers, and they don't inadvertently wreck their environments by running automation too pervasively. So it'll take time for customers to take that same journey we're on. And the way we're going to get them there is by providing the right features so they can control it. They can become confident, they then will develop trust, and then they will increasingly run more and more automation, maybe without the trust-and-verify step. So anyway, it's an exciting journey we're on, and the customers will have to be there with us the whole way.
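[Editor's note: here is a minimal Python sketch of the semi-autonomous mode described above: a multi-step change where each step must be approved and must verify cleanly before the next one runs. The Step type and callbacks are illustrative, not a product API.]

```python
# Hypothetical semi-autonomous runner: approve, run, verify, one step at a time.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], None]      # perform the change
    verify: Callable[[], bool]   # confirm it succeeded

def run_semi_autonomous(steps: list[Step], approve: Callable[[str], bool]) -> bool:
    """Execute steps in order; stop on a denied approval or failed verification."""
    for step in steps:
        if not approve(step.name):     # human or policy gate before each step
            print(f"Stopped before step: {step.name}")
            return False
        step.run()
        if not step.verify():          # check real state before continuing
            print(f"Verification failed at step: {step.name}")
            return False
    return True

# Fully automated mode would pass approve=lambda name: True;
# a trust-but-verify mode would prompt an operator instead.
```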
Sounds like a good approach. Any final thoughts today?
Yeah, well, I'll just plus-one what you said there. I'm thinking of the auditing capabilities, at least in Windows; I don't know macOS as well, but when I was at Microsoft, Windows had those auditing features across the whole system, and in fact the ASR controls I mentioned implemented audit mode heavily. So I think an AEM solution putting security controls in audit mode, observing what happens, and looking at the event log to see, okay, how many of these things would actually cause problems? That's probably part of the learning process our AI may take advantage of. I'm kind of speculating there, but that's a great set of telemetry that would be very useful to us. So I would imagine we'll be using that.
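[Editor's note: the audit-mode learning loop Chris speculates about could be as simple as tallying "would have blocked" events per rule and promoting rules with little or no observed impact. The event record shape below is an assumption for illustration only.]

```python
# Hypothetical sketch: summarize audit-mode telemetry and pick enforcement candidates.
from collections import Counter

def summarize_audit_events(events: list[dict]) -> Counter:
    """Count audit hits per rule; each event looks like {'rule': ..., 'process': ...}."""
    return Counter(e["rule"] for e in events)

def rules_safe_to_enforce(events: list[dict], known_rules: set[str],
                          max_hits: int = 0) -> set[str]:
    """Rules whose audit hits stay at or below max_hits are candidates to enforce.
    Zero hits may also mean zero coverage, so observation time still matters."""
    hits = summarize_audit_events(events)
    return {rule for rule in known_rules if hits.get(rule, 0) <= max_hits}
```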
Well, great. Thank you. Thanks for joining us today.
Yeah, it's been great. Very interesting conversation, John. Thanks for the opportunity.
Look forward to the next one. Thanks everyone.
Absolutely. Bye.