Well, hello and welcome to another KuppingerCole webinar. My name is Alexei Balaganski, I am a lead analyst here at KuppingerCole, and I am joined today by a distinguished guest, Nicholas DiCola, who is a Security Jedi and VP of Customers at Zero Networks. I am really envious. How can one get such a title as a Security Jedi? Maybe I could be a Thief Lord next time.
Anyway, before we launch into the topic of our webinar today, where we are going to discuss Zero Trust Network Access and how it can be implemented with intelligent micro-segmentation, we really have to spend a minute on housekeeping rules and talk about how webinars are actually run at KuppingerCole. We are all muted centrally, so you don't have to worry about that. We will be recording this webinar, and the recording as well as the slides will be posted on our website; everyone will get a notification, probably over email, really soon.
We will run a couple of polls during the webinar, so whenever you see a pop-up asking for your response, please spend a second, it will help us to understand the audience better. And finally, after the presentation, we will have a dedicated Q&A session where we will be answering your questions. You are able to enter a question at any time using the corresponding box in the GoToWebinar control panel somewhere on your right. And without further ado, let's just start with our webinar.
As usual, we have three parts planned. I will start with my introduction to this whole topic of Zero Trust and how it's related to micro-segmentation, if at all. Then I will switch to Nicholas, who will give you a much more technical and in-depth view into actually designing and implementing a micro-segmentation architecture within your company. He will also be doing a live demo of their solution. And finally, as I mentioned, we will have a joint question-and-answer session at the end of the webinar.
And again, before diving into the content, let's run a really quick one-question poll. Have you actually considered Zero Trust for your organization already? Can we please have the poll? You have probably just 30 seconds to respond, so please be quick.
Okay, interesting. And we only have a few seconds left.
Okay, thanks. Can we have the results? As I can see, and hopefully you can see them as well, 86% of our attendees responded with a yes, which is, of course, great news for us as presenters. And KuppingerCole is an analyst house that has been covering Zero Trust for years.
But, of course, it will be interesting to compare your initial response with a second poll we will run at the end of this webinar, after we have dived a little deeper into the technicalities. But I would love to start with a slide I have probably been using for almost 10 years, and it has never been more relevant than today.
So, yes, we are living in a hyperconnected world. And every time we talk about cybersecurity, we remember and long for the good old days of castle-and-moat security. Unfortunately, we don't have that anymore. Those perimeters all but disappeared years ago because our IT is no longer behind a wall. We have resources and assets on-prem, in multiple clouds, on manufacturing floors, or just somewhere in the world where our work-from-home employees, contractors, or even customers are residing.
And, of course, for decades, this was the priority for business. Nobody cared about security because it was much more important to open up and bring your data, your products, and your digital services as close to as many people as possible.
And, of course, this has led to the emergence of numerous security risks. Ransomware, data breaches, cloud breaches, social engineering, you name it. Every resource, no matter where it's located, is now basically open to a number of possible attacks. So we had to go back to the drawing board, 10 years ago, maybe even earlier, and decide what we are going to do with this new reality and how we bring security, at least some semblance of security, back to our business.
Obviously, it has only been growing worse with the years. We know that cloud is the new normal, and a fully mobile workforce is basically a reality for the majority of businesses, especially after the COVID pandemic. We have our cloud services, our data, our assets everywhere. So what do we do with it? How do we bring back at least some semblance of safety in these times, where espionage, political conflicts, and cyber wars are already being fought out there, along with malware, ransomware, you name it?
Well, we've been promised multiple different solutions from different vendors, but of course the buzzword du jour, if you will, is Zero Trust. And every time we talk about Zero Trust, we have to remember that Zero Trust is not a product. It's not even a specific architecture. It's just a set of guiding principles for designing your IT and operations so that they are more secure by design and improve your security posture. This is essentially the definition from NIST, the US National Institute of Standards and Technology, and this is what we've been promoting in basically every one of our webinars on Zero Trust.
It's a little bit odd to realize that Zero Trust has actually been with us for 30 years already. It's nowhere near as new as many people expect. The term was coined back in 1994.
And, of course, in 2009, the well-known and respected researcher John Kindervag actually introduced the principles of Zero Trust as we know them now. And basically, we then had a period of Cambrian explosion of Zero Trust-related things, mostly labels and marketing buzzwords, but of course also architectures, products, services, and a lot of interesting developments. I would say a major milestone came in 2020, when NIST finally codified the ideas of Zero Trust and came up with a reasonably detailed explanation of how to actually build a real-life implementation of those principles.
And yet, we are still struggling; people still come looking for a "take my money and give me a box with a Zero Trust label on it" solution that they can have deployed by tomorrow. And we have to say again and again: sorry, it just doesn't work like that. It's a journey. You have to work hard. You have to combine your existing tools.
So, by now, we actually have the first reports that people are finally getting tired of Zero Trust, or at least of the actual term. So, is this the end of the hype? I would argue no.
This is, in fact, the beginning of the real productive development period. As they say at Gartner, after the trough of disillusionment comes the plateau of productivity, and this is what we are talking about today.
So, again, just a quick reminder: Zero Trust is an architectural concept. It's not just about technology. It basically covers all of your company's assets, users, and data. And the goal of implementing Zero Trust is not just to put a label on your IT infrastructure, but to actually reach some tangible business benefits: to make your IT more secure and less complicated, protect it from ransomware and hackers, boost user productivity, you name it. And the primary idea behind the Zero Trust concept is that you have to know at any time what's going on within your network.
You have to track all your actors, users, resources, and data. You have to assume that your network is unsafe by default, so assume breach at any time, and implement security controls in such a way that this assumed breach, this implicit untrustworthiness, if you will, does not affect user productivity. And for that, of course, you have to enforce strict security policies and restrict access to every resource within your network, even the smallest one, and you have to verify and monitor everything continuously. You've probably heard about the tenets of Zero Trust many times.
I won't go through every one of those. I just want to highlight that basically those seven rules can be grouped into several categories and statements.
Basically, they say, first, you have to isolate each of your sensitive resources. You have to apply strong security controls at every layer, starting from the lowest one: you have to protect your data, you have to protect your network, you have to protect your identities and authentication, you name it.
And, of course, you have to monitor and record all security-related activities within your network. Only by implementing those three pillars, if you will, can you combine them and ensure that your security policies actually work consistently, securely, and dynamically for every access decision. I have tried to sketch a really high-level overview of how Zero Trust architectures are traditionally presented.
Basically, you have your endpoint devices and your identities, and every time they need to access a resource, they have to go through a policy evaluation process. Some kind of an engine decides: can this user, from this endpoint, access that particular resource or not?
And, if yes, your access request will go through a set of security controls, which classify, encrypt, protect access, monitor access, whatever, to your data, to your applications, to your infrastructure. And, this happens dynamically in real time every time you need to access.
And, of course, this access is always strictly enforced to be the least privileged one. And, of course, there is always the feedback loop, which provides context and information to optimize and improve and secure those policies even further.
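To make that flow a bit more concrete, here is a minimal, purely illustrative sketch of such a policy decision engine in Python. All names and checks are invented for illustration; a real policy engine would evaluate far richer context (device posture, risk signals, resource sensitivity), but the shape is the same: deny by default, allow only on explicit grant plus verified identity and device, and record every decision for the feedback loop.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    user: str
    device: str
    resource: str
    mfa_passed: bool         # identity verified for this session
    device_compliant: bool   # endpoint posture check

@dataclass
class PolicyEngine:
    # resource -> set of users with an explicit grant (least privilege)
    grants: dict
    audit_log: list = field(default_factory=list)

    def evaluate(self, req: AccessRequest) -> bool:
        # Assume breach: every request is denied unless all checks pass.
        allowed = (
            req.user in self.grants.get(req.resource, set())
            and req.mfa_passed
            and req.device_compliant
        )
        # Feedback loop: every decision is recorded for continuous monitoring.
        self.audit_log.append((req.user, req.resource, allowed))
        return allowed

engine = PolicyEngine(grants={"payroll-db": {"alice"}})
print(engine.evaluate(AccessRequest("alice", "laptop-1", "payroll-db", True, True)))  # True
print(engine.evaluate(AccessRequest("bob", "laptop-2", "payroll-db", True, True)))    # False
```

Note that the decision is made per request, not per session: the same user from a non-compliant device, or without a fresh MFA, is denied again, which is exactly the continuous verification the tenets call for.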
So, this is an endless process, if you will. And, this picture actually doesn't go into details.
And you will see many more technical details in the second part of the presentation, of course. I only want to highlight one really big takeaway from all this: Zero Trust, again, is not a specific set of tools or protocols. It's not even a specific architecture. I call Zero Trust the Feng Shui of IT.
Because, just like with those principles of Chinese philosophy, you don't have to build a new house to live according to Feng Shui. You just need to rearrange your furniture or perhaps hang a picture on your wall. You follow a really simple set of basic rules and guidelines.
And there is more than one way to achieve perfection, if you will. This should be the biggest takeaway from today's webinar.
Again, Zero Trust can be implemented in more than one way. And, you probably already have all or most of the components you need for that. You just have to make sure that you know how to combine them together and how to make sure that they operate in a continuous circle.
So, you might ask me, how does this actually relate to segmentation, the main topic of our webinar? Well, it's quite simple.
Because segmentation is a foundational principle of security. It predates cybersecurity. It predates computers. It even predates submarines, if you will. Segmentation is the basic approach to securing anything, from the medieval fortress to the modern multi-cloud application architecture. Without segmentation, if everything sits in one place, the entirety of your IT can go down at once.
And, of course, it's a little ironic that for the last decade at least, or even longer, we have been conditioned to think that perimeters are bad and Zero Trust is the good alternative to them.
But this is really a false dichotomy. The problem with perimeter-based security is not the perimeter itself, but what's inside it. If your perimeter is too big, if it only isolates a few key components of your IT but not the rest, you have too much implicit trust within a single compartment of your IT submarine, if you will.
And thus you have an increased risk of a breach. Segmentation is a well-known and proven remedy, but it has to be applied carefully and consciously, and it has to be adapted to modern times. The biggest challenge of traditional segmentation is that it is static.
Of course, you can vary the degree of your segmentation. You can just set up a LAN and a DMZ and stop there. That would be the real legacy, old-school approach to segmenting your network.
But, you can go smaller and smaller. You can segment your cloud environments. You can segment applications or even individual microservices, for example.
Or, go even further and segment on the container or single process level. Whether you call it micro- or nano-segmentation or anything else, you have a spectrum of approaches.
And the only challenge here is that the complexity and the management effort increase as you go smaller and smaller. And, of course, ideally, you build a tiny micro-perimeter around every single asset within your network. Around every single user, endpoint, database, application, whatever. But doesn't that sound exactly like Zero Trust? That's exactly what is stipulated among the Zero Trust tenets: you have to isolate your resources, you have to enforce access to every resource, and you have to validate it every time.
So, yes, micro-segmentation is actually the workhorse: one of the tangible, proven, validated, and well-tested approaches to implementing Zero Trust. The only problem is: how do you make it manageable at scale?
Because, as soon as you start thinking about basically deploying tiny firewalls around every one of your assets, you have to think, okay, now, how do I deal with hundreds of firewalls? Thousands, maybe. What if my firewalls are ephemeral because they have to be instantiated for every container I'm running? How do I do it across multi-cloud environments, hybrid environments, and so on?
Obviously, you need some kind of sophisticated orchestration and automation platform for that. And this is where we are coming to the idea of intelligent micro-segmentation. How do we extend this technology to work for the multi-cloud scale?
Well, obviously, it has to be identity-driven. So, static micro-segmentation no longer works. We know that.
So, it has to be aware of the actual identity of a user or a device or even a resource you need to protect. It has to be dynamic and always learning. It has to know what's going on with your resources, with your traffic flows, with your network configuration. It has to constantly understand what's changing and how to adapt your policies to those changes.
Obviously, when you are thinking at large scale, you have to add some kind of automation. And, ideally, that automation has to be smarter than just if-then-else kind of rules.
So, maybe there is some really strong place for machine learning and AI here. And, of course, it has to be everywhere. It has to cover all of your IT footprint.
Otherwise, it just won't be Zero Trust if you're only doing it halfway, only for parts of your infrastructure. So, ubiquitous deployment is a must.
And one added bonus, I would argue, would be zero footprint. Ideally, it should be implemented without throwing in additional expenses for security hardware or administrative effort for changing your infrastructure. It has to just work with existing tools. It has to follow the Feng Shui approach, if you will. But how do you actually technically implement these requirements?
That will be discussed in the second part of our presentation. And I would like to leave you with a quote from Lewis Carroll.
So, you see, it takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that.
So, act now. Ideally, you probably should have started yesterday, but at least start today; don't wait for tomorrow. Because implementing Zero Trust according to all those requirements is still a journey. It still takes time, and you never know when the next ransomware attack or breach hits you. It could be tomorrow. It could even be today. But I guess with that, we can give the stage to Nicholas DiCola.
Nicholas, you are very welcome. Hey, Alexei. Thank you. I appreciate the intro; I think it's really good context. And here at Zero Networks, we agree with everything you said about Zero Trust being an architecture; it's going to take multiple components. But some things that over the last couple of years have not been a reality to actually implement, especially intelligent micro-segmentation, are something we've really overcome here at Zero Networks. We've spent a lot of time creating a solution that makes it easy to intelligently micro-segment.
So, very much like Alexei's slide, if you think about networks today, they look something like this in some way, shape, or form, right? Maybe some of the components there, maybe all of those components are there.
But, you know, we have remote workers, and we have some type of on-premise network or cloud networks that are all well-connected, which really is a problem. Because if you look at it from an attacker's perspective, an attacker can get in, and they typically start at an endpoint. And once they get to that endpoint, they're easily able to compromise all of the different assets in the environment through lateral movement, because these networks are well-connected, with what I like to call network openness.
Because of the lack of segmentation or micro-segmentation, the attacker is just able to move to the resources that they need to, especially as soon as they can get a credential. And so a lot of things, a lot of tools out there are great for detection, but attackers are still getting in and getting past those and able to laterally move and compromise. And we've seen a lot of ransomware recently in the last few years, which can really quickly bring down a network by just stopping all of the capabilities from the clients and servers.
And so when we thought about how to really micro-segment the network, as Alexei said, making it ubiquitous was really one of our big concerns. The way that we do it is by deploying a virtual appliance that we call a trust server. That virtual appliance remotely controls every asset and its host-based firewall, the built-in operating system firewall that's available inside the asset, without having to put an agent on every machine.
And once we can remotely connect and enable that built-in firewall, we can start collecting and monitoring events from that firewall. This allows us to then take the asset and put it in a learning period.
And again, back to Alexei's last slide with the bullet points, this is where we automate building all of the micro-segmentation rules for the environment. We look at what's going into a machine and build inbound rules to allow all the low-privilege traffic. So instead of ending up with, you know, 65,000 ports open, even if not all of them are listening on an asset, we get that down to only the few low-privileged ports that need to be open. Think about clients accessing a web server: we're going to learn that.
We're going to create a rule that allows clients to access that web server, because we don't want to break applications as part of micro-segmentation; if you do, you'll never be able to implement it as part of your Zero Trust journey. A web server talking to a back-end database server? We'll create a rule that says that web server can talk to that back-end database server. And now that database is not exposed to everything on the network, but only to the few servers that actually need to access it for storing data.
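The rule-building step just described can be sketched very simply. The following Python snippet is a hypothetical illustration (all names and port choices are invented for the example, not Zero Networks' actual implementation): flows observed during the learning period become inbound allow rules scoped to the sources actually seen, while privileged management ports are never opened automatically and get MFA policies instead.

```python
from collections import defaultdict

# Illustrative set of ports treated as privileged (SSH, RDP, WinRM)
PRIVILEGED_PORTS = {22, 3389, 5985, 5986}

def build_rules(observed_flows):
    """Turn (src, dst, port) flows seen in a learning period into rules.

    Low-privilege ports get an inbound allow rule scoped to the sources
    that actually used them; privileged ports are flagged for MFA gating
    instead of being opened.
    """
    allow = defaultdict(set)   # (dst, port) -> set of allowed sources
    mfa = set()                # (dst, port) pairs gated behind MFA
    for src, dst, port in observed_flows:
        if port in PRIVILEGED_PORTS:
            mfa.add((dst, port))
        else:
            allow[(dst, port)].add(src)
    return dict(allow), mfa

flows = [
    ("client-1", "web-01", 443),
    ("client-2", "web-01", 443),
    ("web-01", "db-01", 1433),
    ("admin-pc", "db-01", 3389),   # privileged: never opened automatically
]
allow, mfa = build_rules(flows)
print(allow[("web-01", 443)])   # only the clients actually observed
print(mfa)                      # {('db-01', 3389)}
```

The key design point the sketch shows is the default-deny posture: anything not in the learned allow set stays blocked, so the database ends up reachable only from the web server, and RDP to it is never opened without MFA.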
One of the unique things that we do during this learning period is that we don't open any privileged ports and protocols. Think RDP, SSH, WinRM, those types of ports and protocols. We actually create MFA policies for those. And that's important because, to really stop attackers: when they're in, they need a privileged port and protocol on the destination to perform that lateral movement. They need to be able to connect with those credentials they've stolen and actually execute something on the remote machine.
So they typically need RDP, PsExec, or some type of tool like that that allows them admin access. So we don't open any of that up. Instead, we create MFA policies that require users to perform a self-service MFA before they're able to connect. We'll talk a little bit more about that, as well as demo it. At the end of the learning period, again, back to automation, we automatically move all of those assets that are in learning into protection.
And this is when we turn the firewall on to block inbound by default and only allow the few low-privilege things that we've learned on the network. So, again, this is really about getting to segmentation easily, using automation and an agentless model. You don't have to deploy agents to get to that baseline of micro-segmentation, and the intelligence gets you there really, really quickly without you having to do a lot of work. By using that automation to build all the rules, you don't have to analyze all of your traffic and build them yourself.
The automation will do it all for you, which makes it super easy. We also give you the ability to protect OT and IoT assets in a slightly different model. Think about OT and IoT assets: they don't have firewalls on them. You can't remotely control and restrict or micro-segment such an asset through its firewall, because there is no firewall built into it. But what you can do with Zero Networks is segment those using OT/IoT segmentation: we will block access to those devices from all of the assets where we do control the firewall.
So now, even if an attacker gets on a machine, they can't laterally move to that OT device, which has no firewall. And let's say you happen to get a bad patch on the OT device with a malicious payload: that attacker is not able to spread to the rest of your environment, again, because of this micro-segmentation, starting with blocking inbound by default and only allowing those low-privileged ports and protocols in the environment. So just to back up on the OT segmentation really quickly and cover this again, since you didn't have the visuals.
So, again, you can segment OT assets by using Zero Networks: we block outbound access to those OT devices, and because the firewalls on the host OS are blocking inbound by default, only allowing that low-privileged traffic, those OT devices aren't able to spread laterally to your endpoints. And then lastly, on the architecture side, there is an agent-based model for certain use cases where your machines perhaps can't be controlled by a virtual appliance; then you can use an agent model to manage those.
But again, everything functions the same way: you're just remotely controlling that firewall in real time from the cloud, applying all the rules and policies that are in the environment. So then, just to show a visual of how the MFA piece works: again, we do this for all the privileged ports and protocols. If a legitimate user wants to connect to a machine to administer it, again using a privileged port and protocol, that connection is first blocked, which triggers a matching of the MFA policies, and the user is prompted with a self-service MFA using an existing identity provider.
So this is about being able to step up the authentication when they have to connect and use that privileged access. Of course, once they pass the MFA, a just-in-time rule is created; it's temporary and allows them the access they need to perform that administrative function.
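The MFA-to-just-in-time-rule mechanism described above can be sketched in a few lines. This is a hypothetical illustration with invented names, not the product's actual code: a privileged connection is denied until an MFA success opens a temporary rule, and the rule expires on its own after a time-bound window (the four-hour default mentioned later in the demo).

```python
import time

class JitRuleStore:
    """Toy model of MFA-gated just-in-time firewall rules."""

    def __init__(self, ttl_seconds=4 * 3600):   # e.g. a four-hour default
        self.ttl = ttl_seconds
        self.rules = {}   # (user, dst, port) -> expiry timestamp

    def on_mfa_success(self, user, dst, port, now=None):
        """After a successful self-service MFA, open a temporary rule."""
        now = time.time() if now is None else now
        self.rules[(user, dst, port)] = now + self.ttl

    def is_allowed(self, user, dst, port, now=None):
        """A connection is allowed only while its JIT rule has not expired."""
        now = time.time() if now is None else now
        expiry = self.rules.get((user, dst, port))
        return expiry is not None and now < expiry

store = JitRuleStore(ttl_seconds=4 * 3600)
print(store.is_allowed("nicholas", "dc-01", 3389, now=0))      # False: blocked until MFA
store.on_mfa_success("nicholas", "dc-01", 3389, now=0)
print(store.is_allowed("nicholas", "dc-01", 3389, now=3600))   # True: within the window
print(store.is_allowed("nicholas", "dc-01", 3389, now=20000))  # False: rule expired
```

The important property is that the port is closed by default and only opens per user, per destination, per port, for a bounded time, so stolen credentials alone are never enough to reach a privileged service.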
Now, we built this specifically to stop attackers from using privileged ports and protocols, but you are able to use it for really any port and protocol. So, again, back to Alexei's concepts and architectures: you can apply this to any port and protocol, or if you have a sensitive application that you want to make sure is behind MFA, you can apply this technology to that as well, as part of micro-segmentation in the environment.
So now, if we go back and think about an attacker, take something like SolarWinds, which was a software update compromise, a supply chain compromise. Once that attacker is in and they try to look for these ports, because of this micro-segmentation, which is the underpinning of Zero Trust, nothing is open. Only the few things that are allowed for low-privileged, normal user access. And now the attacker has nowhere to go. They can't use any privileged access without an MFA. So now they're stuck.
And this is really the core concept of what Zero Trust is all about: stopping the attacker, because they will get to a machine at some point in time in the environment. But you don't want them to be able to spread to ten, a hundred, a thousand machines in the environment. You want to contain them to that first one. Hopefully the EDR will detect and contain that as well, and now, at the network level, they're not able to move anywhere inside the environment. So now let's jump over and see what this looks like.
So here in the portal, you can see a protection overview of the environment: how many assets might be protected or in learning. Remember, I talked about the assets going through a learning period to do that intelligent micro-segmentation. You can see the rules. You might see how many MFAs are occurring across the environment. But most importantly, here in access, you can see all of the different rules that have been applied to your environment from micro-segmentation.
First and foremost, you can see all of these were created through that learning period: we took the asset, we said, hey, it's time to learn, and once it went through learning, we automatically generated these rules based on those assets. And you can see we do a lot here to really make this human-readable, or humanizable, if you want to call it that. Because, if you remember, firewalls operate on source and destination IP addresses.
And so here you can see we have things like system groups, such as "All protected clients" and "All protected assets," "Any" and "Internal subnets," and functional tags, where we automatically discover server roles and functionally tag them for you. And once we take those assets through learning and discover these types of tags and groups, we make these rules super easy to read.
So, for example, all of my internal subnets can talk to my domain controllers on all of these ports and protocols that we learned. And again, you would have to do a lot of human analysis to figure out what all the ports and protocols are that a domain controller uses, whether that's searching through docs.microsoft.com or literally watching a bunch of packets on the network. We automate all of this for you through a 30-day learning window. But if you notice, there's no 3389, which is RDP, or 5985, which is WinRM.
Again, those privileged ports are protected with MFA. And so we automatically build these inbound MFA rules; you can see we've created quite a few to protect various different ports and protocols. But more importantly, we keep it simple. The default policy that we give you says: any user, on any asset, using any process, trying to connect to something that's protected on RDP, must MFA. And if they pass that MFA, a rule will be created for four hours that allows them that access.
And of course, you can change how long you want the rule to be created for. You can get very granular and say, I only want certain users. So, back to Alexei's point about being identity-driven, you can make decisions based on identity, assets, even the process level, the source process or destination process, on top of the port and protocol that you want to make sure the user is actually connecting to when they MFA. And so if I lock this down to administrators, and a non-administrator tries to RDP to any machine in the environment, they would never be prompted; it would just be blocked.
They would never have that access. But to show what this looks like for the user with these MFA policies: if they want to connect, and I'll try to connect to our domain controller, I'm just going to grab my phone. So here I just got a Duo prompt on my phone, I know it's hard to read, and I'm also going to get a browser prompt, which I will drag down to my screen here in a second. And in this browser prompt, we give context to the user.
Remember, this is a self-service MFA for that privileged connection: Nicholas is trying to connect to the domain controller from his machine using RDCMan.exe. And this context is important, because if an attacker was on my machine trying to connect with cmd.exe, powershell.exe, or maliciousprocess.exe, that context becomes very, very important. And so here I could sign in with my identity provider, which would require an MFA, and then I would be able to connect.
Now, I just hit approve on my phone in Duo for simplicity, and we can just quickly go back and look. And now we can see a rule has been deployed that says my machine can access that domain controller using 3389, and it'll expire in four hours based on the policy. And that came from the MFA platform. And just to show you that the rule actually did get deployed to the endpoint.
Now, when I connect, I will be prompted for credentials. Again, controlling that network level access is very, very important because then the attacker is not able to connect. And this is super important if you think about it a little bit deeper. If that port and protocol is closed, the attacker is not able to launch even a vulnerability against it without an MFA, right? They must MFA first to even open the port and protocol.
So even if there's a vulnerability in RDP or SSH or any of these privileged ports and protocols or really any protocol, again, you want to protect, then that is closed and they have to MFA first to open that up, which means they can't even use their vulnerability. And we give you visibility into all this traffic in the environment in what we call activities. You can think of this, excuse me, as the connection metadata. And so you can see everything from the user, the process, the machine outbound on the left side, inbound on the right.
We can see it was allowed outbound from the source, but it was blocked inbound to that domain controller when I tried to connect on RDP. And if we click on the shield, we can see why it was blocked. It was blocked because there was no open rule allowing that inbound traffic.
Again, we keep it closed and require an MFA. And once I MFA, now I was able to connect because of that MFA rule that had been created. And of course, you can see all of these things that happen, such as creating just-in-time rules in an audit log here. So you can see I created a just-in-time rule for the domain controller, and we have all of the details to include what MFA method was used. You can jump to the rule. You can jump to the MFA.
So, again, it's super important to be able to use this as part of your Zero Trust architecture to really micro-segment the network, only allow the few things that need to be allowed in the environment, and you can even review these. So if I go to that domain controller as an asset, I can see some rules, which is great, and I can also see any global rules that might be applied to that machine, but I can also visualize this.
So here I can see my domain controller has some incoming allow rules and outgoing allow to six different entities, and then based on the rule type, I can expand this out and see, oh, here's Nicholas's access on 3389. There's also some permissive rules allowing some various different things, like Red Hat has access to the domain controller. There's some groups like internal subnets that have a few ports that have access to the domain controller, and I can also flip and look from an outgoing perspective. What does this domain controller have access to?
And I can see it has some privileged access to the share server on port 443 and port 445. And I can also analyze all of the traffic instead of just looking at activities in a list, which can be tough to kind of summarize. I can actually summarize all of that traffic and say, OK, what are the distinct ports, distinct users, assets, processes, and you can see kind of counts over the last week. So here we can see 59 different assets with 22 different users, and you can click on each of these and see who they are.
Across 49 processes, we're accessing LDAP in our environment, and it occurred about 27,000 times in the last week. So you can aggregate all of that activity traffic to really understand the network connectivity that's occurring in the environment. And then just one last thing: we talked about OT segmentation. So here we have an OT asset we've segmented. You can see it's protected with Zero Networks. And same thing, if I attempt to connect to it, it's blocked. I just got an MFA policy prompt as well. So I'm going to say approve. I also got a browser.
And just that fast, I now have access to that device, which has no firewall of its own that can be controlled, but it's now segmented, and I can apply MFA even to basic connections like web, where this OT asset does not support any type of MFA itself. Really, it doesn't even allow us to change the admin username; the best we can do is set a complex password. But now we're able to protect it with what we call an outbound MFA, where, same thing, we're saying if any user tries to connect, they have to pass an MFA.
And so you can see a temporary rule has been created that allows Nicholas access to that meeting room camera for a time bound period, and that will automatically expire, and I'll have to re-MFA if I want to connect again. So let me jump back over to the slide. And I think it's your turn to present the last slide, Alexei. Okay.
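The traffic summarization shown in the demo, distinct ports, users, assets, and processes with counts over a window, is essentially a group-by over connection metadata. A toy sketch, with made-up field names rather than the product's actual schema:

```python
from collections import defaultdict

def summarize(activities):
    """Aggregate connection metadata per destination port:
    distinct users/assets/processes plus a raw hit count."""
    summary = defaultdict(lambda: {"users": set(), "assets": set(),
                                   "processes": set(), "count": 0})
    for a in activities:
        s = summary[a["port"]]
        s["users"].add(a["user"])
        s["assets"].add(a["src_asset"])
        s["processes"].add(a["process"])
        s["count"] += 1
    return summary

# Three sample LDAP (port 389) connection records
activities = [
    {"port": 389, "user": "alice", "src_asset": "ws1", "process": "lsass.exe"},
    {"port": 389, "user": "bob",   "src_asset": "ws2", "process": "lsass.exe"},
    {"port": 389, "user": "alice", "src_asset": "ws1", "process": "chrome.exe"},
]
ldap = summarize(activities)[389]
print(len(ldap["users"]), len(ldap["processes"]), ldap["count"])  # 2 2 3
```

Rolling raw activities up this way is what turns a hard-to-read connection list into the "59 assets, 22 users, 49 processes, 27,000 occurrences" style of view from the demo.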
Well, that was really impressive. The next part in our agenda is the Q&A session.
And, of course, again, kind of I encourage all of our attendees to submit their questions to the questions tool on the GoToWebinar control panel. I will just read them aloud, and we will talk about those. And in the meantime, we will just go through some additional useful content. But before we jump into that, let's quickly run our second poll of the day.
Again, it's just one question, but this time it's about your plans on micro-segmentation. Are you already a fan, are you still evaluating it, or maybe you are a skeptic? Can we please run our second poll? Okay. Interesting. And in the meantime, let me just give you my first feedback on your presentation and your demo, Nicholas. I have to say it's really impressive how it ticks all of the boxes I mentioned in my part. It's all there.
It's intelligence and automation and scalability and even a zero footprint approach, right, because you are mostly reusing the firewalls which already exist on the host devices. Yeah. I was just going to say I appreciate that. We spent a lot of time, a couple years, really building the product to solve those challenges.
You know, when we talked to customers and got feedback about these challenges, the things that you mentioned were the things we really, really focused on solving for customers, right, because no one has been able to implement micro-segmentation. I mean, building the rules alone: let's say you can deploy agents and get past all those hurdles, building all of the micro-segmentation rules is still super complex and very, very time consuming. And many customers have told us they've been trying for years and have not been successful.
So we really focused on solving those challenges that you mentioned. So thank you. I appreciate it. Right. I'm also looking at the results of the poll. It's interesting to see that about half of the people are actually already implementing it. And I really hope they are implementing the right kind of micro-segmentation, the one we have seen today, and not the old-school approaches. But I can also see that at least a few people were actually considering it before, even trying it before, but somehow it just did not work out.
It is probably a good day to try it again this time, doing it the right way, aligned to the tenets of Zero Trust. I will say it's good that nobody said we don't need it.
Yeah, absolutely. So everybody realizes they do need it in some way, shape, or form, which is good.
Of course, we have a bias here, because people who don't need micro-segmentation usually don't attend webinars. But, well, at least I think we have a really committed bunch of people attending our webinar today, and this is great. Okay. So let's start quickly with the first question we have from the audience. This is actually a question I hear almost every time we discuss any kind of automation platform: how long does it take to teach your platform to do its job properly? Yeah. So what we found is that most business cycles run on a 30-day period, right?
You think about things like finance jobs that run once a month for reporting and those types of things. So we recommend a 30-day learning window for servers; for clients, you can typically do two weeks. You can also extend that. If you have critical assets and you're worried there's a batch job that runs only once a quarter, you can extend that for multiple months and just let the learning go longer. But by default, we recommend 30 days, and that's enough. We haven't seen really any issues with 30 days across the customers we've worked with. Great question.
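The learning-window idea, observe traffic for 30 days (or longer, for rare batch jobs), then emit allow rules only for what was actually seen, can be sketched like this. The data shapes and function names are invented for illustration:

```python
from datetime import datetime, timedelta

def learn_rules(observations, learning_days=30, now=None):
    """Illustrative learning pass: turn connections observed inside the
    window into allow rules; everything unseen stays default-deny."""
    now = now or datetime.now()
    window_start = now - timedelta(days=learning_days)
    rules = set()
    for obs in observations:
        if obs["seen_at"] >= window_start:
            rules.add((obs["src"], obs["dst"], obs["port"]))
    return rules

now = datetime(2024, 6, 30)
observations = [
    {"src": "app1",  "dst": "db1", "port": 1433, "seen_at": datetime(2024, 6, 15)},
    # a quarterly batch job last seen outside the default 30-day window
    {"src": "batch", "dst": "db1", "port": 1433, "seen_at": datetime(2024, 3, 31)},
]
print(len(learn_rules(observations, 30, now)))   # 1: the quarterly job was missed
print(len(learn_rules(observations, 120, now)))  # 2: a longer window catches it
```

This is why extending the window for assets with quarterly jobs matters: a 30-day pass simply never sees the connection, so no rule gets created for it.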
And I guess a follow-up on that would be, and this is my own question, if I may: how do you ensure that nothing breaks? Of course, you can extend your learning periods more and more, but are there any alternative approaches? Do you have some kind of testing or simulation of policies? Or can you manually add something that a customer might fear won't be caught by learning? Yeah. So we have what we call blocked alerting. So let's say something goes into protection and all of a sudden there are a few blocks.
We alert and say, hey, like, this might have been missed in the learning and let them review it. Let's say it's a couple months later and something new is blocked. You can easily update the rule and add a port, and you saw how fast those rules are applied. Boom. It's open. And so you can very quickly respond and open that up if you need to. You can very quickly respond and close something down from an IR perspective if you need to.
For example, we had a customer that asked us about the new SMB vulnerability and how to block that, right, because you shouldn't be talking to the Internet on SMB. We helped them block SMB to all external IP addresses except their corporate ones, and that helped protect them from the vulnerability very, very quickly. So, yeah, again, what we've seen is that it's very, very rare that something gets missed in learning.
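The decision logic behind "block outbound SMB everywhere except corporate ranges" is easy to sketch with Python's `ipaddress` module. The address ranges below are placeholders, not the customer's real ones:

```python
import ipaddress

SMB_PORT = 445
# Example "corporate" ranges: RFC 1918 space plus a hypothetical public block.
CORPORATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def allow_outbound(dst_ip, dst_port):
    """Deny SMB to any destination outside corporate ranges;
    leave all other traffic to the normal rules."""
    if dst_port != SMB_PORT:
        return True
    dst = ipaddress.ip_address(dst_ip)
    return any(dst in net for net in CORPORATE_RANGES)

print(allow_outbound("10.1.2.3", 445))   # internal SMB: allowed
print(allow_outbound("8.8.8.8", 445))    # SMB to the internet: blocked
print(allow_outbound("8.8.8.8", 443))    # non-SMB traffic: unaffected
```

The same pattern generalizes to any protocol you want to fence off from the internet while keeping it working internally.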
Again, with 30 days, you're normally going to see everything that you need to, and one of the things the AI favors is opening ports up so that an application doesn't break. So sometimes we might open what some might call extra ports that maybe aren't strictly needed, because we don't want to break the application, but we know it's okay because we've protected everything administrative with the MFA.
So, yeah, in favor of being application-centric and not breaking applications, we are definitely very sensitive to that and make sure we open the right ports and protocols, and sometimes, one could argue, some extra ports so that things don't break. Well, another follow-up question I hear a lot when we're talking about AI and machine learning is: can you actually reuse the findings, the learnings, from one customer with another one, like a crowd-wisdom approach where your models, or whatever you call them, are shared between customers?
Is it something which you can support? Yeah. No.
Actually, we don't use a bunch of ML and AI underneath. I know it says AI, meaning artificial intelligence, created the rule for you, but it wasn't through machine learning. One of the things about network traffic is that it's not actually great data for ML and AI models, so we use a very deterministic approach.
Hey, we know domain controllers need these ports. We know web servers need these ports. We know these types of things need these ports, and then anything else kind of falls into a bucket of, like, hey, what is actually connecting, and maybe it's something we're not aware of, and go ahead and create a port for that, again, if it's not a privileged port and protocol. And so it's more of just smart intelligence that we've learned, and so we can apply those things that we've learned from all the customers to all customers.
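A deterministic approach like the one Nicholas describes boils down to a lookup table of known server roles and the ports they legitimately need, with anything outside the table handled by the privileged-versus-unprivileged distinction. The port lists below are simplified examples, not the product's actual knowledge base:

```python
# Simplified role-to-port knowledge base; real domain controllers need
# more ports (Kerberos KDC, RPC ranges, etc.) than shown here.
KNOWN_ROLE_PORTS = {
    "domain_controller": {53, 88, 135, 389, 445, 636},
    "web_server": {80, 443},
}
PRIVILEGED_PORTS = {22, 135, 445, 3389, 5985, 5986}

def classify_observed_port(role, port):
    """Decide how a connection observed during learning should be handled."""
    if port in KNOWN_ROLE_PORTS.get(role, set()):
        return "allow"            # expected for this role: open it
    if port in PRIVILEGED_PORTS:
        return "require_mfa"      # admin protocol: keep closed behind MFA
    return "allow_learned"        # unknown but unprivileged: open what was seen

print(classify_observed_port("domain_controller", 389))  # allow
print(classify_observed_port("web_server", 3389))        # require_mfa
print(classify_observed_port("web_server", 8080))        # allow_learned
```

Because the table is shared knowledge rather than a per-customer trained model, the same rules of thumb can be applied to every customer, which matches the "smart intelligence, not ML" framing.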
Yes, but it's not an AI or ML model in the sense of using those across different customers. I guess that just strengthens the point I made earlier.
Don't look at the label. If it says AI, it doesn't mean that it has to be machine learning. If it says Zero Trust, don't trust it.
Actually, look behind it. Ask the vendor how it works and why it works, and you might actually be pleasantly surprised that it works much more simply and deterministically. That's right. It's not, quote-unquote, traditional machine learning.
Okay, next question from the audience. How do you isolate the bad boys during the learning process?
Okay, I'm not sure what you mean by bad boys. People?
Yeah, I think they mean do we learn if an attacker is in the environment, could we learn that? So the answer is yes, potentially, right?
Because, again, if the attacker is very much acting like a user and connecting to a file share, right, then yes, we could learn that. But, again, we're going to open the file share up so users can access it, because, again, that's a low-privilege operation that doesn't need to be locked down so tightly. So there's a potential yes. The reason I say it doesn't matter is because, again, that attacker is going to try to laterally move in the environment, and that's going to require a privileged port and protocol.
We never open those up unless it's something like an interactive service account, right, because, obviously, service accounts can't MFA. And so that will automatically go to an MFA policy, and that will stop the attacker. And we've actually had a real customer that we learned on. We deployed the system; they never knew the attacker was in the environment. They found the attacker afterwards because they started getting weird MFA prompts with malicious process names, right? Remember when I said the context is very important.
And so at the end of the day, I would argue it doesn't matter, because we're going to close down all of those privileged ports, and you'll start getting prompts for MFA, and that's what's going to stop the attacker. So even if we learn that they connect to a server, it's not that big a deal, because likely other users connect to that server too.
Again, that's, again, showing the importance of the difference between anomaly detection in the traditional sense and actually knowing what's going on deterministically, like you don't have to worry about mislearning something because you know that these things aren't anomalies. They are known malicious behaviors, which you could have already protected from the start, right? That's right. Okay.
Okay, next question. Have you experienced breaches to renewable energy or wind farms? How would you protect remote locations like that?
Yeah, that's a tough one, right? And it's something we're evaluating and looking at, because there's physical access that you have to think about in that type of model, right? If it's a windmill sitting in the middle of someone's leased property, you may not be able to control physical access to that device in the environment. Ultimately, you can still segment it like we do, so that an attacker who comes into the network remotely, which is how we're seeing many, many attacks happen today, can't reach it, right?
They're not walking into buildings and plugging into networks today, right? The nation-state actors just do it remotely; they don't need to fly over and try to break into the building. And so with that, you can use the OT and IoT segmentation for those types of devices to make sure the attacker can't move to them. So hopefully that helps answer the question. It probably requires a bit of a deeper conversation, but, right. That was really interesting. So OT seems to be a really big use case for your solution.
Do you face this kind of traditional opposition from the hardcore, old-school OT security people who would actually value process continuity much higher than cybersecurity? Are they afraid of segmenting too much and breaking the manufacturing process, for example?
No, I mean, that's the good thing about the learning, right? We're very deterministic.
And again, we learn: oh, this OT device needs to connect to the server to run this batch process daily, or upload metrics daily, or whatever it is that it's doing when connecting to that server. We're going to create a rule that allows that OT asset to talk to that server. But now it's nice and controlled; it can only talk to the destinations that it needs to. And so regular clients and other things like that won't have those capabilities.
And so the attacker can't talk to that OT device, or even if it was a bad patch on the OT device, the attacker can't move again to the other assets in the environment. So, yeah, we absolutely learn that and make it easy. And I think when people see how quickly the learning happens and how easy it is, because you don't have to analyze, and again, how we favor making sure we don't break applications because we know we're protecting the privilege ports and protocols, you know, we haven't had a lot of pushback in that OT sense. Right.
So that's, again, the major difference between the old-school segmentation and the intelligent kind. That's right. Okay. Great. Great. Next question. Okay. How can we be sure that the principle of least privilege is enforced? My fear is that rules of the type any-to-any get enabled. So how do you prevent those types of rules from being applied?
Yeah, so the AI will never create an any-to-any rule. There are some certain cases where we might say, because of what we learn in the environment, and it depends, like a central source to any, right? So think about a vulnerability scanner. It needs to be able to talk to many assets, and normally on all ports and protocols, right? So we'll create a rule that says that vulnerability scanner, or a set of vulnerability scanners, can access all machines on all ports and protocols.
So, again, this is about being deterministic in the AI and the learning period when we create the rules, to make sure it's not any-to-any. Of course, we have administrators in the portal, and an administrator can always create a rule and could create an any-to-any. So you do have to do some rule review, and we have review approvals, right? You can set those up to make sure somebody doesn't create an any-to-any rule without it going through review.
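A guard against overly broad rules can live as a small validator in the review pipeline: reject any-to-any outright, and force central-source-to-any (the vulnerability-scanner case) through explicit approval. The rule shape and return values here are illustrative:

```python
def review_rule(rule, approved_central_sources=()):
    """Reject any-to-any; allow source-to-any only for explicitly
    approved central assets such as vulnerability scanners."""
    src_any = rule["src"] == "any"
    dst_any = rule["dst"] == "any"
    if src_any and dst_any:
        return "rejected"          # never allowed, by anyone
    if dst_any and rule["src"] not in approved_central_sources:
        return "needs_approval"    # broad rule: send through review workflow
    return "approved"

scanners = {"vuln-scanner-1"}
print(review_rule({"src": "any", "dst": "any"}, scanners))            # rejected
print(review_rule({"src": "vuln-scanner-1", "dst": "any"}, scanners)) # approved
print(review_rule({"src": "admin-laptop", "dst": "any"}, scanners))   # needs_approval
```

Running every administrator-created rule through a check like this is one concrete way to implement the "review approvals" Nicholas mentions.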
And, again, let me stress, this is the right way of doing zero trust because, like, who is watching the watchman? If you don't have this kind of zero trust governance, then you're not doing zero trust properly.
Okay, great. Next question, now it's about the multi-factor authentication. So can you only apply it on privileged ports, or can you configure any kind of protocol to support that?
Yeah, so it can support any port or protocol. During the learning period, the AI will only create for privileged ports and protocols.
Again, our number one goal was to really stop attackers because if you look at the attacks today, it's all lateral movement once they're inside the environment, and nothing is stopping them from getting in. They're always going to get in on something.
Some way, shape, or form, if you have that assume breach mentality, then it's about stopping them from laterally spreading. So that was the number one goal. So kind of the side effect of building it to protect privileged ports and protocols is that you can use it for any port and protocol, and it doesn't matter about the application because this is at layer four, so it doesn't matter what's higher in the app stack, whether it's a web app, whether it's a SQL database connection or whatever, you can protect any port and protocol with the MFA capability.
Well, going back to that fear of breaking things, do you provide some kind of guidance or best practice for which ports should always be protected with MFA and which have some risks involved? I can imagine that for human identities it should probably be fine almost all the time, but can you extend this strong authentication to, I don't know, APIs, automated clients, and things like that, which could actually break some flows?
Yeah, that's kind of a deeper conversation, but the short answer is it depends, right? So we do actually have a method in our MFA methods called noMFA. So you could have service accounts and non-interactive type things still do just-in-time.
Of course, they wouldn't be MFAing because they can't MFA. They're non-interactive, right? So you can still do some just-in-time capability to protect those and ensure they're only connecting to the machines that they should be connecting to. But we're already going to learn that.
Again, we will already look at interactive service accounts. For example, I didn't show it in our rules, but we have a rule for a server that manages another server. It's a service account. It does use WinRM, and so we did allow WinRM because we learned there's already an interactive service account.
But again, we could convert that to a just-in-time capability if we wanted to. But again, we don't want to break applications. We want to make it easy.
So yeah, our guidance is: protect all your privileged ports and protocols for interactive human access. For service accounts, limit where they can actually use that network connection using the rules, which we will auto-learn for you. And then you really won't have a problem with breaking applications if you follow that model.
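The policy just outlined, interactive humans MFA on privileged ports while non-interactive service accounts get just-in-time rules without a prompt (the `noMFA` method mentioned earlier), is easy to express as a decision function. The port set and return labels are illustrative:

```python
# Example privileged/administrative ports: SSH, RDP, WinRM.
PRIVILEGED_PORTS = {22, 3389, 5985, 5986}

def access_policy(identity_type, port):
    """Pick the control applied before a connection is allowed."""
    if port not in PRIVILEGED_PORTS:
        return "allow"          # low-privilege traffic flows per learned rules
    if identity_type == "human":
        return "mfa"            # interactive users must pass MFA
    if identity_type == "service_account":
        return "jit_no_mfa"     # just-in-time rule, no prompt (can't MFA)
    return "deny"               # unknown identity on an admin port

print(access_policy("human", 3389))            # mfa
print(access_policy("service_account", 5985))  # jit_no_mfa
print(access_policy("human", 443))             # allow
```

The key design choice is that service accounts are constrained by *where* the rules let them connect rather than by a prompt they could never answer, which is what keeps automated flows from breaking.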
Okay, great. There is one really interesting question from a totally different perspective. One thing I mentioned earlier is that it has to be a ubiquitous deployment.
In a way, you already do that because you are reusing the existing firewalls, the host-based firewalls. But can you actually also incorporate other types of firewalls as well? What if your customer says, well, but we already have our Fortinet or Palo Alto or anything like that. Would you work with those as well?
No. The problem, and the reason we haven't implemented it, is that, yes, if you have a Fortinet, it's in the middle of your network, and you might be routing traffic through it. But those two clients that are plugged into the same network switch on the same VLAN, because not every machine is on its own VLAN, right, can talk directly to each other without routing through the Fortinet. So if you want to micro-segment, you have to get to the endpoint. You can't do it with firewalls in the middle of the network, though you can still have them.
We actually have most customers removing their middle-of-network firewalls, not their perimeter ones, but the ones that they have in the middle of the network, because now they're basically doing it at the endpoint. Why do you need to do it in the middle of the network and force routing traffic? It actually makes it easier because now you can go kind of back to a flat network because you're controlling it at every endpoint if you so choose, right? It's really ultimately up to the customer and their architecture and how they want to design it.
But like you said, it should be easy, and you shouldn't have to change your architecture. So if they want to keep the firewall in the middle of the network, that's fine. The problem is that doing it at the Fortinet doesn't buy them protection against lateral movement between things in the same segment, because ultimately what you have with a Fortinet in the middle is segments, not micro-segments.
So, again, you're not competing with them. I can imagine you could, but I guess you can also coexist.
Yeah, we can coexist. There might be, depending on the view, some competition, but, yes, we can absolutely coexist with them. It's just that you would manage the rules in two different places, right? You can get very granular all the way down at the endpoint, or do a centralized policy in the middle of your network.
But, again, that doesn't give you the protections inside that segment. Okay, great. And we've actually just reached the top of the hour, so there is no more time left for further questions. If someone still has a question, you are very welcome to contact us directly. I'm sure you will find our contact information in our slides, which will be available for download anyway. Thank you very much, Nicholas. Thanks to all of the attendees and the future viewers of the recording. I hope to see you some time at another KuppingerCole webinar or event.
Thanks again, and have a nice day. Thanks, Alexei. Thanks for having us.