Hi, I'm Chris Grove, technology evangelist here at Nozomi Networks. Thank you for joining us today at the KuppingerCole Cybersecurity Leadership Summit 2021, where we'll be discussing using artificial intelligence to detect anomalies in the OT process.
First, we're gonna discuss a little bit about what's driving the market to leverage artificial intelligence, but before we do, I just wanted to give you a level set on where we are and where we come from. We were founded by two gentlemen. One of them was focused on artificial intelligence and had a background and a PhD in it. The other was focused on cybersecurity and has a background and a PhD in that.
Together, they created the products that we have on the market, Nozomi Networks Guardian and Vantage. Both products leverage artificial intelligence from the ground up in order to accomplish our goals. So today's session is gonna discuss a little bit about what AI is and why products are starting to embrace it in the cybersecurity space. So first, some of the market forces that are really pushing this change boil down to digital transformation, as the factories move into the future and leverage more and more digital components.
We see the need to use more and more advanced technologies in order to interpret and process all this information and apply cybersecurity. Also IoT devices: as you know, there's gonna be about 83 billion IoT connections by 2024, according to Juniper, and 70% of those are gonna be in the industrial sector. So we have a lot of devices coming from a lot of different areas that are gonna be doing a lot of different things.
And then the convergence, which is sort of behind us now, the IT/OT convergence, where we're seeing a lot more risks and benefits and other types of things coming from IT to OT and also the other way around, from OT to IT. So there's a lot of new ground to cover in that convergence. Finally, threat and risk management is an important part of why we would bring artificial intelligence into a product in cybersecurity. So the goal of many cybersecurity products is to be able to address a lot of different industries. Although they're quite different, their challenges are different,
their regulations and their processes and the things that they do on a day-to-day basis are quite different, some of the challenges are the same and the technologies are the same. We may see similar PLCs and HMIs in various manufacturers that may be making completely different products; same thing with airports and automotive. You could potentially find the same brand of HMI or PLC in both of those environments.
Also, the attackers don't really always care if they're hitting an automotive plant or an airport; they just want to install ransomware, shut everything down, and try to make money. So the vertical for them is not very important, so it shouldn't be for us either. The cost of these attacks has gone up drastically, averaging 1.7 million per attack right now; you can see the chart on the right. When these incidents happen in the industrial space, the damage is very high because the uptime is very high.
The downtime as a result of not only the incident itself, but also the cleanup efforts afterwards, can drive up the cost very quickly. So we need great tools to be able to keep our environments safe from these types of risks, and artificial intelligence is one of them. So first let's talk about a high-level definition of what it is. Essentially, it's enabling computers to display behavior that could be characterized as intelligent.
Basically, we're trying to get computers to make decisions that seem human-like, is an easy way to say it. And if we look at what's called the AI roadmap, you can kind of see where this is going. There are a lot of different sciences and fields that make up artificial intelligence, but for today we're only gonna be talking about a couple of them, specifically machine learning and expert systems. So what is the difference between the two? First, artificial intelligence is basically the science and the engineering of making these intelligent machines.
But machine learning is basically making machines able to learn from data. You don't necessarily have to program the machine to get it to learn something; you can just feed data into it. Furthermore, if there's a mistake made in its assessment, when you correct that mistake, the machine gets better. So that's what machine learning is. So AI has many other parts, as I mentioned, but the two parts that we're discussing today are those two categories. And then within them, we're gonna hit on neural networks as well as knowledge-based inference systems.
So let's talk about AI in action for a moment and think about a car that is using artificial intelligence to drive down the street. Think about all the information required, such as the rules of driving, the direction of the paint on the road and what it means, and the box truck: whether it's a building or a truck, if it's moving, if it's parked, if it's in the way, what direction it's going, collision avoidance. There's a tremendous amount of information that has to go through a vehicle, both inputs and outputs, and quick decisions are being made instantly.
So that can be challenging to do with traditional computing means of logic, which is typically how we would approach something like this. Let's simplify: instead of thinking about how we would handle a self-driving car, let's talk about just identifying these two fish. Is it a sea bass or a salmon? If we used logic to do that, we would use our old-school if-then statements, and we would basically create a sequence of them to help define the characteristics of what would make a fish a sea bass versus a salmon. And eventually we'll get there.
We'll be able to identify these two fish, but let's start talking about thousands of fish. And then let's map those out on a graph, which you can see on the right here, where one axis is width and the other one is lightness. And there's a squiggly line going through there; that line sort of highlights the outliers. In a cybersecurity product, these would either raise false positives or, even worse, they might be false negatives. It may mischaracterize one of these salmon as a sea bass, or a sea bass as a salmon.
So artificial intelligence aims to solve this dilemma of not properly identifying these outliers using just logic. And once we teach the machine that these are actually a sea bass or a salmon that were mischaracterized, the machine gets smarter, and the next time around it won't make that same mistake again. So if you can imagine, right now this graph only has these two dimensions on it, width and lightness, but imagine expanding that out to a Z axis and even many other dimensions of data, and starting to account for length and weight and color and a whole bunch of other things.
Now we can get more and more accurate the more data we run through it, and the less likely we're gonna arrive at false positives or negatives. And then we can keep teaching this model more and more information, not just on the characteristics around the fish, but also around other things as well. So we could eventually start to identify dolphins or land animals and other kinds of things.
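To make that contrast concrete, here is a minimal Python sketch of the idea just described: hand-written if-then rules versus a model that learns its own boundary from labelled data. The feature values, thresholds, and the use of scikit-learn are illustrative assumptions for the sake of a runnable example, not anything from the talk or from Nozomi's products.

```python
# Illustrative sketch only: hard-coded rules vs. a learned classifier on
# two hypothetical features, width and lightness (0 = salmon, 1 = sea bass).
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based(width, lightness):
    # Old-school if-then logic: fixed thresholds, brittle on outliers.
    if width > 5.0 and lightness < 3.0:
        return "sea bass"
    return "salmon"

# Hypothetical labelled measurements standing in for "thousands of fish".
rng = np.random.default_rng(0)
salmon   = rng.normal([3.5, 4.0], 0.6, size=(500, 2))
sea_bass = rng.normal([6.0, 2.5], 0.6, size=(500, 2))
X = np.vstack([salmon, sea_bass])
y = np.array([0] * 500 + [1] * 500)

# The model fits its own decision boundary; correcting mislabelled outliers
# and refitting is how it "gets smarter" over time.
model = LogisticRegression().fit(X, y)
print(model.predict([[4.8, 3.1]]))  # 0 = salmon, 1 = sea bass

# Adding more dimensions (length, weight, color, ...) is just more columns
# in X; the same fit-and-predict workflow still applies.
```

The point is not the particular classifier, but that new examples improve the model without anyone rewriting the rules.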
And at one time, the United States Postal Service was faced with the challenge of trying to read and identify digits used on envelopes, and not just handwritten digits, but printed and smudged and rain-dropped, and all these different languages and variations of ways to write addresses. And we have 5,000 city names and state names and 10,000 zip codes and 50,000 different characters.
So it became very complex to use OCR to accurately determine what was written on these envelopes using just logic. But using artificial intelligence, we can actually break a digit down into 784 different input neurons, which you can see here, or functions. And using those, we can navigate over to the right, which are our classes. And by looking at the characteristics of the numbers and using these neural networks, we can determine that a two is a two much more accurately, quickly, and reliably than using old ways of trying to program it in logic.
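As a rough illustration of that architecture, here is a minimal, hypothetical Python sketch of a network with 784 inputs (one per pixel of a 28x28 image) and ten output classes. The random stand-in data and the use of scikit-learn's MLPClassifier are my assumptions to keep the example small and runnable; they are not the Postal Service's actual system.

```python
# Illustrative sketch only: 784 input neurons (28x28 pixel values) feeding
# ten output classes (the digits 0-9) through a small hidden layer.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 784))           # stand-in for 1,000 flattened digit images
y = rng.integers(0, 10, size=1000)    # stand-in labels 0-9

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=50)
net.fit(X, y)                         # weights are learned from examples, not coded by hand

print(net.predict(X[:1]))             # predicted class for the first "image"
```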
Now, imagine if we had to introduce a new character or something: rewriting all of that logic to introduce a single character would be practically impossible. So being able to teach a system what a new character looks like is a much easier task when it comes to using neural networks. And then it sounds like AI is the catch-all for cybersecurity, you just dump all the data in there and it does everything for you, but that's definitely not the case. Humans are really good at identifying things generally; a baby can look at a cat and know that it's a cat.
It doesn't need to see many, many cats over and over to determine what a cat is. It's really good at generalizations.
In fact, you could probably show it a cartoon of a cat once, and then when it saw a real cat, it would instantly identify it.
However, machines can't do that. They can ingest large amounts of information. For example, they can ingest a million cats and easily identify the one cat with different colored eyes, which would be a challenge for a human, but they can't learn what a cat is from just a single image. So together, humans and AI are really the key to moving forward and having the benefits of both. So think of AI as a tool, not an end-all solution; it is not a silver bullet. It is a very economical and efficient tool to put into the hands of a human, and when used properly it can yield really good results.
So AI has come a very long way. You may have first started to hear about it years ago in movies or science fiction, and maybe the first mainstream recognition of AI was IBM's Deep Blue computer, which played against Kasparov at chess. They played a few rounds and had different winners and so forth. But the main thing to know about it is that I may have been able to beat Deep Blue; anyone may have been able to beat Deep Blue. This is because the machine was built from the ground up to beat Kasparov, not to be a good chess player.
It was built to beat Kasparov, and the AI was designed into the chips. The hardware, the operating system, and all of the programming were built around a single player, and it did a good job at it, and it learned how to play against him. But what's amazing is how far we've come since then. It's not a lot of time between 1997 and 2016, but there was a machine in 2016 that leveraged AI and learned how to play a game called Go. Go is much more difficult for a computer to win. First of all, it's a lot more complicated.
It has more moves, but it's also naturally resistant to the way computers work, which makes it more difficult to build a computer that can win at this game. But using AI, they did win the game. And not only did they win that game, but this AI was able to go on and learn how to play other games, such as chess, and then even played video games like Atari. And then they were able to expand that AI even further, so that it was able to learn how to play games without ever being taught to play the game, meaning it was just thrown out there onto the tennis court.
It played and it learned how to play, and it learned the rules as it played. So it didn't really need to be pre-educated with how to do everything; it just needed to start living in the experience, and it was able to pick it up. That's very close to the human experience. And since 2016, you wouldn't believe how far we've come. Unfortunately, the adversaries have also come far. There have been a few cyber attacks out there that are already known to have been AI-assisted, and some big brands have been affected by these attacks.
If you think about how artificial intelligence can help the attacker, first, it's gonna be speed. So today they have to adjust the logic if something doesn't go right, or to get more infections out there they need to make changes in the way that a worm or something automated has to operate. But by putting AI in there and allowing it to think like a human, they're able to short-circuit that whole part of the process, and the malware or the worm or whatever it is they're putting out there is gonna be able to get further, faster.
And then the penetration: once it gets there at the front door, it's going to be able to make better decisions about what vulnerabilities to leverage, which ones are gonna get it deeper into the network, which ones may land it, you know, in a watering hole or something. So being able to make more intelligent decisions about which direction to go, and maintaining persistence in the network, are also going to be some of the benefits of the AI that they will be leveraging.
And also the efficacy of the attacks themselves: being able to be more focused, with laser-pinpoint accuracy on exactly what to do, is going to raise less alarm around their activity and allow them to operate with more stealth. So these are some of the things that we need to be on the lookout for, and we need to get there before they do. So how do we do this? First, it's important to know there is no artificial intelligence without an information architecture. You can't bolt it on later. You can't just add it in. It needs to be baked into the product from the beginning.
And if we look at what cybersecurity products generally try to do in the marketplace, it's first to identify all of the assets in the industrial environment and get deep-level details about all of them: from firmware versions to the cards, the hardware modules that are installed, the MAC addresses, the programming that is on those cards, the OS versions and configurations of the devices themselves, and then the vulnerabilities that exist within those devices.
Being able to monitor all of the traffic and all the connections to understand the risks to them and the criticality of these devices within the operational processes.
And then furthermore, being able to identify anomalies and threats on these networks and to these devices, and understanding what to do afterwards: what kind of information does an operator need, who needs it, and what is going to potentially be impacted. And then being able to scale this out, because with a lot of these industrial environments, you know, you might think about these benefits on a small scale, but many of these environments have dozens or hundreds or thousands of product lines and factories going.
So these solutions need to be able to scale out all of this information in a very large way. And when we think about anomalies, first, all anomalies are not created equal. We don't need an alert for every single network anomaly; we need them for the important ones, and this is where AI really comes in and helps us out. So let's look at what types of anomalies we may have on the network.
First, a new node on the network is definitely something easy to identify. We know who should be there, so when we see something appear that is new, that's gonna be an anomaly. Also, an existing node that we do know about communicating to another node that we know about, but it's the first time they've ever communicated. So they're both approved devices on our network, but the direction of the communication could be different.
Also, an existing connection may have communications to a different layer of the Purdue model, or we may find different behavior in a device that we know about, like an HMI to a PLC that suddenly begins to write programming to the PLC rather than just reading a value. Also the other direction: what if typically the engineering workstation or an HMI is reading, and all of a sudden it begins to write? That would also be considered an anomaly. Or within the industrial protocols themselves, if we see a different operation in there: typically, perhaps, I set the value of a set point,
and in this particular case I read the value. Or setting the value itself could be an anomaly: perhaps I have a machine and it can only go to 100, and if I send a value of 125, that's gonna be an anomaly. So this is a tremendous amount of information required across the entire industrial enterprise about all of the devices and all of the values being sent. And even if the values slow down, for example, if I am used to seeing the HMI read a value once per second and it went to once every 10 seconds, that's also an anomaly.
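Pulling those examples together, here is a minimal Python sketch of what baseline-driven checks for these anomaly types could look like. The asset names, event fields, and thresholds are hypothetical, and this is not how Guardian or Vantage are actually implemented; it only illustrates the categories of anomaly just listed.

```python
# Illustrative sketch only: a learned baseline of known nodes, known
# (source, destination, operation) links, and allowed set-point ranges,
# checked against each new observation on the network.
baseline_nodes = {"hmi-01", "plc-07", "eng-ws-02"}        # known assets
baseline_links = {("hmi-01", "plc-07", "read")}           # previously seen communications
value_limits   = {("plc-07", "setpoint"): (0, 100)}       # learned value ranges

def check(event):
    """event: a dict with hypothetical fields src, dst, op, tag, value."""
    alerts = []
    if event["src"] not in baseline_nodes or event["dst"] not in baseline_nodes:
        alerts.append("new node on the network")
    elif (event["src"], event["dst"], event["op"]) not in baseline_links:
        alerts.append("first-time communication or new operation (e.g. read -> write)")
    limits = value_limits.get((event["dst"], event["tag"]))
    if limits and not (limits[0] <= event["value"] <= limits[1]):
        alerts.append(f"value {event['value']} outside learned range {limits[0]}-{limits[1]}")
    return alerts

# An HMI that normally only reads suddenly writes 125 to a 0-100 set point:
print(check({"src": "hmi-01", "dst": "plc-07", "op": "write",
             "tag": "setpoint", "value": 125}))
```

A real system would learn these baselines from observed traffic rather than hard-coding them, and would also track timing behavior, such as a read rate dropping from once per second to once every ten seconds.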
So we need to be able to track all this information and navigate using these artificial intelligence tactics and techniques so that we can train these systems to do better and better cybersecurity. So if we think about where an anomaly may happen on a network, sometimes they may happen outside of the firewall. So looking for anomalies at the corporate firewall level could, you know, have some benefits; for example, if typically my remote connection comes in from the neighborhood, but now it's coming in from overseas, that might have value.
But unfortunately in OT, a lot of these anomalies can happen deep inside of the industrial network. We may not get so lucky as to see our SCADA server reach out to the internet to a foreign country. We may have something more likely, like an insider threat or an APT that got through a different way, phishing or something, and we have an anomaly that happens between the SCADA server and the industrial control devices themselves.
In that case, we're gonna really need to be inspecting deep into the industrial protocol, not only the network and the behaviors of all the devices, but the industrial protocol itself and everything within it. So using AI in cybersecurity solutions helps to speed the detection of process-based anomalies, and you can have a lot less downtime by detecting these things early.
You can avoid safety issues and equipment failure, and it makes monitoring complex industrial control systems a lot easier. Having a strong information architecture from which to do your forensics and your research, and to query and navigate across devices and work with these large data sets, is crucial, especially during a time of crisis or an outage when you're trying to figure out what happened. So for more information about how AI can help cybersecurity in your industrial control systems, we have a couple of things that you can find available on our website.
One is a PDF use case around preventing equipment failure. Another one is a white paper on advancing ICS security and visibility with the Nozomi Networks solution. We also have a solution brief on OT and IoT security and visibility, and then our Nozomi Networks Labs also has an OT/IoT security threat report that we publish twice a year, and the most recent one is chock-full of valuable information about risks to these environments. Thank you very much for your time. Really appreciate it. We hope to stay in touch, and I'll stick around for Q&A. Take care, stay safe. Bye.