Now, let's change topics and dive into artificial intelligence. Our next speaker is, in my opinion at least, one of the most brilliant cybersecurity experts globally. He's the author and creator of the Cyber Defense Matrix and the DIE Triad. He's a board member of the FAIR Institute, a senior fellow at GMU Scalia Law School's National Security Institute, a guest lecturer at Carnegie Mellon, and an advisor for many startups. I think he even founded his own startup. He has also gathered CISO experience at a couple of organizations.
He holds over 20 patents and was recognized as one of the most influential people in security by Security Magazine and as Influencer of the Year by the SC Awards. I'm very happy to be able to welcome Sounil Yu on stage. Thank you very much. Thank you, and thank you for the warm welcome as well. I hail from Washington, D.C., in the United States, so I apologize that I can only speak one language, because I'm American. But anyway, let me share some perspectives with you. I think Bertolt already covered my background.
I have a deep background in cybersecurity. But when it comes to AI, I want to level set a little bit here, because a lot of people think they know a lot about AI. With tools like ChatGPT, we seem to have a lot of experts really quickly. I am not one of them. I have just barely passed the peak of Mount Stupid, and I'm hoping to help you understand how I got past it. The way that I did it is by looking at mental models. Mental models are really powerful because they help us understand the world and anticipate the future.
There are a lot of mental models, and what I'm going to share with you is only a small handful of them. But you'll see a little later how useful these mental models are as we talk about how to apply them to problems like security and AI. So let's dive in. The first mental model I'm going to share with you is called the DIKW Pyramid. It stands for Data, Information, Knowledge, and Wisdom. What we have seen is a steady progression of technologies that have moved us up this pyramid, and I would argue that at this point in time we are in the knowledge age.
We've moved up from data. We've moved up from information. The technologies have moved us now to the knowledge age, where if you're embracing AI, you're really trying to become a knowledge-driven organization.
Now, let's suppose I asked you to go define the problem space for AI. If you approached that problem from a blank sheet of paper, you might struggle to understand what the problem space for AI is. But this pyramid actually gives us a nice anchor. This mental model gives us an anchor because we know the problem space for data: problems like data engineering, data privacy, data security. These are problems that we have dealt with for data. Apply those same suffixes to information, and many of them still work.
And then when you apply them to knowledge, all of a sudden you see the problem space for AI. If I can replace the word AI with the word knowledge, this pyramid, this mental model, helps me define what that problem space is.
So, I'll give you an example. Let's take data quality. You know what data quality is. Everyone's familiar with data quality.
So, what exactly is a knowledge quality problem? Well, you've heard it a couple times already. Martin mentioned it earlier. Hallucinations. That's a knowledge quality problem, right?
Okay, so how do you deal with hallucinations? Again, if you look at it as a blank sheet, you may struggle. You may say, well, I don't know. That seems like a brand new thing. But if you see it as a quality issue, then you can say, what were the governance processes I had for data quality? Can they apply to knowledge quality? What were the roles and responsibilities we had for data quality? Might they apply for knowledge quality? What were the technical controls we had for data quality? Do they apply for knowledge quality?
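To make that concrete, here is a minimal sketch of what a knowledge-level technical control might look like, analogous to a data quality validation rule but applied to an LLM answer rather than to a database field. The grounding heuristic, the threshold, and the Answer structure are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: a "knowledge quality" gate, analogous to a data quality rule.
# Instead of validating a field against a schema, we validate an LLM answer
# against the source passages it was supposed to be grounded in.
# The overlap heuristic and threshold are illustrative, not a real product control.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    sources: list[str]          # passages retrieved for the question


def grounding_score(answer: Answer) -> float:
    """Fraction of answer tokens that also appear in the cited sources."""
    answer_tokens = set(answer.text.lower().split())
    source_tokens = set(" ".join(answer.sources).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & source_tokens) / len(answer_tokens)


def knowledge_quality_gate(answer: Answer, threshold: float = 0.6):
    """Withhold (or flag for review) answers that are weakly grounded in sources."""
    if grounding_score(answer) < threshold:
        return "Low-confidence answer withheld; routed to human review."
    return answer.text
```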
Well, it turns out that our people and process controls can roll forward, but our technical controls fail, okay? And to understand that, let's take another example. Knowledge privacy. What exactly is knowledge privacy?
Well, about 15 or 20 years ago, Facebook rolled out something called the social graph, or rather Graph Search. And it was really powerful. It enabled us to find things that we had in common with our friends: favorite restaurants, favorite movies, favorite books, what their sexual orientation was, what their political voting patterns were. That was not a violation of data privacy. That was a violation of knowledge privacy. There were no technical controls you could implement at the data privacy level to affect what was inferred at the knowledge privacy level.
We needed new ways of thinking about how to approach the problem. And that's the same problem that we're going to have with knowledge security, or AI security, or LLM security, okay?
So, one of the problems that we're seeing pretty quickly is when it comes to rolling out, for example, large language models for enterprise search. What that does is it enables you to flatten your knowledge space. Like a flat network, a lot of things become reachable. That's great if you're trying to find useful things to help you get your job done to save time and toil. If you want to find out the status of a certain project or get meeting notes, this will help you do it really quickly. But like I said, it flattens your knowledge space.
So, a lot of things become reachable, including things that are potentially hazardous to the organization. This is a new type of knowledge security problem.
So, how do we tackle this problem? Again, the technical controls that we apply at the data and information level are what we have today. And that's typically what we say, okay, well, I know what we've been doing before. We go fix our permissions or go classify our information. But when I do that, what I'm essentially doing, according to this mental model, is I'm squeezing this bottom part of this pyramid. This mental model helps us understand what the effect is of applying the wrong controls at the wrong layer. Because what happens next is pretty obvious, right?
The value of the large language model shrinks as well. And we have this bad tradeoff between trying to address the harm that can come from these large language models while at the same time reducing their utility. I hope this mental model gives a really clear depiction of the challenges that we're going to face, because we're applying controls at the wrong layer. We're using the wrong type of controls. We need knowledge-centric controls for a knowledge-centric problem. Okay?
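As a rough illustration of what a knowledge-centric control could look like for LLM-based enterprise search, here is a small sketch that evaluates the assembled answer against knowledge-level policies for the person asking, rather than only squeezing permissions on the underlying files. The topic list and the classify_topics() helper are hypothetical placeholders.

```python
# Minimal sketch of a knowledge-centric control for LLM-based enterprise search:
# instead of only tightening file permissions (a data-layer control), evaluate
# the assembled answer against knowledge-level policies for the person asking.
# The restricted topics and the classify_topics() helper are hypothetical.

RESTRICTED_TOPICS = {
    "pending layoffs": {"hr", "executives"},
    "acquisition plans": {"corp_dev", "executives"},
    "incident postmortems": {"security"},
}


def classify_topics(answer_text: str) -> set[str]:
    """Hypothetical topic classifier; a real one might be another model."""
    text = answer_text.lower()
    return {topic for topic in RESTRICTED_TOPICS if topic in text}


def knowledge_access_gate(answer_text: str, user_groups: set[str]) -> str:
    """Block answers whose inferred topics the asker is not entitled to see."""
    for topic in classify_topics(answer_text):
        allowed_groups = RESTRICTED_TOPICS[topic]
        if not (user_groups & allowed_groups):
            return "This answer touches a restricted topic and cannot be shown."
    return answer_text
```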
So, that's model number one. The second model needs a bit of lead-up.
So, let me explain this a little bit. This is from a book called A Brief History of Intelligence by Max Bennett. He's a technologist, not a neuroscientist, but he wrote this really great book about five stages of brain evolution. The first stage is steering: I can go left, I can go right, and if there's something positive, I go towards that direction. The next stage is reinforcement learning: okay, it's positive every time I go right.
So, I'm going to keep going right. The next stage after that is simulating, and this is where the most advanced AI systems are today. But you can tell from the words already that reinforcement learning is primarily where most of our AI systems are. Large language models are reinforcement learning based. With simulating, we're at the very cusp, the very beginning of this in terms of our most advanced AI systems. The ones that, for example, beat the world champion at Go are based on simulating. The stage after that is called mentalizing.
There is also a lot of research here going into theory of mind, or inferring the intent of others. And the last stage after that is, interestingly, language. We think we've gotten there already, but actually not really. Language is the last stage. But I want to focus on two of these stages: reinforcement learning and simulating. Because there are interesting flaws or shortfalls associated with reinforcement learning. Things like: it's prone to bias, it's overconfident, it's unexplainable. Right? These are things that we all know about AI.
But it's actually really fascinating, because this also maps to another mental model, one popularized by Daniel Kahneman and Amos Tversky. If you're familiar with the concept of system one and system two thinking, it was captured in the book Thinking, Fast and Slow. First of all, Daniel Kahneman is a Nobel Prize-winning economist.
Actually, he's not an economist. He won a Nobel Prize in economics, but he was a psychologist. And what he did was identify these two modes of thinking. System one is fast thinking; if you've ever read Malcolm Gladwell's book Blink, that thin-slicing is what system one thinking, and reinforcement learning, is really about. System two thinking is very deliberative. It's very heavy, and it's very challenging for us to work through.
So, we typically don't do that. Rather, we typically depend upon system one thinking. But it has these kinds of flaws. What's really fascinating about the book Thinking, Fast and Slow is that if you read it, it talks about all these system one flaws. But guess what? Those system one flaws are the exact same flaws as most of our AI systems. Furthermore, in every chapter of the book, he talks about system two controls, controls that say: hey, if you want to avoid these system one flaws, here are the kinds of system two controls to consider. Let me restate what I just said.
If you want a book on AI risks and controls, read this book, but see it from that perspective. All of a sudden, you see exactly those kinds of controls and flaws, and you can anticipate them for the future as well. I thought this was a remarkable discovery, really fascinating, because the book was written in 2011, and a lot of the research around cognitive biases has been around for a long time. It just maps really nicely to the challenges that we're facing in AI as well. All right.
So, now let's look past that. We'll look at simulating. And what are some of the shortfalls of simulating?
So, we're starting to see these types of systems, and probably soon enough these types of systems will really take hold. What are some of the shortfalls?
Actually, what are the shortfalls that we have when we oversimulate? When we oversimulate over and over and over again, the flaws that we end up with are things like PTSD, anxiety, mental health issues.
So, what are the controls? There are a lot of controls, cognitive behavioral therapy and that kind of thing. But one of the controls that we typically want to put in place is decision-making controls.
And so, to that end, let me also offer my third mental model. It's a mental model that maybe hopefully many of you are familiar with. It's called the OODA loop, observe, orient, decide, act. I made some slight adjustments to call it sensing, sense-making, decision-making, and acting. And the basic idea behind this is we can break up how we look at AI using this particular mental model.
And again, this is a decision-making mental model. So what I'm offering is a way to think about how you control system two types of systems. Because guess what we're going to end up doing? We're going to end up looking at AI and robotics, or AI and automation, and conflating them together. But let's keep them separate for a moment. What I'm going to say is that when it comes to the OODA loop, observe is really not where AI comes in. Where AI comes in is in orienting, or sense-making. Sense-making is where AI really factors in.
It does factor in a little bit in terms of helping us make decisions, in terms of saying, here's some options for considering for decisions. But the thing that we don't want to assume is that AI is also supposed to help us with acting.
Now, I want to separate AI from automation. We can apply automation across all these different stages, but I want you to keep automation and AI separate in your mind. I want you to have a mental model that says AI is completely separate from the acting part of the equation. Because we can automate everything here, but do you actually want the machine to automate everything all the way through? And guess what we're doing today? We're basically doing that over and over again. If you've heard of agentic AI, that's really kind of the whole point of it.
We're automating the sensing, the sense-making, decision-making, and acting, and stitching them together to have a system go all the way through without stopping. If that sounds dangerous to you, well, there's a lot of stories on how things have gone wrong.
Like, what could possibly go wrong? Well, lots of things have gone wrong. Because of that, we want to be really cautious, especially when it goes straight from decision-making to acting.
Now, let's just be realistic here. We're going to have a lot of cases where we're going to skip those steps, okay? We're going to have lots of cases where people are going to say, hey, you know what? We should go straight from sensing to acting. Okay.
Well, if we're going to do that, let's take an example we've had with threat intelligence. Here's a threat feed, go act on it. Here's a feed of sensed data, and I would like for you to block 0.0.0.0/0. For those who don't know what that means, it means block the entire internet. Okay. No problem, right? Okay.
Well, that ends up with a lot of problems. Go block Google.com because it's serving malware, and, you know, again, bad things happen.
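For that sensing-straight-to-acting case, here is a minimal sketch of the kind of guardrail that could sit between a threat feed and the firewall. The blast-radius limit and the never-block list are illustrative assumptions, not a real product's logic.

```python
# Minimal sketch of a guardrail between a threat feed (sensing) and the
# firewall (acting), for the case where the intermediate steps are skipped.
# The blast-radius limit and the never-block allowlist are illustrative assumptions.

import ipaddress

NEVER_BLOCK = {"google.com", "microsoft.com", "our-own-domain.example"}
MAX_ADDRESSES = 256            # refuse to block anything broader than a /24 at once


def safe_to_block(indicator: str) -> bool:
    """Reject indicators whose blast radius is obviously too large."""
    if indicator.lower() in NEVER_BLOCK:
        return False
    try:
        net = ipaddress.ip_network(indicator, strict=False)
        # "0.0.0.0/0" means the entire internet; refuse anything that broad.
        return net.num_addresses <= MAX_ADDRESSES
    except ValueError:
        return True            # not an IP/CIDR; domains were already checked above


def act_on_feed(indicators: list[str]) -> list[str]:
    """Return only indicators that pass the guardrail; the rest go to human review."""
    return [i for i in indicators if safe_to_block(i)]


# Example: "0.0.0.0/0" and "google.com" are filtered out, a single host passes.
print(act_on_feed(["0.0.0.0/0", "google.com", "203.0.113.7"]))
```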
So, what are the controls that we need? We need controls that allow us to stop very quickly and maybe even put things into reverse gear. If we include sense-making, then we need other types of controls. We need lots of regression testing.
We need to validate our assumptions and document our processes. And when we add decision-making in, when we have the machine make decisions for us and choose among options, we need additional controls there, specifically around who is going to be the person in charge of the decision-making apparatus that we're automating. Because what we really need is somebody who is going to be held responsible and accountable. They need to have that accountability.
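As a sketch of what such a decision-making control might look like in an automated pipeline, the snippet below attaches a named accountable owner to every decision and pauses anything above a risk threshold until that person approves. The risk scale and the Decision fields are assumptions for illustration.

```python
# Minimal sketch of a decision-making control for an automated OODA pipeline:
# every automated decision carries a named accountable owner, and anything above
# a risk threshold stops and waits for that person's explicit approval.
# The risk scale and Decision fields are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    risk: int                  # e.g., 1 (low) to 5 (irreversible)
    accountable_owner: str     # the human who answers for this decision


def execute(decision: Decision, approved_by: str | None = None) -> str:
    if not decision.accountable_owner:
        return "Refused: no accountable owner assigned."
    if decision.risk >= 3 and approved_by != decision.accountable_owner:
        return f"Paused: awaiting approval from {decision.accountable_owner}."
    return f"Executing: {decision.action}"


# Example: a risky block action waits until the named owner approves it.
d = Decision(action="block subnet 203.0.113.0/24", risk=4, accountable_owner="soc_lead")
print(execute(d))                          # paused
print(execute(d, approved_by="soc_lead"))  # executes
```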
Otherwise, who knows what might happen? And what's interesting about these three stages, when we skip them, is that they also map nicely to those first three stages: steering, reinforcement learning, and simulating. All right.
So, those were the five breakthroughs, but let me now offer the fourth mental model. And to understand that, I have to talk about this next stage. This next stage is called mentalizing, and it's about theory of mind, inferring the intent of others. It's often argued that this is the most important facet for AI safety, because when we talk about AI safety, one of the things we need the machines to understand is our intent. But how will a machine understand our intent? How do we define that? How does it infer what we have in mind?
And what's really fascinating to me is what happens when we think about how the different types of controls stack. Let me explain this quickly. I mentioned earlier that reinforcement learning is like system one. System one controls what I'm calling system zero, and system zero is steering. System two controls system one.
So, in other words, simulating controls reinforcement learning. By the way, simulating also includes things called mental models.
So, what I'm sharing with you is a way to simulate, to make sure that what you find to be true in reinforcement learning is actually true because you have a mental model of the world. Now, if you know George Box's quote, "all models are wrong, but some are useful," not every mental model works.
So, you have to have a wide range of mental models in your mind when you simulate. And simulation and system two and mental models are very tightly linked together.
So, next, what controls system two? Well, what I'm going to call system three. System three is mentalizing. But what is the control for mentalizing?
Like, what's the actual control? It turns out that what enabled mentalizing to emerge is social dynamics: the notion of being able to determine what's right and wrong, not just doing something based on statistics.
So, this ability to determine what's right and wrong is where mentalizing comes in. And if you think about the types of controls that we have for social dynamics, or politics, the question I would ask is: what are political controls?
Well, in the U.S., we have the Constitution. The Constitution establishes three co-equal branches of government. And the reason we have the Constitution is that we needed something to govern the thing that governs us, the government. We want something that allows the people to control the government, and likewise we need something that allows the people to control the machine.
So, in the same way, we have to have this sort of control for these AI systems. Now, I was thinking about what types of controls we have. We have a similar three-way balancing function within organizations. We have this for governments, but what does it look like for organizations? As I looked around for that, I found my fourth mental model, which is called Westrum's typology of organizational culture.
Ron Westrum wrote this about ten years ago, and he identified three organizational typologies. The first is called pathological, where information is hidden, messengers are shot, people don't like new ideas, and cooperation is low. The next one is called bureaucratic, where these things are tolerated, but basically everything is slowed down.
Again, this was written ten years ago. The third stage, the third type of organization, he calls generative. Isn't that interesting? He calls it generative, okay?
So, we have generative AI, of course. And in this typology, the generative part of the organization is the money-making part of the organization. They're making money and trying to move as fast as possible. And what do we expect generative AI to do? We think it's going to make us a lot of money, right? A lot of companies are hoping it will save time and drive the business forward. But what exactly is our bureaucratic AI, or what's the bureaucratic part of our business? The bureaucratic part of our business is HR. It's legal, okay?
But think about bureaucratic AI as something that slows down the AI system itself, so that we can come in and make decisions or interrupt some of those decisions. But now the question is, what exactly is pathological? The way that Ron Westrum described this typology, he said that every organization is one of these three. I would say that's not actually true. I think every organization has a combination of all three, in the right balance.
And so, the question is, do we have the right balance? Do we have the right balance of the generative use of AI, a bureaucratic AI, and a pathological AI? Okay.
Now, what's the pathological side of the business? I would say it's actually security.
So, security is the pathological side. We tend to hide information, we put blame on people, and those sorts of things. But when we talk about a pathological AI, we're not talking about an AI that is pathological toward humans, but rather toward the generative AI. We need something that pulls the kill switch, that says: hey, generative AI, I need you to stop. A bureaucratic AI that says: hey, slow down. And a generative AI that tries to make as much money as possible. And the question is, what's the right balance? For every organization, it may be slightly different.
If you're a very large critical infrastructure provider, I may not want you to be very generative. In fact, I may want you to be very bureaucratic and even pathological. But if you're a startup, maybe being very generative is the right balance. Every organization is going to have a slightly different balance. But what is that balance? If you don't have the right balance, things can go sideways.
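To illustrate how that balance might be expressed around a generative agent, here is a speculative sketch in which a bureaucratic layer slows proposed actions down for review and a pathological layer can pull the kill switch, with the balance set per organization. The profiles, delays, and risk thresholds are invented for illustration, not a prescription.

```python
# Speculative sketch of the three-function balance wrapped around a generative agent:
# the generative part proposes actions, a bureaucratic layer slows them down for
# review, and a pathological layer can stop everything via a kill switch.
# The profiles, delays, and thresholds are invented for illustration.

import time

PROFILES = {
    "critical_infrastructure": {"review_delay_s": 5, "kill_on_risk": 2},
    "startup":                 {"review_delay_s": 0, "kill_on_risk": 5},
}


class GovernedAgent:
    def __init__(self, profile: str):
        self.policy = PROFILES[profile]
        self.killed = False

    def kill_switch(self) -> None:
        """Pathological AI: stop the generative side entirely."""
        self.killed = True

    def run(self, proposed_action: str, risk: int) -> str:
        if self.killed or risk >= self.policy["kill_on_risk"]:
            return "Stopped by kill switch."
        # Bureaucratic AI: deliberately slow down to leave room for human review.
        time.sleep(self.policy["review_delay_s"])
        return f"Generative action executed: {proposed_action}"


# Example: the same action is tolerated in a startup profile but killed
# under a critical-infrastructure profile.
print(GovernedAgent("startup").run("draft customer email", risk=2))
print(GovernedAgent("critical_infrastructure").run("draft customer email", risk=2))
```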
So, just to summarize, I shared with you four mental models. And I hope these mental models help you understand, again, how to think about what's coming in AI, whether it's the DIKW pyramid, the system one and system two thinking, the OODA loop, or finally, thinking about the three types of organizational controls. And with that, thank you very much. And I'll turn it over to Bertolt. It was nice having you on stage.