My name is Rachel Swissa. I'm from the national security program, which actually serves the whole security community in Israel.
It brings together the highest ranks: the police's highest ranks, intelligence's highest ranks. What we do, and this is where I belong, is run simulations with one objective, one goal: to let different languages play a role for an hour and a half, during which cybersecurity personnel clash with someone like me, a social scientist. So it really is a different language. And as a social scientist I have the privilege to tell you that we are in an era when we speak different languages, but we still meet somewhere.
We meet somewhere. So on this topic today, one of the things I'm trying to do is to make sure that these senior people, who come from very serious security platforms, do not forget what the humanities have already given us about strategy.
And what I mean is strategy in the real world: how relevant is it? Is deterrence, which we have known for so many years, relevant to cyber? Yes or no? If yes, how?
And then of course the concept of decision making, on which a great deal has already been written, especially the one mentioned yesterday: learning the adversary, learning the opponent.
And when they tell us social scientists, "you should bring all the security data," they want everything on the table, which is what I mean by mapping the comprehensive cyber factor. One artificial assumption results: there is no such thing as a target, and there is no such thing as an attacker. To me, in terms of the cognitive process, both are the same, and I will explain what I mean, because in terms of national security a state can be a target and a state can also attack. Now, how do I profile a state as a cyber actor?
That is the question. So we go to these cognitive systems.
First of all, I'll give some definitions. In every security definition we always make this distinction between a minimal definition and a maximal definition, and it has tremendous implications when we want to analyze something and break it into fragments. If we speak minimally, it is really something focused.
So if I go to the general definition of cognition:
"The mental action or process of acquiring knowledge and understanding through thought, experience, and the senses": almost a philosophical definition. But the last one, which narrows the general definition down to cybersecurity, speaks about a mental model, about a cognitive map: people's perception and cognition of the concepts behind our security decisions.
That it is about a security decision tells me to prepare for the specific biases that occur when we speak about security, because we have different patterns of decisions; it very much depends on the issue. When you come to security, there are new biases.
What I say here is a general statement, though I know it should be elaborated: at a cybercrime scene we always make this distinction between the motivations and the cognitive processes of a cybercriminal, but actually they cannot be separated.
For different reasons, one of which concerns the interface. There is a very interesting interface between what we call mental models and cognitive motivation, at least in the latter, a term taken entirely from the social sciences but one that brings forward variables that can be quantified: expectancy-value theory, for instance, for an attacker; attribution theory, which law enforcement agencies deal with a great deal; cognitive dissonance, for instance, for a victim; self-perception, which applies to everybody at the cybercrime scene; and self-actualization.
All of these relate to all the actors of a cybercrime scene, any cybercrime scene. This is why I make no distinction between attackers and targets. Why is that useful? Because we can learn from one about the other: if I want to understand a crime, I have a lot of information in the victim, and vice versa.
The third point is one challenging question that we usually build a simulation around, and there are these two assumptions: that all cybercrime actors are involved in cognitive processes, all of them, and that they share a similar interactive learning process, one common to all of them, in spite of the fact that they have conflicting goals and different cognitive maps.
Let me give one example of this similar interactive learning process.
In a situation of prediction, you might get at least two results: either your prediction correlates with the actual behavior of your enemy, your opponent, the attacker, or it does not. These two cases are very easy to understand. If it does not correlate, the human (not the AI; this is what they keep telling us today, that maybe an artificial agent, a robot, might understand you better), the human mind is very much affected by these two options. If he does not succeed in predicting the actual behavior of the attacker, he experiences what we call cognitive dissonance, and it is a bad influence. For instance, take state actors in a conflict: if I do not succeed in predicting the behavior of my enemy, and instead of doing A he does B, he might seem much more threatening than if I had actually predicted the right behavior.
Of course, there is a theory that follows this and tries to understand the interaction between rivals from the beginning of the conflict through to its possible resolution or management. But what we have here is this: all decision makers make predictions, and predictions become really interesting when they do or do not correlate with reality.
That is very much a fact, and it already becomes an input into the following decision. What I mean is that we are working with sequential decision making, in which all decisions are actually an ongoing process, and you should identify the type of mechanisms, the iterations going from one decision to the next, concerning your prediction of the other.
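Purely as an illustration (the talk gives no formal model), this feedback loop can be sketched in code: a simple frequency-based predictor guesses the opponent's next action, and each correct or failed prediction feeds back into the next round's threat assessment. The predictor, the function names, and the update factors are all my own assumptions.

```python
from collections import Counter

def predict(history):
    """Frequency-based guess: the opponent's most common past action."""
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]

def run_rounds(observed_actions):
    """Sequential decision making: each round's prediction outcome
    (correlates / does not correlate) becomes input to the next round."""
    history = []
    perceived_threat = 1.0  # arbitrary baseline
    for actual in observed_actions:
        guess = predict(history)
        if guess is not None and guess != actual:
            # Failed prediction: the opponent seems more threatening
            perceived_threat *= 1.2
        elif guess == actual:
            # Correct prediction: the assessment relaxes slightly
            perceived_threat *= 0.95
        history.append(actual)  # the outcome feeds the next decision
    return perceived_threat
```

An adversary who keeps switching actions defeats the predictor and drives the perceived threat up, while a predictable one lets it relax, mirroring the dissonance effect described above.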
So that is the first assumption. But if we take all of them, for instance cyber criminals (who can be insiders and outsiders), victims, law enforcement agencies and others, and IT personnel, they share one question: how do they make decisions? One decides how to attack the other. The second makes the decisions that expose him to cyber threats. The third decides how to fight cybercriminal behavior. And the fourth decides how to protect cyberspace.
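These four decision questions could be captured as plain data, a starting point for the kind of machine input discussed next. This is a hypothetical sketch; the category labels are mine, not a standard taxonomy.

```python
# Hypothetical mapping of the four actor categories from the talk to the
# decision question each one faces; the labels are illustrative only.
DECISION_QUESTIONS = {
    "cyber_criminal": "How does he make the decision to attack?",  # insiders and outsiders
    "victim": "How does he make decisions that expose him to cyber threats?",
    "law_enforcement": "How does he make the decision to fight cybercriminal behavior?",
    "it_personnel": "How does he make the decision to protect cyberspace?",
}

def question_for(actor: str) -> str:
    """Look up the decision question to model for a given actor category."""
    return DECISION_QUESTIONS[actor]
```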
Now, if I do some social network analysis, I have tremendous added value over what you call prevent, identify, or detect. This notion belongs to what we call strategic thinking: you do not necessarily need this information yourself, but someone has to produce it for you and let it be there. The machine knows what to do with it, but you need to give it some programming clues.
So those programming clues cannot focus only on the attacker all the time, because everybody has a different thinking, notion, or perception of the other.
Ask law enforcement officers who have no real background in cybersecurity and who work with IT analysts: there is a huge difference between the way they perceive cybercriminals and the way the IT personnel perceive cybercriminals. So how do you integrate them both, and how can you harmonize these perceptions into one in order to work with it, and not only in ad hoc situations? Identify is ad hoc; prevent is ad hoc. What I am trying to tell you, what we do in this class, is that we go far beyond that.
The point is to add more inputs that might look irrelevant at the beginning; but if you give them to a machine, with some key, some programming, you will see really tremendous results. And we did this in class on past cyber events that the participants brought in. So I'll go on to the following.
So, another example, focusing on the category of the perpetrators: not the threat, which is different, not the risk, but the perpetrator himself. And we have different types, right?
We have the individuals, and you have the organizations, which we see as the most challenging. Now, they all have what we call a cognitive system, which begins with a precognitive phase. All of them have this precognitive phase, and all of these actors have cognitive capabilities. That is very important when you try to understand capabilities and intentions; it is a very important variable. And then, of course, we have the decision making.
I just want to give an example. There are very specific questions that we try to program.
What we are actually trying to ask is: can we learn from the motivation? For instance, these two types of motivation, what we call extrinsic and intrinsic motivations, which speak to different types of cyber attacks: what can we learn from that? Or technology versus social engineering, which this morning I had the privilege to attend a session on. And then decision making: the determination and intentional selection of targets. This is not a threat and risk analysis.
It is not about that. It is much more about the human factor. What can I know? Because there is no such thing as one type of attacker.
And of course there are so many attackers; a state can be a target and can be an attacker. What does that mean?
Then there was a very interesting proposition that we took from a question: how can the capabilities we provide help user performance? Because we know there is such a thing in a particular system. How can we use it as a compass to know the users? The fact that you are programming something for someone, either by design or not: what can we learn about his cognitive capabilities?
Can we use that data? Can we integrate it, use it in cybersecurity, in cyber defense, when we want to defend this target or that target?
It appears that yes, because programmers proposed two things. One concerns the organization. Specifically, take the organizational factors, that is what we were told, the factors at the interface between the internal and external organizational environment, and work with them in order to try to profile an organization, whether it is a target or an attacker. Try to see what we mean by the organization's mental model, what we mean by an organization's cognitive system or cognitive map. So this is what we are working on.
What is the organizational goal-and-means structure? What can I learn from this about it as either an attacker or a target? The other question: what is the complexity of the task elements in this organization, in a bank, in a business, on a military platform?
How do they work? How do they handle hazards? What are the hazards, even for a victim? That is the lowest level: what can we detect as a hazard for him? What are the hazards for a bank? What do we mean by a hazard for a business company? This is very important in profiling the cognitive system: what are the constraints on this organization's actions, and so on. The other proposition concerns the organizational agents: go and check the factors involving human and machine agents. That type of interaction tells us a lot about the organization and the human factor, about the human factor of different types of organizations, just this kind of relationship. So how can I profile the human information processes?
There are different human information processes that we might categorize within different organizations. For instance, what are the perceptual characteristics of a national security platform versus a business platform?
What are the memory and attention characteristics?
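Purely as a sketch, the profiling dimensions above (goal-and-means structure, task complexity, hazards, action constraints, human-machine interaction, perceptual and memory/attention characteristics) could be gathered into one record per organization. The class name, field types, and example values are my own invention, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class OrgCognitiveProfile:
    """One record per organization, as target or attacker; the fields
    follow the profiling dimensions discussed in the talk."""
    name: str
    role: str  # "target" or "attacker" (a state can be both)
    goal_means_structure: list = field(default_factory=list)
    task_complexity: str = "unknown"  # e.g. bank, business, military platform
    hazards: list = field(default_factory=list)
    action_constraints: list = field(default_factory=list)
    human_machine_interactions: list = field(default_factory=list)
    perceptual_characteristics: list = field(default_factory=list)
    memory_attention: list = field(default_factory=list)

# Hypothetical example: profiling a bank as a target.
bank = OrgCognitiveProfile(
    name="example-bank",
    role="target",
    task_complexity="high",
    hazards=["credential theft", "insider fraud"],
)
```

Keeping the record explicit makes the point from the talk concrete: the same fields are filled in whether the organization is being profiled as a target or as an attacker.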
This is about cognition. And the other one is communication and cognition, human to human. For instance, I very much like the title of this conference, if I may: governance, as was said yesterday, governance and not management. Because in human-to-human interaction you also have a lot of problems in risk analysis. Why? Because we speak of something that is, first of all, subjective; second, very dynamic (the risk of yesterday is not the risk of today); and relative: as was said, there is no such thing as absolute security.
So when you take this concept of human-to-human cognitive aspects, you understand that even your own work should be verified; even what you do as a cybersecurity analyst should be checked. And the same goes for human intelligence systems. I think I'll stop here, and if there are any questions, all the better; I won't take more time than that. Thank you.
Thank you.
So, and thank you.