The rapid advancement of Artificial Intelligence (AI) technologies, notably deepfakes and sophisticated machine learning algorithms, has ushered in a new era of cyber threats, presenting unprecedented challenges to digital identity integrity and cloud security. This panel session will focus on this new quality of sophisticated threats and their implications for identity verification, data security, and the broader cybersecurity landscape.
Participants will engage in a rigorous discussion on developing resilient cybersecurity frameworks that can adapt to the evolving AI threat landscape, emphasizing ethical considerations, technological countermeasures, and policy interventions. The session seeks to equip attendees with strategic insights and practical tools to navigate the new quality of cyber risk, ensuring the integrity of digital identities and the security of cloud infrastructures in the AI era.
So this is fantastic. What a wonderful opportunity. I'll move over.
We heard a lot in this track about paradox, and paradox yields conflict. And in this session we're going to break the protocol a little bit. We had originally planned to have the panel and then Yvan and I talking; what I'm going to suggest instead is that we have the panel, and then everyone stays on stage, if you're willing, unless you have another appointment. I'll let the folks introduce themselves in a moment, but all of the paradoxes we heard about can often lead to conflict.
We're human beings, physical beings, and we've had experience managing, and sometimes poorly managing, conflict in the physical world. Now we're taking these paradoxes and the resulting conflicts into information spaces, along with some of the mechanisms and strategies that we've used to manage and mitigate conflict in the physical world. Some of them will be useful framings in the information space, and some will be less useful.
And so, with everyone's indulgence, I'd love to start out with the first panel, which is intended to be in the cyber realm. So let's talk about cyber, and then recognize that the second part is where Yvonne's expertise in warfare and physical conflict comes in, because cyber is so important in that physical realm as well; then we'll move into that.
So on the first part of this, let's emphasize the cyber, and in the second part recognize that we want to move over into cyber in the physical realm and what that's doing. AI, obviously, is part of that. Is that all right? I'm asking you while you're already on stage, so what are you going to say, right?
So let's start out with just a general question and a brief statement, please, a brief-ish one: in what ways is the landscape of cyber conflict changing in an AI-drenched environment? And as part of the initial statement, please introduce yourselves briefly, just your association, so that people have a sense of the institutional framing you're bringing to bear. And let's just go along. As a starting point, please. Can you hear me okay?
Yes, yes, indeed. Okay. I'm Robert Laps. I work for the UK government. I have to disclose that because an election has been declared, I am under impartiality rules, so I can only say what is really within my role: I'm the identity architect for defence in the UK.
And although I have cybersecurity as a professional background, I can't comment on the cybersecurity of my organization, as you'll understand. I should also disclose that I am neurodistinct.
I have ADHD, and I think in different ways, so I'm familiar with conflict in many different ways.
And the second part of the question, after the introductions: is it a difference in kind or a difference in degree that we're moving into in the AI realm? So, taking your perspectives on conflict and projecting them out into AI, and again, briefly for our initial discussion.
I would say there is both. We see innovation in many different ways, but we also see scale. So our challenges are both at pace and in the new ways that we find conflict can come about.
Yeah, my name is Yvonne Hofstetter. I'm professor for Digitalization and Society at Zi University of Applied Sciences in Germany, as well as CEO of 21 Strategies, a company that provides a virtual twin of the battlefield, so you can try things out in the virtual battlefield before you actually take them to the real battlefield. And of course there's a lot of AI implied here. As for my view on shaping conflict with AI: I'm actually referring to national security and conflicts between countries.
On the one hand, we see that things speed up, which is of course also due to the war on Ukraine. When it comes to procurement by governments, AI is obviously a very important topic with regard to fielding new platforms that are AI-enabled, so, speaking of increased technical autonomy. On the other hand, we nevertheless see that there is not yet real disruption in the defense space with regard to AI. We see linear innovation: things and doctrines that are still in place are being improved, but not yet disrupted.
However, there are a few attempts from a few countries, as we undertook a country study on how defense AI has been used in different countries. So slowly, slowly, some countries catch up and try to do more modern things, things they had not yet been able to do without AI; they start doing them with AI.
Tactics at the battlefield, so to speak. Please.
Hello, my name is Sandra Tobler. It's a pleasure to be here; nice meeting you all. I'm the chairwoman and founder of a Zurich-based cybersecurity company called Futurae. We are a specialized product innovator in strong authentication, transaction signing, and fraud prevention. And with the evolution of AI, of course, attacks become more sophisticated; we might talk a bit more about the new types of things we see out there, be it polymorphic malware or very refined phishing attacks. It's a real danger, on a very day-to-day basis.
Customers are confronted with those challenges. We mainly have customers that provide critical infrastructure, be it banks, utility companies, or telcos. And on the other side, of course, we can also dive a bit deeper into what I would, more humbly, call machine-learning-based technology.
How can it help address cyber threats? I don't want the machine to take decisions autonomously at this stage yet, but there are definitely ways it can amplify fraud prevention, and that's why we do a lot of research.
We came out of the technical university, ETH Zurich, and have close ties to global research in that respect, and I'm happy to dive a bit more into that. To answer your question, I would agree it's both, for those reasons: the threat is real on a digital level, but also, let's be honest, there's a lot of marketing lingo around what we can use, and fraud detection is still evolving. It's a very exciting journey to be part of. Excellent. Alan? Yeah.
Hi, my name is Alan. I'm coming from Nati, and in this context I wear several hats and am involved in several different projects, but we are most active in the digital identity space, especially in new or emerging technologies when it comes to digital identities. And we did a lot of work, and are doing a lot of work, on how to simplify the verification of information. With AI, now everyone can produce great, convincing content.
And today's trust models are really built on the fact that I trust information because I trust the source. But this paradigm is starting to break. So our focus, excuse me. Can you talk a little bit closer to the microphone? Yes. Okay. So we already have a huge challenge in the conventional digital identity space about what the origin of information really is, whether the organization is entitled to produce that information, and who is to be held liable.
And with the AI industry today, through social media, we have a serious problem to address at scale, but we don't have the very basics covered in already-established domains. So there are several challenges. And please, feel free at any point among the panelists; don't just rely on me.
I'll have many questions, but if you have questions for each other, please feel free to jump in. Yeah, absolutely. And we probably also have some questions from our virtual audience, or from the world. We're very casual in our panels.
Well, my main concern here, in general: it is true that there is a lot of data, and information is not something static; it is changing constantly, and the technology is changing with it. So in creating a more secure space, I believe the main challenge may ultimately be jurisdiction: where does the attack really happen? Because I believe that at a certain point we would need to think about this: we cannot divide this between regions, like the EU or North America.
There is a parallel region, which is the digital world, really. That would be a main challenge. And I wanted to pick up on something Sandra mentioned: we have new attacks, and so we have a situation now where we have a democratization of access to tools that are very powerful.
So let's start with the scenario: AI-fueled attacks and not AI-fueled responses.
What are some of the issues that arise when institutions have this lag in picking these things up, but everyone has the ability to accelerate and expand the threat surface? From the last couple of presentations, again, I get freaked out by these things: I could make five clones of myself, set them off in the world and say, go do mischief; you do mischief, you be legal, you be this, you be that. Forgetting the idea of liability for a second, I just made five of me as an attacker.
And so, putting aside the possibly interesting question of whether that's an iteration of the multiverse itself, which I think might be interesting: what do we do about our institutions, which are Newtonian, linear, and physically derived, and this threat surface that's proliferating exponentially? Do you have some thoughts on that?
Well, one of the things we mustn't lose sight of is that, in general, as a species, we are making great progress. Things get better; they are not as bad as you think they are.
So don't get frightened by this. Most people live good, happy lives, and there is only a minority of people who exploit other people. So we mustn't lose sight of the fact that this isn't something we can't come together and deal with. If you've read Hans Rosling's Factfulness, you know we have our own biased views of what reality is; we're already misinformed, and we don't really need any help from AI or misinformation, because we're not always very good ourselves.
The interested parties who come here are motivated to learn and find examples, and we're probably not representative of the general population; we're self-selecting.
So there are probably quite a lot of people who are really not worried at all and unlikely to be impacted. On the other side, there are some real concerns, and I'm very pleased that the people who come here are actually trying hard to do something about it, to make our systems safe, as in the analog world.
My first profession is mechanical engineering, in the analog world. If my engineering efforts ended in a system failure, we would already have had to deal with it; it's just not permissible. I started out in the automotive industry, in braking: if my product failed, civilians could be harmed. But we have already resolved those issues, because we're not allowed to fail; we have safety testing. Safety testing doesn't really exist in the digital space; it's still a very young industry.
So at the moment, when we talk about this industrial revolution, it's not really that cyber and computing are that advanced compared to established engineering industries. We just need to get better at what we do, and then won't these problems go away? Yvonne?
Do they go away? I think there is a political impact if you democratize access to these technologies.
In earlier times it was more or less owned by the US. We had Silicon Valley, and Silicon Valley was very strong. We had the hegemony of the US, which was more or less setting the rules with regard to technology. Now you democratize it, and what actually happens is that you move away from the unipolarity of the world into multipolarity. It actually fuels the multipolarity of the world, and you enable smaller states to get into power with regard to interfering with other states.
I remember, for example, the Trump election, when Trump was actually elected: we saw a lot of Russian interference, which was proven and was looked at very closely by the prosecutor afterwards. And in the last elections, when President Biden was elected, we saw a multitude of similar things happening, but actually being done by smaller states, like Mexico, like Israel. So you give access to states that are less powerful.
So you change the balance of power globally if you give other nations access to these technologies, which in the nineties were still owned by governments. Artificial intelligence, for example, is not new; we applied it in the nineties already for military systems, the very early versions of it. It was owned by governments; it was more or less classified. But then it proliferated, and now we see it everywhere. And as I said, this changes the balance of power worldwide. We talk about new risks: what can we do on the cyber side?
How can we understand the risks and start to test our systems to address those risks? I think those challenges are present here today. I think, first and foremost, it's always interesting to understand a bit who your adversary is.
I mean, I'm not talking about state-driven attacks now; I'm really talking about private, commercially motivated attacks. And these days they're acting as large corporations: they have a business case they need to fulfill, which is typically under time pressure and typically at scale. They need to push through as many attacks as possible to get some sort of response, which could be a monetary reward. And I think this is already a very useful mindset for understanding how to keep up with potentially changing threats. For organizations, it's a race in the end.
It's about increasing the barrier: make it more complex, make it more expensive for attackers to attack, because that lowers the incentive, and they will go to another target with a lower barrier. So that's a bit the race we're talking about. Speaking very specifically about what we do with critical infrastructure: historically, you have a lot of legacy systems that might have a very siloed view of data.
Take the example of a bank: you might have transactional information from credit card or debit card transactions, but then you might have other data sets that come from the front end. And one development we see a lot of organizations moving towards is definitely bringing those data sets together, to have a more specific, more real-time view of patterns and behaviors, where a change in interaction could signal a potentially malicious attack.
So these are definitely technologies that can help. On the other hand, first and foremost, I always advise organizations to look at their risk scenarios: what are the motivations, why could they be a target of an attack? And based on that, you can look at where the entry points might be.
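To make that data-fusion idea concrete, here is a minimal Python sketch of scoring one event from two formerly siloed sources, a transaction record and front-end session signals. All field names and thresholds are invented for illustration; a production system would use far richer features and typically learned models.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float     # transaction amount in account currency
    country: str      # country the transaction originates from

@dataclass
class SessionContext:
    account: str
    new_device: bool  # front-end signal: device never seen before
    geo_country: str  # country inferred from the session IP

def risk_score(tx: Transaction, session: SessionContext,
               typical_amount: float) -> float:
    """Fuse siloed signals (transaction + front end) into one
    heuristic risk score between 0 and 1."""
    score = 0.0
    if tx.amount > 3 * typical_amount:      # unusually large transfer
        score += 0.4
    if session.new_device:                  # unknown device
        score += 0.3
    if tx.country != session.geo_country:   # channel mismatch
        score += 0.3
    return min(score, 1.0)

# Example: large transfer from a new device with mismatched geography.
tx = Transaction("acc-1", amount=9000.0, country="DE")
ctx = SessionContext("acc-1", new_device=True, geo_country="BR")
print(risk_score(tx, ctx, typical_amount=500.0))  # -> 1.0, step up auth
```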
I mean, one challenge that I definitely see, compared to the mechanical engineering analogy you brought, which of course can also be very fatal if not done well, is the scale, the order of magnitude of the threat we are facing. I'm very excited to hear more about your work, but on a private-sector level you have so much reliance on a few technology stacks, infrastructure pieces, and components, and those bottlenecks are, I think, the most exposed.
There's a lot of reliance on them, and that's why I strongly believe that the cybersecurity market is no winner-takes-all market, for that specific reason. You need specific best-of-breed technology pieces that you plug together to address those ever-changing fraud patterns; consolidation is actually very dangerous.
Yeah, Alan, do you... What I was just going to say is that the problem is that the foundations on which our technology is built have got lots of unsafe code. And it's everywhere.
And until we fix the unsafe things that we do, all of those anti-patterns, everybody will be able to exploit them everywhere, if they have the knowledge to do it; a few people have, and they're able to make a living out of it. It's just like: if we don't have safe windows, and they can be opened with a crowbar or what have you, then there will be a certain number of people who can always get into people's houses, and it's just a numbers game. You make it easy for people to earn a living that way, and they'll go and do that.
For some people, that's how they earn a living. And if we don't want that, then we have to make our digital infrastructure, the built infrastructure, better, so that it is safe and fail-safe and people don't get harmed. Just sticking plasters over it doesn't help; we have to build better infrastructure. But I have a biased view, because I'm an engineer, so I see it from one point of view.
Well, you know, we do fix things as we move along. The striped zebrafish embryo has a single-chambered heart; the adult striped zebrafish has a four-chambered heart. The heart goes from one to two to three to four chambers while it's beating, during the development of the striped zebrafish. And in a way, we need to fix it while we're evolving it.
And so, Alan, I wanted to turn to you. You talked about simplifying verification, and simplifications involve abstractions, and I know that abstractions can become plasters over the problems; I get it. But can you talk a little bit about the benefits of simplification, in terms of accessing and understanding it, and then maybe some of the challenges of simplification obscuring what's underneath? Can you address that a little bit?
Yes, of course. One design pattern that we have in the digital identity space, which has both pros and cons, is, you know, the image from, I think, the eighties: on the internet, nobody knows whether you're a dog, right? Now, it's a good pattern in some cases, but it can be misused. And when we work with different use cases, what we figure out is that when information gets into systems, and it's not from a trusted source, verifying where that information is coming from and who's behind it is either very hard or impossible. And this cannot be solved with abstraction.
It really goes down to the very basics: we need to build the missing foundational layer where, at least for organizations that would like to take part in this, their digital identity can be clearly expressed and cryptographically protected.
We've learned that it's impossible to do it top to bottom. There needs to be, similar to any other foundational layer, a point where everyone agrees to a certain protocol and set of standards that allow that information to be expressed. And once my identity, rights, and authorizations are expressed correctly, only then can we start talking about whether another organization will recognize you as a trusted source.
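As a minimal illustration of that idea, expressing an organizational identity claim so it can be cryptographically verified, here is a Python sketch using the widely used `cryptography` package. It is not any specific standard or product; the authority, claim fields, and trust model are all hypothetical.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Hypothetical accreditation authority keypair (the trust anchor).
authority_key = Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

# A claim expressing an organization's identity and an authorization.
claim = {
    "subject": "example-university",
    "authorized_to_issue": ["education-credential"],
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = authority_key.sign(payload)

# A relying party checks the claim against the authority's public key.
try:
    authority_pub.verify(signature, payload)
    print("accepted: issuer may issue", claim["authorized_to_issue"])
except InvalidSignature:
    print("rejected: claim was not signed by the accreditation body")
```

The open question raised here sits above this layer: who operates the trust anchor, and why everyone should agree to recognize it.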
So trust, as we see it, is not absolute; it's always relative. But at least having this ability could already help fix the problems a bit. We are really focusing here on organizations, because once organizational digital identity is fixed, it also becomes a bit more secure for natural persons; that's secondary. But without having that, we will not be able to make verification easy. So we work with use cases, and this is the hardest part for them.
When we ask them who within your domain is, for example, responsible for giving you the authorization to issue certain documents to another company or user, let it be a certificate or a driver's license or an education credential, they usually stop, right? And all this information today is in legal documents that people usually don't read, or that are extremely old, or that are written in a language you don't understand.
So having the ability to express this information, this is a university, this university is accredited to perform this, turns out to be a helpful model; but how to realize it is a real challenge that we don't have an answer to. You know, it's interesting: Kaliya, in our presentation, was talking about how good institutional memory or identity should be. And one of the challenges is that all of our institutions, when we all woke up this morning, already existed. That means they are artifacts of old problems. The nation state, you know: 1648, the Peace of Westphalia.
So how many other things are we using from 1648 right now, right? Part of the challenge, it seems, is that we created corporations and governments to serve human needs, and we might look to humans and how we act. In the prior presentations we talked about digital clones, AI clones. Well, maybe, if humans undertook to give their clones certain characteristics, might we then have something emerge that serves those needs better? But obviously my mom doesn't know about it.
It sounds like people have much more sophisticated moms than my mom in the other presentations. But how might we handle the sovereigns, you know, sovereigns who don't ask for permission or forgiveness? And how might we help institutions, as people, to do better?
You know, is there something, if we wanted to create a safety net, a neighborhood watch? We know that the companies are going to do certain things to make money; we know that the governments are going to do certain things that are bound to their jurisdiction.
Is there an opportunity for people, as customers, as employees, as the engineers who are programming? We talk about it as ethics, but might there be guidance from that ethics that lets us steer the existing institutions, and the new institutions that arise in those spaces, toward more humanity? Because if we don't ask for it, it's not going to happen.
I think the challenge is that when we hear this, we think the responses are largely in the human world, and we think the problem is in the human world, but the problem is actually in the digital world. If you think of an analog to what the problem is, the analogy doesn't hold in the digital sense, because we are not shrunk down into little tiny people and injected into the digital verse. So it is about trying to figure out whether there is some sort of similarity or pattern.
I think the challenge is that our executives higher up, not many of them will, think about people and the real world, but not about how the digital paradigm works. And you don't have a sense of smell.
You don't have a gut feel; you can't use those things to sense whether something is right or wrong. Most of the challenge is that in the cyber world, the method of attack is to exploit a common vulnerability. Somebody leaves a door open, they walk inside, and they do this thing called living off the land: they appear to be local, they don't look any different, because they're in your system; they appear to be part of the system. So in that regard, they are being a fraudulent member of your organization.
You can't tell any difference between them and someone else. So the first thing we have to do is make these things safe, so they can't get in, because somebody sitting next to you can still lie and cheat the organization, or take paperclips home. In the analog world, we can work on our morals and ethics; perhaps we have to ask our executives to help us invest more, and to know how to do better, to make safer systems digitally.
Yvan, let me just note: we're now officially moving into the next session, the shift, breaking protocol. So, even though that wasn't advertised, I'm hoping that our panelists will stay on the stage; but if you have another appointment, you can leave. "But I have an appointment." There we go: my co-moderator is taking the opportunity.
Yeah, but please reset. Thank you. And Yvonne, please use this as an opportunity to help us shift over to that next topic.
Yeah, I wanted to contribute to the question first. Oh, please, yes, of course. So, relying on old stuff from the analog world, we can actually bring philosophy, philosophical thought, into systems engineering.
We heard about value-sensitive design in a talk earlier here, but there is something that is currently being established globally, which is value-based engineering. There is even a standard that exists, ISO/IEC/IEEE 24748-7000, and it actually merges philosophy with technology. This is not easy for engineers to understand, so they need to be educated in ethical theories.
But this said, there is a process for this where you can actually break down the very high-level requirements, system safety, system security, system transparency, et cetera, down to individual system functions. So you really go from the high-level principles, as we heard in that talk, down to the system features. And that is a very stringent, structured, and scientifically grounded approach; a philosophical approach to it.
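As a rough illustration of that breakdown (a sketch of the idea only, not the standard's actual notation; every name below is hypothetical), the traceability chain from a high-level value down to a concrete system function can be pictured like this:

```python
# Hypothetical traceability chain in the spirit of value-based
# engineering: one high-level value refined down to a system function.
value_trace = {
    "value": "transparency",
    "value_requirement": "users can understand why the system "
                         "flagged their transaction",
    "system_requirement": "every automated decision stores a "
                          "human-readable reason code",
    "system_function": "log_decision(decision_id, reason_code)",
}

def audit(trace: dict) -> None:
    """Walk the chain and print each refinement step."""
    for level in ("value", "value_requirement",
                  "system_requirement", "system_function"):
        print(f"{level:>20}: {trace[level]}")

audit(value_trace)
```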
So there is something that exists, and I can only encourage you to look into that standard. Okay, and now to come to the other topic: it's also NATO that is doing standardization work for artificial intelligence in defense, and it is already checking whether that standard, that framework, might serve as a framework for ethical defense AI systems. What a nice transition. So now we're moving into talking about AI and its effect on politics, and politics intrinsically brings forward philosophy.
We wrote a paper with the Microsoft folks for the OECD in the nineties called "Personhood." And in it we observed that in the EU, identity was based on notions from Hegel and Kant; in the US, identity is based on mercantile notions, the utilitarians and Locke. And as it happens, I had some experience with the Chinese Supreme Court folks a number of years ago: because of Japanese imperialism in World War II, China, Korea, and Japan incorporated Hegel and Kant, and also Confucius. So there's a philosophical interoperability opportunity that we have on the political side.
But those interoperabilities have not yet been exploited. And now again, with the democratization, where it's non-state actors and state actors, that's why I wanted the cyber folks to stay on the stage, and thank you for indulging me: what can we take, tactically, from the cyber experience of that proliferation of threats? Because you have not just other companies attacking, but individuals, hackers, all manner of attackers.
What might we learn from the cyber-attack side to help us understand some of the tactics we need to take to have more reliable nation-state actions, warfare actions, et cetera? Again, a company's system could be a weapon system. Are there some thoughts you have?
Yeah, please. Actually, the cost of attacks has gotten really low; social engineering attacks especially became really low-cost.
And also, even if we manage to control AI governed and managed by companies, basically nothing prevents fraudsters from running their own systems, right? Because they have an incentive, and they have the budget.
And really, online we cannot tell one from another. And the issue that existed in digital identity until now is even bigger now, and we need to decide.
I mean, even if we control the big companies, the internet is an open space. To take an analogy with cars: if there's a car that looks wrong and you see it on the road, it's easy to remove it. We cannot do this on the internet. So we really need to take a different approach and rethink how we trust the information that comes into a system, and what tactics we can employ to mitigate the risk, or at least detect it.
And one other thing is that we have more and more computing power in place, right? There's a huge incentive to start exploiting this, just installing malicious code on all the different devices that we have, so that suddenly the computing power of certain attackers can really increase. And with all these LLM models becoming smaller and smaller, this amplification can really take off.
So it's a real challenge to start thinking about what strategies we should employ to mitigate that. Because in the news lately, we see many, many social engineering attacks that end with really convincing people to simply send money to strangers, right? We have no way to verify to whom a phone number belongs; there's no way to verify who the caller is.
And even when you perform a transaction, you have an IBAN number, but there's no way for us to verify it, right?
So we have a very fragile system. On the other hand, there are extremely powerful tools with which everyone can write a perfect email, or speak perfect English, or any other language.
But the problem is, you sort of hinted that maybe this is asymmetric. What's happened is the channel shift: to get people to move many, many services into the digital world, like a law of thermodynamics, the flow of energy, there was a shift of power and of energy. The benefit went to the people who drove that shift, and the effort put a burden on ordinary people to be aware of these things.
And we know from any business transformation: if you forget the business and human side of managing that change, then you have a discontinuity, some sort of impedance, where people are not equipped to deal with this, and the people who made the system don't realize that it was built with flaws. And the people who could solve those problems in those interconnected systems, the people who could allow you to verify those numbers, because they are privately run companies, see it as an opportunity to sell you something.
And we've seen from recent examples that one company had a detailed logging service, which was a premium thing you had to pay extra for. Then it was realized that, oh, actually, people had been exploited, and okay, now they have an obligation to give this to people so they can look after themselves. When you monetize people's safety, "oh, it's an optional extra to have a seatbelt", that's what's happening in cyberspace. And we have to change that dynamic and say: no, your product is not fit for purpose; these people are being harmed.
And again, I'm sorry for the analogy with the engineering world, but yes, you are right: these practices have existed for a long time. It's just that they're not evenly used, and that comes down to helping our digital industry become more professional and accountable for its actions and inactions.
Sandra, please. I'll just give another practical example that we are working on, which also touches on ethical problems we might face in the future. There is one unsolved problem that biometrics, as an example, brings along: it's non-revocable. You can't recreate it once it's gone, and that's a huge ethical problem. We also work with biometrics, we have components on that level, but I think it's so interesting and so mind-boggling, because we see how fast you can synthetically create biometrics already; we have all seen the deepfakes.
We have seen the advances just in the last year, and for us in research it's extremely exciting: what comes after, what can you do in addition? How can you, to take the earlier analogy again, increase the barrier for criminals? And we specifically look into contextual data, because it's something that is constantly changing. Context can be on a micro level, in a privacy-preserving way: things that happen around you, instead of yourself.
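As a minimal sketch of how such ambient, privacy-preserving context might corroborate an authentication: two devices hash the signals around them, and the server only ever compares hashes. The signals, overlap metric, and threshold below are invented for illustration; real deployments use far richer features.

```python
import hashlib

def fingerprint(signals: set[str]) -> set[str]:
    """Hash ambient signals (e.g. nearby network identifiers) so the
    raw environment never leaves the device: privacy-preserving."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in signals}

def context_match(a: set[str], b: set[str],
                  threshold: float = 0.5) -> bool:
    """Treat overlapping context as evidence that both devices share a
    location: one more barrier a remote attacker has to clear."""
    overlap = len(a & b) / max(len(a | b), 1)
    return overlap >= threshold

phone = fingerprint({"wifi:aa:bb", "wifi:cc:dd", "ble:beacon-7"})
laptop = fingerprint({"wifi:aa:bb", "wifi:cc:dd"})
print(context_match(phone, laptop))  # True -> corroborates the login
```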
But these are very, very interesting developments, because the order of magnitude of the technology forces us to think ahead. Some technologies that many organizations are still in the process of implementing, because they're new generation, we already have to think beyond. So this cycle becomes faster and faster.
And that's something where, yeah, I agree with you: there are fundamentals that need to be worked on, but there just isn't enough time to go at the same pace. So we already have to understand, from an ethical point of view, what's coming next, and we have to have this discourse on a global level. So that's fantastic. So we've identified that we're living in the fog of peace.
Now let's move directly to the fog of war, and let's talk about the status right now, Yvonne, of the information space on the battlefield, and what the implications of bringing AI into it are. Is the fog of AI going to be added to the fog of war now?
So, first of all, we can state that we actually have a super-high degree of transparency on the battlefield. You have sensors everywhere, and it's very hard; you see this in Ukraine as a practical, but obviously very sad, example.
It's very hard to maneuver tactically there, for the adversary as well as for the defender, because you have sensors everywhere. You look into the battlefield with your radars, with your satellites, and you know exactly who is where, and you can make up your mind about what could be next, what could happen next. So this high degree of transparency is there. And then, I actually attended the international air show yesterday, where we were also speaking about that topic.
So there's transparency, and then you have what has been called presence. You have weapons systems with different reach, but you can actually attack everywhere. You have hypersonic weapons that are so fast they can hit a target within five minutes, and you don't have any option to defend against them, right?
You launch them somewhere in the Soviet Union, and five minutes later they're in the US and they make a hit. So that's actually what we see there. Transparency and presence are the two new paradigms we have to deal with. And now, where do you bring artificial intelligence into that system, or ecosystem?
Well, there are several options, obviously. One is that you make the processes you actually have in place better.
And we see this largely among many nations. One of my teams has undertaken, for three years now, a survey and a study, looking into 28 nations and how artificial intelligence is used in defense.
And it's probably due to the fact that we were living more or less in peace during the last 30 years that we do not see, as I said in the beginning, disruptive innovation there, but rather making existing things better: processes that are already there are just made better through the use of more data, and faster through the use of more computing power, which we have. And this refers especially to intelligence, surveillance, and reconnaissance missions.
So you get sensor data, mass data, and you classify, for example, what you see flying around, whether it is friend or foe. That happened 30 years ago already, and it still happens now, only with better algorithms and with more data. So that's the one thing. The second thing, which goes in a fresh direction, is that you use artificial intelligence for simulation. You simulate battles, you simulate threats, in order to anticipate the tactics that the adversary might use.
So you play hundreds of thousands, millions of scenarios in a simulator, we call this the defense metaverse, in order to figure out, and to get some statistics on, how the adversary might behave in a certain mission that you are actually planning: what would actually happen. So you put some more foresight into the process. So that's the one thing.
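A toy Monte Carlo sketch of that "play many scenarios and collect statistics" idea. The adversary model and outcomes below are invented purely for illustration; a real defense-metaverse simulator models physics, sensors, and behavior in depth.

```python
import random
from collections import Counter

# Deliberately crude adversary model: in each rollout the defender of
# a position picks one of three responses with fixed odds.
ADVERSARY_RESPONSES = ["hold"] * 5 + ["flank"] * 3 + ["withdraw"] * 2

def simulate_mission() -> str:
    """One rollout: our planned approach succeeds unless the
    adversary responds with a flanking move."""
    response = random.choice(ADVERSARY_RESPONSES)
    return "failure" if response == "flank" else "success"

def play_scenarios(n: int = 100_000) -> Counter:
    """Run the mission n times and aggregate outcome statistics:
    the foresight such a simulator is meant to provide."""
    return Counter(simulate_mission() for _ in range(n))

stats = play_scenarios()
total = sum(stats.values())
print({k: round(v / total, 3) for k, v in stats.items()})
# e.g. {'success': 0.7, 'failure': 0.3}: the plan is brittle against
# a flanking response, so refine the tactic before fielding it.
```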
And the second thing is that you can also use artificial intelligence to develop new tactics for the armed forces. Here you must understand that tactics are core knowledge, a core competency of the armed forces: to behave tactically on the battlefield. But now we have a situation where we're being confronted with artificial intelligence, with systems or technology that can produce superhuman, whatever, superhuman text, maybe.
I think of ChatGPT; but we also have tech stacks that can produce tactics that are as good as today's human military tactics, but go beyond them: superhuman tactics. And that's new, and that is in fact disruptive. And the ground forces start to learn now that it's eventually the same thing that we as civilians were also confronted with.
Yeah, we learned to use ChatGPT to write better texts, with fewer errors. And here they start now to learn that there is a tech stack that allows for, hmm, let's say unconventional tactics, eventually. And slowly, slowly, the penny drops among the nations that artificial intelligence can be used here as well. There's only one nation that actually bets on this, which is in fact Germany, and we have learned that other nations are very much interested in this tactical development with the use of AI.
And, this said: if I develop an AI that provides tactical superiority for the armed forces, and I develop and visualize this in a simulation environment, it actually means I can also take it out of the simulation environment, extract it, and, simply spoken, dump it onto a physical platform, a tank, a drone, an aircraft, to improve the tactical behavior, and let the tactical behavior be controlled by the machine rather than by the human being.
Here I would just like to challenge a bit the stability, because AI can produce unpredicted behavior, right? And what are the latest thoughts or ideas on this: we can train AI to do certain stuff well, even come up with superhuman tactics, but if the decision-making relies too much on that, it can also become a very interesting attack surface, because if someone manages to change the parameters of the algorithm, something that worked well yesterday completely fails the next day.
So I think the testing that was mentioned before, and having security measures in place, is about ensuring that even though it can come up with new tactics, it does not go against its, I don't know, initial goal or strategy that it was designed for, not by itself, but because someone gets into the system and changes things that are basically not visible to the end user of the service.
For me, our real challenges are our own inherent biases in how we perceive the world and how we use these tools, because our executives give us the guidance, and the approval of the budgets, to go and buy these things. And so our existing command-and-control structures choose the ones they think will work; they had some experience at the front they can go and visit. But quite a lot of the innovation that we see actually comes from firsthand experience at the front line.
And we know that these things are context-dependent; the disruptive behavior comes from particular contexts, and those contexts can apply everywhere. So there is an imbalance between a rather Western view of science and technology trying to impose a new paradigm, versus a sort of evolutionary approach, people being affected finding their own solutions. What we are seeing is the transition away from the highly specialized, decided-in-advance "this is going to be the thing that solves it."
And what we find is that the half-life of a very expensive vehicle is really shortened. If the Russian war against Ukraine is an example of what's to come, it appears to be just trench warfare, and it is just a numbers game. So this looks to be an evolutionary thing. Is it the Red Queen paradox, where prey and predator evolve together and nothing seems to change?
But there are changes on the AI side, where they're developing what they call cognitive disruption, where they play in that space between, I think Donald Rumsfeld popularized this, your known knowns, your unknown knowns, and your unknown unknowns. So you look at the blind spots of your opponent, and then you try to sow doubt and uncertainty in their systems. This is exploitability, but it's exploiting a different paradigm.
And we still have those sorts of challenges; it's that game theory and practicing, but it needs diverse people to be able to participate. And what large language models have shown is that this just gets you to average really quickly, and anomalies, people who think differently, are seen as hallucinations. The hallucinations are probably the key to proper disruption.
Yvonne, do you want to answer that? Yeah, basically, I mean, this is a good point, though. And we must say that we are not speaking about state-of-the-art artificial intelligence here.
DARPA, the Defense Advanced Research Projects Agency, which is basically a procurement agency of the US DoD, has coined the term "third wave AI." And third wave AI is obviously not second wave AI; but the AI which is in place today is second wave AI.
It can reproduce things; it is usually not context-aware. So ChatGPT is a little bit context-aware, but it is not aware of its consequences, the consequences of what it does. For example, if you look into large language models and you ask one to plan, it will completely fail. It cannot plan; it can give you a list of actions.
If you say, okay, bring me from Munich to Vienna, it gives you a list of actions, and guess what it does, I tried it out: it will send you first to Frankfurt. And this makes absolutely no sense. And if you rely on an artificial intelligence that gives you such a list of actions in defense, that is such an inferior plan that it can be easily exploited by the adversary. And the idea now is to come up with superior tactics, and superior tactics require a different tech stack. And we rely here really on a very different tech stack.
So, the one word you just used, and it was game theory: game theory plays a role here. And based on this, we must say that we see surprising tactics from such a system, but they are in line with what it is actually allowed to do. Why is that? Because third wave AI is hybrid. Hybrid means there are a lot of physicists and mathematicians sitting in my team, for example, who do the modeling of things that an AI does not need to learn, because we know the facts here: say, the physics of a platform, the physics of a radar. How can a specific radar type behave?
What's its radar cone? How does a specific tank behave? What are the ballistic models, the damage models? You do not need to let an AI learn this. And by the way, you do not have the data for it, because one big challenge, next to several other big challenges, in the real world of war, fog of war, you said it, is that we don't have data. And if we do, we don't have data from the battlefield.
We see, well, how Russia attacks Kharkiv, and we see this once in our lifetime; but this is far from the mass data that we could actually use to train an AI to learn how to counter such tactics.
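A toy sketch of that hybrid split: the radar geometry below is hand-modeled, pure geometry rather than anything learned from data, and the "AI" layer only has to search on top of it, here reduced to picking an approach waypoint outside the radar cone. Every number and name is invented for illustration.

```python
import math

# Hand-coded physics: a ground radar detects anything inside both its
# range circle and its horizontal cone. Nothing here is learned.
RADAR = {"x": 0.0, "y": 0.0, "range": 50.0,
         "heading": 0.0, "half_angle": math.radians(45)}

def detected(x: float, y: float) -> bool:
    """Pure geometry: is the point within range and inside the cone?"""
    dist = math.hypot(x - RADAR["x"], y - RADAR["y"])
    bearing = math.atan2(y - RADAR["y"], x - RADAR["x"])
    off_axis = abs((bearing - RADAR["heading"] + math.pi)
                   % (2 * math.pi) - math.pi)
    return dist <= RADAR["range"] and off_axis <= RADAR["half_angle"]

# The tactical layer only searches on top of the known physics:
# keep the approach waypoints that stay outside the radar cone.
candidates = [(40.0, 5.0), (-30.0, 20.0), (10.0, -45.0)]
safe = [wp for wp in candidates if not detected(*wp)]
print("undetected approach waypoints:", safe)
```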
But some of the solutions we'll have will not come from data, because data is just something that happened in the past. Yes.
And so, yes, your criticism of what these models can do: it's founded in the past, and no matter how hard you wring that data from history, it will not tell you what's going to happen tomorrow. Therefore you need to rely on hypotheses about what can happen, so you can build up a whole tree of hypotheses about the future. One of the things that we couldn't have predicted, in this sort of hybrid warfare, is the emergence of something like NAFO, okay?
It spontaneously emerged that the highly organized trolls of misinformation in Russia have been disarmed by a community of people who have used humor to disable these trolls. Now, who would have thought that humor would be a form of warfare, or a method of pacifying an organized attempt at disinformation?
This is more the strategic view of warfare, whereas I'm now speaking about the operational and tactical use of artificial intelligence in warfare, where we actually know the adversarial platforms and what their physical capabilities are, so that we can actually train tactics against, let's say, a ground-based air defense system, for example, which Russia uses. So it's a different level. A different level of, yes, creativity. Innovation is not always about logic.
And then I think that's where we have to move to something which is quite different. And it will be different.
Well, in our last minute here, it's clear that working in AI is not a tankless job, that's a pun. Okay, so in our last minute, let's have just a sentence, and this is an impossible question I ask of every panel: fifteen years out, the year 2040, what does good look like, in a sentence? How much time do I have? We all together have 20 seconds. Better than today: let's say more secure, and putting less burden on the end users. I'm excited to build technology from people, for people.
So diverse backgrounds and insights will yield better quality. For me personally: being retired from AI, being offline, and being active in animal protection. A life without technology: switch it off, live a calm and peaceful life in a local community. Wonderful. Thank you.
Well, please join me in thanking a wonderful panel here today.