So this is fantastic — I'll move over — what a wonderful opportunity. We heard a lot in this track about — yes please, here we go, come on in, there's always room for more; that's right, exactly; well, don't be so gender binary. So, we heard a lot — could someone close the door, please? Thank you.
We heard a lot about paradox, and paradox yields conflict. In this session we're going to break the protocol a little. We had originally planned to have the panel and then Yvonne and me talking; what I'm going to suggest instead is that we have the panel and then everyone stays on stage, if you're willing, unless you have another appointment. I'll let the folks introduce themselves in a moment. All of the paradoxes we heard about can often lead to conflict, and conflict — we're human beings, physical beings, and we have experience managing, and sometimes poorly managing, conflict in the physical world. Now we're taking these paradoxes and the resulting conflicts into information spaces, and some of the mechanisms and strategies we've used to manage and mitigate conflict in the physical world will be useful framings in the information space, and some will be less useful.
And so, with everyone's indulgence, I'd love to start with the first panel, which is intended to be about the cyber realm — so let's talk about cyber — and then recognize that in the second part we'll bring in Yvonne's expertise in warfare and physical conflict, because cyber is so important in that physical realm as well. So on the first part, let's emphasize cyber, and in the second part we'll move into cyber in the physical realm and what that's doing — AI, obviously, is part of that. May I have that indulgence?
I'm asking while you're on stage, so what are you going to say, right? Anyway, let's start with a general question and a brief statement — brief-ish, please: in what ways is the landscape of conflict, of cyber conflict, changing in an AI-drenched environment? And as part of your initial statement, please introduce yourselves briefly — just your affiliation, so people have a sense of the institutional framing you're bringing to bear.
And let's just go along the row — as a starting point, please. Okay. Yes.
Yes, indeed. Okay. I'm Robert Lakes; I work for the UK government. I have to disclose that, because an election has been declared, I am under impartiality rules, so I can only speak to what is really in my role. I'm the identity architect for defence in the UK, and although I have a professional background in cyber security, I can't comment on cyber security in my organisation, as you'll understand.
I'll also disclose that I am neurodistinct — I have ADHD and think in different ways — so I'm familiar with conflict in many different ways. So, to restate the question: is it a difference in kind or a difference in degree that we're moving into in the AI realm, taking your perspectives on conflict and projecting them outward — and again, briefly, for our initial discussion? I would say it is both.
We see innovation in many different ways, but we also see scale, so our challenges are both at pace and in the new ways we find conflict can come about.
My name is Yvonne Hofstetter. I'm Professor for Digitalisation and Society at Hochschule Bonn-Rhein-Sieg, University of Applied Sciences, in Germany, as well as CEO of 21 Strategies, a company that provides a virtual twin of the battlefield, so you can try things out in the virtual battlefield before you actually take them to the real battlefield — and of course there's a lot of AI implied here.
So my view on how AI is shaping conflict — and I'm referring here to national security and conflicts between countries. On the one hand, things are speeding up, partly of course because of the war in Ukraine, which is accelerating government procurement. AI is obviously a very important topic for fielding new AI-enabled platforms — that is, increased technical autonomy. On the other hand, we do not yet see real disruption in the defense space with regard to AI.
We see linear innovation: doctrines that are still in place are being improved, but not yet disrupted. There are, however, a few attempts in a few countries — we undertook a country study of how defense AI is being used in different countries.
So slowly, slowly, some countries are catching up and trying more modern things, or things that could not be done without AI. They are starting to do them with AI — tactics at the battlefield, so to speak.
Hello, my name is Sandra Tobler — it's a pleasure to be here, nice meeting you all. I'm the chairwoman and founder of a Zurich-based cybersecurity company called Futurae. We're a specialized product innovator in strong authentication, transaction signing, and fraud prevention.
With the involvement of AI, attacks become more sophisticated — we might talk a bit more about the new types of things we see out there, be it polymorphic malware or very refined phishing attacks. It's a real danger; customers are confronted with those challenges on a day-to-day basis. Our customers mainly provide critical infrastructure — banks, utility companies, telcos.
On the other side, we can dive a bit deeper into what I would more humbly call machine learning-based technology, and how it can help address cyber threats. I don't want a machine to make decisions autonomously at this stage, but there are definitely ways it can amplify fraud prevention, and that's why we do a lot of research. We came out of ETH Zurich and have close ties to global research in that respect, and I'm happy to dive a bit more into that. To answer your question: I would agree it's both, for those reasons.
The threat is real on a digital level, but also, let's be honest, there's a lot of marketing lingo around what we can use; fraud detection is still evolving, and it's a very exciting journey to be part of.
Excellent. Alan?
Yes, hi, my name is Alan; I'm coming from Natis. I wear several hats in several different projects, but we are most active in the digital identity space — especially new or emerging technologies for digital identities — and we have done, and are doing, a lot of work on how to simplify the verification of information.
With AI, everyone can now produce great and convincing content, and today's trust models are really built on the fact that I trust information because I trust the source — but this paradigm is starting to break. So, what's our focus? — Could you talk a little more into the microphone? — Yes.
Okay. So the real challenge: we already have a huge challenge in the conventional digital identity space around the origin of information — is the organization entitled to produce that information, and who is to be held liable? With AI, and through social media, we have serious scale, but we don't have the very basics covered even in already established domains. So there are several challenges.
Maureen, feel free at any point — and that goes for the panelists too: don't just rely on me. I have many questions, but if you have questions for each other, please jump in. — Absolutely, and we probably also have some questions from our virtual audience. We're very casual in our panels.
Well, my main concern in general: it is true that there is a lot of information, and data and information are not static — they change constantly, and the technology changes with them. So in creating a more secure space, I believe the main challenge may eventually be jurisdiction — where can that really happen? Because at a certain point we will need to recognize that we cannot divide this between regions like the EU or North America; there is a parallel region, which is the digital world. I believe that will be a main challenge.
I wanted to pick up on something Sandra mentioned: we have new attacks, and we now have a democratization of access to very powerful tools. So let's start with the scenario of AI-fueled attacks and not AI-fueled responses. What issues arise when institutions lag in picking these things up, while everyone has the ability to accelerate and expand the threat surface? The last couple of presentations — again, I get freaked out by these things — raised the possibility that I could make five clones of myself and set them off in the world: you do mischief, you be legal, you be this, you be that. Forgetting liability for a second, I just made five times me as an attacker. Putting aside the possibly interesting question of whether that's an iteration of the multiverse itself, what do we do about institutions that are Newtonian, linear, and physically derived, facing a threat surface that is proliferating exponentially?
Do you have some thoughts on that? — What we mustn't lose sight of is that, in general, as a species, we are making great progress. Things get better; they are not as bad as you think they are, so don't be frightened by this. Most people live good, happy lives, and only a minority of people exploit other people. We mustn't lose sight of the fact that this is something we can come together and deal with.
If you've ever read Hans Rosling's Factfulness, you'll know we have our own biased views of what reality is. We're already misinformed — we don't really need any help from AI with misinformation, because we're not always very good ourselves. The interested parties who come here are motivated to learn and find examples; we're self-selecting, not representative of the general population. So there are probably quite a lot of people who are not worried at all, and it's unlikely to impact them.
Now, on the other side, there are some real concerns, and I'm very pleased that the people who come here are actually trying hard to do something about it, to make our systems safe. In the analogue world — my first profession is mechanical engineering — if my engineering efforts ended in a system failure, we would already have had to deal with it; it's just not permissible.
I started out in the automotive industry, in braking. If my product failed, civilians could be harmed. But we have already resolved those issues through safety testing — and safety testing doesn't really exist in the digital space. It's still a very young industry. We talk about this industrial revolution, but cyber and computing are not that advanced compared to established engineering industries. So we just need to get better at what we do — and then won't these problems go away?
Yvonne, will these problems go away?
It has a huge political impact if you democratize access to these technologies. In earlier times it was more or less owned by the US — we had Silicon Valley, which was very strong, and we had the hegemon, the US, more or less setting the rules with regard to technology. Now you democratize it, and what happens is that you move away from a unipolar world toward a multipolar one — it actually fuels the multipolarity of the world — and you enable smaller states to gain the power to interfere with other states. I remember, for example, the election when Trump was elected: we saw a lot of Russian interference, which was proven and was looked at very closely by the prosecutor afterwards. In the last election, when President Biden was elected, we saw a multitude of similar things happening, but done by smaller states — Mexico, Israel. So you give access to states that are less powerful, and you change the balance of power globally when you give other nations access to technologies that in the 90s were still owned by governments. Artificial intelligence is not new — we applied its very early versions in the 90s already for military systems; it was owned by governments, it was more or less classified. Then it proliferated, and now we see it everywhere. As I said, this changes the balance of power worldwide.
New risks — what can we do on the cyber side? How can we understand the risks and start testing our systems to address them? — First and foremost, it's always interesting to understand who your adversary is.
I'm not talking about state-driven attacks now; I'm talking about privately, commercially motivated attacks. These actors operate like large corporations: they have a business case to fulfill, typically under time pressure and at scale, so they push through as many attacks as possible to get some response — often a monetary reward. That is already a very useful mindset for keeping up with changing threats. For organizations, it's a race in the end: increase the barrier, make attacks more complex and more expensive, because that lowers the incentive, and attackers will move to someone with a lower barrier. That's the race we're talking about.
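To make that "raise the barrier, lower the incentive" reasoning concrete, here is a toy expected-value sketch — all numbers invented for illustration, not from any real campaign: once defenses raise the cost per attempt and cut the success rate, the campaign's expected value goes negative and a rational attacker moves on.

```python
# Toy attacker-economics sketch: invented numbers, purely illustrative.
# An attack campaign is worth running only if its expected value is positive.
def expected_value(success_prob: float, payout: float,
                   cost_per_attempt: float, attempts: int) -> float:
    return attempts * (success_prob * payout - cost_per_attempt)

# Low barrier: cheap attempts against a weakly defended target.
print(expected_value(success_prob=0.01, payout=1_000,
                     cost_per_attempt=2, attempts=10_000))   # 80000.0

# Defenses raise cost and cut the success rate; the campaign goes
# negative, and the attacker moves to a target with a lower barrier.
print(expected_value(success_prob=0.002, payout=1_000,
                     cost_per_attempt=5, attempts=10_000))   # -30000.0
```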
Specifically in what we do with critical infrastructure, we historically see a lot of legacy systems with a very siloed view of data. Take the example of a bank: you might have transactional information from credit card or debit card transactions, but also other data sets from the front end. One development we see many organizations moving toward is bringing those data sets together, to get a more specific, more real-time view of patterns and behaviors — changes in interaction that could indicate a potential malicious attack. So these are technologies that can help on the defensive side. But first and foremost, I always advise organizations to look at their risk scenarios — what are the motivations, why could they be targets of an attack — and based on that, look at where the entry doors could be. One challenge I definitely see, compared to the mechanical engineering analogy you brought — which of course can also be fatal if not done well — is the order of magnitude of the scale at which we face the threat.
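As a minimal, hypothetical sketch of that data-fusion idea — not any vendor's actual pipeline — the two silos (card transactions and front-end sessions) can be joined per account and scored with an off-the-shelf anomaly detector; the point is that the combined view surfaces behavior neither silo shows alone.

```python
# Hypothetical sketch: fusing siloed data sets for anomaly scoring.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n = 500

# Silo 1: card-transaction features per account.
tx = pd.DataFrame({
    "account_id": range(n),
    "tx_amount_mean": rng.gamma(2.0, 50.0, n),
    "tx_count_24h": rng.poisson(3, n),
})

# Silo 2: front-end session features per account.
web = pd.DataFrame({
    "account_id": range(n),
    "login_failures_24h": rng.poisson(0.2, n),
    "new_device": rng.integers(0, 2, n),
})

# Fusing the silos gives one behavioral view per account.
features = tx.merge(web, on="account_id").set_index("account_id")

# Unsupervised anomaly detector over the combined behavior.
model = IsolationForest(random_state=0).fit(features)
features["risk_score"] = -model.score_samples(features)  # higher = more anomalous

print(features.sort_values("risk_score", ascending=False).head())
```

In practice the interesting signals are cross-silo (a large transaction *and* a new device *and* recent login failures), which is exactly what a single-silo model cannot see.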
I'm very excited to hear more about your work, but on a private-sector level there is so much reliance on a few technology stacks, infrastructure pieces, and components, and those bottlenecks are the most exposed, because so much reliance sits on them. That's why I strongly believe that cybersecurity is no winner-takes-all market, for that specific reason: you need specific best-of-breed technology pieces that you plug together to address those ever-changing fraud patterns. Consolidation is actually very dangerous. — Alan, do you want to say something?
I was just going to say: the problem is that the foundations on which our technology is built contain lots of unsafe code, and it's everywhere. Until we fix the unsafe things that we do — all of those anti-patterns — anybody with the knowledge will be able to exploit them everywhere, and a few people have that knowledge and are able to make a living out of it.
It's just like windows that aren't safe and can be opened with a biro or what have you: there will be a certain number of people who can always get into houses, and it's just a numbers game. If you make it easy for people to earn a living that way, some will go and do it. If we don't want that, then we have to make our digital infrastructure — the built infrastructure — better, so that it is safe, fail-safe, and people don't get harmed. Just sticking plasters over it doesn't help.
We have to build better infrastructure. But I have a biased view, because I'm an engineer, so I see it from one point of view. — Well, you know, we do things as we move along. The striped zebrafish embryo starts with a single-chambered heart, and the chambers develop while the heart is beating; in a way, we need to fix it while we're evolving it. And so I wanted to turn to Alan.
You talked about simplifying verification, and simplifications involve abstractions — and I know abstractions can become plasters over the problems, I get it. But can you talk a little about the benefits of simplification, in terms of access and understandability, and then maybe some of the challenges — of obscuring what's underneath?
Yes, of course. One design pattern in the digital identity space, which has both pros and cons, is captured by that famous cartoon from the early 90s: on the internet, nobody knows you're a dog, right?
Now, that's a good pattern in some cases, but it can be misused. When we work with different use cases, what we find is that when information gets into their systems from a source that is not trusted, verifying where that information is coming from, and who is behind it, is either very hard or impossible — and this cannot be solved with abstraction.
It really goes down to the very basics: we need to build the missing foundational layer where, at least for organizations that would like to take part, their digital identity can be clearly expressed and cryptographically protected. We learned that it's impossible to do this top-down; as with any other foundational layer, everyone needs to agree on a protocol and a set of standards for expressing that information. Only once identity, rights, and authorizations are expressed correctly can we start talking about whether another organization will recognize you as a trusted source. Trust, as we see it, is not absolute — it's always relative — but at least having this ability could already help fix the problems a bit.
We're focusing here on organizations, because once organizational digital identity is fixed, things also become more secure for natural persons — that's secondary, but without the former we will not be able to make verification easy. When we work with use cases, this is the hardest part: when we ask who within your domain is responsible for authorizing you to issue certain documents to another company or user — a certificate, a driver's license, an education credential — they usually stop. All this information today sits in legal documents that people usually don't read, or that are extremely old, or written in a language you don't understand. So having the ability to express "this is a university; this university is accredited to perform this" turns out to be a helpful model — but how to realize it is a real challenge that we don't have an answer to.
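As a toy illustration of that missing foundational layer — expressing, verifiably, that an organization is entitled to issue something — here is a minimal sketch with placeholder names. Real frameworks add registries, revocation, and standardized credential formats; this only shows the cryptographic core of the idea.

```python
# Minimal sketch, not a real trust framework: an accreditation body signs
# a statement about an organization's authorization, and a relying party
# checks the signature against the body's known public key.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical accreditation-body key pair (in practice: managed,
# published, and rotated through an agreed protocol).
authority_key = Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

# The statement to be made verifiable: who may issue what.
attestation = json.dumps({
    "subject": "example-university.example",
    "authorized_to_issue": ["education-credential"],
    "accredited_by": "national-accreditation-body.example",
}, sort_keys=True).encode()

signature = authority_key.sign(attestation)

# A relying party that trusts the accreditation body can now verify the
# claim mechanically, instead of digging through legal documents.
try:
    authority_pub.verify(signature, attestation)
    print("attestation verified")
except InvalidSignature:
    print("attestation rejected")
```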
You know, it's interesting — Kalia, in her presentation, was talking about how good institutional memory or identity should be. One of the challenges is that all of our institutions existed when we woke up this morning, which means they are artifacts of old problems: the nation state, 1648, the Peace of Westphalia — how many other things from 1648 are we using right now? Part of the challenge is that we created corporations and governments to serve human needs, so how might we look to humans and how we act? In the prior presentations we talked about digital clones, AI clones — if humans undertook to give their clones certain characteristics, might something emerge that serves those needs better? Obviously my mom doesn't know about it; it sounds like other presentations featured much more sophisticated moms than mine. But how might we deal with sovereigns — sovereigns who don't ask for permission or forgiveness — and how might we help institutions, as people, do better? If we wanted to create a safety net, a neighborhood watch: we know companies will do certain things to make money, and governments will do certain things bound to their jurisdiction. Is there an opportunity for people — as customers, as employees, as the engineers doing the programming? We talk about it as ethics, but might that ethics give us guidance that lets us steer the existing institutions, and the new institutions arising in these spaces, toward more humanity? Because if we don't ask for it, it's not going to happen.
The challenge is that we think the responses are largely in the human world, and that the problem is in the human world, but the problem is actually in the digital world. There is no clean analogue to the problem, because we are not shrunk down into tiny people and injected into the digital verse. The challenge for our executives higher up is that they think about people and the real world, not about how the digital paradigm works. In the digital world you don't have a sense of smell, you don't have a gut feel; you can't use those things to sense whether something is right or wrong. Most of the challenge is that in the cyber world, the method of attack is to exploit a common vulnerability: somebody leaves a door open, the attacker walks inside and does this thing called living off the land. They appear to be local; they don't look any different, because they are in your system and appear to be part of it. In that regard they are fraudulent — they look like a member of your organisation, and you can't tell the difference between them and anyone else. So the first thing we have to do is make these systems safe so they can't get in. Of course, somebody sitting next to you can still lie, cheat the organisation, or take paperclips home — in the analogue world we can work on our morals and ethics. Digitally, perhaps we have to ask our executives to help us invest more, and to know what to do better, to make safer systems.
Let me just note that we are now officially moving into the next session — the shift to breaking protocols. Even though this wasn't advertised, I'm hoping our panellists will stay on stage; if you have another appointment, you can leave. My co-moderator is taking the opportunity — please reset, thank you. And Yvonne, please use this as an opportunity to help us shift over to the next topic.
So, drawing on old stuff from the analogue world: we can actually bring philosophy and philosophical thought into systems engineering. We heard about value-sensitive design in a talk earlier here, but there is something currently being established globally called value-based engineering. There is even a standard for it — ISO/IEC/IEEE 24748-7000 — and it merges philosophy with technology.
This is not easy for engineers to understand, so they need to be educated in ethical theories. That said, there is a process by which you can break down the very high-level requirements — system safety, system security, system transparency, and so on — into individual system functions.
So you really go from the high-level principles, as we heard in that talk, down to system features, with a stringent, structured, and scientifically grounded approach — a philosophical approach. So something does exist, and I can only encourage you to look into that standard. And to come to the other topic: NATO, which is doing standardization work for artificial intelligence in defense, is already checking whether that standard, that framework, might serve as a framework for ethical AI, for ethical defense AI systems.
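To show the direction of that breakdown, here is a purely schematic sketch — the field names are illustrative, not the standard's actual work products: a high-level value is traced through value requirements down to concrete system functions, and a simple check ensures nothing stays untraced.

```python
# Schematic sketch of value-to-function traceability in the spirit of
# value-based engineering; names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SystemFunction:
    name: str

@dataclass
class ValueRequirement:
    statement: str
    functions: list[SystemFunction] = field(default_factory=list)

@dataclass
class Value:
    name: str
    requirements: list[ValueRequirement] = field(default_factory=list)

transparency = Value(
    name="transparency",
    requirements=[
        ValueRequirement(
            statement="Users can tell when a decision was machine-made.",
            functions=[SystemFunction("label_automated_decisions"),
                       SystemFunction("log_decision_provenance")],
        ),
    ],
)

# Traceability check: every high-level value bottoms out in functions.
for req in transparency.requirements:
    assert req.functions, f"untraced requirement: {req.statement}"
    print(req.statement, "->", [f.name for f in req.functions])
```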
What a nice transition. So now we're moving into AI and its effect on politics, and politics intrinsically brings forward philosophy. We wrote a paper with the Microsoft folks for the OECD in the 90s called Personhood, and in it we observed that in the EU, identity was based on the notions of Hegel and Kant.
In the US, identity is based on mercantile notions — the utilitarians and Locke. And as it happens — I had some experience with the Chinese Supreme Court folks a number of years ago — because of Japanese imperialism in World War II, China, Korea, and Japan incorporated Hegel and Kant, and also Confucius. So there is a philosophical interoperability opportunity on the political side, but those interoperabilities have not yet been exploited. Now, again, with democratization, we have non-state actors as well as state actors — that's why I wanted the cyber folks to stay on stage, and thank you for indulging me. What can we learn, tactically, from the cyber experience of that proliferation of threats — where it's not just other companies attacking, but individuals, hackers, all manner of actors? What might we learn from the cyber-attack side that helps us understand the tactics needed for more reliable nation-state actions, warfare actions, et cetera — bearing in mind that a company's system could be a weapon system? Are there some thoughts? Yes, please.
Actually, the cost of attacks has become really low — especially social engineering attacks. And even if we manage to control the AI that is covered and managed by companies, nothing prevents fraudsters from running their own systems, because they have the incentive and the budget. Online, we cannot tell one from another, and the issue that existed in digital identity until now is even bigger. Even if we control the big companies, the internet is an open space. To take an analogy with cars: if a car looks wrong and you see it on the road, it's easy to remove it; we cannot do this on the internet. So we really need a different approach — rethink how we trust the information that comes into a system, and what tactics to employ to mitigate, or at least detect, the risk. One other thing: we have more and more computing power in place, in all kinds of devices, and there's a huge incentive to exploit it by installing malicious code on them. Suddenly the computing power available to certain attackers can really increase, and with LLM models becoming smaller and smaller, this amplification can really take off. So it's a real challenge to think about what strategies we should employ to mitigate that. In the news lately we see many, many social engineering attacks succeeding — convincing people to simply send money to strangers. We have no way to verify to whom a phone number belongs, no way to verify who the caller is, and even when you perform a transaction you have an IBAN number but no way to verify it. So we have a very fragile system, and on the other hand extremely powerful tools, where everyone can write the perfect email or speak perfect English, or any other language.
But the problem — and you sort of hinted at this — is that this is asymmetric. The channel shift that moved many, many services into the digital world meant, like a law of thermodynamics, a shift of power and energy: the benefit went to the people who drove the shift, and the burden fell on ordinary people to be aware of these things. We know from any business transformation that if you forget the human side — managing the change — you get a discontinuity, an impedance mismatch, where people are not equipped to deal with it, and the people who made the system don't realize it was built with flaws. The people who could solve those problems — who could let you verify those numbers — are privately run companies, so they see it as an opportunity to sell you something. We've seen recent examples where one company had a detailed logging service as a premium feature you had to pay extra for; then it was realized that people had been exploited, and now they have an obligation to give this to people so they can look after themselves. When you monetize people's safety — "oh, it's an optional extra to have a seat belt" — that's what's happening in cyberspace, and we have to change that dynamic and say: no, your product is not fit for purpose, these people are being harmed. Again, I'm sorry for the engineering analogy, but yes, these practices have existed for a long time; they're just not evenly applied. And that comes down to helping our digital industry become more professional and accountable for its actions and inactions.
Sandra, please. — Another practical example we're working on, which also touches ethical problems we might face in the future: there is one unsolved problem that biometrics, as an example, brings along, which is that it's non-revocable. You can't replace it once it's compromised, and that's a huge ethical problem.
We also work with biometrics — there are components on that level — but I think it's so interesting and so mind-boggling because we see how fast you can synthetically create biometrics already. We have all seen deepfakes; we have seen the advances just in the last year. For us in research it's extremely exciting: what comes after, what can you do in addition — to take the earlier analogy, how can you raise the barrier for criminals?
We specifically look into contextual data, because it is constantly changing. Context can be at a micro level — in a privacy-preserving way, things that happen around you rather than about you.
These are very interesting developments, because of the order of magnitude at which technology forces us to rethink. Even technologies that many organizations are still in the process of implementing, because they're new generation, we already have to rethink — the cycle becomes faster and faster. So yes, I agree with you, there is a foundation that needs to be worked on, but there just isn't enough time to go at the same pace. We already have to understand, from an ethical point of view, what's coming next, and we have to have this discourse at a global level.
That's fantastic. So we've established that we're living in the fog of peace; now let's move directly to the fog of war. Yvonne, what is the status right now of the information space on the battlefield, and what are the implications of bringing AI into it? Is the fog of AI now going to be added to the fog of war?
First of all, we actually have a super high degree of transparency on the battlefield. You have sensors everywhere — you see this in Ukraine as a practical, but obviously very sad, example — and it's very hard to move tactically, for the adversary as well as for the defender, because you look into the battlefield with your radars and your satellites, you know almost exactly who is where, and you can make up your mind about what could happen next. So this high degree of transparency is there.
I attended the international air show yesterday, where we were also speaking about this topic. Alongside transparency, you have what has been called presence: weapon systems have different reach, but you can attack everywhere. You have hypersonic weapons that are so fast they can hit a target within five minutes, and you have no real option to defend against them — launch them somewhere in Russia, and five minutes later they hit in the US. So transparency and presence are the two new paradigms we have to deal with.
Now, where do you bring artificial intelligence into that ecosystem? There are several options. One is to improve the processes you already have in place, and we see this widely among nations. My team undertook a three-year survey and study of how artificial intelligence is used in defense across 28 nations. Probably because we lived more or less in peace for the last 30 years, we do not see, as I said at the beginning, disruptive innovation there — rather, existing processes are made better through more data, and faster through more computing power. This applies especially to intelligence, surveillance, and reconnaissance missions: you get mass sensor data and classify, for example, what you see flying around — friend or foe. That happened 30 years ago already, and it still happens now, only with better algorithms and more data. So that's the one thing.
The second thing, which does go in a fresh direction, is using artificial intelligence for simulation: you simulate battles, you simulate threats, in order to anticipate the tactics the adversary might use. You play hundreds of thousands, millions of scenarios in a simulator — we call this the defense metaverse — to get statistics on how the adversary might behave in a mission you are planning, and on what would happen. So you put more foresight into the process. And beyond that, you can also use artificial intelligence to develop new tactics for the armed forces.
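A drastically simplified sketch of that "play many scenarios, read off the statistics" idea — a toy model, nothing like a real defense metaverse, with all probabilities and tactic names invented: simulate a mission outcome many times under two candidate tactics and compare their empirical success rates.

```python
# Toy Monte Carlo sketch of tactic evaluation by simulation.
# The probabilities and "tactics" are invented for illustration only.
import random

def simulate_mission(tactic: str, rng: random.Random) -> bool:
    """One simulated engagement; returns True on mission success."""
    detected = rng.random() < (0.3 if tactic == "dispersed" else 0.6)
    if detected:
        return rng.random() < 0.2   # success is unlikely once detected
    return rng.random() < 0.8       # an undetected approach usually succeeds

def success_rate(tactic: str, runs: int = 100_000) -> float:
    rng = random.Random(42)
    return sum(simulate_mission(tactic, rng) for _ in range(runs)) / runs

for tactic in ("massed", "dispersed"):
    print(tactic, f"{success_rate(tactic):.3f}")
```

The real systems described here differ in every detail, but the shape is the same: run the scenario space at scale, then plan against the resulting statistics rather than against a single guess.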
Here you must understand that tactics are core knowledge, a core competency of the armed forces: behaving tactically on the battlefield. But now we are confronted with artificial intelligence — with systems and technology that can produce superhuman output. Superhuman text, maybe, when I think of ChatGPT; but there is also a tech stack that can produce tactics as good as today's human military tactics, and beyond them — superhuman tactics. That is new, and it is in fact disruptive. The ground forces are starting to learn what we learned as civilians: just as we learned to use ChatGPT to write better texts with fewer errors, they are learning that there is a tech stack that allows for, let's say, unconventional tactics. Slowly, slowly, the penny drops among the nations that artificial intelligence can be used here as well.
There's only one nation that actually bets on this, which is in fact Germany, and we have learned that other nations are very interested in this tactical development with AI. That said, if I develop an AI that provides tactical superiority for the armed forces, and I develop and visualize it in a simulation environment, it also means I can take it out of the simulation environment, extract it, and — simply spoken — put it on a physical platform: a tank, a drone, an aircraft. That improves the tactical behavior, and lets the tactical behavior be controlled by the machine rather than by the human being.
Here I would just like to challenge the stability a bit. AI can produce unpredicted behavior, right? What are the latest thoughts on this: we can train AI to do certain things well, even to come up with superhuman tactics, but if decision-making relies too much on it, it also becomes a very interesting attack surface. If someone manages to change the parameters of the algorithm, something that worked well yesterday completely fails the next day. So the testing mentioned before, and security measures, need to be in place to ensure that even though it can come up with new tactics, it does not go against the initial goal or strategy it was designed for — not by itself, but because someone gets into the system and changes things that are not visible to the end user.
The real challenges are our own inherent biases in how we perceive the world and how we use these tools, because our executives give us the guidance, and approve the budgets, to go and buy these things, and our existing command and control structures dictate that we choose the ones they think will work.
They have some experience at the front — they can go and visit — but quite a lot of the innovation we see actually comes from first-hand experience at the front line. These things are context-dependent: disruptive behavior comes from particular contexts, and those contexts don't apply everywhere. So there is an imbalance between a Western view of science and technology trying to impose a new paradigm, and the evolutionary approach of the people affected finding their own solutions — which is what we are seeing. In the transition from the highly specialized, decided-in-advance "this is going to be the thing that solves it," we find the half-life of a very expensive vehicle is really shortened. If the Russian war against Ukraine is an example of what's to come, it appears to be trench warfare, a numbers game — an evolutionary thing, the Red Queen paradox, where prey and predator evolve together and nothing seems to change. But there are changes on the AI side, where what they call cognitive disruption is being developed: playing in the space Donald Rumsfeld popularized — your known knowns, your known unknowns, and your unknown unknowns. You look at the blind spots of your opponent and try to sow doubt and uncertainty in their systems; it's exploiting a different paradigm. We still have those challenges, but it's game theory and practice, and it needs diverse people to participate. What large language models have shown is that they get you to average really quickly, and anomalies — people who think differently — are seen as hallucinations. But hallucinations are probably the key to proper disruption.
This is a good point, though, and we must say we're not speaking about state-of-the-art artificial intelligence here. DARPA, the Defense Advanced Research Projects Agency — the research and development agency of the U.S. DoD — has coined the term third wave AI, and third wave AI is obviously not second wave AI. The AI in place today is second wave AI.
It can reproduce things; it is usually not context-aware — ChatGPT is a little context-aware, but it is not aware of its consequences, the consequences of what it does. For example, if you ask a large language model to plan, it will fail completely: it cannot plan, it can only give you a list of actions. Say "bring me from Munich to Vienna" and — I tried it out — it will first send you to Frankfurt, which makes absolutely no sense. If you rely on an artificial intelligence that gives you such a list of actions in defense, the plan is so inferior that it can easily be exploited by the adversary.
The idea now is to come up with superior tactics, and superior tactics require a different tech stack — and we really rely on a very different tech stack here. One phrase you just used was game theory, and game theory plays a role here. On this basis, we do see surprising tactics from such a system, but in line with what it is actually allowed to do. Why? Because third wave AI is hybrid. Hybrid means there are a lot of physicists and mathematicians on my team, for example, who model the things an AI does not need to learn, because we know the facts: the physics of a platform, the physics of a radar — how a specific radar type behaves, what its radar cone is — how a specific tank behaves, ballistic models, damage models. You do not need to let an AI learn this, and by the way, you do not have the data for it.
That is one big challenge among several: in the real world of war — fog of war, you said it — we don't have data, and when we do, it is not mass data from the battlefield. We see how Russia attacks Kharkiv, and we see this once in our lifetime, but that is far from the mass data we would need to train an AI to counter such tactics. Superior tactics will not come from data, because data is just something that happened in the past. So your criticism of what these models can do is founded: yes, they are founded in the past, and no matter how hard you wring that data from history, it will not tell you what happens tomorrow. Therefore you need to rely on hypotheses about what can happen — you build up a whole tree of hypotheses about the future.
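A minimal sketch of that "tree of hypotheses" idea, under stated assumptions: with no historical mass data, candidate adversary actions are enumerated and each branch is scored with a hand-modeled (known-physics) outcome function rather than learned from data. All actions and payoff numbers here are invented, and the tree is one level deep where real systems search much deeper.

```python
# Toy hypothesis tree: enumerate what the adversary *could* do next and
# evaluate each branch with a modeled outcome function (no training data).
from itertools import product

ADVERSARY_ACTIONS = ["advance", "hold", "flank"]
OWN_ACTIONS = ["defend", "counterattack"]

def modeled_outcome(own: str, adversary: str) -> float:
    """Hand-coded payoff from domain knowledge; invented numbers."""
    table = {
        ("defend", "advance"): 0.6, ("defend", "hold"): 0.5,
        ("defend", "flank"): 0.3, ("counterattack", "advance"): 0.7,
        ("counterattack", "hold"): 0.4, ("counterattack", "flank"): 0.5,
    }
    return table[(own, adversary)]

# Game-theoretic choice: pick the own action that is best against the
# worst-case hypothesis about the adversary (maximin).
best = max(OWN_ACTIONS, key=lambda own: min(
    modeled_outcome(own, adv) for adv in ADVERSARY_ACTIONS))

for own, adv in product(OWN_ACTIONS, ADVERSARY_ACTIONS):
    print(own, "vs", adv, "->", modeled_outcome(own, adv))
print("maximin choice:", best)
```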
One of the things we couldn't have predicted in this sort of hybrid warfare is the emergence of something like NAFO: it spontaneously emerged that Russia's highly organized misinformation trolls have been disarmed by a community of people using humor. Now, who would have thought that humor would be a form of warfare, or a method of pacifying an organized attempt at disinformation?
That is more the strategic view of warfare, whereas I'm speaking about the operational and tactical use of artificial intelligence, where we actually know the adversary's platforms and their physical capabilities, so that we can train tactics against, say, a ground-based air defense system of the kind Russia uses. It's a different level. — Your innovation is not always about logic, and I think that's where we have to move to something quite different. — And it will be different.
Well, in our last minute here, it's clear that working in AI is not a tankless job. That's a pun.
Okay, so in our last minute, let's have just a sentence — and this is an impossible question I ask of every panel. Fifteen years out, in the year 2040, what does good look like? — How much time do I have to think? — We all together have 20 seconds. — Better than today.
Let's say: more secure, and putting less burden on the end users. — I'm excited to build technology from people, for people; diverse backgrounds and insights will yield better quality. — For me personally: being retired from AI, being offline, and being active in animal protection. — A life without technology: switch it off, live a calm and peaceful life in a local community.
Please, please join me in thanking a wonderful panel here today.