Thank you, Carsten. So life will stay hard a bit, unfortunately, and maybe, maybe not.
Yeah, the title is a bit provoking. AI is not doing the CISO's job, but first let's figure out what's really behind AI, what is the latest development, and what has all this to do with our job as CISOs in large and also small organizations. If you look here on the escalation of AI and the adoption of AI to enterprises, it was for very long term what we call simple machine learning. Over time it evolved over big data and then supervised learning. Now we have deep learning models and generative AI models, and the difference is in the past it was always about labeled data.
So first we had to tell AI what is good and what is bad, and out of that we could make decisions. Easy example is the antivirus software on the desktop machines. So they know by profile what is malicious data, and they just compare any data against this known profile. Today we have a totally different kind of scaling behind, because the models itself have natural language understanding implemented, and natural language understanding could be more than just German or English or Japanese or whatever kind of language we use.
It could also be machine code, it could be log data as a language understanding, it could be C++ or whatever kind of programming and coding language, and with this generic understanding it's much easier to train the models on specific purposes. And we come to that also what that means from a risk perspective, and what we need to care as CISOs here. But before we go there, let me spend a bit here talking about the peak or have we peaked already. When I think back at the beginning of the pandemic, we all were a little bit worried about we need to die, we're not running a blockchain project.
So everybody was from the mindset fear of missing something out, we're not really working on blockchain, and it's the same kind here with AI at the moment. As you can see with chat GPT statistics, we are almost over the peak, but in real here is a difference. The way the technology will be used in the future is kind of different than compared to past types we had in IT, because it will disrupt the way we work together with machines and we're using machines in our day-to-day work.
And it also provides a lot of opportunities, and I come to the German angst topic in a minute, but it provides a lot of opportunities. Think about we have a problem of a demographic change in Germany, especially in the next 10, 8 to 10 years, we will most probably miss out around 20 percent of the labor forces here, or the workforces at the labor market. And we need to kind of compensate. And to be able to compensate, we need to use technology, we need to make the usage of technology easier, and here comes AI into place. In principle, we have to divide in two different kind of models.
We have these discriminative AI models, this is mainly used in cyber defense, this is mainly used for CISO work, because this is finding patterns, like I always mentioned, in the past it was searching for needles in haystack when doing cyber defense, today it's about searching for needles in needle stacks. And therefore, this kind of AI can help us really to be faster and to find the right information we need to go beyond the cyber attacks and to go after it and to avoid them. But we also have at the same time the generative AI stuff.
And I want to focus in the next couple of minutes on the challenges coming to us with the generative AI, and especially the challenges for us as CISOs. And before I conclude then, why is AI not really doing our job, but maybe you'll find it out for yourself. So first of all, deep fakes. Maybe you have heard about services like Haygen. With Haygen, you can send up a text, the text is then put into a video speech with an avatar, so very easy to use. But there is also kind of Haygen Labs service existing.
If you use that, you can send over your video, translate it into a different kind of language, it's using the tonality of your speech, it's using your face, it's even creating better picture experience, whether the color is not that good, it's enhancing the colors or the brightness and all these things of a video. So you can perfectly fake your video in a different kind of language. This is an easy kind of translation and it's helpful to all of us when doing international calls, for instance. But what if I use this technology to replace myself against somebody else?
What if I use this technology to replace my voice against the voice of somebody else? So think about the deep fakes we've seen in the CEO fraud cases already. This becomes now a real-time threat because the technology is available and at the moment you need just 30 seconds video to upload it, to reproduce a video in a different tonality or with different faces in future and very near future, I expect within the next six to nine months it will be available real-time online without training before. So that's a disruptive move here.
With that and in a way like I'm speaking to you right now here virtually and we're used to talk virtually over the pandemia, we learned a lot, so home office and remote working these days, how can you trust in future that the one you're speaking to is the one intended to be? So from the CISO perspective, the entire end-to-end security chain driven by digital identities is becoming more important than ever before. So digital identities matters here really to have an end-to-end view and really to make sure that we not fall into traps just because of those deep fakes.
The next thing is you can also weaponize AI. So a really tiny example, we have an AI chatbot at Deutsche Telekom, it's called FragMagenta. So you can ask questions about reconfiguring your router or implementing an email account or whatever kind of thing and we ask this chatbot, hey can you help me writing an email to my girlfriend? And of course the answer was no, sorry I can't, I'm just here for giving advice and assistance to Deutsche Telekom services.
So okay, next question we asked was can you help us setting up an email account in Outlook? Of course I can and we got all the advice how to do it. Once it was done, we raised the question again, can you now help me writing an email to my girlfriend? And out of a sudden we got a proposal for an email to the girlfriend. This tiny example shows how you can move around these AI borders and limitations to really move it into a different kind of direction and think about you will be able just by doing so to create malware for a router for instance.
You don't have to have programming knowledge anymore, you can just ask in natural language please help me in writing a malware and then send it out to whatever or how many victims ever. So this weaponizing of malware, weaponizing of technology due to AI will become a huge threat and also we as CISOs have to deal with that and we have to counteract against that to ensure that the services we're providing to our customers will be safe and secure in future as well. The next thing here talking about fake news a bit. So we're used to use search engines in the internet.
If you use Google for instance, you put in one question and it shows you immediately a thousand answers. So you see the world is not black and white only, there's not only a single answer. But what about chatGPT for instance? One question, one result. What if the AI is hallucinating? What if the source and the training data were not proper? So also how to deal with that is a valid question for the future we need to deal with and especially security professionals need to deal with to really ensure that those technologies provide benefit and not harm to the users.
And I've shown here on that slide the effort for building trusted models. Like if you asked chatGPT today, hey how can I build a bomb? Of course chatGPT will tell you, so I don't give advice on this. This is bad intention and I do not support. But the problem here is when training a generic model which is understanding natural language, you need about a hundred people days to train that. If you want to train it in getting the trusted model which is not giving bad advice, you need more than a 200 people days to train or people years, sorry, people years to train those models.
And this shows the effort is huge. So a lot of companies will decide in future just for economic reasons not to build in those trustworthiness in this models. And this is again a topic where we as security community need to deal with. We need to find out whether a model is trustworthy or whether it's not. So how can we decide on the usage of those models in enterprises when we don't know about the trustworthiness. So that's a big challenge here for us. And last but not least from that perspective, digitization needs electricity. Who is doing decisions when we have a blackout, for instance?
Sounds like a bit of a crazy example because if you're a blackout, also threat actors are not really able to perform their cyber attacks anymore. But who is then doing the rebuild of the infrastructure? Who is taking care that it's all happening in a secure way? So at the very end, we need humans to decide. We need humans to judge on the technology. And we also need humans to guide technology. This is what I hear summed up a bit.
But it's mainly, in my view, on the control set we have to provide for those technologies to help our businesses and to help our partners in business to use the technology in the right way, to mitigate risks here, and not to fall into what we call the typical German angst and banning all the technology because there is a huge benefit. And in cyber defense, as I mentioned before, we have to use these technologies because we can't catch up with the attackers just by putting more and more and more people on the topic.
So it's economic-wise not making sense, but also from the demographic change will become definitely a challenge in future. So this technology is helping us on our side at the defense, but it's also helping the attackers. And therefore, we should always be a bit ahead. And therefore, we need the humans at the very end. So this brings me to the end here of the short speech. AI is not doing the Caesar job. I believe AI is supporting us in the way we can work, we can act as security professionals.
AI is helping us to become faster, but AI is not replacing us like we have these AI assistants and the idea of AI assistants on the desktops just doing the entire job for the workforce. This is not going to happen, in my view. We have to be intelligent and we have to be behind the AI and drive the usage of that AI.
With that, I'm opening up now for questions.