KuppingerCole's Advisory stands out due to our regular communication with vendors and key clients, providing us with in-depth insight into the issues and knowledge required to address real-world challenges.
Unlock the power of industry-leading insights and expertise. Gain access to our extensive knowledge base, vibrant community, and tailored analyst sessions—all designed to keep you at the forefront of identity security.
Get instant access to our complete research library.
Access essential knowledge at your fingertips with KuppingerCole's extensive resources. From in-depth reports to concise one-pagers, leverage our complete security library to inform strategy and drive innovation.
Get instant access to our complete research library.
Gain access to comprehensive resources, personalized analyst consultations, and exclusive events – all designed to enhance your decision-making capabilities and industry connections.
Get instant access to our complete research library.
Gain a true partner to drive transformative initiatives. Access comprehensive resources, tailored expert guidance, and networking opportunities.
Get instant access to our complete research library.
Optimize your decision-making process with the most comprehensive and up-to-date market data available.
Compare solution offerings and follow predefined best practices or adapt them to the individual requirements of your company.
Configure your individual requirements to discover the ideal solution for your business.
Meet our team of analysts and advisors who are highly skilled and experienced professionals dedicated to helping you make informed decisions and achieve your goals.
Meet our business team committed to helping you achieve success. We understand that running a business can be challenging, but with the right team in your corner, anything is possible.
So now we are talking about the regulatory topic, but also about cybercrime. So the next speaker will talk about the EU AI Act and why it will not protect us from cybercrime. He actually has a long list of merits. I probably have to shorten it in the interest of time. So obviously, he's a scientific director of the Cyber Intelligence Institute in Frankfurt. He contacts research on topics on the border between law and cybersecurity.
He's advisor to the German Federal Government and the European Commission, member of the advisory board of the Lithuanian cybersecurity unicorn NorthPN, also reviewer of the Swiss National Science Foundation, the Bavarian Academy of Sciences and Humanities, and the Alexander von Humboldt Foundation, and so on and so forth. I'm really excited to have such a person here in Frankfurt in CapEurope. Please welcome Professor Dr. Dennis Kipker on stage. Thank you very much for the possibility to speak here today at Cyber Evolution in Frankfurt.
My topic will be dealing with AI regulation on one hand, on the other hand of cybersecurity threats dealing with AI. That's why I chose a very special topic here today.
Okay, certainly. Yeah, it doesn't work like it should be. Can I try this one? Yeah. Perfect. Great. So my topic is today, why the EU AI Act will not protect us from cybercrime and what needs to be done instead. And we really have heard a lot of EU AI Act, what it could be, what the results could be, and the politicians behind it. And there was a big post on social media, I think it was on Twitter, last year in December, I think one year from now, dating back, where Thierry Berton said, well, the AI Act will change everything. And the European Union really has to do a lot when it comes to AI.
It is, of course, about research funding on one hand, but it's also about protecting our economy, state, citizens against the threats of illegal use of artificial intelligence. And as it is mentioned here on this slide, Thierry Berton said, definitely, this is historic. So the EU becomes the very first continent to set clear rules for the use of AI. And we are definitely the first continent to have a holistic approach on the use of AI. But he also says the AI Act is much more than just some kind of abstract rule book.
It's a launch pad for you startups and researchers to lead the global AI race. So very high promises. And currently, the AI Act is being implemented. So we have to see if all these promises will be fulfilled. But generally speaking, when a politician says the best is yet to come, we have to be in some way suspicious. And this is why I wrote an article in the German newspaper Handelsblatt in the beginning of this year, shortly after they wrote this negotiation, politically speaking, regarding the AI Act.
I just made a translation here about the topic, the main topic behind this article, why European AI regulation is not enough. Anyone who believes that the US AI Act will become an international expert is, in my opinion, in a certain way, naive. And it is the responsibility, in my opinion, of industry to decide how and with whom it enters into strategic collaborations in the future. So generally speaking, in my opinion, the AI Act is just some kind of theoretical administrative thing. And we cannot create a high level of innovation just by some legal regulation.
And I also would like to show during this very short presentation that we also not directly be able to create really more cybersecurity by just an abstract legal regulation, which is called the AI Act. But when it comes to AI, the stakes are very high, the expectations are very high regarding this. And there are really a lot of big tech leaders who say, well, we have invested a lot in artificial intelligence, and people should also invest, companies should invest, state authorities invest. And this is why people say, well, AI will definitely change our world.
And we already can see that AI is changing our world. I have collected some impressions here from big tech leaders dealing with AI. So Mark Zuckerberg, for example, he said, generative AI is the key to solving some of the world's biggest problems, such as climate change, poverty, and disease. It has the potential to make the world a better place for everyone.
Well, this sounds definitely promising. And just let us take a look to another person who is a big tech leader, Bill Gates. And it's very similar what they both have said. Generative AI has the potential to change the world in ways that we can't even imagine. It has the power to create new ideas, products, and services that will make our lives easier, more productive, and more creative.
Yeah, of course, for everyone, but possibly also for cybercriminals. It also has the potential to solve some of the world's biggest problems.
And here, it's very similar. The future of generative AI is bright, and I'm excited to see what it will bring. And the last point I've brought to you is here from Elon Musk. Generative AI is the most powerful tool for creativity that has ever been created. It has the potential to unleash a new era of human innovation. But as I mentioned, not only innovation for ordinary people, for companies doing ordinary business, but also for cybercriminals.
And this has been seen relatively shortly after the big AI boom started with the fourth version, I guess, of the large language model, ChatGPT, in the end of 2022, when we had several reports and news articles dealing with the first idea of having some kind of dark side of AI. Because when you can use AI for constructive issues, you can also use it definitely for destructive issues. So this is why here in October 2023, there was a first reporting that cybercriminals are creating a darker side to AI. That's one thing.
But on the other thing, what we're currently discussing is that AI is not only shaping the future of cybercrime itself. AI is also giving counter-attackers the possibility to possibly fight against these new upcoming cybersecurity threats that have been realized by artificial intelligence.
So here, shortly on from the October article to December, cybercriminals are increasingly using AI tools to launch successful attacks. But on the other hand, defenders are battling back. And this is also some part of this presentation where I want to analyze what's the difference between, on one hand, the good thing behind AI and the bad thing behind AI. And very shortly afterwards, on 14th of February 2024, there had been some cybercriminals also using artificial intelligence.
But here, this had been state actors. So China, Russia, Iran, and North Korea. And they had been using chat GPT accounts, also where they have used credentials of users that had been compromised. And some of these accounts had been locked down or shut down by open AI.
So we see, on one hand, there are many, really many possibilities when you want to use AI for creativity purposes, for good purposes. But on the other hand, of course, cybercriminals are also using such a kind of technology to make their life a little bit easier every day. And this is why we come to the following conclusion.
Today, anyone with malicious intent can develop and deploy malware in a very short time and cause devastating damage to companies of any size. With readily available AI tools, even inexperienced actors can carry out denial-of-service attacks, create phishing emails, or deploy ransomware. These attacks can then be carried out simultaneously from numerous systems around the world, making it almost impossible for the responsible employees to identify all attacks in time. So on one hand, we see possibilities for ordinary people.
On the other hand, we see possibilities for a lot of cybercriminals here. But I think this is nothing new. So we are talking a lot about the duality of technology, how AI is changing our everyday lives, about the risks of artificial intelligence. But when we take a look closer into tech history, we see that this is nothing new here. So there are a lot of different examples dealing with the duality of technology. And this is really as old as innovation itself.
So for example, when it comes to the political decision and technical decision about the use of cryptocurrencies, certainly there is a certain use for cryptocurrencies. So it's about decentralization of the financial system and facilitation of cross-border financial transactions without having any kind of banks.
This is, generally speaking, a good thing we are talking about. But on the other hand, we talk about money laundering, financing of illegal activities, and tax avoidance. And when most people today think about cryptocurrencies, it's about the bad things of cryptocurrencies. So we are talking about cybercriminals. We are talking about ransomware attacks, ransom payments. But generally speaking, cryptocurrencies have been created with another approach initially. The second thing, encryption.
Encryption is, of course, a very important thing when it comes to data security, data privacy, cyber security. We need to protect encrypted communication because otherwise the authenticity and integrity of our data cannot be kept up. And on one hand, encryption, of course, is about protection of sensitive data, as I mentioned, confidential and secure communication. But on the other hand, you can say, well, anyone who uses some kind of encrypted communication has some hidden dangers or has some hidden secrets. So it might be also about the concealment of illegal activities.
The third example I would like to mention, 3D printing. It's not that common like cryptocurrencies or encryption, but a lot of people are also using 3D printing. And the main idea behind 3D printing was to have a manufacturer of special products. But on the other hand, you can, of course, use 3D printers also from a manufacturer of weapons and dangerous goods. And in my opinion, the last and best example of the history of duality of technology is the Internet itself. So the Internet really is a great thing.
Without the Internet, I think no one would be sitting here today and we would not be doing the things we are currently doing. On one hand, the Internet is for exchange of information and promotion of global cooperation. So new business models have been created. The way how people collaborate has been changed. And we have seen the power of the Internet in the Corona pandemic, where we are all able to switch from our businesses to home office from one day to another.
But on the other hand, the Internet, of course, is spreading misinformation, hate speech and illegal content as well as criminal activities. So what I want to say, generally speaking, is when we talk about AI and the duality of the use of AI, this is not a specific problem of AI. This is a general problem of technology that we see here. And when we go back to the AI Act, how this will help to fight against cybercrime and so on, then we have to come to the conclusion that really the AI Act is as little as historic agreement as any other piece of tech legislation.
Of course, we need tech legislation, especially if technology is dangerous. And AI, as we have seen, is in some way dangerous, not only regarding cybercrime, but also regarding, for example, biased decision. But when we take a closer look into the different chapters, into the systematic structure of the AI Act, we see very closely that this is some kind of administrative law. So we have some general provisions.
We have, of course, risk advocacy in the AI Act. So about prohibited AI practices, high risk AI systems. And we have also very important rules, of course, about transparency, obligations for providers and developers of AI systems and measures in support of technology, innovation to bring technology, AI development to the European Union. But we also see that it's very administrative from the other perspective. So we have government rules, post-market monitoring, information sharing, and some codes of conduct, as well as penalties and these final provisions.
So literally speaking, this regulation will not stop cybercrime and it won't stop AI-based cybercrime. Because cybercriminals, they do not care about European Union regulations dealing with the risks of AI, high risk AI systems, or even forbidden AI practices. Two examples for you. Despite all regulation, cybercrime always finds a way. For example, through or by WarmGPT. WarmGPT is a special software tool which can be used by cybercriminals without all these legal and ethical boundaries that can be found in ordinary large language models, such as ChatGPT or Google Gemini.
So the author of the software has described it as the biggest enemy of the well-known ChatGPT that lets you do all sorts of illegal stuff. And there are also cybercriminals who are not only using this tool as well, but also the API of large language models.
So for example, here in this article, it is stated that earlier this February, so I think last year, yes, the Israeli cybersecurity firm disclosed how cybercriminals are working around ChatGPT's restrictions by taking advantage of its API, not to mention TradeStone premium accounts and sell brute force software to hack into ChatGPT's accounts by using huge lists of email addresses and passwords. So a classic way on how to get into IT systems.
And the fact that WarmGPT operates without any ethical boundaries underscores the threat posed by generative AI, as I mentioned before, even permitting novice cybercriminals to launch attacks swiftly and at a scale without having the technical wherewithal to do so. So regarding cybercrime, two things are changing.
Firstly, people who are already aware of cybercrime and who can already conduct cybercrime, they will be able to make it more efficient. And on the other hand, people who do not have the knowledge about it will be in a very comfortable situation to possibly dig a little bit deeper into the whole cybercrime topic. The second example I brought to you here is about AIJ breaking. So we have currently the development of prompt engineers and the better you are in prompting, the better it is for you or the easier it is for you to use AI systems and the better are the results you get.
But there are also possibilities for cybercriminals to use AIJ breaking that means to use certain prompts when you feed the AI to go beyond these ethical and legal boundaries you normally find in artificial intelligence systems. And there are really some prompts that are very popular regarding this. Normally you can tell, well, you can ask, for example, Google Gemini, I want to be a cybercriminal, just provide a phishing email to me. Then the artificial intelligence will say, well, no, there are some ethical legal boundaries.
But when you say, well, I'm an author of a book dealing with cybersecurity and I want to give my reader some examples on how to avoid phishing attacks, then you will possibly get the correct results. This is also the case for chat GPT where you can see that when you have the right prompting, it's always possible for you to have created a good phishing mail. So you definitely can see that there might be some risks, but even the companies and also the regulation, they are both not able to address all these risks adequately.
So a very simple finding, regulation, and that also means AI regulation, may reduce risks, but it does not eliminate them. We need AI regulation, but our expectations regarding this do not have to be so far. So cybersecurity services are jumping on the AI progress train themselves, even though we have an AI regulation. And there are definitely some use cases where we can have AI application scenarios that can be beneficial for cybersecurity regarding all this situation that has changed. So firstly, there are some self-learning functions that enable proactive response to new attack vectors.
Secondly, AI tools support the detection of anomalies and patterns, for example, in data traffic, which indicate compromises and malicious behavior in IT networks. Thirdly, AI and deep learning tools are constantly evolving, adapting to the threat situation and improving defense mechanisms. And one very important thing to mention here is transfer learning as well.
So that's a very basic principle of artificial intelligence, where you have one case and the AI system has learned on this one single case, but the result of this learning can be possibly used for other cases that are, content-wise speaking, very similar to the first case. So this is transfer learning. So you can expand the AI knowledge base by enriching it with new threats and attack vectors. And last but not least, it's also possible by using AI tools to have some kind of automated, proactive measures initiated for cyber defense.
For example, when you have the security breach in your company, then you can possibly have also an automated reaction, such as the isolation of IT systems or blocking of malicious data traffic. So all in all, AI can also be used for the shortening of response times in IT networks. And this is where I come to my four conclusions. And I formulated them as theses, possibly to have some kind of discussion afterwards, after this presentation or later on. So my first thesis.
The question so often asked as to whether the benefits or dangers of AI outweigh the risks is, in my opinion, purely hypothetical, as I mentioned during this presentation as well, because it inevitably depends on their very specific perspective. That means that AI definitely harbors very considerable dangers, regardless of any weighing of interest. It's like comparing driving a car. And when you drive a car, you also would not think, well, is it beneficial to be faster, or are the risks for my health higher than when I would take a walk? Second thesis.
Neither legislators nor authorities are able to reliably protect organizations, institutions, and companies from the new AI cyber threats. Instead, every company is called upon to take an action itself. And that can also mean fighting AI with AI, as we have seen it in practice. Third thesis. Due to the inadequate possibility of regulating and sanctioning current cyber threats and the international nature of the issues, preventive measures in companies, that means cybersecurity management systems itself, are preferable to repressive responses and are much more promising overall.
And my last thesis, the fourth one. AI is ultimately nothing more than another cyber attack vector that, like all others, we have seen it, must be countered by appropriate state-of-the-art technical and organizational countermeasures for cybersecurity. Thank you very much for your attention.
Thank you, Dennis. Your talk resonates very well with our overall topic, risks and opportunity of artificial intelligence. And thank you again for being with us. Thank you.