Welcome to the KuppingerCole Analyst Chat. I'm your host. My name is Matthias Reinwarth, I'm the Director of the Practice Identity and Access Management here at KuppingerCole Analysts. My guest today is Marina Iantorno. She's a Research Analyst with KuppingerCole. Hi, Marina. Good to have you back.
Hi Matthias, thanks for inviting me. It's a pleasure to be here again.
We have a great reason to invite you, because just last week we published a blog post on the KuppingerCole Analysts website - I think it was on LinkedIn as well - and it got quite some traction. In that blog post, which was of course authored by you, you covered the topics of AI ethics, bias in AI and in data, and deepfakes. There was quite some feedback and interest in the overall topic. So where can we start discussing that in more detail? It's AI in the end. How has artificial intelligence been transforming cybersecurity, or the way we live in general?
Well, we have to say that artificial intelligence brought about a revolution, right? It is a game changer in the market. Now, if we talk about AI in terms of cybersecurity, we need to think of AI as enabling advanced threat detection and automated response systems. And of course, it also helps to mitigate different risks - for example, AI is helping to identify cyber threats in real time. So I would say that AI really is a game changer.
Fully agreed and fully understood and we are all using it every day. So it's not only just Alexa or Siri in your pocket, there's much more.
Totally, yes.
I think, yeah, and we are just getting used to it. The question is: what are we getting used to? You've mentioned in the blog post that ethics plays an important role. Ethics in AI is something that we hear about - how does it affect the cybersecurity field? What challenges do systems, solutions and also algorithms encounter when using data, when applying models stemming from AI?
Well, of course, ethics in general is very challenging, right? In terms of cybersecurity, I would say the main concern when applying AI models is ensuring the pillars of AI. First, fairness in the models: a model is trained with the data that we assign to it, so if that data has historically disadvantaged underrepresented groups, that is an issue, because the outcome will reflect it. Another challenge is explainability: how can we explain the outcome we get when we implement an AI model? This has been on the table for so long, and it is still an issue when implementing AI models, because sometimes the developers say, well, I cannot really explain the result. We also have to ensure the robustness of the model - we need to avoid favoring certain groups or certain individuals over others. And we need to ensure transparency: users should know how the companies using AI are handling their data in the models. Of course, organizations must also ensure privacy protection, especially for all the parties involved in the cybersecurity supply chain.
Right. We have all heard about bias in AI already. We have all seen those weird examples of AI and machine learning preferring some ethnic groups and simply ignoring others. Bias is important when it comes to the data, but also to the models. But when we look at bias in AI and machine learning - in the models and the data used for cybersecurity - where does it manifest? Where can we see it?
Well, the first thing I would point to is the reliance on historical data. We need to consider that societies are changing, so we no longer see the world the way we looked at it before. If we rely only on historical data, the outcomes of an AI system can replicate and even amplify the view we had back then - the human biases embedded there - and that can lead to misleading outcomes; misleading, that is, if we consider our current reality and environment. Now, in cybersecurity specifically, we could get outcomes that target certain groups unfairly or undermine their privacy. So we need to be very careful with the data we use to train the models. AI is fantastic, it's a great tool, but it is a machine, and we are training that machine with the data we assign to it. If the data we provide is not accurate, not precise, not fair or not transparent, we will get a bad outcome - one that does not represent reality, or that works to the detriment of certain groups. This applies to all models, and the main point is: be careful with the data that is used, so that in the end you avoid useless or, in some cases, even dangerous results.
Okay, so now let me be the journalist. That was still too general - can you give me some real-life examples? Where did that happen, what did it lead to, and how can we prevent it?
Well, for example, if we think about the threats that are detected by AI systems, the algorithms could flag certain individuals or certain groups as a potential threat. That is a discriminatory outcome, right? It is not fair, it reinforces harmful stereotypes in society, and it hurts society as a whole. So it goes against the pillars of the use of AI.
And we have not yet talked about deepfakes. How do they come into play when we are talking about this topic? What is their role when we talk about AI ethics, when we talk about bias? Where do deepfakes come into play here?
Well, when we talk about deepfakes, they are becoming a threat. AI is very powerful, and we can use it to replicate, for example, the movements, the voice or the image of certain people, and there are many techniques used nowadays to, say, take the image of a famous person and change the message that is delivered. Now, at the same time that this is used for bad, there are other techniques, such as adversarial training, that can be used to detect and counter deepfakes. If you think about it, it's like having a piece of paper and a lighter in the middle of the forest: you can make a fire to warm yourself, or you can make a fire to burn down the entire forest. It is the same here. It depends on us and how we use AI, and it is crucial to stay up to date with the new techniques to mitigate these threats.
Right. So obviously the issues are understood and recognized and need to be dealt with. And if we have a problem with traffic safety, then we want to make sure the airbags are in the car. What are the right strategies to mitigate the threats that result from bias in data, bias in machine learning models, and from these deepfakes? What would be really effective against these mechanisms?
Well, the first thing I would say is that we need to be aware that while AI is evolving for good, there are people who are using it for bad - exactly what I mentioned before; it depends on the person how it is used. Now, it is undeniable that we are living with AI, we are living with the machines, and this is something that is here to stay. If history has shown us anything, it is that technology keeps progressing, advancing and evolving, and we cannot stop it, right? Now, there are certain frameworks that should be in place, and some of them already are. And industry and academia are working together: many companies nowadays have researchers collaborating with academia, trying to come up with frameworks that address the ethical implications. There are conferences, seminars, publications and campaigns - many things that help raise awareness and promote best practices. Of course, the role of governments and certain institutions is vital in this case. There are some guidelines, but think about it: it is already very hard to reach an agreement within a single society that everybody is happy with, so imagine aligning this across all countries. It is very challenging, but it is happening and it is on the table. I would say this collaboration between industry and academia is helping society in general with the use of AI.
Right. So this sounds like a long-term journey. When we focus on the cybersecurity part of things that we usually talk about, how should this cooperation - this public-private partnership, including academia - address the issues, especially when it comes to cybersecurity?
Well, as I mentioned, there are some frameworks, right, and certain limits for AI. Now, it is great to know that AI will keep progressing and will be a huge help for us, but at the same time, governments and certain institutions should create more regulations. The GDPR, adopted in 2016 to protect users' data, was a very good change for Europe, I would say, but it is still not applied in all countries or in all regions of the world. This should be on the table as well, because users should know how their data is used in the AI models applied, for example, in cybersecurity - to ensure their privacy and to ensure their data is at minimal risk of a leak or something like that. Again, the main point is following the guidelines that exist in the market today and that promote transparency, fairness and accountability in the use of AI. It is very hard to imagine how all of these countries could reach an agreement. But the United Nations and UNESCO, for example, hold a meeting every year with more than 100 countries to try to come up with an agreement: what are the regulations and guidelines that everybody should follow when using AI? And I believe it is good that this is in place.
Yes, I fully agree that this should be covered. On my virtual reading list is the draft of the EU AI Act that is currently under discussion, and you've mentioned governments already. What role do you think regulatory bodies and governments can play, or should play, in ensuring these ethical AI practices in cybersecurity? What can they do?
Yes, the first thing I would say is that their role is vital - critical, even - because they are the ones establishing the rules. Everybody nowadays is talking about ChatGPT, for example. If you go to ChatGPT, you can ask any question you want, but you would actually be giving away your data, because we have to remember that the machine is training itself, and users should know about that, or at least be aware of it. Now look at what is happening here in Europe: not all countries accept the use of ChatGPT because of this, because there are some clashes, let's say, between the use of the machine and the regulations that are in place. I wouldn't say you shouldn't use it - actually, I'm a fan of AI, this is my field as well, and I would say it is a good tool that is helping us. But governments should recheck and review whether the tools on the market are actually safe to use and how our data can be protected. I would say that is the main role of governments and the regulatory bodies in the market: to ensure that ethical practices and best practices are in place, and to ensure cybersecurity in the use of AI.
Absolutely. But this will require quite an amount of work to implement such tests and compliance rules, to make sure this can actually be checked and proven. You are a research analyst, so you rely on facts - on facts and figures, charts and graphs. If you take one step back and look at this market evolving over time, what is your long-term outlook? What does your crystal ball show for AI ethics, bias and deepfakes? Where will we move in the next two years, five years?
Well, the first thing I would say is that AI will continue to evolve and, as it does, it will transform the cybersecurity landscape. As this happens, measures will be necessary to address the interrelated issues of ethics, bias and deepfakes. As I mentioned, the advancement of technology is something we cannot stop - it keeps progressing - but at the same time there are limits and boundaries. The regulatory frameworks will change over the years, because society is changing too, and the collaborative efforts between academia, industry and the institutions discussing the future of AI will be necessary to mitigate the potential risks. The main idea here is not to think that AI will replace people; the main idea is that AI is here to augment human intelligence, to augment human capabilities. Now, if that is the goal, how do we actually get there? We still have to consider all the pillars of artificial intelligence - fairness, transparency, robustness of the models, explainability, right? We have to prioritize the ethical principles. Now, many people ask me: what will happen to our jobs, will we be replaced by machines, is it possible that we will lose our jobs because AI is evolving so rapidly? And my answer is no. AI is not here to replace humans. But the reality is that people need to learn how to use AI at their workplace, because we are no longer working just between humans - it is now humans and machines. The idea is not to replace humans; the idea of AI is to make the job easier, to augment our capabilities, and to have a more positive impact, let's say, in the workplace.
Right. And unfortunately, this AI, this machine learning, is also making life easier for the bad actors - the people we need to counter when it comes to cybersecurity. They might not follow any regulations, because they don't have to, while the cybersecurity companies and the industry do have to. That will be a challenge, but it is really a thought for another episode: this imbalance between the bad actor who doesn't care about regulation anyway, and cybersecurity on the other hand, which has to follow the regulations - that is still an interesting point. Is there anything you want to add before we close?
Yes. In November we have an event, cyberevolution in Frankfurt, and we will talk about this topic. We will go in depth on how AI is impacting cybersecurity, and we will have great speakers there. I think the first agenda is already on our KuppingerCole website, and I would recommend you attend, because it is a great space to boost your networking, to speak with other professionals, other companies and other peers, and to keep up to date - because nowadays it's very important to know what is happening in cybersecurity and how AI is affecting it. We would love to welcome you there.
Absolutely, and now is the right time. There are lots of speakers already announced and on the agenda. But if you are working in that field and you think, "I know better than Matthias", please reach out to us, talk to us, and speak at cyberevolution in Frankfurt in November - there are still a few slots open that can be yours, so please reach out, and I'm looking forward to that. I will be there, and you will be there, Marina, I'm quite sure. We are looking forward to listening, to contributing to that discussion, and simply to learning - we are all learning in this area. Thank you very much, Marina, for being my guest today and for your great blog post. If there's any feedback on this episode, just drop your message below the video on YouTube, leave a comment on the platform you use to consume this podcast, or drop us an email. We are happy to learn from you, to pick up your topics, and to discuss your contradictions, your thoughts and your emotions when it comes to AI, ethics, bias and deepfakes. Thanks again, Marina, for being my guest today. We will talk soon, I know, and there will be another episode in this area, so I'm really looking forward to that. For those who are waiting to see Marina again: in a few weeks, there is a chance for that.
Me too, Matthias, me too. Thank you so much. And I am looking forward to talking to you again in a few weeks.
Soon. Thank you very much. Bye, bye.