Welcome to the KuppingerCole Analyst Chat. I'm your host, my name is Matthias Reinwarth, and I'm the Director of the Identity and Access Management practice here at KuppingerCole Analysts. My guest today is Alejandro Leal. He is a Research Analyst with KuppingerCole. Hi Alejandro, good to have you.
Hi, Matthias. Thank you for having me back.
Great to have you back. We are in a set of episodes running up to an event that we are planning for November in Frankfurt, which will be called cyberevolution: cyber-revolution or cyber-evolution, you can choose. We are working on topics around the intersection of artificial intelligence and cybersecurity, which is one of the focuses of the event. Today, building on what we have already done in other episodes together with Warwick and with Marina Iantorno, we want to talk about future threats and how new technologies, machine learning and artificial intelligence of course, have an impact on cybersecurity. What is the main threat that you currently see and are currently working on?
Yes, well, it's a very cool topic, and I'm afraid that there are plenty of cybersecurity threats out there that are continuously evolving and getting more sophisticated. Thankfully, here at KuppingerCole we're going to be covering some of these threats in the coming months and, like you said, especially during the cyberevolution conference that will take place in November of this year. As we know, over the past decade we have seen rapid development in the field of information technology. We can even say that there has been a sort of digital revolution that has provided unprecedented benefits to society and the economy, and it has also created new ways for businesses to interact with their employees and customers. However, these developments have also created challenges. Cybersecurity threats have been increasing over the past few years, and recently the European Union Agency for Cybersecurity (ENISA) published a report called Cybersecurity Threats for 2030. One of the threats mentioned in the report happens to be deepfakes, and that's the topic I will discuss today in this podcast. I'm currently working on a blog post that will be published sometime this week or next week on the website. There are two main ways that deepfakes can be used by adversaries: one is the remote identity proofing attack, and the other is the disinformation campaign. The global pandemic underlined the importance of having well-regulated identity proofing services and trusted digital identities that public and private organizations can rely on, and the shift to remote work and work from anywhere accelerated this trend and the need for identity proofing services. So what is remote identity proofing? Essentially, it's the process by which a user proves his or her identity remotely.
The process usually goes like this: the user turns on a webcam or a mobile device, shows their face, and sometimes presents a government-issued document, such as a national identity card or a passport. However, criminals and adversaries are getting more creative in the ways they circumvent these systems. The manipulation of visual media is enabled by the widespread availability of sophisticated image and video editing tools, as well as automated manipulation algorithms that permit edits that are very difficult to detect and distinguish from real footage. While many manipulations are benign, performed for fun or for artistic value, others serve more adversarial purposes, such as the spread of propaganda or disinformation campaigns. And that is when it becomes a problem. But when it comes to the identity proofing attack, it's necessary first to understand what deepfakes are. Deepfakes are manipulated media: a video, a voice recording, or simply a photo of a targeted person. This data is fed to a program that learns the fundamental traits of the target person, the victim. The program then uses what it has learned to modify the photo, video, or recording and apply those traits to the face of the victim. Deepfake software can create a synthetic video or image that realistically represents anyone in the world and can be very difficult to distinguish from real footage. For identity proofing, this kind of attack is particularly used against video-evidence-based identity proofing systems. For example, if attackers know the steps of the process, they can simply inject a video or present a video on a screen, and in this way fool both a fully automated system and a system that uses a human operator. It's worth noting, however, that deepfake attacks can happen in only two ways.
One is by being presented to the camera, and the other is by being injected into the camera feed. Addressing the challenges posed by deepfakes in identity proofing requires an approach that combines tools such as robust authentication mechanisms with continuous adaptation, because deepfakes are continuously evolving in sophistication, so it's necessary to continuously adapt to this threat.
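To make the injection scenario a bit more concrete, here is a minimal sketch in Python of one naive defense against a replayed or injected feed. The idea: a live camera sensor adds per-frame noise, so byte-identical frames are rare in genuine footage, while a looping pre-recorded clip tends to repeat frames exactly. The function name and threshold are my own illustration, not from any real product; production identity proofing systems rely on far more robust liveness detection, challenge-response prompts, and sensor-level attestation.

```python
import hashlib


def looks_like_replayed_feed(frames, max_dup_ratio=0.05):
    """Toy heuristic: flag a camera feed as possibly injected or replayed.

    `frames` is a list of raw frame bytes. Live sensor noise makes exact
    duplicate frames rare, so a high ratio of byte-identical frames is a
    (weak) signal of a looping, pre-recorded injection.
    """
    seen = set()
    duplicates = 0
    for frame in frames:
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            duplicates += 1
        seen.add(digest)
    return duplicates / max(len(frames), 1) > max_dup_ratio


# A "live" feed where every frame differs slightly is not flagged,
# while a short clip looped ten times is.
live = [bytes([i % 256, (i * 7) % 256]) for i in range(100)]
looped = [b"frame-a", b"frame-b", b"frame-c"] * 10
```

Real detectors work on decoded pixels and temporal statistics rather than raw byte hashes, but the principle, looking for statistical fingerprints that genuine camera capture should have and injected media lacks, is the same.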
Right. I've been talking to John Tolbert, our colleague, about Fraud Reduction Intelligence Platforms (FRIP). I think the technologies used there are closely related to what you just highlighted. And it's always also a process of, as you said, improvement on the attacker side and improvement on the cybersecurity professional side. But these technologies are built into services, because I see as an advisor that many organizations are currently looking into these platforms to improve their onboarding processes, for example their customer onboarding process. So both employees and customers are typical use cases here. Do you also see rapid evolution in the technologies used here?
Yes, absolutely. Last year I published the Leadership Compass on Passwordless Authentication, and you and I had a conversation on the topic. One of the trends that I observed was the use of, let's say, blockchain technology to improve this onboarding process in a way that increases both security and convenience. So I see that organizations are catching on to these trends and are integrating new technologies to combat emerging cybersecurity threats. I think the difference between the two types of attacks that I'm talking about today is that, on the one hand, the identity proofing attack is not very, let's say, fancy. You don't really see it on the news; maybe people in our industry know about it. What we see more of on the news, and what people outside the industry are aware of, are the disinformation campaign attacks. So deepfakes can be used to create false videos of political figures, let's say candidates or public figures. Take the case of the Ukrainian President Volodymyr Zelensky: there was a deepfake video that appeared last year in which he was saying, basically, that the soldiers needed to surrender to the Russian troops, and of course that proved to be fake. There was also another video in 2019 that showed the then Speaker of the US House of Representatives, Nancy Pelosi, in which she appeared to be intoxicated. And more recently there was a lot of deepfake content generated after the recent coup in Burkina Faso, basically supporting the military junta that is ruling the country right now. So deepfakes can be used for political purposes, but they can also be used to create realistic-looking news stories. Fabricated content can be hard to distinguish, and a news outlet could use it to create a news story; the people reading about it would then be confused, which could also create division and undermine trust in the media.
Deepfake campaigns can also be used to manipulate cultural narratives or rewrite history. For example, it's possible to alter historical footage or iconic speeches by public figures; they can be distorted, and the public's understanding of a significant event can be distorted along with them. And finally, another example of disinformation campaigns relates to financial fraud. Many fraudsters and cybercriminals are using voice manipulation to impersonate someone in a phone call, and they can trick individuals or organizations into providing sensitive information. So detecting and countering deepfakes, along with promoting media literacy and critical thinking, are essential to mitigating disinformation tactics.
Right. You've mentioned media literacy, and that is, of course, important: A, is this really something I can expect to happen, and B, can I find a trustworthy source that confirms what I've just seen? But also, from a technology perspective, are there tools in place? Are there organized actions, vendors working on detection mechanisms, that aid in identifying these fake videos, fake pictures, and maybe even fake audio?
Yes, recently there have been some countermeasures that are useful to combat these emerging threats, and one of them is media forensics. For example, in the United States, DARPA, which stands for the Defense Advanced Research Projects Agency, created two programs, one called Media Forensics (MediFor) and the other called Semantic Forensics (SemaFor). What they are trying to do is understand how deepfakes are created, how they spread, and how to detect and distinguish real content from fabricated content. So the United States is taking some measures by creating programs, but there are also tools that organizations can integrate, for example machine learning and AI tools that can counter deepfakes. And, like we mentioned, public awareness and education. A few years ago there was a, let's say, generational gap between older and younger people. My grandmother, for example, would write to me on WhatsApp asking, did you see this story on the news? And I think younger generations were often better able to distinguish between what's false and what's real. But when it comes to deepfakes, that transcends generations: anyone could fall for a deepfake. And since this technology is evolving and getting better, it's going to become even more challenging in the future. Another technical measure we can use to distinguish content could be watermarking. Essentially, I think it's important to note that the field of deepfakes is evolving rapidly, and countermeasures need to continuously adapt to the new techniques and advancements that cybercriminals are creatively devising.
So I think we need a multifaceted approach that combines technical tools, public awareness and education, and also policy development and legal frameworks, so that we can effectively mitigate the risks associated with deepfakes.
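The watermarking idea mentioned above can be illustrated with a toy least-significant-bit scheme: a provenance tag is hidden in the lowest bit of successive pixel values, so authentic content carries a recoverable mark. All names here are my own illustration; real provenance efforts such as C2PA use cryptographically signed metadata and robust watermarks that survive compression, which this sketch deliberately does not attempt.

```python
def embed_watermark(pixels, mark):
    """Embed the bits of `mark` (bytes) into the least significant bit of
    successive pixel values (ints 0-255). Each embedded bit changes a
    pixel by at most 1, so the image looks unchanged to the eye."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit
    return out


def extract_watermark(pixels, length):
    """Recover `length` bytes of watermark from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )


# 32 grayscale "pixels" carrying a two-byte provenance tag.
image = [120, 33, 7, 255, 0, 64, 18, 200] * 4
marked = embed_watermark(image, b"KC")
```

The fragility is the point of the hedge: re-encoding or cropping destroys an LSB mark, which is why serious provenance work pairs watermarking with signed metadata rather than relying on the pixels alone.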
Right, I would fully agree. But again, I had the same discussion earlier with Marina: we all look to regulation as one of the key components of mitigating those risks, and that is obviously true, but the attackers don't care about regulation; they are unconstrained by it and will keep using this technology. So the level of control, the mitigating measures we apply to get hold of these deepfakes, will need to be even more substantial and will be even more desperately needed to protect society, the news, and businesses from the fallout of those deepfakes. I think that discussion will continue at cyberevolution, but also in the run-up to the event; our website and our blog, which you've just written a post for, will be a valuable source. And if there are any questions from the audience regarding this topic, if you want to get into a closer discussion with Alejandro or with me, or with the team doing cybersecurity research and advisory here at KuppingerCole, just leave a comment below this video on YouTube. Or, if you're listening to or watching this on another platform, use the mechanisms available there, or just write an email; that still works, and there's no risk of deepfakes in emails, at least no easy ones. Thank you very much, Alejandro, for your time today and for laying out your expertise on deepfakes in cybersecurity. I think this topic will stay with us for quite some time, maybe never go away, but we need to take care that these things don't disrupt us. A pope in a fancy dress might be funny, but that is just one side of the coin. Thanks, Alejandro, for being my guest today, and I'm looking forward to having you back soon.
Thank you, Matthias.
Thank you. Bye.