And what better lead-in to the conversation than a conversation that ended with phishing. So welcome. Today I'm very thrilled to look with you, for the next 15 to 20 minutes, at the future of phishing using generative AI and the shifting paradigms of cyber deceptology. So just so we all start off on the same foot, what do I mean by phishing? When I talk about phishing, I mean a form of social engineering by which a malicious actor penetrates a system by means of deceit or manipulation to gain unauthorized access or access to sensitive data.
So let's look back at the evolution of deceptology. Fun fact: deceptology is actually a term that GPT and I came up with together, and by deceptology we mean the science of deceit. If you look at deception in the pre-digital age, it was pretty hard to deceive people. You either had to be declared a witch or a magician, or you had to have access to one of these, a smoke-and-mirrors machine from the 1700s, also known as the grandfather of the hologram.
Moving forward, another example of deceit in a non-digital way is actually changing physical pictures, and during the Great Purge, a campaign by Joseph Stalin, he did this quite a lot. He wiped out all of his enemies, or the people he considered threats, by getting people with very dexterous hands to manually edit all of these pictures. This was also quite time-intensive, as you can tell. And when one says deceit and manipulation, one says conmanship, and when one says conmanship, one says Frank Abagnale Jr.
So I'm not sure if you're familiar with this man, I see a few nods. If you've seen the movie Catch Me If You Can, starring Leonardo DiCaprio, you probably know him, because the story is based on him. He is an absolute king at manipulation. So in one lifetime, he managed to impersonate a pilot, a lawyer, and even a doctor successfully. So what I'm trying to illustrate here is that it is possible to deceive people in the non-digital age, but it is still quite hard.
It demands talent, it demands practice. But come the digital age, the cyber revolution, no pun intended, our means of communication change, which makes it a lot easier. All of a sudden, we can pretend we are people that we are actually not. We can make people believe we are in places that we aren't actually at. And we can even change who our friends are and who we hang out with in our free time. But as you can tell by the quality of that despicable Photoshop in the last picture, it still demands some form of talent to deceive people.
So the question on all of our lips today is how does AI change the game? What does AI do? So we've got a bit of a short time, so I'm going to look together with you at how AI changes our perception of what we see, what we hear, and what we read. And my conclusion is that all of a sudden, we get on-demand people, on-demand voices, on-demand surroundings, and on-demand scenarios. And when I was making this bullet point list, it kind of made me think of a theater analogy.
It's almost like all of a sudden, we've got on-demand actors, on-demand human voices to tell the stories, on-demand sets, and on-demand scenarios or scripts. I'd like to go through each of these with you today, starting with on-demand people. So here are two IT admins, and I'd like you to tell me which one is fake. We're not actually going to do it, it would take a bit too much time: these people are both fake. They have been generated by the platform This Person Does Not Exist. Maybe you've heard of it. It's a website.
Every time you refresh, you get a new face of a person who has never walked the earth. So you'll tell me, Emily, okay, this is cool. You can generate people who have never walked the earth. But can you also do this with real people? Can you make pictures, unlimited pictures, of every single person in this room? Turns out you can. So here are three pictures. These took me maybe 15, 20 minutes to generate. It only needed five to ten pictures of my face. If you're an average Western human being, you have those on Facebook. You've got at least five to ten pictures of yourself.
I would like to ask you which one of these you think is real. There's one picture. I hear a few things. So I fooled you again. They're all fake. But don't feel bad, because even my own mother did not recognize that her daughter was actually not the one in the top picture. So how come the woman who gave birth to me cannot recognize the deepfaked face of her daughter?
Well, that is due to something happening in our brain. It's a cognitive bias called the processing fluency bias, by which we are actually very easily influenced by things that we can easily understand and by connections that are quickly made in our brains. Moving on to surroundings. So you can already probably guess what question's coming.
And no, I'm not going to be corny, and I'm not going to ask you the same question. One of these pictures is real, and they're both pictures of Scotland. So I'll let you sit with them for just a minute and think about which one's fake. The one on the right is fake, and that is the original picture. This was edited by Peter Yan, who's a photographer, using one of the newest tools in Photoshop, which is generative fill. So if I had had that for the Oscars picture, it would have probably turned out a little bit better.
And as you can tell, the only thing you need is a text prompt, and it will change it on the image. So you can change the image entirely, like done here, or you can just manipulate one little bit of the image. Now the thing that blew my mind entirely, completely, is that this is now also in progress for video. So I'm not sure if you followed Runway Gen 2, but they're like the advanced guys, and Hollywood is shaking in their boots, but that's a different story. So okay. We have on-demand actors, we've got on-demand sets. So how cool would it be to have like stories that tie all of these together?
Well it turns out AI is pretty good at that too. You've probably played with GPT just as much as I have, and you can tell that it's really good at generating stories.
In fact, a Twitch user was very dissatisfied with the ending of his favorite cartoon, so he just asked GPT to generate two more seasons and enjoyed them. But the thing is, this is a very positive, cutesy way of using GPT. You can also use it in ways that are slightly more eyebrow-raising, such as a friend of mine who is an ethical hacker and does phishing campaigns. What he did is he basically fed GPT the social media profile of one of his victims, and he asked GPT, okay, summarize this person for me in five to ten bullet points.
A few seconds later, he's got a few bullet points about that person. Then he says, okay, hey GPT, imagine you've got a person who likes these things, what kind of event would this person be interested in? GPT is very good at that. So it gives him a suggestion of an event, and you've guessed it. The next question is, hey GPT, could you please generate me an invitation personally tailored to this person, based on the event that you just suggested? And then GPT goes, wait, this sounds a bit fishy. I'm not sure I'm supposed to be doing this.
But all my friend had to do was open a new tab, or just reassure the machine that it was trying to make a friend. Now I see some of you are maybe a little bit surprised, but some of you won't be, because this is quite old school already. The thing that really changed the game, that I find incredible, is the use of AI agents, which automates this process to a certain extent. What this means is that in the first example, it still takes some human involvement. You still need to chat with the machine and go through a few iterations; it takes a lot of time, essentially.
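That three-step conversation can be sketched as a tiny prompt chain. This is a hedged illustration only: `call_llm` is a stub standing in for any chat-model API, and all the function names here are hypothetical, not the actual tool my friend used.

```python
# Sketch of the three-step profiling conversation described above.
# call_llm is a stub; a real setup would wire it to a chat-model API.

def call_llm(prompt: str) -> str:
    # Placeholder response so the sketch runs without any external service.
    return f"<model answer to: {prompt.splitlines()[0]}>"

def profile_to_invitation(profile_text: str) -> str:
    # Step 1: condense the public profile into a few bullet points.
    summary = call_llm(
        "Summarize this person in 5 to 10 bullet points:\n" + profile_text
    )
    # Step 2: ask which event such a person would find interesting.
    event = call_llm(
        "What kind of event would interest a person who likes these things?\n" + summary
    )
    # Step 3: draft an invitation tailored to the person and the event.
    return call_llm(
        "Write a personal invitation for this event:\n" + event
    )

lure = profile_to_invitation("Public profile: likes trail running and craft coffee.")
```

The point of the sketch is that each step's output simply becomes context for the next prompt, which is exactly what makes the process automatable.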
But what if you could get a GPT to prompt another GPT, which will then prompt another GPT, so that all of a sudden you get a chain of agents? What you see here is an open-source visual programming environment called Rivet, where each of these boxes has an input and an output, and they each prompt each other. Of course you need to babysit them once in a while, check in and make sure everything's going okay. But what this means is that instead of spending considerable time tailoring your phishing attack, you can get GPT to help you with that and save a lot of time.
And what you can do with this is run several attacks at the same time, all tailored to the people that you're targeting. This makes it very, very, very easy and cost-effective to fool a lot of people.
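Conceptually, such an agent chain is just a pipeline where one stage's output becomes the next stage's input, run once per target. A toy sketch under that assumption (the stage functions are hypothetical stand-ins for individual model prompts, not Rivet's actual API):

```python
# Toy agent chain: each stage stands in for one model prompt, and the
# output of one stage is fed as the input of the next. Running the same
# chain over a list of targets yields one tailored lure per target.
from typing import Callable, List

Stage = Callable[[str], str]

def run_chain(stages: List[Stage], seed: str) -> str:
    text = seed
    for stage in stages:
        text = stage(text)  # one agent prompting the next
    return text

stages: List[Stage] = [
    lambda profile: f"summary({profile})",
    lambda summary: f"event({summary})",
    lambda event: f"invite({event})",
]

targets = ["profile A", "profile B", "profile C"]
lures = [run_chain(stages, t) for t in targets]
# lures[0] == "invite(event(summary(profile A)))"
```

Scaling to more targets is just a longer list, which is why the cost per victim collapses.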
All right, so we've got the actors, we've got the sets, we've even got the scripts. It would be quite nice to have a human voice to maybe tell the story.
Well, the Internet's full of those already as well. You see here Eleven Labs, which is a very, very large library of all kinds of voices you can find. Some of them are fully generated, others are inspired by real people, and if you want a more personal touch, you could even clone a voice of someone near you. Now I thought, okay, it's probably quite time-intensive and it's going to take a lot to clone a voice, but in fact it only takes two to five minutes of a voice snippet of someone speaking. Now you're maybe thinking, where can I find a voice snippet of two to five minutes?
Well, I think a lot of us listen to podcasts, a lot of us are on podcasts, we're on YouTube. Even if you have a phone call and you just record part of a phone call or you hijack a Zoom call, that's a lot of voice already. And this is the part that I found quite freaky, is when you bring it all together and you use all of the outputs of all of these different AIs and machines, and you put them together in a platform such as Heygen or Synthesia. This is a screenshot of Synthesia and it's a software already used by many enterprises.
Imagine you're a CEO of a very large conglomerate and you want to deliver a personalized message to all your member firms. All you need to do is film one video of yourself and then you put it in and it will translate that video in all of the different languages to give this personalized message to your member firms.
But here is the fascinating thing about what it does: it doesn't take the full face, the whole image of the CEO. It only takes the micro-expressions, the movement in the face, which compresses very large files down to kilobyte-sized files, which are also very easy to manipulate. So if you put all of this AI-generated content in one place, you can have a custom background, a custom person who does or does not exist, even a custom voice of someone who does or does not exist.
And guess what, this is the kicker: that thing will basically read any text you put in it, and AI does scripts as well. So this makes it possible to run very, very, very large phishing campaigns in a very easy manner. Amateurs hack systems, professionals hack people, which is a quote by Bruce Schneier, a very famous cryptographer. As mentioned before, it doesn't take much to gather this information about people. If you scan their social media, all of us in this room probably shed bits and bytes of personal data out there on the web on a daily basis.
Never has personally identifiable information been this valuable. So I'd like you to imagine for one minute that you are one of the young workers of the gig economy somewhere in the slums, who wakes up every morning at 7 to go to this call-center-like building, where a lot of different people spend their days offering hacking as a service, social engineering as a service, phishing as a service, ransomware as a service. These people do this every single day for work, right? They just do it like you and me.
So how would you use the technology that we just saw to make your workload lighter? We can think about that, but I can also tell you a little bit about the law of large numbers. The law of large numbers means that if you have a 5% success rate, which is not super, super high, right? If you have a 5% success rate on your phishing emails and you only send out 100 phishing emails, you get five potential breaches. If you send out 60 times that amount, 6,000 emails, you get 60 times the potential breaches: 300.
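The arithmetic behind this is just a linear expectation; a quick sketch:

```python
# Expected breaches grow linearly with volume at a fixed success rate.
def expected_breaches(emails_sent: int, success_rate: float = 0.05) -> float:
    return emails_sent * success_rate

print(expected_breaches(100))       # 5.0
print(expected_breaches(60 * 100))  # 300.0: 60x the volume, 60x the breaches
```

The only cost to the attacker is sending volume, which AI has just made nearly free.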
So I'm going to talk to you about another law, the law of self-selection, which I borrow from biology. The law of self-selection says that an entity will only pursue what it deems viable. So this is really, really good as a filtering machine: just do some victim profiling, attack a bunch of organizations, and see which ones respond, because it won't cost you that much. This gives rise to many, many, many very interesting scenarios. I've tried to put them all in a matrix. If you're interested in the full report, you can hit me up on LinkedIn or via email.
I'd be happy to provide you with it. It's still in progress. So how many combinations of these apply? We had a really good talk two days ago from Florian, who was explaining what to do when you suffer a ransomware attack, right? Imagine you get a fake CEO, or even a fake person from the crisis team, giving out a fake message during an emergency. I can tell you that's going to be pretty, pretty painful. You could also do very easy CEO fraud, or use this for blackmail and extortion. There are many, many different flavors of evil in this.
So in a nutshell, all of a sudden we've got the capacity to create things that aren't there, to erase people or add new people entirely, and to live five different lives all in one go, of course conning people, but in a pretty much automatable manner. So you're all pretty... The mood is a bit low. Everyone's a little bit scared. At least I was scared. So this is a quote from an AI expert in the field.
He said, we are all merely X amount of computing power away from a staged deception. This is going in a positive direction afterwards, I assure you. But what he's saying is that every single one of us, you and I, we're all prone to be fooled by these techniques, all of us. But it will take a certain amount of computing power. So he says, hence, simply increase X, increase the needed computing power, so that it's not worth the play. If we can make sure that we are the absolute worst possible victims to mess with, we will be less likely to be messed with, because it's a business model decision.
Attackers have a limited amount of computing power. So if you can ensure that your organization is strong enough, that you're just not worth the play, that could potentially help. And here are three ways you could do that, which I call immunize, equip, and humanize. I'm not going to dwell on equip, because there are many vendors out there who can do that better than I can. But I would like to dwell on immunize. This is a tip I heard a long time ago from a CISO at a different company, and it really stuck with me.
He said, have you heard of the Where's Waldo technique? I'm like, no. So he says, essentially, as a CISO, you pick one person per department in your organization who, for a month, will consistently have to break a security rule. If that person gets away with it unnoticed, that person gets 100 euro. If someone else figures out that that person was Waldo, the one who caught them gets the 100 euro instead. And all of a sudden, you've got people calling each other out with security in mind, for the very meager price of 100 euro a month.
And it's quite good, because people get enthusiastic about security. And this is the human layer we're talking about. For the last point, I'd like to refer to Mike Newman's talk just yesterday: security is collaborative. That means that at the end of the day, it's a human story. All of this technology above is all a distraction. The real things that matter are the things that happen between human beings. So humanize your organization: encourage people to pick up the phone when in doubt, encourage people to talk to each other.
And by knowing each other well, you can potentially slow down the deepfake attacks and not be worth the play. Thank you.