Imagine fake footage of a business leader committing fraud, a politician committing sexual assault days before an election, or soldiers conducting atrocities on foreign soil. In an environment where uncertainty exists and conspiracy theories thrive, deepfakes could lead to catastrophic consequences. I believe we are at an inflection point: soon, deepfakes will move from being an oddity to a destructive social and political force. Please welcome my dear friend, Dr. Donnie Wendt.
Well, thank you.
Now, that wasn't really President George Bush, former president of the US, but that was a deepfake I made of his voice, so I can have him say whatever I want, and I wanted to be introduced by a former president. So there you go.
Oh, okay. Well, hey, and I have a couple more friends that are gonna help me out as we get started.
I am Donnie Wendt, a principal security researcher at MasterCard. In my free time, I am also an adjunct professor of cybersecurity at Utica University, but I am not near as busy as Nicolas Cage. If you are wondering what a MasterCard security researcher does, well, I explore emergent security technologies and upcoming threats that might impact us, our employees, and our customers. In other words, I get to play with really cool stuff and have a lot of fun.
Right? And one more for you.
As we get started, I see Nicolas Cage just trying to find work again, but we all know that Vin Diesel is a better actor and most definitely better looking. Now let's take a quick look at the agenda. I will start by discussing how deepfakes are created. After that, we will look at a few recent examples of the use of various technologies to create realistic fakes.
Finally, we will explore the impacts of deepfakes and some of the latest research on deepfake detection. Thank you.
All right, so let's take a look at how these are created. First of all, typically it's using something called a generative adversarial network, or GAN, which is actually two machine learning algorithms competing against each other.
At the top, you see the discriminator. The discriminator is a typical machine learning classifier. It's trained with images of the original person, the real person, and it tries to classify whether an image is real or fake.
Meanwhile, you have a generator on the bottom. The generator is creating AI synthetic media, or in other words, deepfakes, from the latent space, and it's trying to fool the classifier. The real key is that both of these are being trained in conjunction, so they're getting better with each iteration.
Typically, when you go through this process, you're talking 500,000 to a million iterations. The goal is to get to where the generator is fooling the discriminator at least 50% of the time, showing that it's producing plausible examples. This is the process that I used in some of the ones you'll see coming up.
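[Editor's note: to make that generator-versus-discriminator loop concrete, here is a minimal sketch of one GAN training step in PyTorch. This is an illustration added for clarity, not the speaker's actual code; the network sizes, learning rates, and image dimensions are hypothetical placeholders.]

```python
# Minimal GAN training-step sketch (illustrative only).
# Assumes 64x64 grayscale faces flattened to 4096 values.
import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_DIM = 64 * 64

# Generator: maps a random latent vector to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_imgs: torch.Tensor):
    batch = real_imgs.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real and generated images.
    z = torch.randn(batch, LATENT_DIM)
    fake_imgs = generator(z).detach()  # no generator update here
    d_loss = loss_fn(discriminator(real_imgs), real_labels) + \
             loss_fn(discriminator(fake_imgs), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator:
    #    its images should be scored as "real".
    z = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(z)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Stand-in batch; in practice these would be real face crops.
print(train_step(torch.rand(32, IMG_DIM) * 2 - 1))
```

[Repeated over the hundreds of thousands of iterations the talk mentions, the two networks improve in lockstep until the discriminator is fooled roughly half the time.]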
Now, when we get to actually swapping the faces in the videos, this is where we use the encoder-decoder approach. We have two training sets: training set A for person A, the original person, and training set B for person B, whose face you're gonna swap in.
The key here is that we're using the same shared encoder for both training sets. For those that aren't aware, an encoder takes all the bits that make up an image and translates them into about 300 or so measurements. The key to using that same shared encoder is that I'm extracting the same features from both faces.
That allows me to then take person A, run it through the decoder of person B, which essentially paints a new picture. That new picture keeps the same facial expressions of person A, but with the face of person B. It'd be analogous to an artist repainting the Mona Lisa with somebody else's face.
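[Editor's note: as a rough sketch of that shared-encoder, per-person-decoder idea. Again, this is an added illustration with made-up layer sizes, not the speaker's code.]

```python
# Face-swap autoencoder sketch (illustrative; sizes are placeholders).
# One shared encoder learns pose/expression features common to both
# faces; each person gets their own decoder. Swap = encode A, decode B.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened 64x64 RGB face crop
LATENT_DIM = 300        # the ~300 measurements mentioned in the talk

shared_encoder = nn.Sequential(
    nn.Linear(IMG_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT_DIM),
)

def make_decoder() -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
        nn.Linear(1024, IMG_DIM), nn.Sigmoid(),
    )

decoder_a = make_decoder()  # trained only on person A's faces
decoder_b = make_decoder()  # trained only on person B's faces

# Training objective: each decoder reconstructs its own person through
# the shared encoder, so the latent space captures expression and pose.
def reconstruction_loss(img: torch.Tensor, decoder: nn.Module) -> torch.Tensor:
    return nn.functional.mse_loss(decoder(shared_encoder(img)), img)

# The swap: encode a frame of person A, then "repaint" it with person
# B's decoder, keeping A's expression but B's face.
def swap_face(frame_of_a: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return decoder_b(shared_encoder(frame_of_a))
```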
Now, I used this technology. I don't know if anybody noticed, but on that picture when I first came up, I had this beautiful head of hair. I haven't had hair since I was 19 years old, but I wanted it again.
So I used this process, and this is what I came up with. So that's what I look like with hair. Here, I actually get a Golden Globe Award. This is great.
You know, I even get to meet with a lot of famous people. Or it could be what American actor Ryan Gosling would look like with my face. Of course, he would not be near as popular, I suspect. But anyway, so we've looked at how they're made. What about the uses? We've had quite a few examples. First of all, looking at the world of politics and propaganda, a really interesting one happened in India. And what made this so interesting is that it's the first, and as far as I know, only legitimate use so far by a political campaign for campaign purposes.
So in this case, the politician was criticizing his opponent, but he was doing it in English. Their target demographic was voters who spoke a very specific Hindi dialect. So they redid that entire video, complete with very realistic mouth movements, in that Hindi dialect. And then, of course, at the beginning of the Russian invasion of Ukraine, we had this deepfake, and I'll show just a few seconds of it, of President Zelensky trying to convince Ukrainians to surrender.
[A few seconds of the Zelensky deepfake clip play.]
Now, I realized when I came over here, with the US elections going on, that there was a lot of interest in that over here. I never realized that.
Well, you probably are aware that in the 2020 election, there was a lot of turmoil in the US. I mean, it was crazy.
Well, during that election, a deepfake appeared of North Korean leader Kim Jong-un, supposedly mocking our election process.
Democracy is a fragile thing. More fragile than you want to believe. If the election fails, there is no democracy. I don't have to do anything. You're doing it to yourselves. People are divided. Voting districts are manipulated. Voting locations are closing, so millions can't vote. It's not hard for democracy to collapse. All you have to do is nothing.
And before anybody asks, I have no clue what his fascination is with green books. I always wondered that when I saw that. So, turning directly to cybersecurity, we've had a few examples there.
The first one was a voice deepfake, where a bank in Hong Kong was contacted by one of their customers over the phone. That customer's company was about to acquire another company and needed the bank to start transferring $35 million. This was further supported by emails, so the bank started transferring the money. A really concerning one, from my viewpoint, came out just earlier this year, where the Singapore Government Technology Agency did research using phishing emails, spear phishing emails. They sent targeted phishing emails to a bunch of their colleagues.
Part of those emails were generated on a readily available AI as a service platform. The others were handcrafted by humans.
The researchers were amazed to find that their targets clicked on the ones from the AI far more often than the handcrafted ones. And it got even worse: not only did they click on them more often, they also more readily gave up their credentials. So natural language processing is advancing rapidly, and that's one big area of concern. And then this year, it got crazy.
The US FBI had to release a warning because companies were being targeted by criminals who were using deepfakes along with stolen personal information to apply for remote work positions, to see how long they could get paid before someone found out they didn't really exist. Now we'll also look at one that occurred just about a month ago.
Binance's Chief Communications Officer, Patrick Hillman, found out that he was the target of a deepfake later, when a bunch of prospective customers contacted him thanking him for all these meetings. Binance is a cryptocurrency exchange, and these were prospective customers trying to get their currency listed on Binance. Well, he had never attended any of those meetings. Fortunately, they were able to quickly and effectively debunk it, and it didn't cause too much damage.
Ironically, it was three years earlier that Binance's CEO, Changpeng Zhao, or CZ as they call him, did a very lighthearted deepfake with an AI company to show how it would work. So here you can put your own CEO in, maybe doing some martial arts like they did.
So it goes to show that they can target anybody, especially anybody with very high visibility. I need to do this with my CEO, get him out there, right? So what does this mean? What are the impacts?
Well, a lot of the new research is showing that some of the facial recognition systems are susceptible to deepfake technology. And as the technology advances, we have to continually and thoroughly test against these new techniques.
Now, fortunately, a lot of the facial recognition systems have put in different sorts of liveness detection, which is working for now, but we need to make sure we continually test those. And then there's fraud and impersonation. You know, organizational damage is one of the things we really have to be cognizant of.
Now, picture somebody making a very negative statement, something that would make your company be seen in a very negative light, putting your CEO's likeness and voice on that video, and it quickly going viral. How quickly could your communications team effectively debunk that?
Because it can go viral really quickly, as we saw with the President Zelensky one, which was very effectively debunked right away. In the financial services realm where I work, FinTech Futures talked about a few different types of fraud that can be helped, or further substantiated, with deepfakes, beginning with ghost fraud. That's when someone tries to impersonate a dead person in order to access their accounts, and using deepfakes can lend credibility to this, especially now that we're used to doing business remotely.
Then there are fraudulent claims for the deceased: that's impersonating a family member of a deceased person to make claims on their pensions, things like that. New account fraud takes it further: once they've stolen your personal identity information, they're now also stealing your voice and video likeness, impersonating you to open new accounts. And then the scariest, most complex one: synthetic identity.
This is where, for those of us that worked a little bit in counterintelligence, the idea of the sock puppet comes in: creating a totally fake identity and using that fake identity to start committing fraud.
My big concern lies in machine learning classifiers, because I look at it from that same concept: the GAN is targeting an image classifier, but those GANs could be used to target any sort of classifier. In theory, instead of creating images, we start creating data and try to fool, for instance, a fraud-versus-valid transaction classifier. And finally, the liar's dividend, which was an interesting topic discussed by the US Congressional Research Service this year. It's the concept that someone will be able to plausibly deny an actual video of them.
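[Editor's note: to illustrate that concern about attacking arbitrary classifiers, here is a minimal sketch of a gradient-based evasion attack, in the style of the fast gradient sign method, against a toy fraud classifier. This is an added illustrative example, not anything from the talk; the model and feature values are made up.]

```python
# Sketch: evading a fraud classifier with a gradient-based attack
# (FGSM-style). Illustrative only; the model and features are toys.
import torch
import torch.nn as nn

# Toy fraud classifier: 10 transaction features -> P(fraud).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Linear(32, 1), nn.Sigmoid())

def evade(txn: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Nudge a fraudulent transaction's features so the model scores
    it as valid, while keeping each change small (bounded by epsilon)."""
    txn = txn.clone().requires_grad_(True)
    score = model(txn)                       # P(fraud) for this transaction
    loss = nn.functional.binary_cross_entropy(
        score, torch.zeros_like(score))      # push prediction toward "valid"
    loss.backward()
    # Step each feature slightly in the direction that lowers P(fraud).
    return (txn - epsilon * txn.grad.sign()).detach()

fraud_txn = torch.rand(1, 10)                # stand-in fraudulent transaction
adv_txn = evade(fraud_txn)
print(model(fraud_txn).item(), model(adv_txn).item())
```

[Adversarial testing, recommended near the end of the talk, means running exactly this kind of attack against your own models before an adversary does.]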
And so, looking into the future, could that possibly become a criminal defense?
You know, the old "fake news": it's fake, it didn't really happen, that's not really me, somebody altered that video. So what can we do?
Well, one thing: right now, the technology isn't too good. You can still look for a lot of telltale signs, and I blew some of these up. On the right, you see with the glasses, the frame not going all the way back. The same thing was in that CZ video, if any of you noticed. Facial hair is often a telltale sign, as you can see my facial hair showing through on this face. And things like incongruity: the wrinkles on the face, the forehead not matching with the eyes.
And if they're trying to do it live streaming, the technology still has a lot of problems with that. When they turn their heads, as you'll see, there's quite a bit of flicker, like when I turn my face.
So these are all current human detection methods, but the problem is this technology is advancing so quickly that humans won't be able to detect these in the near future. So we're looking at different sorts of machine learning detection methods.
One of them, a very common one used now in research, is eye blink analysis, because the rate of eye blinking will quite possibly be different in a deepfake. One that's proven rather effective is looking for missing details in the reflections in the eyes, or maybe the missing lines between the teeth. And one that I was completely blown away by: researchers found that as blood pumps through our face, our face changes color very slightly. Humans don't pick up on that; a machine can.
So it's trying to detect the difference in that heartbeat rhythm in a deepfake. And finally, temporal consistency: that's looking for those very, very minor differences frame to frame that perhaps a human can't see.
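[Editor's note: as a rough sketch of that heartbeat idea, known in the literature as remote photoplethysmography (rPPG), here is a toy version that tracks the average green-channel intensity of a fixed face region across frames and looks for a dominant frequency in the normal heart rate band. This is an added, heavily simplified illustration: it assumes OpenCV is available, the face stays roughly inside the hypothetical FACE_BOX region, and the video runs several seconds. Real detectors are far more sophisticated.]

```python
# Sketch: a toy heartbeat (rPPG) signal check for deepfake detection.
# Illustrative only: assumes a fixed face region and a steady camera.
import cv2
import numpy as np

FACE_BOX = (100, 100, 200, 200)  # hypothetical x, y, width, height of the face

def heartbeat_strength(video_path: str, fps: float = 30.0) -> float:
    """Return the spectral peak strength in the 0.7-4 Hz band
    (42-240 bpm) of the face region's mean green-channel signal."""
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x, y, w, h = FACE_BOX
        face = frame[y:y + h, x:x + w]
        signal.append(face[:, :, 1].mean())  # green channel carries most pulse info
    cap.release()

    sig = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # plausible human heart rates
    # A real face tends to show a clear peak in this band; a deepfake
    # often shows a weaker or inconsistent pulse signal.
    return spectrum[band].max() / (spectrum.mean() + 1e-9)
```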
So all these detection methods, most of them are still being developed. Some of them are starting to come out. But the idea is that this is a continual battle between technologies. So what can we do? First of all, like I say: everybody, have fun.
Go out and watch some of these; some are great. Now, some of them I recommend not seeing, but have fun. If you're inclined to do so, go ahead and make some of your own. As a security professional, I find that understanding this technology is very important. By the way, everything that I did was with readily available open source software that anybody can use.
Also, have a healthy dose of skepticism. You know, ask: would my CEO actually really be saying this?
Does what I'm seeing make sense? Social media: go ahead and use it.
You know, I do. Just be cautious; be aware of what you're posting, and to whom. If you're using facial recognition for authentication, just make sure you continually and thoroughly test it as this technology updates, because you don't want to fall behind. And if you're doing machine learning classification, which almost all toolsets seem to be doing now, ensure you're using adversarial testing in your pipeline.
And normally, I save quite a bit of time to talk about the positive uses, because there are a lot of positive uses of this technology. But since this talk was short, I'll just leave you with one. I'll leave you with Salvador Dali.
I have a longstanding relationship with death, almost 30 years. I always believed the desire to survive and the fear of death were artistic sentiments. I understand that better now. But there is one thing that makes me different. I do not believe in my death. Do you?
Thank you.
Thank you very much, Donnie.
That was really interesting. But I think you pointed to a real technology paradox: the internet has been a tremendous boon in so many ways, but we're all too aware in this room of the threats that it presents.
But I'd really like you to give us some of the positive examples. I mean...
Sure. One of the really great examples of a positive use that I love is for ALS, the disease also known as Lou Gehrig's disease.
The patients often lose the ability to speak, and there's a company working with a lot of tech companies to give back that power of speech, in their own voice, using this sort of technology, so their loved ones can hear them once again in their own voice. And another really interesting one I saw: President Kennedy, the day he was shot, was set to deliver a speech about trying to end the Cold War. That speech he never gave, you can now hear and see.
They have recreated it using this sort of technology, so it can bring history back to life for education, too.
Okay, that sounds brilliant. Thank you very much. A round of applause for Dr. Wendt.