First of all, thank you very much for showing up on a Friday afternoon. I know you must be pretty exhausted from this week, and I'm not sure if we have people joining us online, but this is actually the third presentation this week that Pablo and I have given, and we're very excited to be here. We promise that this will also be interactive. You're welcome to stay where you're sitting right now, or if you wanna move up closer, you're also welcome to do that. I just wanna take you a little bit through what we're going to talk about.
Here we go, our agenda. Before I get to it, I wanna add a personal note about how Pablo and I know each other. We couldn't have more different backgrounds. Pablo is an incredibly renowned expert in cybersecurity. He's won DEF CON a couple of times. He's also a very renowned expert on mis- and disinformation, but he's really more on the tech side. We met originally at the Atlantic Council, which is a national security think tank, and I was working on other issues related to trade, business strategy, national security, and food security.
And I asked folks around the Atlantic Council, who do you know that can help me with cybersecurity issues in agriculture? And everyone pointed to Pablo. Then I said, who do you know that can help me with mis- and disinformation as well as data monetization? And everyone separately pointed to Pablo.
And ever since then we've been working together. We're really great colleagues and really great friends. So we're very excited to be here, and we always love presenting together. So today we're gonna talk about what mis- and disinformation is.
We wanna talk about the scope and the scale of the problem. We also wanna talk about AI and deepfakes and how they intersect with mis- and disinformation. We'll talk about the changing roles within your organizations and what that means for you: why you should care, and how mis- and disinformation affects what we've labeled CIO responsibilities, though it's really everyone's responsibilities. And finally, how communication should look moving forward.
Underpinning all of this are standards, taxonomy and ontology, and information exchange, which Scott, Pablo, and I were speaking about earlier. And then the use of AI for disinformation defense, and a disinformation response plan. This is gonna be very, very important. Welcome. Hi.
Okay, so first a level set. Let's talk about mis- and disinformation. A lot of people use these terms interchangeably; they're slightly different, so I wanna set a baseline. Disinformation is the production or the alteration of information, in either content or, and this is important, context, with the intention of deceiving the audience. Misinformation is when an otherwise well-meaning person either misinterprets legitimate information or has been infected by disinformation, doesn't realize it's false, and is now propagating it as true.
Again, not realizing that they're spreading disinformation. So picture a scale from left to right, with malice increasing from left to right. First is misinformation: again, somebody either misunderstands legitimate information or has been infected with disinformation, but there's no intent to deceive. Then disinformation: I alter the information, content or context, with the intention of deceiving the audience.
And then the last one is malinformation. This is very similar to doxxing, where I take internal documents that are not meant to be released publicly,
and I release them publicly. And again, I either change the information or the context within those documents to give a false impression of what's going on. I was asked how much difference there is between disinformation and malinformation. What happens with malinformation is that, because you can show it's an internal document of some sort, it already comes with a certain authority: you can show it comes from an authoritative source, from inside a company, for instance.
So if any of you are wondering what this is over here: in November 2023, the World Economic Forum released its global risks report. This is very important. What it talks about are the biggest threats, the biggest risks, over the next two years, five years, ten years. And guess what? Across the board, whether it's civil society, international organizations, academia, government, or the private sector, the number one or number two risk is mis- and disinformation.
I would say we're pretty proud, because we've been on the forefront of this and we've been pushing really hard, particularly within the US and EU governments, to work on this. About a year and a half before this was released, we co-authored a proposal for the World Trade Organization, which was adopted by members, and Pablo went to speak before the WTO.
And we're pretty sure that helped inform some of this work here. The narratives, when we talk about mis- and disinformation, are very broad, right? I don't think any of this should be surprising to you.
Media, vaccines and vaccine safety, election security, obviously there's a lot of mis- and disinformation around that. Definitely food and agriculture, which is where I started deeply following this 20 years ago, as well as the geopolitical issues you're seeing right now: immigration, climate change, law enforcement, social justice, cybersecurity, privacy, financial markets and trade, science and technology, and geopolitical disruption.
I don't think you can go anywhere right now without people talking about mis- and disinformation. Now, it doesn't mean that these subjects are completely corrupt; it just means that there's a plethora of mis- and disinformation happening here. Okay?
What's the point of talking about mis- and disinformation unless we do a little audience participation? So let's watch a very short video.
"…and for the international order that we have worked for generations to build: ordinary men and women are too small-minded to govern their own affairs; that order and progress can only come when individuals surrender their rights to an all-powerful sovereign."
Okay. By show of hands, how many of you think this was a deepfake?
Okay, that's about 90% of the audience. What if I told you that President Obama actually said every one of those words in that speech? Would you be shocked?
Yes. Come together.
Ah, we have somebody that got it. He said every one of those words in that speech, but not in that order. So this is a great example of disinformation where the content is legitimate, right? If you did a voice analysis, it would match up. But the context is false. His words have been very creatively edited to say something that was not intended.
Just to clarify, is that malinformation, in your definition?
This would be disinformation. The question was, would this be malinformation?
No, it would not be malinformation, because it's not an internal document; this is a public speech. It would be disinformation, where you altered the context as opposed to the content.
Okay, are we ready for another one? Because these are fun.
Alright, let's play another game. As you're watching this, when you see the first thing that looks a little bit odd to you, raise a hand. Okay.
"I'm Jim Meskimen, and I wrote a poem about what it feels like to be an impressionist. Is there anything more sad and lame, contemptible, beneath disdain, in short, provoking of disgust, than being an impressionist? A third-grade, even fourth-grade skill, the definition of cheap thrill; like watching farm equipment rust is watching an impressionist. Relic from a distant day that long since should have died away,
dishonorably mentioned is the pitiful impressionist. With a slightly ostentatious, tired debris from an old Las Vegas act whose former fans have long dismissed allegiance to impressionists. How many opportunities passed up and wasted, because he's honor-bound to follow what he must. Pity the poor impressionist, doomed to live in abject failure, dogged by his own echo; better to crumble into dust than wind up an impressionist. His borrowed voices can't deflect a life of well-deserved neglect; his name's on simply no one's lips, forgotten, vain impressionist.
That sound, did anybody moan? That creature with the microphone is last on everybody's list, forgettable impressionist. When Peter at that shiny gate condemns those souls who imitate, he will but shake a heavenly fist and curse condemned impressionists. But till that time, we will tolerate the good-for-nothing reprobate, and hide the truth: that we are just pissed that we can't be impressionists."
Oh, ah, that's better.
Okay, so deepfakes can also be fun and entertaining, but again, this would only be disinformation if there were an intent to deceive.
Now, I will point out that the gentleman in this video is a professional impressionist, so the voice was actually his; he was impersonating real voices. But clearly you saw that the face was subtly changing. The first time people watch that video without any warning, it usually takes until the fourth or fifth face change before they notice that something's different. And usually by the time he magically grows a mustache, everybody gets it.
But it's a fun video to show.
So that video was made when, 2017?
That video was made in 2018.
Okay.
That's six years ago now. And that's disinformation?
It's disinformation, yes.
So you can imagine how far along AI has come since then and all the different ways that we need to think about authentication and secondary authentication.
I have to call up my elderly parents, who are octogenarians, and say: don't give any information over the phone, especially if it's someone who says that they're me, right? And you have to worry even if they FaceTime or do a video call. And so we've seen a number of the headlines: fraudsters' new trick uses AI voice cloning to scam people; a $35 million bank heist without even stepping into the bank, by cloning the bank director's voice; and FTC Chair Khan warns AI could turbocharge fraud and scams, of course.
And Microsoft's new tool shows that you really just need three seconds of someone's voice in order to mimic them. Now, for all of you, whether you're public-facing or you work for companies whose leadership is public-facing and you have a CEO, I can guarantee you there are at least three seconds of their voice out there.
Okay? So this really sits at the intersection of InfoSec and AI. And the reason for that is that oftentimes, for high-risk operations, our multifactor authentication is either a phone call or a video call.
And so we're now at a place where it's probably easier for me to fake a phone call or video call than it is to obtain the password. And as we all know, obtaining the password isn't too hard. So really this starts at the very beginning: when we talk about network security, we have to talk about authentication, and how we do this in an AI world where deepfakes can be easily created by the average citizen. And once we affect authentication, that's gonna affect the confidentiality of our information and possibly the integrity of our information.
And that is of particular concern for regulated industries.
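To make the authentication point concrete: one way to take voice and face out of the loop for high-risk requests is a time-based one-time password, where the caller proves possession of a secret enrolled in advance rather than proving they sound right. This is a minimal sketch, assuming the pyotp library; the enrollment channel and secret storage are deliberately left out.

```python
# Minimal sketch: verify a caller by an enrolled TOTP secret, not by voice.
import pyotp

# Enrollment (done once, over a trusted channel): generate and store a secret.
secret = pyotp.random_base32()

# At call time, the requester reads out the current 6-digit code from their app.
code_spoken_on_call = pyotp.TOTP(secret).now()  # stand-in for the spoken code

# The verifier checks it against the enrolled secret (small clock-drift window).
verifier = pyotp.TOTP(secret)
assert verifier.verify(code_spoken_on_call, valid_window=1)
print("caller verified by possession of the enrolled secret, not by voice")
```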
Because of that, compliance is gonna have to change. We expect to see new regulations. What we would hope is that industry goes to the regulators and says, these are the things we think work, because we know the regulation is coming, and if industry doesn't jump in first, we're gonna be forced to live with whatever regulation we get. Along with compliance and network security, this is gonna affect information security policies, and it's gonna affect security assurance.
We now have to consider that breach response will have to be different, because the method of breach may be very different. I may not have to use a code exploit to get into a network; I can do it basically by social engineering now, with a deepfake. And then there's human resource security. This is gonna be one more training that we're gonna have to do, and it's gonna be particularly challenging as we onboard people who work remotely. How do we verify their authenticity and their actual information in a day when we have deepfake voice and deepfake video?
So it's really gonna affect the whole gamut of IAM. Now, this doesn't necessarily mean that the CISO is responsible for all of this, but let's face it, the CISO usually ends up coordinating a lot of this and ends up in the middle, because they tend to be the technical experts in security.
Actually, when you were talking, I was thinking about one of my concerns: if you could hack my calendar, you could find out where my calls are, and you could go in there and pose as me and obtain information that shouldn't go to anyone else, right?
So we kept this slide in here, and there's a reason for it. We were giving a presentation to CISOs, and we were talking about how they currently communicate: with the CIO back and forth, with peers back and forth, directly to the corporate board. There's a dotted line to law enforcement depending on the circumstance; the office of general counsel's lawyers usually communicate with the CIO; and then there's kind of a line from government oversight. Well, that's changed. Now you have internal government affairs, strategic comms, HR, media, the chief operating officer.
And so the question came up in the seminar that we gave yesterday. They said, does this mean the CISO is at the center of everything and needs to coordinate it?
And the answer is yes and no. The reason we had this here is that we had a room full of CISOs, right? And so from their perspective, this is their new paradigm. But if you look at any one of these others, whether they're in the office of general counsel, government oversight, or HR, they also have their own matrix, or what you might call a strategic coherence map, where they're at the center.
So when we deal with mis- and disinformation, what this is gonna require is multiple communities. You're gonna need the office of general counsel.
You're gonna need academics, you're gonna need government, you're gonna need various industries. So multiple communities are gonna need to hop in to solve this. This is not a silver-bullet problem; this is not a problem that some magical AI is gonna solve for us. AI may be a piece that helps. What we really need is a patchwork across communities that will help us mitigate this. But in order to do that, those communities have to be able to communicate effectively. We have to be able to share information.
In order to share information, we have to use the same taxonomy, the same language, the same ontology, the same methods of operation. That will enable us to communicate effectively with other communities and coordinate our actions, because this is really gonna be a team sport.
So with that, I'd like to give you a very brief overview of the DISARM framework. DISARM is the Disinformation Analysis and Response Measures framework. Those of you who work in InfoSec or cybersecurity are probably familiar with the MITRE ATT&CK framework for cybersecurity.
This is very much the same thing, but for mis- and disinformation. Please don't go blind reading this; you can find it, free and open source, at the DISARM Foundation. But I wanna give you a very quick overview of how this works. Up at the top, the purple and the red boxes are the different phases of the operation. There are four phases:
plan, prepare, execute, and assess. Plan and prepare happen before you see any public evidence of a disinformation attack. Execute is where we've historically been coming in and saying, oh, there's disinformation, this is horrible, we need to do something about it.
Below the purple and red boxes, there is a series of white boxes. These white boxes represent the kill chain. Kill chain is a military term for the minimum set of tasks that must be completed, in a particular order, for an attack to succeed.
And the reason we call it a chain is that, just like a physical chain, if I can break any link, if I can prevent the adversary from doing any one of those tasks, the attack will fail. Below the white boxes are gray boxes. Those gray boxes represent tactics, techniques, and procedures, the different ways that I can accomplish the task. If you were to click on one of those gray boxes, you would get a description of the particular tactic or technique. In this particular case, it's creating false online polls.
You would see a relationship to the blue network, which is what the defenders would do; those are counters that defenders can use to break this link in the kill chain. You can see detection methods: how you would be able to detect the online polls the adversaries create. And then, if we've seen it in a real-world attack, we'll give you one or two examples of a publicly known, provable disinformation attack where we've seen this tactic being used. So the red is what the adversary has to do to complete an attack, how they might mount a campaign.
Now it's important to point out that the DISARM framework does not tell you what is and what is not disinformation or malinformation. What it tells you is: here's how you prevent disinformation and malinformation attacks from happening, here's how you can respond to them, here's how you can lessen their impact.
But it does not decide for you what is or is not disinformation. The other thing I will point out is that, for the sake of academic completeness, there are some defenses listed in the DISARM framework which are not recommended.
So, for example, there are entries where one of the options is to conduct censorship. Obviously, in Western societies, that is not something we wanna do. It's something that could be done, but you will see it labeled as not recommended. We also didn't consider legal authorities, because we wanted this to be a global standard. So it's really up to individual societies and governments to decide what are and are not acceptable actions for defenders to take, and whether they have the legal authorities or should go ask for them.
So again, the red is the attacker framework. The blue is the defensive framework, and it's organized very much the same way.
It's got the same phases up at the top, and it's got the same kill chain. And then you see the tactics, techniques, and procedures that a defender could use to prevent that portion of the kill chain from executing.
And the red and blue are tied together, so you can start with either one. If you start with a tactic in blue, it will show you which red tactics it defends against. If you start with a tactic in red, it will show you, in blue, which tactics can be used to defend against it.
So again, for a blue countermeasure example, this is "inoculate the population through media literacy training." There's a summary of what it is. It will also list the sectors that might be capable of doing this; in this case, civil society, government, media, and social media companies all have portions here.
And then it will show you, down at the bottom, if you make this move as blue, the red things that could be done to countermand it. So you can use this for tabletop exercises, for gaming, for planning. It can be used for all sorts of things.
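As a concrete illustration of the red/blue linkage just described, here is a minimal sketch of how a tool might model it: each attacker technique in the kill chain carries pointers to the blue countermeasures that break that link. The IDs and most names are illustrative placeholders, not actual DISARM identifiers.

```python
# Toy model of the DISARM-style red/blue mapping (IDs are made up).
from dataclasses import dataclass, field

@dataclass
class RedTechnique:
    tech_id: str
    name: str
    phase: str                                     # plan / prepare / execute / assess
    counters: list = field(default_factory=list)   # blue technique IDs

RED = {
    "R-001": RedTechnique("R-001", "Create false online polls", "prepare",
                          counters=["B-010", "B-022"]),
    "R-002": RedTechnique("R-002", "Amplify narrative with bot reposts", "execute",
                          counters=["B-022"]),
}
BLUE = {
    "B-010": "Detect and report fraudulent polls to the platform",
    "B-022": "Inoculate the population through media literacy training",
}

def counters_for(tech_id: str) -> list:
    """Start from a red technique, get the blue moves that break this link."""
    return [BLUE[b] for b in RED[tech_id].counters]

print(counters_for("R-001"))
```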
So now that we have a framework, an ontology, and a taxonomy, the next thing we want to do is be able to share information. So we have partnered with OASIS Open, which is an international standards-setting body, and they own the STIX and TAXII standards. STIX is a standard way to describe objects in a JSON format; TAXII is a way to share those objects across the internet. These are the de facto standards that all cyber threat intelligence tools use. And so they have a really great listing of objects that can be used for cyber threat intelligence.
What we need to do now is extend that to objects that describe disinformation attacks per the DISARM framework. And we're just standing that up: OASIS Open is standing up the DAD-CDM project, the Defense Against Disinformation Common Data Model. If you are interested in participating, please go to OASIS Open, join, and join the technical committee so we can develop these standards for sharing information.
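To give a feel for what sharing might look like once those objects are defined, here is a minimal sketch using the existing stix2 Python library. The DAD-CDM object model is still being worked out at OASIS, so the property choices below (a plain STIX 2.1 Indicator with disinformation labels) are an assumption, not the standard.

```python
# Sketch: describe a disinformation observable as a STIX 2.1 object.
from datetime import datetime, timezone
from stix2 import Indicator, Bundle

fake_poll = Indicator(
    name="Suspected fabricated online poll",
    description="Poll account created recently, amplified by bot-like reposts.",
    pattern="[url:value = 'https://example.com/poll/123']",  # hypothetical URL
    pattern_type="stix",
    valid_from=datetime.now(timezone.utc),
    labels=["disinformation", "false-online-poll"],
)

# TAXII is the transport; a bundle like this is what a TAXII server would carry.
print(Bundle(objects=[fake_poll]).serialize(pretty=True))
```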
Okay, so you can't give a presentation these days without talking AI. Everybody wants to talk about the horrors of AI; let's talk about how we can use AI to actually prevent disinformation. And I'll give you a hint: it's probably not the way you're thinking. Everybody's talking about using AI to either create or detect deepfakes. I'm gonna show you better ways to do that, and you're gonna see that detecting the deepfake is really just a race condition. So what you see here are the four phases of the DISARM framework and a little bit of a cheat sheet.
The color coding tells you how effective AI would be at defensive actions in each of those phases. It's not very good in planning, it's really good in preparation and execution, and it's pretty good in evaluation. We'll talk about how. So in the planning phase, there are two sets of tasks:
you have strategic planning and you have objective planning. For the strategic planning, if you're a defender, you know what the centers of gravity are for your corporation.
You know which narratives are gonna force you out of business or cause significant loss, and you know which ones are just inconvenient. So you wanna concentrate on the ones that are gonna cause significant loss; the other ones you just let go and deal with through normal strategic communications. But we know what we need to protect. Then there's the objective planning.
Again, we know what's critical, we know how things normally work, and we know what's anomalous. So we can certainly use AI to look for narratives that seem strange, out of whack, or unusual. If you're a normal corporation and all of a sudden you start trending on X or on Insta, and you haven't done a big PR push, you probably wanna take a look at why you're trending. And you should know, based on your market segment and your clients, where you should normally see traffic from.
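Here's a toy sketch of that "why are we suddenly trending?" check: compare today's mention count against the account's recent baseline and flag a large deviation. The counts are invented, and a real system would also control for seasonality and planned campaigns.

```python
# Sketch: flag a day whose mention count sits far outside the rolling baseline.
import numpy as np

mentions = np.array([120, 95, 110, 130, 105, 98, 115, 102, 2400])  # last day spikes

baseline = mentions[:-1]                       # history before today
z = (mentions[-1] - baseline.mean()) / baseline.std()

if z > 3.0:                                    # 3-sigma rule of thumb
    print(f"anomalous attention (z={z:.1f}): investigate before it peaks")
```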
Certainly, if I'm, say, a large financial institution and I start trending on Instagram, I'd be very concerned, 'cause teenagers don't usually talk about large financial institutions, right? So that might be an indicator that things are not quite right. So let's talk about preparation. Now that I know the things I need to protect and I go to the preparation phase, the tasks are network development and microtargeting. For network development, I know who the influencers are for my market segment. You know who the people are that review your products.
You know who the regulators are for your industry, and you know who the influencers are for your market segment.
And so you can certainly monitor those accounts and monitor those channels. And if all of a sudden they start talking about your company or your institution and it's a surprise, you should start looking into why. Identify influencers.
Who's heard of Roaring Kitty? Everybody's familiar with GameStop? Yeah. So GameStop was in financial trouble, and then somebody on Reddit with an account called Roaring Kitty decided to turn it into a meme stock and shot it up several hundred percent. And ever since then, everybody's been monitoring Roaring Kitty to see what he's gonna do with GameStop next.
And so when you see influencers like that, whether they're directly related or collaterally related to you, it's important to monitor those accounts, because if they start spiking, you should know why. Microtargeting: microtargeting is a way to tailor a message toward a very specific audience.
And that is exactly the business of social media. The social media platforms are built specifically to learn what you like, what you don't like, and what keeps you engaged, and to present you those advertisements.
So if you're gonna do a PR push or run an advertisement, social media is there to help you do A/B testing and let you know exactly which versions of the ads are hitting with your core audience. So AI can be used very well for that, to make sure that your proper message is getting out there. What it's not so good for is channel development, content development, or channel selection. Channel selection is based on your audience; you have to know where to reach them. So if I wanna reach middle-aged parents, which social media platform do I go to these days? Facebook.
Facebook, right? If I wanna reach the teenagers, where do I go?
TikTok, right? So that's what I mean by channel selection: look at your audience, figure out where they go to get news, where they interact, and that's the channel selection. The reason AI is not good for content development is this: if you are trying to put yourself forward as the honest, authoritative source of particular information, any indication that you are altering that message instantly makes you look guilty.
So if you're a corporation and you're saying, no, no, here's what's really going on, and then somebody out there on the internet can go, this video has been altered, all of a sudden you become less credible. And a great example of that, unfortunately, is from a couple of months ago, when the royal family in the UK put out some photos and some of them had been edited. I don't think there was an intent to deceive there;
I think that was just basically normal family stuff. But because of their position, think of the backlash they got.
Now expand that out to your corporation in the middle of a communications crisis; it becomes worse. So please don't use AI to generate content.
It's not gonna go well; it'll backfire. In the execution phase: pump priming. Pump priming is when you start getting versions of your media out just to do testing and get it pre-staged to go forward. AI's not of much use there. And then for the exposure phase, there are a couple of ways to do it. You can do basic fact-checking, you can do data provenance, tracking where the data's coming from, and you can do deepfake detection. I'm gonna spend a couple of minutes here.
A lot of the advertisements, a lot of the media that I see now for AI, talk about doing basic fact-checking and deepfake detection.
And I'm gonna tell you why I don't think that's gonna work. Let's start with deepfake detection. This screenshot is a beta tool that Microsoft put out, and what it does is analyze videos. What it looks for is micro-fluctuations in the pigmentation of the speaker that correspond with a heartbeat. And if it doesn't see that, it goes: oh, deepfake. So what's the next thing that a deepfake creator is gonna do?
They're gonna put in those micro-pigmentation fluctuations. And so we end up in this constant race condition: ah, I see what you did there; good, now I'm gonna evade your defenses. We're always gonna be chasing the ball.
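For the curious, the heartbeat idea boils down to remote photoplethysmography: skin pigmentation fluctuates faintly with blood flow, so the face's average green channel carries a pulse signal. This toy sketch, on random stand-in frames, shows the core measurement; production detectors are far more sophisticated, and, as just noted, a generator that synthesizes the pulse defeats it.

```python
# Toy sketch of the rPPG signal behind heartbeat-based deepfake checks.
import numpy as np

fps = 30
frames = np.random.rand(300, 64, 64, 3)        # 10 s of fake face crops (H, W, RGB)

green = frames[..., 1].mean(axis=(1, 2))       # one green-channel value per frame
green -= green.mean()                          # remove the DC component

spectrum = np.abs(np.fft.rfft(green))
freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)

band = (freqs > 0.7) & (freqs < 4.0)           # ~42-240 bpm heart-rate band
pulse_energy = spectrum[band].sum() / spectrum.sum()
print(f"fraction of signal energy in heart-rate band: {pulse_energy:.2f}")
# A generator that learns to synthesize this pulse passes the check --
# hence the race condition.
```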
Is it useful? Sure, it's useful. It's one more tool in the tool chest. Is it gonna solve our problems? Absolutely not. Same thing with watermarking of AI output. There's a lot of talk about, well, you must watermark things that are created by the AI.
Well, guess what? The bad guys and the bad gals out there don't follow the rules. Why are they gonna follow this one? They are not gonna watermark their AI content.
So that's not a good idea. Basic fact-checking: what I mean by that is automated fact-checking. Certainly, AI can be used to help actual human researchers and analysts do fact-checking; that's fine. But automated fact-checking doesn't work. The reason is that there are two models. One is the open-world model: in the open-world model, you assume that everything is true until it's provably false. What that means is that if there is a disinformation narrative out there, your audience is gonna spend a lot of time ingesting that disinformation before you can prove it's false,
if ever. With a closed-world model, you assume everything is false until it's provably true. And that's problematic because, if you have to get information out quickly because there's an emergency, a hurricane, a tornado, a tsunami, violence, an accident, the information is gonna get out there too late. So that's really not a good model either. More so, there are things that are not provable facts. AI does really badly with satire. It does really badly with comedy.
So again, for basic fact-checking in an automated fashion, AI is not gonna be a good solution.
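The two verification policies and their failure modes can be stated in a few lines. This is just a toy sketch of the logic being described, not anyone's actual moderation pipeline.

```python
# Toy sketch of the open-world vs. closed-world publication policies.
from enum import Enum

class Verdict(Enum):
    PUBLISH = "publish"
    HOLD = "hold"

def open_world(claim_proven_false: bool) -> Verdict:
    # True until provably false: disinformation circulates while you work.
    return Verdict.HOLD if claim_proven_false else Verdict.PUBLISH

def closed_world(claim_proven_true: bool) -> Verdict:
    # False until provably true: emergency warnings go out too late.
    return Verdict.PUBLISH if claim_proven_true else Verdict.HOLD

print(open_world(claim_proven_false=False))    # unproven lie -> published
print(closed_world(claim_proven_true=False))   # urgent unverified alert -> held
```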
Moving on: persistence on social media. If you look at how normal news travels on social media, it comes into the social consciousness, there's a peak, everybody accepts it as true, and the world moves on. So there's one peak, and then you see it die down. With disinformation, what you see is that there's an incentive to keep it in the populace and in current conversations.
And so what'll happen is you'll see a peak, and as soon as it starts to go down, the disinformation actors will typically use bots to retweet and repost it, to get it back into the consciousness. So you'll see multiple peaks, and that's a signature you can maybe use to identify some disinformation.
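That one-peak-versus-many-peaks signature is straightforward to operationalize on the detection side. A toy sketch with invented daily share counts, using scipy's peak finder:

```python
# Sketch: organic news tends to spike once and decay; boosted narratives
# get re-amplified, producing repeated peaks.
import numpy as np
from scipy.signal import find_peaks

organic = np.array([5, 40, 300, 180, 60, 20, 8, 3, 1, 1])
boosted = np.array([5, 40, 300, 90, 260, 70, 240, 80, 250, 60])

for label, series in [("organic", organic), ("boosted", boosted)]:
    peaks, _ = find_peaks(series, prominence=50)
    print(label, "peaks:", len(peaks))         # 1 vs. several re-injections
```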
The problem is, again, if you're the good, honest person trying to be the source of truth and you're using those amplification tactics yourself, it looks shady. Anything that doesn't look crystal clear automatically makes you suspect. So for defenders, this is not great. Same thing for bots: you don't wanna use bots if you're a legitimate agent. Measuring effectiveness: again, A/B testing and all those tools are built into social media; that's what the platforms were created for. So it's definitely useful for being able to tell if your intended message is getting to your chosen target audience.
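For completeness, the statistics behind the A/B measurement the platforms run for you is a two-proportion z-test: did variant B of your message really outperform variant A, or is the difference noise? A minimal sketch with invented impression and click counts:

```python
# Sketch: two-proportion z-test for an A/B message comparison.
from statistics import NormalDist
import math

clicks_a, shown_a = 120, 10_000     # variant A of the corrective message
clicks_b, shown_b = 165, 10_000     # variant B

p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z={z:.2f}, p={p_value:.4f}")  # small p: B's lift is probably real
```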
So I actually wanna share a little story with you.
Pablo and I have been working around mis- and disinformation for so long that we can usually see a story and know where it's going: not only know that it's a disinformation narrative, but that it's likely to blow up, and what the next steps are, and what the next step of the narrative is going to be. What we're seeing in the United States right now, with, let me be honest, a very dysfunctional Congress at the federal, central level of government, is a lot of initiatives starting at the very low level. So we're seeing them in school boards, we're seeing them in municipalities, right?
One good use of AI right now, I would say, is to scan everything that's happening out there. There are companies doing this that can make it bite-size for you, so you can read through every different narrative happening across, say, the country that might affect you. And I say this because at many of the big companies I work with, people are very busy, and if they're lucky, they're able to read five to ten journals a day.
They're limited; they can't see all of these stories percolating. So the story I wanna tell you happened in Michigan, okay? This was several years ago, during the Trump administration. And the story goes that there is a Norwegian couple, or a couple of Norwegian descent, and they have a very successful bed and breakfast in Michigan, and they're always fully booked a year in advance, and they fly their Norwegian flag out front. But all of a sudden, everything has changed.
Why?
Because people are looking at the Norwegian flag and, pardon my language, the dumb liberals think that it's a flag of secession, like the Confederate flag; they got confused. And so no one wants to go there anymore. So guess what? It gets printed in the Ann Arbor Gazette. And then the pathway it takes through what I would call Russian-funded media is almost predictable, right? So part of the problem we have right now with the democratization of news and the media is that there are so many outlets, you can't, as a single human being, cover all of them.
Usually the first stop is not mainstream press; it usually starts much earlier than that. And the same thing with a lot of policy initiatives. So one very effective use of AI is to scan all of that, because Lord knows we can't. One other thing I would like to mention about DISARM: I'm not sure if you're aware that Pablo was the co-author of DISARM along with Sara-Jayne Terp, and DISARM, which he presented, has been adopted by NATO, the EU, and the US government as a framework for addressing mis- and disinformation.
We wanna talk a little bit about a disinformation response plan. Can I ask, who in the audience has a cybersecurity response plan for your organization? Does anyone not have a cybersecurity response plan? Do you have a code of conduct from human resources that people read when they come on board? Does anyone not? Right, no. What we're asking you to do, and this is the call to action, is to have a disinformation response plan.
And it doesn't mean that you have to create something completely new. You just have to take the body of work that you already have within your organization and consider how disinformation and misinformation play into it. So you should be looking at your governance for the corporate board. You should be looking at HR: how you onboard people; when they talk to people on the phone, you need to make them aware.
Who are they talking to? How do they authenticate that? For operations,
for cybersecurity: every single person within the organization is susceptible to mis- and disinformation, and you have to figure out as soon as possible where these vulnerabilities are and how you're going to address them.
Now, I know that for SMEs, and there are a lot of SMEs in Europe, it's a little bit more challenging, which is why we really need industry to spearhead standards around this. And that's why Pablo and I are so focused on developing standards: everyone's talking about misinformation and disinformation, they're just not sure where to even start. It seems like such a complex area. So we're gonna give you a shortcut to what we think. Number one, you need to have standards around it.
It's gonna have to be industry-driven, because between industry, academia, government, and civil society, industry is able to act quickest and can really drive the standards in this area.
Number two, I think the larger corporations can really help onboard the small and medium-sized enterprises to adopt these standards as well.
And as someone who worked on standards for the US government as a US trade negotiator, I would say, particularly in the way the US and the EU work, they look at industry standards and see if they're acceptable for public consumption. So this is the direction we would like to see you go. The other area: we talked earlier about how misinformation and disinformation are affecting so many areas of society.
If you start talking about election security, it becomes very, very challenging right now especially. I know you're having European elections right now, and it's an election year in the US. No one wants to talk about it.
In fact, we just found out a couple of days ago that there are members of the US Congress trying to bar the Department of Homeland Security from putting any of its funds toward mis- and disinformation.
The one area we've assessed as probably the low-hanging fruit, no pun intended, where it doesn't matter if you're Democrat, Republican, or whatever political party you belong to in Europe, or what country you're in, is food and agriculture, because everyone needs to eat. And we're four meals away from anarchy.
People realize that mis- and disinformation is not only affecting the bottom line for food and agriculture, but that when people don't trust the food they eat, they don't trust the government, they don't trust institutions. So this is an area we've been working on; we've been working across the board, but this is one where we've been laser-focused. I'm gonna pass this to Pablo.
So one of the things we really need to consider, as we start to look at what's acceptable for misinformation and disinformation defenses, is limitations and privacy. And by limitations, I mean the limitations on government and on industry that society wants to place on them. We all live in developed nations, and those developed nations have a framework of laws and legal standards.
We need to look to those as we decide how we wanna develop counter-disinformation and misinformation defenses, to make sure that they either fit into the existing framework of laws and standards, or that we update those laws and standards as necessary. We need to carefully balance individual privacy against national security. In most Western countries, we value individual privacy more highly than just about anything else, and it's important to keep that in mind. It's also important to keep in mind that not all nations are that way.
So oftentimes, when I start talking to people about this and I tell them that one of the problems is that anybody can post anything on the internet and you have no idea where it's coming from, one of the de facto solutions offered is: great, we'll tie every account on the internet to a unique person, and that way we know exactly who posted what. And then I like to point out that there are places in the world where we want people to be able to post anonymously.
We need whistleblowers; we need people to tell us when our governments and agencies aren't doing the things they're supposed to be doing. We want dissidents in countries with oppressive governments to be able to post with anonymity. So we have to be careful, as we plan these defenses, as we decide what we wanna do, that we protect individual rights and that we consider what people in countries very different from ours are gonna do.
The last thing I would want is to implement a solution and then have somebody like the Iranian government go: see, you're implementing the solution that we've had for years; clearly NATO and the US are behind Iran in being a modern society. That would horrify me, so we don't want to do that. One of the things to keep in mind also is that, as always, technology advances faster than we can regulate. So part of what we need from industry is that as they innovate, as they develop these solutions and technologies, they really need to self-regulate and implement guardrails.
And they really need to get together as an industry and go to government and say: these things make sense as guardrails, these things do not, and really drive that innovation into regulation. And then we need to develop standards for public and private consumption.
So just to recap the call to action for industry: you really need an enterprise-wide misinformation and disinformation response plan. Just kicking it to strategic comms doesn't work.
We met with a group of CISOs yesterday, and there was a very astute observation: anytime a company goes offline for any reason, the assumption is that it's because of a cyber incident, and all of a sudden the stock starts to tumble. And it could be, you know, maybe the web provider went down, maybe it was a bad patch. So that is a case where it's not really a cyber incident, but people assume that it is. Strategic comms alone is not gonna know what to do with that.
Same thing if I create a deepfake of one of your senior executives. Most of them are gonna have more than three seconds of audio out there.
Most of them are gonna have several hours of video on YouTube and at industry conferences. What happens if a deepfake gets created of one of your senior executives doing something, let's call it extralegal and offensive, right? Even if it's not true, that is gonna stick around for a long time, and it is gonna affect the company. Disinformation happens primarily before the incident.
If you wait until after the incident, there's a lag time between the incident happening and you detecting it, and there's another lag time between you detecting it and you implementing solutions. And all that time, that disinformation narrative is out there and it's being ingested. Once it gets internalized by the audience, it's much harder to disabuse them of that belief. So really, you need to prepare for these things ahead of time. You should tabletop and war-game these things; you should think about which narratives are really damaging.
Many of the narratives in various industries are the same narratives over and over. My favorite example: one of the disinformation narratives around the COVID vaccine was that the vaccine gave you COVID. Anybody know the first time the narrative was used that a vaccine gives you the disease it's meant to prevent? Anybody wanna hazard a guess? The cowpox vaccine, the very first vaccine. So the fact that we still roll out vaccines and pharmaceutical companies don't preemptively say, let me explain to you how the vaccine works,
we don't actually give you the disease, is just horrifying to me. We know what's gonna happen. I don't know what the next vaccine after COVID will be, but there's one coming, I'm sure, and the narrative is gonna be that it gives you the disease. So we should probably put out a narrative against that ahead of time.
Train your users and clients on where to go for authoritative information.
If your users or your customers see something untoward about your company, make sure they know who to call, or where to go to read the information that your company or the regulators put out. If you work in the financial industry, make sure that your customers know that you're not gonna text them out of the blue and ask for their password. Some companies are getting ahead of the game; they're starting to do these things. It's great.
The other thing you should think about is that if you're currently using either audio calls or video calls as authentication for high-risk operations, that is not gonna be sufficient much longer. We're about three to five years away from being able to create Hollywood-style deepfakes on cell phones. So when you can no longer trust a Zoom call or a Microsoft Teams call, what is another form of authentication you can use? One cryptographic option is sketched below.
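One possibility, sketched here assuming the PyNaCl library, is challenge-response with a key enrolled in advance: the executive's device signs a fresh one-time challenge, so the proof of identity no longer depends on how the person looks or sounds on the call.

```python
# Sketch: call authentication by signing a fresh challenge with an
# enrolled key, instead of trusting a (cloneable) face or voice.
from nacl.signing import SigningKey

# Enrollment: the executive generates a keypair; the org stores the verify key.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# During the call: the verifier issues a one-time challenge ...
challenge = b"release-wire-transfer-7731:2024-06-14T15:02Z"  # hypothetical

# ... the executive's device signs it ...
signed = signing_key.sign(challenge)

# ... and the verifier checks the signature. A cloned voice cannot do this.
assert verify_key.verify(signed) == challenge
print("request authenticated cryptographically, not by face or voice")
```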
And then the last one is outreach and coordination: not just to your customers; talk to your employees. Oftentimes these disinformation narratives get discovered not by strategic communications and not by the C-suite; they're discovered by a mid- or low-level employee who's just reading trade press or who's on social media. And they should know to report those things the same way they report a spearphishing email these days.
Anything to add? Did I miss anything?
No,
No.
Actually, given the time, I wanted to see what kind of questions we had from the audience.
Thank you. It was an amazing presentation, a very interesting topic. I would like to invite you to sit with us so that maybe we can answer a couple of questions.
Sure, sure. In the last 15 minutes of our session, please. We already have some questions online, and here in the audience, if someone has a question, please feel free to raise your hand and I will bring you a microphone.
Let's start with the questions online. We have 27 questions?
Yes,
just a moment, please. Let me just open them here. Okay. All right. Absolutely, yes. Right. The first question is: do you think that cultural factors and social bias are contributing to the creation and spread of identity-based misinformation and disinformation?
I think the definite answer is yes, absolutely. And I see this all the time, because in addition to working in food and agriculture, I'm deeply involved in geopolitical activities, and you see cultural misunderstandings and biases that are already there being exploited.
Pablo actually put it very nicely: it's like you have a bed of rocks, and in between is where all the weeds grow.
Well, next one. Do you think there is any possibility to build resilience between communities against misinformation and disinformation?
I think there absolutely is. Somebody asked me whether I was pessimistic or optimistic about the way mis- and disinformation are going,
and I'm actually quite optimistic, which is not my usual mode as a career cybersecurity professional. Years ago, when we started working on this, nobody was talking about mis- and disinformation except in very small rooms, with intelligence communities or certain academics. Now the average citizen is talking about mis- and disinformation. I think we have fabulous examples throughout the world of countries that are really resilient to this. The Nordic countries have done an amazing job. Switzerland has done an amazing job.
In Ukraine, before they were attacked by Russia, one of their most popular TV shows was a nightly show that went through Russian disinformation narratives and made fun of them, and it was wildly popular. So I think there are a lot of reasons to be positive. It's just gonna take us a little while to get some traction and be able to coordinate those actions.
Well, the last question that we have here virtually is: what do you think social media should do to prevent misinformation and disinformation? And I would like to add a comment here. Social media is actually the channel that spreads the misinformation and the disinformation. So what do you think they should do? Because I know that they are regulated, they have different policies on privacy, and many agencies across the world are checking, you know, ticketing and reporting. But is there anything else that can be done?
I'm gonna pass this to Pablo in a second, but let's take a step back and look at this from 50,000 feet. Do any of us want social media being the gatekeeper here, waiting for them to do something? I would say social media is a big part of the problem. And if you make a list of what has actually worked, a lot of it is critical thinking. How do we teach critical thinking? We had someone in our CISO seminar who came from Switzerland who said it's less of a problem there.
And I said, listen, as someone who spent a great deal of time in Switzerland and working with Swiss authorities, the way you educate people in Switzerland is different. So we have to think about antidotes too, not just saying, okay, if only social media did this and social media did that, because at the end of the day, that can lead to censorship, which can be a bigger problem. So we have to take a step back and look at the whole of society. What do you think, Pablo?
You know, I feel a little bit for social media,
'cause they're kind of the favorite whipping boy here, and much of it is earned, I will admit, but not all of it. At the end of the day, social media companies are publicly traded companies, and they owe their investors a return on their investment.
That said, I think the laws regulating social media companies have got to change drastically. I think the current economic model, and I'll say something bombastic here, is ethically unsustainable. The product that social media sells is your cognition and your time.
And I'll tell you, as somebody who worked in government doing dirty tricks to bad people: when I worked in the military, I wish I had had the level of detail on my targets that Meta or X can give you now on the average person. I don't think that level of tracking is necessary for advertisement.
I think it's dangerous, and I think it needs to be controlled.
The other thing, and this is very much a US-centric problem, is that early in the days of the internet, the US Congress decided that internet providers were gonna be treated like broadcast media and would not be held responsible for what's on their sites. And yet they aggregate news, and so now a vast majority of people get their news from social media sites. You can't have it both ways.
If you're gonna be news, you need to be treated like a news organization. If you're not gonna be news, then you need to be treated differently.
But I think giving them vast immunity, no matter what is on their sites, is no longer viable.
You raised an important point about ROI. Think about media, mainstream media: they have the same issue. And even academia: it used to be publish or perish; now it's publish in, like, 10,000 journals. So everyone right now is corrupted, right, or corruptible. And something we need to think about is provenance: provenance of data, and intent. Those are the two things we need to look at.
And, you know, sorry, let me just make a comment here.
When you talk about the regulations and the sources: now people really just get informed about what is happening by reading the news or following some news channel, you know, on Instagram or LinkedIn. That's the way they know what is happening in the world. But I know that in some companies, if they have to process, let's say, a ticket or a report or something, the agent has to base their decision on whether it came from a reputable source or not.
But according to what you just showed here, it would be very easy to actually fake the news as well.
That's true.
And so, you know, just like being a scientist and doing research, your best option is to check multiple sources, right? They're not gonna completely overlap, but there should be enough overlap that you can say, this is probably 90% true, and this is probably less than that. All legitimate news media, due to the pace at which information comes out, are at some point gonna make a mistake; that's just the nature of the beast. And most of them will print retractions, which the audience may or may not see. What gets us in trouble is when we rely on a very small number of sources.
And that leads to trouble for a couple of reasons. There are some interesting things that happen with human cognition and human psychology.
And you can go back and read some studies from O'Reilly. One thing that's interesting is that we tend to believe that more people believe as we do than not.
We always think we're in the majority. The easiest example of that is that 80% of people believe they're above-average drivers. That's not the way math works.
Terrible, terrible. I can testify, she's terrible.
No, I'm... So that's the first one. The second one is that decision makers tend to want a lot more information than they need to make a decision. And the more information they get, the more likely they are to, A, make the wrong decision, and, B, be more convinced of that wrong decision. And because of the way social media works, it shows you more of what you like.
And so you end up self-radicalizing; you end up in your own echo chamber, because cognitive dissonance, hearing things that are different from our views, is hard. And so we end up in these echo chambers, we end up with these very biased news sources, and we never get an outside perspective. And that's problematic.
So we have a question here from the audience on site.
Hi. So I have a question that I wanted to ask you yesterday, but I didn't have time. In your research, is the rise of influencers a problem that you see? I think on Tuesday, Pablo, you mentioned that social media has generated a sort of democratization of attention; I don't remember how you coined that term. But if we look at the past, you know, the Buddha, Jesus Christ, they didn't have that much attention in that sense.
So is that a potential scenario, that maybe an influencer can have a dramatic effect on a corporation or some story?
So yeah, there are a couple of things. If you look at the history of information technology, we started out with scrolls and quills, and the number of people who could read and write was very small. You had to be educated, so you were either, you know, in the church or a head of state, and since you were handwriting these scrolls, there were only so many copies of the message, and they had to be hand-carried.
Then you go forward to the Gutenberg press, and now you can mass-produce more copies of this information. But you couldn't just show up and self-publish; that's not the way it worked, right? Amazon didn't exist yet. So you could reach wider audiences, but there was still gatekeeping as to who could transmit to audiences. Then you get to radio and television.
Now you've got commodity devices, and radio is the first time you see governments go, wait a minute, we want to be careful about who can transmit to massive audiences. You start to see regulations over transmitter power.
And they're saying, look, broadcast companies can transmit at this power; individual citizens can only transmit at lower power. And the same thing with television.
You know, as late as the eighties, or maybe even the early nineties, if you wanted to address a large populace, you couldn't just show up at your local television station and go, hey, I'd like to talk to Berlin. That's not the way it worked. And so what you saw, up until the internet, was that the gatekeeping function over who could transmit was there, but who they could transmit to got broader and broader. The audiences got broader.
What you saw with social media and with the internet is that they've now removed that gatekeeping function, and anybody who wants to can create a massive audience. You know, the most dangerous person on the internet is Katy Perry. Last time I checked, she had something like 50 times the number of followers of the president of the United States, and something like 500 times the number of followers of the prime minister of Britain. And she's an entertainer.
Not to mention the Russian children of billionaires that are opening a presence on the internet. Have you seen this trend?
So...
It's pretty astounding.
So the challenge is: before, you had some gatekeeping function, so you knew who was transmitting to massive audiences, and now you don't. There's no provenance of that data.
Speaker 10 00:57:26 I've just been informed that our clock here is off, so we have time for one more comment from each of you. Oh, and we'll have to go to the 27 questions offline.
What does, 15 years out, and this is my impossible question for every panel, the year 2040: what does good look like?
Speaker 11 00:57:45 Oh,
Speaker 10 00:57:47 What's your hope? We need an imagination to move towards.
Well, you know, I don't like black-and-white terms. Forgive me for changing the question a little bit: I don't like the terms good and bad. I think we have to take a step back and be observant. As a former regulator, you do your best, right, to figure out how to solve the problem. But at some point you need to assess, right? Take a step back and say: did it work or didn't it? What was the effect? Was it positive or negative?
And so it's hard to say what good looks like, but we shouldn't be afraid to move, do something, screw up, and say, we did that incorrectly, let's try this again, right? I think people are afraid to get things wrong, which is why we haven't done anything so far. So we need to assess and move forward. And as I said, food and agriculture is a good starting point. Standards are an important starting point, but they're not the end point; they're the entry point to gain confidence, so that people understand the importance and the value.
And they can look at it and say there was a positive effect: the price of food went down because you don't need all these extra certifications; more people were fed nutritious food, for example.
A couple of things. The first one is, I would love to see internet companies show you where you are in your beliefs relative to the rest of society. There's a concept called the Overton window; it's a political science concept. On the political spectrum, there are all sorts of things you can say.
So if you're talking about, let's say, migrant workers, you can say, well, they should be deported back to their original country; you can say, no, they should be given a path to citizenship. What you can't say is that they're lizard people from the planet Mars and they should be lined up and shot. That's outside the window.
The range of things you can say in polite society is the Overton window. A social media company knows what you're looking at, and they know what you're ingesting, and they know what everybody else is ingesting.
Wouldn't it be great if they could show you: hey, look, you're kind of in the middle of the Overton window, or, you're way outside the norm, let me show you some things to bring you back toward what most people believe? Although a lot of people think they're smarter than other people, which is why... Sure.
But just the norm, just an awareness that my beliefs are in the minority: that awareness alone, I think, is enough for people to go, well, why are my beliefs way outside the norm? Let me investigate that. And it might lead some people to go back and look at it. So that's one thing. The other thing I would like to see is a return to something that journalism did in the past, which was very nice and which we've gotten away from. In the past, journalists would report the news, and in the last five or ten minutes they would editorialize, but that editorial was labeled as editorial.
I can't remember the last time I watched a news channel or read a paper where I didn't see editorializing in the news story. And that's problematic. I think people need to know when what they're reading is news and when what they're reading is opinion.
Speaker 12 01:00:51 Thank you so much. It was a great presentation. Thanks for all your answers, and please give a round of applause for these speakers.