Well, welcome to the panel: Hila Meller, Nick Thorne, Joni Brennan, and Pablo Breuer. Maybe you can each introduce yourselves in two lines. Joni, we already know you; I've seen your introduction. So, Hila?
Okay, you're next. What's your daily business?
Yeah, so hi, nice to see you all staying here. They're offering champagne outside and you're sitting here, so that's impressive. That's super cool. Hila Meller; I've been in the cyber industry for more than 25 years. I am the co-founder of Leading Cyber Ladies, a global movement promoting diversity in the cyber sector. And I'm currently the president and Chief Revenue Officer at Fingerprints, a global leader in the area of biometric authentication and verification solutions.
Hello, I'm Pablo Breuer. I am a 26-year veteran of the United States Navy, specializing in cyber warfare; I retired in 2020. I'm also the president of the DISARM Foundation, which wrote and promotes the DISARM framework for disinformation analysis and response, which has been accepted by the EU, the US government, and NATO as a way to tackle foreign information manipulation.
Hello, I am Nick Thorne. I'm old, is that right? I'm a former British ambassador to the UN in Geneva and spent many years dealing with many, many issues, but internet governance as a particular priority, and later worked with ICANN on similar issues. So I guess the reason I'm here is that I've got a bit of experience of international organizations trying to control things.
So we have a lot of knowledge here on stage. I was already speaking about MDM, mis-, dis-, and malinformation, with Joni, who just gave a presentation. Is there consensus around the definition of MDM?
Do we all agree on the same definitions, or is that something to have opinions on?
You wanna read?
I mean, I think the definitions are right and clear, but the reality is that on a daily basis the terminology is used differently; it's understood differently by the public. So I think there is probably a place for more education around the right terminology: how to use it, when to use it, and what the differences are. I really like the fact that you started with the definitions today, so we can all be on the same page. But we also know that people often do confuse the categories, and that is the reality.
Are there people who disagree with that?
I don't disagree. I think that we need to add one caveat, specifically for misinformation, which is unintended. And the reason I think we need that caveat is that as we progress with large language models and artificial intelligence and we get hallucinations, that is slightly different from a person misinterpreting information and having it become misinformation. So there probably needs to be another category, or a subcategory of misinformation, for AI-produced hallucinations.
I've got nothing to add to that.
Okay.
So we more or less agree on the definitions, with an extra addition for AI-generated hallucinations. Now, the world is in crisis: an energy crisis, a climate crisis, a hybrid war going on on the European continent, the dynamite in the Middle East. And technology is also sort of exploding with AI, and we're more and more dependent on it for operational technology. Decentralization is prominent, so there is fragmentation and everything becomes a lot more fine-grained; supply chains are becoming more connected: hyperconnectivity.
These are all things that make the world more complex and more difficult to guard. And then we have AI added on top, making misinformation and disinformation, even hallucination, even more difficult to deal with. So, to your understanding, do we underestimate the impact of misinformation, or is there enough awareness of it?
I think we underestimate it. I think we absolutely underestimate it. The internet was not built with an identity layer; that's part of why we're here.
It also has not widely deployed a verification layer across information at large, which is part of its beauty, but I also think the potential harms on a mass societal scale are underestimated. It has always been a problem with newspapers, books, television, but at internet scale, with AI, I think it's vastly underestimated.
So, yes or no: do we underestimate it? Hila?
For sure, yes. It's a very definite yes. And I can give a few examples, and also potentially some reasons why it is so complicated to address.
We've seen elections being heavily impacted, and it's super relevant right now: we have the UK elections coming, with tons of fake videos running on TikTok right now. And only now is TikTok starting to talk, for example, about how they're going to counter it, now that the damage is already done. They're now talking about how they want to add fact-checking or signal what content is AI-generated. But that's only happening now; some damage is already done, and some other elections already took place and were impacted.
We talk about the harm that can be done by misinformation and disinformation, and you showed some statistics about Covid. I can share that at my previous employer, BT, British Telecom, we had physical harm caused by that kind of information being spread about Covid. We had cell sites being burned: 70 cell sites in the UK were burned because of it, and 40 employees of BT were assaulted because of related misinformation and disinformation. Yeah.
So really high impact.
Nick, what is your opinion? Are we underestimating?
I'm not an unbeliever, but I'd make a couple of points. Number one, we are here to talk about AI-generated bad stuff. Now, bad stuff has been generated for centuries, right? I think the Greeks probably started it, but somebody can correct me on that. The important thing surely is for us to have more transparency in what is and what is not generated by artificial intelligence. In Britain at the moment, opinions seem to range from the hysterical on the one hand to the complacent on the other.
And what is lacking, forgive me for being rather tedious, is reliable and verifiable information. Please don't ask me how you get that. I've got some ideas, but I think it's more of a technical thing than I'm competent to suggest. I'm reluctant to get overly excited; I'm concerned, but I'm not overly excited. I'd like to see much more evidence that I can believe.
Yes. And Pablo.
So, yes to all of the above. I think we're grossly underestimating the damage that can be done with disinformation and misinformation, both in the information space and in the physical space. Information is one of the instruments of national power that historically was wielded by nation states to affect other nation states. It's now been democratized, so that anybody can reach a mass audience. What concerns me about AI in particular is not the capability of AI to create bad information.
What concerns me is that we have failed to learn the lessons that we should have learned throughout the last 40 years of cyber, which is we're still trying to figure out how to label bad information as opposed to labeling what is good information. If we could magically pass laws globally that required artificial intelligence to be labeled in such a way that it could not be altered, the adversaries, the bad people would still not do it. Yeah. They're disincentivized from following the law. That's what makes them bad.
So what we really should be looking at is: how do we get verifiable good information? How do we get data provenance?
Yeah. But we're already discussing the mitigations; first I would like to have some more view on the impact. Who would be suffering most from this? Nation states, or individuals, or...
It's just something I wanted to comment on: both of you basically mentioned that similar stuff has existed for years. You said bad stuff was created for centuries, and that we've had cyber issues for 40 years.
But I think what we see right now, and why we as a society find it so challenging to deal with, is the pace of change. Big changes have happened in societies before, but they took many, many years to spread and reach the entire society. And that allowed us to develop culture and rules, and allowed families to educate their kids on how to behave. What is happening right now is massive changes impacting society in no time.
And it doesn't give us time as a society to prepare and develop our mechanisms, our legends and myths and culture, around how to protect yourself. And that's part of the big challenge here with cyber, and with fake news, and everything.
So yeah, it's not just what is happening, but the pace with which it's happening. And law is always behind, regulation also.
Culture, everything. Everything.
Yeah. So that's the extra addition to the impact. And who is suffering the most, the worst impact?
Who are the victims?
Well, I think if society is suffering, we're all suffering. If we aren't able to sustain the basic commonly agreed principles that are required for a democratic approach, then we're all suffering. But I would say absolutely the most vulnerable suffer the most: the least informed, the people who don't have the ability, or haven't even been trained, even as children, to tell the difference. How do I research information? What paths should I follow to know if I should trust this or not?
So I think that, as is typical, the most vulnerable parts of the population suffer the most.
And Pablo, state-initiated attacks: how big is that?
Well, I think we need to examine that. I agree that the pace of change is becoming more rapid; that's always been the case. But when you look at information technology over the span of history, something fundamentally different happened with the internet. Back when we were writing on parchment and scrolls, literacy was limited. You needed a certain amount of time and money to be able to produce these scrolls, and those messages could only be carried so far.
Then you get the Gutenberg press, and now you can mass-produce documents so they can be carried further. With radio you can reach further; with television you can reach further and you have visuals. So every revolution up until the internet had a few gatekeepers who could transmit, say over television, and reach a broader audience. What happened with the internet is that we've now democratized the ability to reach a mass audience, which was formerly something that belonged only really to nation states.
And I think that causes immeasurable harm because we have no provenance, we have no guarantees that whoever's transmitting to a massive audience has our best interest in mind. Yeah.
So we need to mitigate all this risk, and it's a huge task and a vast lake of problems that we have to manage. If we look at mitigations, could law help in this case? Nick, what do you think?
It could, yeah. But I'm not sure you are going to get enough politicians interested in cooperating. With huge respect to what's happening in Canada...
And I listened to what you were saying, Joni, and yes, you are doing more than just about anybody I can think of, but one country is not enough. We're talking about the internet. So what we need, surely, is cooperation amongst countries, across frontiers. If we want to take the legal route, we need to come up with some laws. But that means you've got to demonstrate to the lawmakers that there is an issue which they can, in concrete terms, come up with a law to address. Yeah.
I think that's gonna move us on to the question of in which forum you can have this discussion, where I do have views. But I think we're hiding from the elephant in the room, which is that at the moment, people simply aren't cooperating. They're rather more tending to compete with each other when it comes to controlling AI, because one country, one group of companies, is so far ahead that others are wanting to catch up and to try and remove the advantage they've got. Now, that's the way the world works.
I'm not suggesting it's good, but I do suggest that it's something we need to address.
So what other mitigations do you think there are?
So from my point of view, law alone is clearly not enough, right? It's a complex problem; it needs complex layers of solution. But just taking the law perspective: law without law enforcement is worthless.
And we've seen that in cyberspace. Cyber crime is illegal, of course it's illegal, yet people do it. It's a very good profession to be in.
You know, you sit somewhere on a beach in a tropical country, probably with no extradition, and you make bank, right? It's very clearly illegal, but enforcing those laws and driving international collaboration is a very difficult task. And we are really, really lagging behind.
So yeah, for my part, law alone is not enough. And we have enough history of fighting cyber crime to see that it's a tough one to follow. Yeah.
'Cause it covers every area you can think of, of course, and it's not so transparent how to define it. Will the new technology, which is now working for the worst, for eternal evil, also be one of the magic wands that helps for the eternal good, to improve the situation? Do we expect technology to invent some magic wand?
No.
Would technology help us?
We can't technology our way out of this problem. We have to take a multidisciplinary approach to what we're doing with every new technology. There's a French philosopher, I forget his name, I'm sorry, but he said that with every new technology that you invent, when you invent the ship, you invent the shipwreck. Everything new that we invent has a potential harm that it invents. But we said, let's put in lighthouses; we put in lighthouses, and the number of shipwrecks went down vastly.
So technology will not be the full solution to this problem. It has to be a combination of regulatory measures, economic drivers and benefits, education and awareness. There's no singular fix; we can't technology our way out of this. We have to have a multidisciplinary approach.
Well, that's a generic thing that happens in many domains. Where are we now, and who should act? Because you have legislators, of course, for one part, technologists for the other part, and it should be multidisciplinary.
And who could take the lead in this?
So everyone is, everyone is the answer. Agreed. Yeah.
This is not a technology problem; this is a social problem that's just enabled by technology. So we need industry to put up guardrails and decide what is acceptable and what is not, until society can better educate itself and decide what is acceptable to it or not. We need governments and legislatures to pass laws to protect citizens. We need academics to do studies so that we better understand the problem of disinformation.
When you look across the spectrum of research and academia when it comes to disinformation, governments are far more interested in figuring out how to influence people than in how to keep them from being influenced.
So they are an actor, but from a different perspective.
That's right.
Yeah.
So if we look at the timing, where are we? Is it five to twelve, or is it one minute to twelve? Are we at the peak, or when will the peak arrive?
We're nowhere close to as bad as it's gonna be. We're having these issues now with static text and static pictures. Sometimes the facts presented are wrong; more often the facts are correct, but they're presented out of context so that you get a different interpretation. We're probably about three to five years from being able to create Hollywood-class deepfakes in audio and video.
And when we can no longer believe what we hear and what we see, that's when things are gonna get really problematic.
And for most people, they don't even need to be Hollywood class.
I mean, it can be very low quality. But there are certainly many benefits; there are going to be many benefits that come with AI. I believe that we, as an industry in the space of trying to prove the authenticity of information, have a very important role to play in deciding how close we are on that clock and where the benefits will be.
And Nick, what do you think? Are we at the peak, or...
Well, first of all, thank you for the five minutes to go, or five years to go.
That's good, 'cause I'm not qualified to make a judgment like that. I think the problem, forgive me if I keep talking about elephants, but...
Pink elephants,
We need cross frontier cooperation.
This is not a problem which can be resolved locally. And the question which needs to be asked is: who will cooperate, who will play, and who will not play? And there's a pretty clear correlation between those who will not play and those who are probably responsible for some of the more malicious stuff out there. I think it's this sort of difficult problem which needs to be addressed primarily by governments. And I would suggest, jumping forward a little bit, that that's not possible in any existing forum that I'm aware of.
And in circumstances like that in the past, what the international community has done is to bring together what's called a coalition of the willing. Now, I don't mean to invade Iraq, before somebody tells me, but you can get countries who share basically the same values and the same concerns to actually get together and work out what we do to at least put some lighthouses up. I love the lighthouse analogy, around the coastline, coming from Britain; it's a very good analogy. That would be my suggestion.
If we're going to do something, I don't think there's much point in going to any existing forum.
But if there's one thing that we have in common, it's that I do think the general public is fed up with thinking everything they see is fake. I think people are tired of that, and they want to know what's real. So I think there's hope in that signal of distrust, because people want to see information that they can trust. Yeah. And that's a good signal.
And that is a reason for hope: seven years ago, the only people talking about foreign information manipulation were in basements of government buildings, and now, at this conference, the average citizen is openly talking about it. We're here openly talking about the problem of disinformation. The first step to fixing a problem is recognizing there's a problem, and I think that's a good start.
So, we do underestimate it. We agree that there should be a multidisciplinary approach. The impact is devastating, especially for the most vulnerable, but also for society as a whole.
And we don't know if we are at the peak of the misery or if that's yet to come; we should act instantly, but that's very difficult on a global scale. That's sort of the summary.
Am I correct? That's very optimistic.
It's maybe a bit optimistic. So I would like all of you to make one final statement or sentence or opinion before we close the panel. Anyone, popcorn style: who wants to start?
I would just say, as we're speaking outward, whether it's to our parents or CIOs or leaders in government: I spoke with the president of a federal department who said, "I don't know how to explain this topic to my other government counterparts. Please give me the simplest language possible."
So let's use our technical language and our complicated language among ourselves as we talk. But when we're speaking to customers, adopters, citizens, let's be very plain and simple about what we say, and meet that audience where they are.
Right, thank you. Nick?
One small thought. There are organizations, companies, which are already making a lot of money out of AI. If what we seek is transparency and reality and evidence, if you like, then it seems to me that if there is pressure to be imposed, it should be on those companies which are making lots of money, to actually demonstrate that they're doing the right thing and others are doing the wrong thing.
Yeah.
I want to touch back on the item of education, and how we can educate society, but specifically on children. Critical thinking has become a life skill, a survival skill, in this world, and probably something that ministries of education need to start including very early on, from the early classes, to give this skill to our children. Because it is a survival skill right now.
A survival skill. Yeah. Yeah.
And Pablo, what is your last statement?
I'm gonna steal a line from other efforts: act locally and think globally. Influence the things you can influence, but consider how those things might be twisted on the outside. Earlier today we were at a round table, and it was suggested that we have positive identification for anybody who posts on the internet. My question back was: that's wonderful in Western society, but what do you do with dissidents in places like China and Iran? What do we do with whistleblowers if they're positively identified?
So affect what you can affect. But talk to groups outside of your own bubble, outside of your own tribe, and make sure that your solution isn't worse than the disease.
Thank you all for being in this panel. I'm closing the panel and we move on with the next speaker.