First of all, good morning everyone. It's really nice to be here. Pablo and I flew in from Washington, DC a couple days ago. We're very excited to speak with you.
And I do wanna note that we're gonna be together for two hours, and we're very mindful of that. We don't wanna bore you, so this is gonna be very interactive. If you want to, you're more than welcome to move closer, but if you don't want to, you're also welcome to stay over there.
So I would like to start by giving you a brief intro about Pablo and me, because we're kind of a strange professional couple here. We've worked together very closely for a number of years. A little bit of background: we met through the Atlantic Council, where we were both senior fellows. The short version of it is, I had a background as a business strategist.
I was a US trade negotiator, and I had been following disinformation, misinformation, and cybersecurity issues in agriculture for about 20 years.
And I was really frustrated because no one was paying attention to these problems. So when I was at the Atlantic Council, I asked around. I said, I need someone who's a great cybersecurity expert, and everyone pointed me in the direction of this guy right over here. I was very excited to meet him. And then I said, I need someone who's an expert in misinformation and disinformation, and they also pointed me in the direction of Pablo. And ever since then, we've been working very closely together.
He worked with me creating AgriTech Action, because as agriculture gets increasingly more tech focused, so do the problems with misinformation, disinformation, and cybersecurity. So with that, you know a little bit about us.
I wanna take us really quickly through the agenda and what we plan to do today.
So, to kind of manage your expectations about what we will accomplish today: we're gonna present for about 20 minutes or so, and we're gonna talk about mis- and disinformation, define it, talk about the scope and scale, and explain what DISARM and DAD-CDM, the Defense Against Disinformation Common Data Model, are. Then we're gonna break for about 10 minutes and have Q&A, so we can level set. Then we want to create some breakout groups over here and talk about the various roles that folks have in countering mis- and disinformation.
Then we're gonna take a 10 minute coffee break, and then we're gonna have another breakout session. So you can see it's gonna be very interactive, because we really want your opinions and your thoughts.
It's really a pleasure for us to be around these illustrious professionals, and we want to hear what you have to say. Then we're going to have an open discussion and next steps: How does this affect your various industries? Who needs to be involved in these discussions? And what do you think are the next steps? And with that, I wanna turn this over to Pablo. Good morning.
So if we're gonna talk about misinformation and disinformation, it's really important that we all understand what those terms mean. A lot of people use them interchangeably.
They're not interchangeable; they're very different. Disinformation is information that is created, altered, or manipulated with a specific intent to deceive.
Now, what can be manipulated is either the content or the context surrounding the points that are given. But with disinformation, there is an intentional effort to deceive you. Misinformation is what happens when otherwise honest people either misinterpret information in such a way that it's now false, or they become infected with disinformation and now believe it to be true. So when they're passing along bad information, they believe it to be true. They don't intend to deceive you; they have been infected by disinformation.
So there's a spectrum here based on intent. Misinformation is when something is misunderstood: it's an unintentional mistake, or people are victims of disinformation. Then there's disinformation, where there's intent to deceive. And then there's malinformation. Malinformation is often what's referred to as doxing: it's when information that was supposed to be private is made public, and usually the context around those facts is changed to give a false impression of what actually happened.
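That spectrum is easy to state as a toy classifier keyed on the two distinctions just made: whether the content is false, and whether there is intent to deceive. A minimal Python sketch; the labels and rules simply restate the definitions above and are not any official taxonomy:

```python
from enum import Enum

class InfoHarm(Enum):
    MISINFORMATION = "false content, no intent to deceive"
    DISINFORMATION = "false or manipulated content, intent to deceive"
    MALINFORMATION = "genuine but private or decontextualized content, used to mislead"

def classify(content_is_false: bool, intends_to_deceive: bool) -> InfoHarm | None:
    """Toy classifier for the spectrum described above."""
    if intends_to_deceive:
        return InfoHarm.DISINFORMATION if content_is_false else InfoHarm.MALINFORMATION
    if content_is_false:
        # An honest mistake, or a victim of disinformation passing it on.
        return InfoHarm.MISINFORMATION
    return None  # ordinary, truthful information

# A victim of disinformation who shares it honestly is spreading misinformation:
print(classify(content_is_false=True, intends_to_deceive=False).name)
```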
So what's different about disinformation now? Disinformation has been around for a very long time. Historically, it belonged to nation states. The power to control the information space belonged to a nation state. And the ways that nation states affect one another are what are called the instruments of national power: diplomatic, informational, military, and economic. Historically, these were things that only nation states could do. What's happened in the last 25 or 30 years is that every one of these has been democratized.
So you no longer have to be a nation state to reach a mass audience; you can do that via social media. You no longer have to be a nation state to have a military; we have private military companies. Bitcoin isn't backed by any nation. So a lot of the protections that existed because these instruments could only be wielded by nation states are no longer there. And that's why we end up with some of the problems that we have.
I actually wanna ask you a question before we get into the slide. Can I see, by a show of hands, how many people here have been affected in your jobs by mis- and disinformation? So there are a bunch of you. How many people have not been affected? Your job has not been affected by mis- or disinformation, your industry has not? I want to talk to you and figure out what it is that you do; that's really interesting. So the World Economic Forum came out with its Global Risks Report in November of 2023.
And it went across sectors and asked: what are the biggest risks? And across the board, if you look at civil society, international organizations, academia, government, and the private sector, mis- or disinformation is number one or number two. That is, right now, the single biggest or second biggest threat to these industries over the next couple of years.
So one of the things that we hope you get out of this seminar is not just, hey, let's all share our stories, but really: how do we start tackling this? What's the first step?
What's the entry point to coming together as a community and figuring out what we do next? Some of the narratives that we've encountered, and I think some of these should be familiar to you, are mis- and disinformation around vaccines and vaccine safety, election security, food and agriculture, science and tech, cybersecurity, climate change, natural disasters, immigration, law enforcement, media, social justice, privacy, geopolitical disruption, financial markets and trade, and celebrities.
For every single one of these, you've seen major disinformation and misinformation narratives. And where you see food and agriculture: one of the things that Pablo and I have really assessed is which of these are not too much of a political hot potato to touch. We think that food and agriculture is the one area where, in the United States for example, whether you're a Democrat or a Republican, you can agree it's a threat. And I would say whether you're European or Chinese or Russian or American or Brazilian, this is also a concern.
So it's the one area where we feel we have a starting point to, again, level set and have a discussion around the severity of mis- and disinformation.
So what roles do we have for addressing mis- and disinformation? Disinformation is not a technical problem, it's not a legal problem, it's not a social problem. It is all of those problems, which means that we're gonna need people from various communities, academia, industry, and government to address it.
And the first thing we have to do, if we're gonna work with a group outside of our own tribe, is be able to speak the same language. We need the same taxonomy, we need the same ontology. We need to be able to share information and intelligence in a way that's easy to digest and easy to interpret regardless of what it is that we do in our daily lives. And then we need to be able to coordinate actions. There is no one silver-bullet solution for this. AI is not gonna solve this problem by itself.
Regulation is not gonna solve this problem by itself.
Education is not gonna solve this problem by itself. We really need an entire mosaic of solutions to address this problem in a substantive way. And that is one of the things that OASIS Open and DISARM are working on. Specifically, we're working on DAD-CDM, the Defense Against Disinformation Common Data Model, which is based on the STIX standard put out by OASIS Open and the DISARM framework put together by the DISARM Foundation, so that we can describe disinformation intelligence in an easily digestible format and share that intelligence across the world. So what is DISARM?
Please don't go blind trying to read this. If you want to see it, go to the DISARM Foundation site and click on the framework. But very loosely, DISARM has two frameworks. It's got a red framework that describes how attackers conduct disinformation attacks.
And then it's got a blue framework that describes actions that defenders can take to prevent the attackers from succeeding. Up at the top, the purple and red boxes are the different phases of the operation: plan and prepare happen before the attack goes live, and then execute and assess happen afterwards.
The next line down, the line of white boxes, is what we call a kill chain. Anybody here have military experience? No? Okay. In military verbiage, a kill chain is the specific set of actions which must be taken in order, and succeed in order, for an attack to come off. And just like a physical chain, if I can break any one of those links, the attack fails. So those white boxes are the specific steps in the kill chain, and the gray boxes below them are the tactics, techniques, and procedures that can be used to complete that link in the kill chain.
If you were to click on one of those, what you would see is a description of the tactic. In this case it's a red tactic called online polls. You would see a summary that defines what the tactic is. You would see which counters, that is, which blue or defender actions, could be taken against this tactic. And if we've seen it in a real-world attack, you will see which data sets we saw it in. On the defender side, the corresponding blue countermeasures framework is very similar. You have the same phases at the top: plan, prepare, execute, and assess.
And you've got the same kill chain below that, and then the blue tactics, techniques, and procedures that you could take. If you were to click on those, you would similarly get a description of what it is that blue could be doing, and which red actions it could counter.
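As a rough mental model of that structure (phases containing kill-chain steps, each with techniques that cross-reference the opposing framework), here is a minimal Python sketch. The field names are ours, and apart from "online polls" the values are placeholders rather than actual DISARM content:

```python
from dataclasses import dataclass, field

@dataclass
class Technique:
    tech_id: str                      # DISARM-style ID; placeholder here
    name: str
    summary: str
    counters: list[str] = field(default_factory=list)    # opposing-framework IDs
    seen_in_datasets: list[str] = field(default_factory=list)

@dataclass
class KillChainStep:
    name: str
    phase: str                        # "plan", "prepare", "execute", or "assess"
    techniques: list[Technique] = field(default_factory=list)

# The one red technique named in the talk, with placeholder identifiers:
step = KillChainStep(
    name="(a kill-chain step)",
    phase="execute",
    techniques=[
        Technique(
            tech_id="T-XXXX",         # look up the real ID on the DISARM site
            name="Online polls",
            summary="Use online polls to test and push a narrative.",
            counters=["C-XXXX"],      # placeholder blue countermeasure ID
        )
    ],
)
print(step.techniques[0].name)
```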
Now, something I wanna point out: for the sake of academic completeness, we put in some defenses that are not recommended, but we put 'em in there so that people would understand that yes, we did consider them. So you will see, if you look in there, that one of the tactics is censorship, and it will clearly state "not recommended," but it is there for academic completeness. The other thing to point out is that the DISARM framework does not tell you what is disinformation and what is not. We leave that to the analyst.
But if you see something that is disinformation, or if you're expecting that disinformation is going to happen, it tells you how you can plan to defend against it, either proactively by prebunking or reactively after the message comes out.
So how does this come together?
Well, OASIS Open was kind enough to develop a couple of standards for us. The first one is STIX, which is a way to define objects of various types in a standard language and put them out as JSON objects. TAXII, the Trusted Automated eXchange of Intelligence Information, is a protocol that you can use to send those STIX objects back and forth. And we are currently developing, just starting to develop, DAD-CDM, the Defense Against Disinformation Common Data Model.
That is how we're gonna take all of the objects in the DISARM framework and put them together in a standard format, so we can share threat intelligence back and forth. So who do we want to share threat intelligence with?
Well, so far, NATO, the EU, and the US government have all adopted the DISARM framework as a way to counter disinformation and foreign information manipulation.
And we're starting to see a desire to share that information. Government to government, there is a bilateral agreement right now between the EU and the United States. Industry to industry, companies are starting to think about how they want to share information within their industries. Then there's government sharing information with industry, and ideally, someday, industry sharing information with the government.
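To give a feel for what a shared piece of intelligence might look like, here is a minimal sketch of a STIX-2.1-style attack-pattern object referencing a DISARM technique, using only the Python standard library. DAD-CDM is still in development, so the exact object shapes are not settled, and the DISARM external ID below is a placeholder:

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# A STIX-2.1-style attack pattern describing a DISARM red technique.
attack_pattern = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": f"attack-pattern--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Online polls",
    "description": "Use online polls to test and amplify a narrative.",
    "external_references": [
        {"source_name": "DISARM", "external_id": "T-XXXX"}  # placeholder ID
    ],
}

print(json.dumps(attack_pattern, indent=2))
```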
Some things that we want to consider, though, are sharing limitations and classifications. So TLP is the Traffic Light Protocol: red, amber, green. Green means you can share this with the world. Amber means check with us before you share it outside of our group. And red means please keep this to yourself, because otherwise you're gonna put at risk some government resource that we're using to monitor.
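As a toy example, those handling rules could be gated in code before anything leaves your organization. A minimal sketch; the rule text restates what was just said, and the full TLP specification has more detail:

```python
from enum import Enum

class TLP(Enum):
    RED = "keep this to yourself"
    AMBER = "check with the source before sharing outside the group"
    GREEN = "may be shared with the world"

def may_release(marking: TLP, outside_group: bool, source_approved: bool = False) -> bool:
    """Gate outbound sharing on the TLP marking described above."""
    if marking is TLP.RED:
        return False
    if marking is TLP.AMBER:
        return (not outside_group) or source_approved
    return True  # TLP.GREEN

assert may_release(TLP.GREEN, outside_group=True)
assert not may_release(TLP.RED, outside_group=False)
```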
Okay? And we're gonna stop here and take a pause to see if we have any questions from the audience.
Yes, RJ? Hi.
Look, if the red framework is a kill chain, is the blue framework an implement chain?
Great question. No.
What the blue team does is break red's kill chain, right? Now, I do wanna point something out here, and I should have pointed it out before. Right now, the modality that we're using is: we wait for something bad to happen, and then we try to figure out how to solve it afterwards. But if you'll notice, more than half of this kill chain happens before we would ever see an attack. So there are a lot of things we could be doing proactively to make disinformation less sticky and less successful.
What's really left is for us to decide that we want to do it, and then to have a discussion amongst ourselves as to what are acceptable protections and what are not. Some of those things will be educational, some will be legal, and some will be technical. But we really should have those discussions, because we can't afford to wait until disinformation sticks and is implanted in its target audience; that makes it much more difficult to react.
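The break-any-link logic is simple enough to put in code. A minimal sketch, using a few of the step names from the walkthrough that follows:

```python
# The red kill chain: every link must succeed, in order, for the attack to work.
RED_KILL_CHAIN = [
    "plan objectives",
    "conduct target audience analysis",
    "develop narratives",
    "establish social assets",
    "deliver content",
    "persist in the information environment",
]

def attack_succeeds(broken_links: set[str]) -> bool:
    """Like a physical chain: breaking any single link defeats the attack."""
    return not any(step in broken_links for step in RED_KILL_CHAIN)

# Proactively disrupting one pre-attack link stops the whole operation:
print(attack_succeeds({"establish social assets"}))  # False
print(attack_succeeds(set()))                        # True
```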
Sir,
This is probably like the cybersecurity kill chain?
Absolutely. So the kill chain that most people are familiar with in cybersecurity is from the MITRE ATT&CK framework. I was actually one of the subject matter experts in the early two thousands that helped put that together. So when we started working on DISARM, we very much wanted to model it after that type of thing.
So the kill chain in this case, going from left to right: first you have to plan your strategy and objectives, you have to do some target audience analysis, develop narratives, develop content, establish social assets, establish legitimacy of those social assets, conduct microtargeting, and select channels and audiences. That one's particularly important: the channels that you choose to communicate your message are gonna vary wildly depending on who your target audience is. So for instance, if I want to reach the youth, what social media might I use? TikTok.
TikTok, right.
If I want to use Facebook, who am I gonna get hold of? Old people. Old people, right. I don't want to put out a message for young people on Facebook; it won't reach them. Different countries, and different populations within those countries, use different channels. In Latin America, South America, and India, WhatsApp is the prevalent social medium. In Africa it's WhatsApp and Facebook. In Europe and the US it's a lot of TikTok.
So that's one of those examples. Once you select those channels, you want to conduct some pump priming. Pump priming is how you do A/B testing, the way you might test out an advertisement, except that social media is built to help you test out advertising. All the tools in social media are there to help you test which message sticks more with your target audience.
So it's almost purpose-built for influence operations.
After you conduct that pump priming, you pre-stage your message, you start delivering your content, and you start to maximize exposure. Then you figure out what harms you want to deliver. Do you want to divide the populace? Dismay the populace? Deceive the populace? What are your operational objectives? You wanna drive that to offline activity, because you can't have real-world effect if people are only talking about it online. You want to use it to drive offline activity.
The way people vote, the way people feel, the way people protest and what they're protesting. Going beyond that, you wanna persist in the information environment. If you look at legitimate news on social media, what you see is that while the news is new, it peaks; once everybody's aware of it, it decays very quickly. What you see with disinformation is multiple peaks. And the reason you see multiple peaks is that the adversaries will put out the message, and as it starts to die off, they'll start a new round of messaging.
So you'll see these multiple peaks as they try to keep it relevant and in the information environment. And then the last step is to assess the effectiveness of the attack. And then you start the entire kill chain over.
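That single-peak versus multi-peak signature shows up even in a toy simulation. A minimal sketch, assuming attention decays exponentially and the adversary re-seeds the message periodically; all numbers are made up for illustration:

```python
# Toy model: attention decays day over day; disinformation gets re-seeded.
DECAY = 0.6  # fraction of attention retained each day (assumed)

def attention_series(seed_days: list[int], days: int = 30) -> list[float]:
    level, series = 0.0, []
    for day in range(days):
        if day in seed_days:
            level += 100.0        # a new round of messaging lands
        series.append(level)
        level *= DECAY
    return series

legit = attention_series(seed_days=[1])           # one story, one peak
disinfo = attention_series(seed_days=[1, 8, 15])  # re-seeded as it dies off

def count_peaks(s: list[float]) -> int:
    return sum(1 for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1])

print(count_peaks(legit), count_peaks(disinfo))  # 1 3
```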
Sir, does that help? Thank you.
Yeah, my pleasure. There was a similar question about the blue framework.
Can you...
Of course.
On the right side there's "infiltrate platforms," which feels more like a red tactic. Can you explain that a little bit more? It's on the white row toward the bottom, the platforms one, right over here.
Oh, got it. Yep.
Yeah. I'd have to click on it, but what that means is, well, "infiltrate" is probably not the best word to use; we wanted to start every tactic with an action word. "Infiltrate" on the blue side really means: can you partner with the platforms? Can you set up a desk that does disinformation defense? Can you set up some way to report disinformation? Many of the social media platforms already have a desk to do that. The problem with those desks is they're often overloaded, because they get everything from "somebody said something bad about me" to legitimate disinformation.
And so that's different. But you're right, it's probably not the best choice of words. Yes, ma'am.
A question on disinformation. With AI frameworks and algorithms driving disinformation, do you see regulatory movement on the US side? And secondly, there's also the part about people: what about inoculating people, making them better, more critical readers?
Those are two excellent questions. The first one was: do I see movement toward the US government taking some sort of action?
Yes, we've seen some movement. Again, they've just stood up a bilateral partnership to share disinformation threat intelligence between the US government and the EU. It's not gonna be fast enough, frankly.
I will go off script, and I'll distance myself a little bit from Danielle and OASIS Open on this one. Pablo's view is that the current economic model for social media is ethically unsustainable. The entire model is based upon the attention economy, and the best way to keep you on a platform is to either feed you something that enrages you or feed you something that helps you self-radicalize. Neither one of those things is healthy, regardless of age.
We spend a lot of time talking about what this does to teenage girls or teenage boys; it's not healthy for grownups either. On the education part: yes, absolutely, education is certainly part of it. There are a couple of challenges with education. The first is that education for school children and university students is fairly easy. What do we do with all the adults who aren't in school? How do we educate them? How do we reach that audience? And what is the message I need to reach somebody in their thirties, as opposed to somebody in their seventies?
How do I do that differently? The other challenge is that education is always gonna lag a little bit behind technology. The technology changes so quickly; how do we keep up?
There are countries that are doing this very well.
The Nordic countries have a tremendous educational system for countering disinformation. The Ukrainians were so good at this that the most popular show in Ukraine was a nightly TV show where they would show you the week's Russian disinformation narratives and make fun of them. It was brilliant. Most of the rest of us are lagging behind, so I think we have to learn from some of that. Education is certainly a part of it. Critical thinking is certainly a part of it. But it can't be the only thing.
In the US, one of the challenges that we have is our laws. Historically, we have looked at social media and internet companies as broadcast companies, which means that they're afforded certain protections. That decision was made in the 1990s, before most Americans got their news from the internet.
Again, Pablo's opinion: that needs to be revisited.
I wanna add a couple things to your question and to Pablo's response. AI and cyber and mis- and disinformation cannot be separated. So we're right now talking a lot with governments, particularly the US government, and we're actually writing a lot of policy around this right now, meeting with decision makers and helping them out. I will say this as someone who's former US government; my career was international, in policy, so I worked with a lot of governments. Pablo is former government too.
It's not gonna be government that fixes this. Government can play a role, but there are gonna be a number of different stakeholders that have to play a role. And the question becomes: you look at this, and it's a quagmire.
I mean, quite frankly, where do you start? Because anytime you start talking about any one of these subjects, people are gonna say, oh, you're censoring, or you don't like our political opinion or our opinion on vaccines.
If you start with any one law, you always have those who want to be able to continue their campaign of misinformation and disinformation fighting against you. So we've really tried to distill it down to where we think the entry point is.
And we do think it's gonna take a community, and I think it's gonna be industry working with industry and with government. And I think we need to start with food and agriculture. I just wanna give you a little story to lay this out; I was talking with a Senate subcommittee about this recently, about misinformation and disinformation. Imagine that you are a mother, you have a child, you live in Maryland, and you just read a story from California about two babies that died because they ingested infant formula.
Okay? Whether it's true or not, the first thought in your mind is: not my child. I can't do this, I can't possibly. And you panic, right?
I mean, remember the pandemic. I don't know how it was in Europe, but everyone cleared out all the toilet paper, for God's sake. So imagine you're panicking about that. And meanwhile, you're Abbott, a company that makes most of the infant formula in the United States, and you need to get out there and put the kibosh on this, right? You need to say: this is not true. But what has happened? You have now been subjected to a ransomware attack.
You have no access to your emails. You go to your social media platforms, and you find that your accounts have been taken down, because you've been swarmed by bots
that have figured out how to use the algorithm to take you down. And it's happened to me on social media before, quite frankly, because someone doesn't agree with what I have to say. And now imagine that the regulators are in the same boat, right? The FDA is no longer credible, because there have been a lot of mis- and disinformation stories about how they regulate, or don't regulate, infant formula.
So it's interesting, because I had this discussion a few months ago with this committee, and they stopped and said: we literally just stopped a nation-state attack that looked exactly like that, very recently. So these are really real stories, and you can't separate them. And if I haven't mentioned it yet, though I think you may already know: Pablo is actually the co-author of the DISARM framework, which is why he has mastered it so well. But are there any other questions?
So I guess one of my struggles here is that oftentimes, even in food and agriculture, which seems safer, there are still a lot of people with different perspectives, right? There's a whole group of people with valid concerns about Monsanto, and other people who are all in favor. And if you look at news: news has stopped being facts and become more opinions. In the US it doesn't matter which channel you watch; it's all opinion, not facts.
So how do we allow people to publish information that says "this is my experience" or "I have seen this to be true," in a way that doesn't get taken down by the FDA or whoever, since we're talking about food and agriculture, but also isn't claiming to be supported by a $12 million study? Look at herbal remedies and other things of that nature in the context of COVID: there were many cases where the FDA went and forced people to take stuff down, when in reality the remedies actually were helpful.
The people weren't claiming "this will solve your problem." They were saying, "I have seen this to be helpful in some cases." So there's a lot of nuance here, and I'm really worried about AI.
Because generally AI asserts a single model, and in this context, and maybe in other areas, it becomes an ethical thing, and people have multiple ethical models. So you're effectively asserting a particular ethical model for the entire population, if you do it from an AI perspective.
So I'll take that one.
The first thing is that I don't believe AI scanning information to see if it's true or not will ever work. I've got a separate talk on that; I can talk for an hour about it.
But the short version is this. When you build an AI to do fact checking, there are two predominant models: an open-world model and a closed-world model.
In the open-world model, you assume that everything is true until it's provably false. The first problem with that is that if something's false, it's gonna be a long time before you can prove it false, which means the target audience is assimilating, ingesting, and internalizing that misinformation or disinformation, and it's much harder to disabuse them of it later. The other way to do it is a closed-world model, in which case you assume everything is false until it's provably true.
And that doesn't work either, because sometimes we really need to get true information out very quickly.
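To make the two defaults concrete, here is a minimal sketch of the dilemma; UNKNOWN stands in for the long window before a claim can be proven either way:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "provably true"
    FALSE = "provably false"
    UNKNOWN = "not yet proven either way"

def open_world_publishes(v: Verdict) -> bool:
    """Assume true until provably false: lies circulate while unproven."""
    return v is not Verdict.FALSE

def closed_world_publishes(v: Verdict) -> bool:
    """Assume false until provably true: urgent truths are held back."""
    return v is Verdict.TRUE

claim = Verdict.UNKNOWN  # most claims sit here for a long time
print(open_world_publishes(claim))    # True: disinformation gets through
print(closed_world_publishes(claim))  # False: true warnings are delayed
```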
And at the scale and speed of the internet, we will never have enough computational power; that's problem number one. The second problem is that AI doesn't do well with either satire or comedy, and let's face it, there's a lot of that on the internet. So I don't think AI is a good solution for this. I also think using AI to detect disinformation is the wrong way to head, and here's why.
There are a lot of discussions right now about how you have to watermark things that an AI makes, so we know they're made by an AI. Here's the problem with that: the bad guys and the bad gals are not gonna follow the rules. That's what makes them bad guys and bad gals to begin with, right?
So I think that's a bad way to do it. A better way would be to provide data provenance. I am signing up and saying that I said this thing, and now you can decide whether Pablo is trustworthy or not.
And that's up to you, but I'm signing my name to it. So in the case of regulations, in the case of medicines, maybe the right way to do it is for Monsanto to say: if you wanna read the study on this pesticide, here's the link to the FDA with their report, their MSDS sheet, and their how-to guide. Now it's not just the company saying it, it's the regulators saying it. I recognize that not everybody trusts regulators, but now you've got multiple sources signing up and saying: my organization said this, and we think it's true.
And that allows you as an information consumer to decide: I at least know where this is coming from, and I can make an educated decision about whether I wanna trust that source. We need to label things that are good, not label things that are bad.
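Mechanically, "signing up to say I said this" can be as simple as a digital signature over the statement. A minimal sketch using Ed25519 from the widely used Python cryptography package; key distribution and identity vetting, the hard parts, are omitted:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author generates a keypair once and publishes the public key.
author_key = Ed25519PrivateKey.generate()
public_key = author_key.public_key()

statement = b"Pablo said: provenance beats watermarking."
signature = author_key.sign(statement)

# Any consumer holding the public key can check who signed the statement.
try:
    public_key.verify(signature, statement)
    print("Signature valid: you know who said it; trusting them is your call.")
except InvalidSignature:
    print("Statement was altered or was not signed by this author.")
```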
Yes, ma'am.
I just wanna add two words to that, just two words: provenance and intent. Those are the two things I think we really need to know.
Yeah. So my question is on verifiability. Are we talking about tracking whether something has been verified, who verified it, who is sharing it, and which organization funds it?
That is within STIX and TAXII. But not only that: you as an individual, or as an organization, can decide how much you want to trust a given source.
So you and I can both look at a source from the New York Times; you can decide that they're okay, and I can decide that they always tell the truth, and we'll each see it accordingly. But yes, it absolutely lets you track the provenance. It also lets you decide how much you wanna share outside of your organization, and who can subscribe to your organization. So, absolutely.
Other questions? Yes, you.
A question on the technical part. Normally you have some kind of indicators of compromise that you can load into your threat intelligence platform, or whatever platform.
And my experience is that if you use too many feeds, most of them provide the same thing anyway; it's often the same source behind them.
Yeah, it's the same thing as with CTI, absolutely. Right?
You have to know who your sources are, and the different platforms will deconflict. Right now DAD-CDM is not out; we're just starting to develop that standard.
However, there are implementations of DISARM objects in STIX that are built into OpenCTI and into MISP, the Malware Information Sharing Platform, if you want to play with them. Again, they're not standardized yet; we're partnering with OASIS Open to do that. But you're right: you have to be careful with your sources of information, and most of the software packages will do some deconfliction. But don't expect AI to solve this for you. This is still gonna require smart people and good analysis.
Do you have recommendations for disinformation feeds?
Yep.
Not yet, because they're still standing up. There are a couple of groups, depending on your subject. Certainly the Stanford Internet Observatory is very good. Obviously I'm a fan of the limited analysis we get to do with DISARM. OLAS has been pretty good. EEAS is very good. There are several out there. TeamT5 in Taiwan is fantastic for Chinese disinformation. But we don't have a disinformation ISAC, if you will, yet.
And so that's one of the things that we as a society need to discuss: do we want a separate disinformation ISAC, or do we want a disinformation desk in existing ISACs? I have my own feelings, but I have my own biases as well. And so that's something that we as a community really need to figure out.
Can we take one more question over here before we continue? This gentleman over here. Yes.
I heard about the C2PA work on content provenance, and I wondered whether that fits this framework.
I think it's a good step forward. I haven't seen the final standard; I don't know that it's been ratified.
I think it was still in development last time I looked. It's certainly possible. I think there are still a couple of challenges with it. And, I hate that I'm gonna say this, RJ, don't throw anything at me: this is actually one of the very few cases where I think blockchain will work. The reason is that one of the problems we have is that news aggregators will take news from legitimate news sources and aggregate it, but out of context, in ways that can be easily misinterpreted.
And there's no way for me to trace it back to the originating source once it's been aggregated. I don't know if C2PA does that, but I know blockchain will. The other thing we would get as a side benefit is that if you're a reporter and you post an article, every time somebody views it on a news aggregator, you would know how many clicks it got, even on the aggregator, so you could charge a fair price for it.
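The traceability he describes can be illustrated without a full blockchain; a simple hash chain captures the idea. A minimal sketch, assuming each republication appends a record committing to everything before it, so the originating source stays recoverable:

```python
import hashlib
import json

def add_link(chain: list[dict], actor: str, action: str) -> None:
    """Append a record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"actor": actor, "action": action, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

provenance: list[dict] = []
add_link(provenance, "original reporter", "published article")
add_link(provenance, "aggregator.example", "republished excerpt")

# Walking the chain backwards always ends at the originating source.
print(provenance[0]["actor"])  # original reporter
```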
Okay. We're gonna have a lot more opportunities for you to participate and ask more questions.
So, just to note, Pablo: you had what, about 8,000 different cases that you studied?
Yeah. When we first developed the DISARM framework,
the first version had a different name. It was called AMITT, the Adversarial Misinformation and Influence Tactics and Techniques. It was announced in 2019 at Black Hat USA. We started out by looking at about 2,500 confirmed mis- and disinformation incidents, and then we broke those down. We looked at advertising funnels and advertising models. We took a look at the psychological operations manuals for the US Department of Defense and for NATO, and we used those to analyze the disinformation incidents.
We down-selected from those to about 300 that had distinct tactics, techniques, and procedures, and that's how we started our categorization. We're now up to about 7,500 to 8,000 incidents that people have analyzed with the DISARM framework. We just released version 1.2 about 60 days ago.
And we're always looking to improve it. It is an open-source standard, so you can go out, download it, and talk to the DISARM team and say: I think this is wrong, I think it'd be better if you did X. And we're really looking to hand it over to OASIS Open, so that it can be maintained by committee and by the world at large.
You know, interestingly, and I'll get to you in two seconds, Sarge: I've been asked by the Department of Defense where I thought we would see an attack on the food supply. And my thought is: we are already under attack in the food supply, and we have been for quite some time. I've been watching it unfold. Let me explain how. If you ask, hey, could someone introduce a disease, for example soybean rust? The answer is you could, but the challenge is containing it; it would affect everyone.
For example, the United States produces a lot of feed that we export, so it would affect other nation states as well. But why would you have to do that, when you can put in place mis- and disinformation stories that change the entire economic model of food?
Look at the cost of food and how much it's gone up.
I was in Seattle recently, and I was shocked: I went to get a dozen eggs and it was $12. That's insane, right? So you look at that and you say, okay, by putting out these mis- and disinformation stories around food, what happens? You have additional certifications that you need. People say, "I only want organic," or "I was told I can't trust this brand, I need that brand," and so on and so forth. Or you see it translate into regulation.
And I'm a former regulator, so I have no problem with regulation, but I would also say there are some regulations that are pushed by public opinion that are actually not necessary and do economic harm.
So that's one way it affects the food supply.
And the second way is: everyone eats, right? When you don't trust your food, you don't trust the government, you don't trust civil society. That's the first level of defense.
So, I hate to say it, but I'm less concerned about the introduction of a particular disease that could spread everywhere, and more concerned about this, which has already been happening and transpiring for the past 20 years. That's why I say agriculture is a very easy example of something we should all be concerned about.
It's the bottom of Maslow's hierarchy, which is why it's such a sensitive subject, whereas things like election security and integrity are very challenging to tackle. So we're gonna do a little breakout right now, and I'm trying to figure out the best way to do it, because we don't wanna have groups of 20. We want to talk about, and this was based on a question I think we got from the audience: what is the role for government? What is the role for industry, for academia, and for society?
Each one of these plays a role in combating mis- and disinformation. So Pablo, do you have any suggestions for how we break this group up?
Oh, let's see. First three rows are group one, next three rows are group two. These two rows were pretty full, so they can be group three, and the last four rows can be group four.
Okay. Group one, I want you to huddle together and talk about what the role is for government. The first thing you need to do is elect someone to lead the discussion, and elect someone to take notes.
Group two: what is the role for industry? Group three: what is the role for academia? And group four: what is the role for society? We're gonna have a half hour total, which means roughly 10 to 15 minutes where you huddle together, and then we're gonna unpack all of that and have someone report out. RJ, one question? Sure. Sorry.
So, on content security: there are a couple of intelligence programs now in health security where media content is part of their pipeline.
So I'm wondering whether the framework, as it stands, can account for cases where the target of disinformation is actually not a person but an aggregation of content down the line.
Yes. Does that make sense? Yes.
And what that would look like, if you look at the DISARM framework, is under "develop narratives": there are multiple narratives, all trying to reach different target audiences, but the operational goal is still the same. It's to insert distrust in, say, agriculture, or poultry, or this particular product from this particular country.
We could generalize to say the target is an LLM, an agent being influenced through its training data?
Yes, you could definitely do that. I've actually advised on four doctoral dissertations in the last two years that all looked at how you influence the way an AI builds its model to give you the answers you want. So this is an ongoing research area. Thank you.
Okay, so 15 minutes to do group work, then we'll unpack a little bit, and then we'll take a 10 minute coffee break.
Yes. So that will be 35 past; I'll give you a two minute warning. Okay. Would you like to break into your groups? You can turn around.
If you want us to join you for any of these groups, we're happy to.
I just clicked off my mic.
We're gonna call you back up to get your fresh and wonderful ideas and pick your brains. This is a work in progress, so truly, we want to hear your thoughts.
No, no, no. Let me clear that up: if you haven't solved it, there will be homework.
Dang it, Pablo, don't spoil it for everyone. Okay. Is this working? Does this work? Okay. All right. Nice to meet you, group number one.
I'll wait a moment here for you. Who's your spokesperson? You can have other people speak; it doesn't have to be one person.
I'm gonna pass the microphone to group number one. Here you go. Thank you so much. We can talk about government, and I'm taking notes.
We had a rather vibrant discussion; it got impassioned. Thinking about the role of government, we started out with the obvious categories: regulation, enforcement, compliance. This can be a random walk, by the way, through what we talked about. We talked about the role of government also being to inform constituencies, not just to enforce truth. We talked about how different departments in government can play oppositional roles, and that might be a healthy check, or a healthy difference in incentives.
For example, in the economic area, the treasury might have an economic motivation versus other motivations.
Also, there was a really vibrant discussion on the role of government and its interaction with culture: you are going to have governments that say, "we are the only source of truth," and that might or might not be true, or might or might not be accepted by that culture or that society. So we left that as an open question. And as I said before, the philosophy of a government influences the role it plays.
Financial backing is a key indicator of intent and motivation; kind of a follow-the-money concept.
Then, toward the end, we started getting into two broad questions. One was free speech, in combination or connection with truth: if someone wanted to advocate, say, a flat-earth policy, do you shut them down or not? That led back into a discussion of intent and harms. And we left it thinking about how government interacts with real-world speech versus online or social media speech.
We also talked about the stakes being different when the actors are authenticated people rather than just random yahoos with a keyboard and an internet connection. So, a wide breadth of topics, but it was a good discussion.
See, easy. You've got it.
Oh, totally nailed it. Awesome.
Government is fixed.
Is it wrong that I'm happy that nobody else has solved this yet?
I feel like I haven't wasted my time. Do we have somebody from group two, industry?
Yes. Thanks.
So we talked a bit about the role of industry in terms of, oftentimes, the industry being the platform: Facebook, TikTok, et cetera. And the role of industry there is of course potentially to verify that people are who they say they are. For example, if somebody on TikTok says they're a doctor, it's important that people can easily verify that they actually are a doctor, and not someone who essentially bought their degree from a university in the middle of nowhere.
So obviously, industry needs to at least provide some way to verify these credentials. But that also raises the interesting point, as you brought up, that in some cases people on these platforms may not want to be verified, or may want to be anonymous; for example, if they're speaking up against policy in Iran or China, et cetera. How do you allow for that?
So there's potentially a concept of levels of trustworthiness in industry; who knows, it's difficult to say. But obviously there has to be a way to either monetize this or reduce costs, because industry won't care otherwise, right? Regulation may also be a factor there. But to really drive this, there has to be a way to monetize trust, or a way for trust to reduce costs.
It's difficult to say. Additionally, we discussed that industry will probably need a way to assess information. They already try to do that objectively, right? So potentially there are ways technology can make that an easier process that doesn't require teams of people doing fact checks in the background. Potentially some of these JSON formats could bake that in; difficult to say.
And then we talked about industry establishing trust across domains. Say you have an opinion, and you're able to get three or four different places to verify it, or at least say they agree with it. That kind of trust across domains would be a way industry could potentially combat misinformation and disinformation.
I'm fascinated by the concept you raised about monetization of that trust.
Yeah, it has to.
There's something in there.
For sure, for sure. Money makes the world go round.
No, I agree. I agree that it usually takes some incentive. I don't think this is a problem that we're gonna resolve, right? We were having a conversation about this earlier: it's not a problem that you can resolve, it's a problem that you can try to manage, to keep it from being completely weaponized. Right.
I will point out that I loved your take on it, because there are two ways to look at the monetization from the business perspective.
One way is to regulate it and then fine industry when they don't comply. A better way would be to find a business model where companies can make more money by being more trustworthy. And there are gonna be some elements of both, I think.
But I love that you took the positive spin on it.
Anything else?
Yeah. Excellent, thank you. So, academia, from the academia group. Excellent. We don't solve problems, we just write lots of papers.
Exactly. So we identified basically two roles. One is obviously to teach people how to think critically. That's a crucial part, I guess, and as early as possible, maybe already in undergraduate school, or at a young age: to really think about the information you're presented with and not just believe everything you read.
And that brings us to the second role, because academia could also do research on how to change the concept of social media and how the algorithms work. Maybe explore some options for authentication of specific texts, so that you can authenticate the source who wrote it and it is clear. That would be one option. And we also had another option; I don't know if somebody wants to speak about other changes.
Yeah.
The other one would be: you know, we created this kind of social media, and we could create a different kind, with different types of algorithms, and explore those algorithms. I think that would also be very interesting for academia.
Can I throw one more in? I wanna tie together government and academia, because academia does the research that gets grants, that gets funding. What I've seen from governments is a lot of grants for how to conduct influence operations. I haven't seen very many grants on how to protect people from disinformation and influence. I would love to see governments fund that kind of research.
Exactly. I think one small success example, which you might have alluded to, is in Taiwan: how they use social media to reach consensus.
People discuss very polarizing information, but the group eventually comes to some type of agreement; you're more likely to unite them than divide them. I think other social media platforms are starting to experiment with basically different ways the algorithm may work.
And if we put money, and monetization, into that, it will hopefully be a success story.
Yeah, I think that's it for now. Okay, thank you very much. And then from the last group. Oh, up front?
I must have misunderstood; I thought we were supposed to come to the front.
No, no, love it. Thank you.
So I'm representing society, and we had a very interesting discussion that I think we can narrow down to a couple of dimensions. Society is effectively a name that we give to a kind of culture, and a culture is really a set of shared beliefs and behaviors that tend to support the people who hold those beliefs. There are a lot of different cultures around the world, but they've all got this same problem. And what has happened over time is not that we didn't have disinformation before.
It's rather that the amplification and strength of it have increased with the amount of technology that we have. So the second important dimension is one to do with tolerance. At one end you have societies that are highly tolerant of behavior that is abnormal relative to that society's values, and at the other, ones that are very intolerant and impose lots of sanctions.
And wherever you sit on this spectrum, you don't get an ideal answer.
In a way, if you look at some of the cultures, we talked about how the notion of trust develops: at one extreme, we don't need to trust anything because we have laws and we have contracts; through to, we can't have contracts and laws, so we have to have other mechanisms. And those mechanisms usually tend to be expulsion. You are cast out from the group in some way or another, or there are mechanisms in society for identifying people who are behaving badly.
And then clearly, in terms of the dimension of tolerance: a society in the West, which tends to be highly individualistic, divides itself into a set of mini echo chambers that are extremely intolerant. In the middle you have the sort of Japanese type of consensus society, where everybody will talk for ages about what they're going to do.
But then everybody conforms, because again, expulsion is part of the society. Through to, shall we say, the very strong expulsion societies like North Korea, where complete obedience to the party line is all that's accepted. So I hope that helps. I'm not sure, but there we are. Thank you.
Thank you.
I was taking a lot of notes and giving a lot of thought to this, and I think very often about lessons learned from other things in life.
I recall that when I was working at the Environmental Protection Agency, one of the most successful programs we had was a voluntary program called Energy Star, where we actually moved the needle. It was completely voluntary; we were trying to tackle acid rain, and we did it very effectively.
And I'm wondering, as we're talking about society: if we had less social media, in fact if social media went away (and I'm not saying mis- and disinformation wouldn't exist; it existed way, way before social media), I would posit that the amount of mis- and disinformation would probably go down tremendously. Now, we can't, as government, say stop looking at social media, get rid of social media.
Even though right now, as I'm sure you're aware, we're looking to get Chinese divestment from TikTok, which is a very interesting step by the US government, and we'll see how successful they are. But just as a thought, I would love to hear from anyone over here.
How do we get people to use social media less? Or should we?
I don't know how many research studies have been done that compare how much of the information on social media is disinformation as opposed to truthful information, and how that compares to other information channels. Again, that's one of those academic studies I would love to see. Now, to my mind there's no argument about the biggest thing that happened with social media and the internet. Here's the fundamental change.
When you look at information technology, you start out with the quill and scroll: very few people could read and write, you could only make so many copies of a message because they were handwritten, and you could only physically carry them so far. Then you get the Gutenberg press. Now you can mass-produce written work, but you have to have a fair amount of money to produce these books.
I'll point out that this is the first time the technology bit the funder: when the Catholic Church helped fund the Gutenberg press and put out the Gutenberg Bible, they never foresaw that Martin Luther would mass-print his 95 theses.
Then you go to radio, and now you can reach an even broader audience. But that's when you start to see regulation limiting the transmission power an individual citizen could have, as opposed to a licensed radio station. Same thing with television.
Up until really the early nineties, if you wanted to reach a mass audience, you needed to be a head of state or a head of church; you couldn't just go down to your television station. But when we get to the internet, we see the reverse. Now everybody has an opportunity to reach a massive audience. We've democratized that. And I don't think society was ready; I know the laws weren't ready for what that meant. We wanted to give everybody a voice, but we didn't realize we were giving everybody a microphone.
Ma'am,
One last comment on that point. Do we think we have a bigger disinformation issue now because everyone has access? We were apparently fine with the fact that only a few people and roles had the chance to affect and influence people. I wouldn't say they were necessarily the correct people to be talking to the masses; did we have specific issues because specific people had reach to the masses?
So I would like to reverse that, and turn it into a statement: it's not a problem of how many people have access to the audience, but of what kind of people have access to the audience.
No, absolutely. Before social media, there were certainly people who had influence and used it to do horrific things, horrific things to humanity. So that's an absolutely valid point.
And I apologize if I suggested otherwise. I think it's a combination of those: who should have access, how are those people vetted, and then how do we hold them accountable?
In the US, when we talk about freedom of speech, the proverbial argument is: well, you can't just yell fire in a crowded theater. And you can, but everybody knows who yelled fire, and you're gonna be held accountable. How does that work on the internet?
My bigger concern, and the one that has me truly frightened, is this: if we're having these kinds of problems with still pictures and static text, what's it gonna look like in three years, when I can make Hollywood-style deepfake videos on my cell phone, and I can no longer believe not just what I read, but what I see or what I hear? I think we need to start thinking about those things now, before they arrive. Right.
I just think there's another lens to that question about whether people should stop using social media.
I'd say yes; that's my own fight, I have it every night, over how much to use it. But the other lens is when you have de facto government representatives who are using social media, and let's just say Twitter: they seem to love using social media to put messages out. They're also measuring sentiment there, and there's no verification. Are these really people? Do these people really live in Canada?
That's right.
So that is very dangerous, which is another window to how dangerous is it to society?
How dangerous is what's being projected back to the policymakers, and what is or isn't verified?
That's the intersection. This is one of the challenges we have with rulemaking, for example in the United States under our administrative procedures, where it's an open process. We want to be transparent as a democracy, so we accept comments. But when you get a hundred thousand comments and you don't know who they're coming from...
It's an identity problem.
Right?
It's an AI problem too. Right. It's all of those.
And one of the groups did touch on this when they suggested that maybe there should be levels of trust, right? Think about when you go to a website: anybody can apply to get a TLS certificate, but the one for Amazon requires a lot more verification than the one I might use for my home movie server.
Yes, they're both verified to some level, but it's a different level of trust. So I think there may be something to that, and it's not only the identity of the person but also the identity of the channel; I think it's the combination.
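A minimal sketch of that certificate-tier idea, in Python, assuming nothing beyond the standard library and an example hostname: it fetches a site's TLS certificate and checks whether the subject carries a vetted organization name, which domain-validated (DV) certificates typically lack and organization- or extended-validation (OV/EV) certificates include.

```python
import socket
import ssl

def peer_cert(host: str, port: int = 443) -> dict:
    """Fetch a server's TLS certificate after normal validation."""
    ctx = ssl.create_default_context()  # validate against the system trust store
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Example host only; any public HTTPS site works.
cert = peer_cert("www.amazon.com")
subject = dict(pair[0] for pair in cert["subject"])
# DV certificates usually carry only a commonName; OV/EV certificates
# also carry a vetted organizationName, reflecting stronger verification.
print("Common name: ", subject.get("commonName"))
print("Organization:", subject.get("organizationName", "<absent: likely DV>"))
```

The same pattern, identity attested at different strengths by a third party, is what the discussion above imagines for people and channels.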
Can we go to one more question, then take a quick coffee break, and then come back for one more quick breakout session to get your comments in? This gentleman?
Speaker 14 01:19:44 I don't think you have any chance of reducing social media consumption. That's like Prohibition.
We're gonna have to find something that gives a better dopamine hit for less effort than social media, right? That's what we're gonna have to do.
Like exercise, or...
Well,
Speaker 14 01:20:01 I mean, you can't get away from that. And I love your point about not knowing where that sentiment signal comes from, the signal that becomes an influence, right? Because somebody puts out a message to influence, and now they're being influenced from some other source, right? And they're gonna make a policy decision on it. So I don't think you're gonna get people to use it less.
I do like this sort of blended approach, right? There are all these things you have to do: education, the culture.
I think it's gonna be that, and then you're gonna have to become tolerant of the fact that this is gonna get abused.
Yeah, absolutely. And to address the dopamine hit, every citizen should get a box of chocolates and an Abs of Steel video. Okay, let's take a quick ten-minute break, and then we're gonna talk about some of the limitations, and then that's it. These are great comments; I'm actually taking copious notes. Okay.
So, five past, please. All right.
Okay, we're in the home stretch. We only have 15 minutes, so we'll go through this really quickly and ask you to reconvene in a minute.
Okay. So last session we talked about what government, industry, society, and academia should do. Now we're gonna talk about the things they shouldn't do, because there are many more of those. When we talk about things like regulation, every nation state has an existing body of laws and regulations that it can't just throw out. We have to somehow make it balance.
We have to make it fit into the existing construct or change the construct. But we can't just throw it out. We have to balance things like individual rights versus national security.
Again, group two very wisely pointed out that maybe we don't want provenance for all posts, because we want dissidents under dystopian governments to be able to speak freely and anonymously, and we want whistleblowers to be able to report when governments are misbehaving. There are also differences in laws and regulations between nation states and subnational states.
National laws may or may not agree with European union laws.
We all get to live that kind of glorious argument. Same thing in the US:
you know, the different states have state laws, and then we have federal laws, and they don't always agree. We have to figure out how to make that faster, because technology advances much faster than our legal frameworks do, and it's always going to be that way. In most Western governments, we try not to predict where the problems are; we wait for the train wreck and then go, ah, we should probably have a regulation there. We're very rapidly approaching the point where that may no longer be acceptable. We may have to put in some guardrails ahead of time, and we should have those discussions.
And then, do we have to have different standards for public versus private consumption? Lastly, think about how things we do in Western governments could be twisted and abused by other governments. If we moved to a provenance model where we have a positive identity for everybody who posts anywhere, countries like Iran will say: see, you're doing the same thing we're doing; we're a democracy just like you. I don't think that's where we want to go. So what we're gonna do is,
Well, let me ask, please; we have about 10 minutes. Yep.
Do you prefer to break up into groups and report back, or is it better to just kind of...
Shout it out.
Shout it out. What do you prefer? What's the question? The question is: what are the limitations? Those same groups, government, industry, society, academia, what can't they do? What shouldn't they do?
Shout out. Shout out. Shout.
Shout it out. Shout it out. Okay. All right. You don't have to stay within your guardrails.
All right, let's start with government; this should be easy. What shouldn't governments do? They shouldn't spread disinformation. I like that. Internally, externally, or both? Preferably both. Preferably both. Okay, just making sure we have it there. So: government shouldn't spread disinformation. What else?
Speaker 14 01:24:24 It shouldn't interfere with your fundamental rights.
Government shouldn't interfere with our fundamental rights.
Let's do this by a raise of hands: how many of you feel comfortable with the government deciding what's true? Anyone? One. Excellent. We all have a healthy distrust of our governments. I love it.
Or deciding what's false?
And Mike says that if he's in power, he will trust the government to tell us what's true.
Speaker 14 01:24:54 'Cause truth is truth. You just don't have a choice about whether it's true or not; a fact is a fact. I think you're getting confused with opinions.
I will tell you that, as a scientist, there is no truth with a capital T. There are only facts in the current context as we know them. Yeah. Right? Yes.
It's really hell being a scientist talking to non-scientists about how there's no truth with a capital T. But maybe that's something we should teach broadly in schools. What else should governments not do?
Governments shouldn't pick... what's that?
Speaker 14 01:25:31 Pick their favorite technologies.
Pick their favorite technologies. What do you mean by that?
Speaker 14 01:25:35 Pick them for everyone else to use.
They shouldn't pick their favorite technologies for everyone else to use.
I guess my question would be: if governments can't choose and they can't collaborate, what do we do about the global community? What if a government can't even say, we're going to use ISO standards?
Speaker 14 01:25:55 I think, Joe, my understanding is more that government shouldn't be...
Ah, right. Shouldn't be super prescriptive.
Yeah, I love it. So there's a difference between favoring a standard and dictating a product. That's right.
Declarative versus imperative.
Yes, declarative versus imperative. Absolutely. What about industry? What should industry not do?
Lie. Sir?
About hyper-specific targeting: there are laws saying you can't do it for kids. Why not for adults?
It's so close to stalking, and I don't know if it's...
So it's both: industry does it because government hasn't told them not to.
I'm gonna say something bombastic. We used to buy and sell human beings, and we decided that was immoral because it was slavery. Then we used to send young children off to work in factories, and we decided that was not okay because it was child slavery. What I would say is that the current economic model for social media is cognitive slavery. You get a free product because you are the product: they are buying and selling your cognition to advertisers.
And if you don't think that's scary, take a look at what's going on with things like Elon Musk implanting chips in brains, and with augmented and virtual reality, where I can measure your pulse, pupil dilation, and eye movements. It's really dystopian. Should we be paid for our own data?
No,
No, no.
I think that industry and government and, really, society are gonna have to demand better. So I absolutely agree with you that hyper-targeting should be better controlled and limited.
That's how they should be limited, but it's not a limitation they operate under now.
That's right. Not unless somebody makes them.
Well, yeah. So they're not currently limited, but it's a limitation we should impose on them. Sir?
One thing industry should not do is profit from disinformation.
Ah, industry should not profit from disinformation.
Can I give you the back half of that? Yes. There's a current, I'm not gonna call it a scandal, but a lively discussion, because in the United States intelligence agencies are not allowed to collect information on their own citizens. So what they've been doing is buying that data from social media companies. I would suggest that should also be illegal, right?
If you're not allowed to collect it on your own as an intelligence agency, you shouldn't be allowed to buy it. And I say that as somebody who worked at an intelligence agency for years. I wish I had had the level of data on my targets, back when I did this for a living, that Facebook can give me. I wish I had had that level of intrusive micro-targeting.
But yes, they shouldn't be able to do it. I think GDPR is a fantastic first step, but I think we've got a long road march to make things better.
Sir?
Speaker 14 01:29:20 What I was gonna suggest is this: we've mentioned social media, and certain parts of social media say, well, we're not regulated, so don't blame us. But actually, they should take responsibility for their actions and for their lack of action. They can't avoid culpability just because they're not regulated. Okay, our morals and ethics change over time.
It's truly their problem that they haven't kept up to speed with where society is. Just because you're a techie and you can imagine doing something doesn't absolve you; a lack of regulation surely can't absolve you when you might be doing harm to people. So they can't avoid accountability for their actions.
So the comment, for those in the back, is that the lack of regulation shouldn't absolve social media companies of culpability, and I agree with that. I will give the other side of that coin, though: if you put out regulations, then companies, by their very nature, will do the bare minimum required to meet the regulation, and then they will consider themselves absolved of any culpability, because we did exactly what the government told us to do.
So this is a place where, yes, government could help us with regulation, and industry could help by self-regulating, but what we really need is society letting government and industry know what is and is not acceptable. Pablo?
You said earlier that GDPR is a good first step. What would be a good second step?
What would be a good second step? I think it would be not just knowing what is collected about you, but giving permission for how it can be used, being able to track where that information goes, and being able to pull it back.
I think that would be a start. Sir?
Those are two?
Yep.
So again, the comment, and I'll see if I can simplify it a little, is that industry has a long history of, let's call it, disinforming the public that the data collected on them is really for their benefit, and convincing us of that. Yes, absolutely, that is the case. I think one of the issues is that, even with GDPR, if you go to a website and register, how many of you read the end-user license agreement? They're 40, 60, 80, a hundred pages long, and even if you're a lawyer, that's not a fun read.
I think one of the things that should be required is something very simple, one page or less: here's what we collect, here's what we track, here's what we do with it, and most importantly, here's how you can ask us to remove your data or pull it back. Sir?
I want to take that one step further.
Yes.
And basically you, the company, are making a statement, possibly in machine-readable form, that this is what we do. So there's liability, and a provable policy, right? Yep.
And if it's machine readable, then you can start building tools for humans, right? Tools that can evaluate what a person is comfortable with, and do that evaluation as they navigate the internet. Many of the technology privacy measures we're dealing with today address this at the wrong layer, and this, I think, is the more direct place to address it.
So the comment, for those of you in the back, was that the assurance needs to be not only human readable but also machine readable and verifiable, so that companies can be held accountable if the contract or the assurances are broken. Yes, sir?
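As a sketch of that machine-readable idea, with entirely invented field names (no existing standard is implied), a published policy plus a user-side evaluator might look like this; note that the user's conditions are stated once, up front, which also anticipates the suggestion that follows.

```python
import json

# Hypothetical machine-readable policy a site might publish; every field
# name here is invented for illustration, not part of any real standard.
policy_json = """
{
  "collects": ["email", "browsing_history"],
  "shares_with_third_parties": true,
  "retention_days": 365,
  "supports_deletion_requests": true
}
"""

# The user's conditions, stated once in their browser or agent.
my_conditions = {
    "allow_third_party_sharing": False,
    "max_retention_days": 90,
    "require_deletion_support": True,
}

def violations(policy, conditions):
    """List the ways a published policy breaks the user's conditions."""
    problems = []
    if policy["shares_with_third_parties"] and not conditions["allow_third_party_sharing"]:
        problems.append("shares data with third parties")
    if policy["retention_days"] > conditions["max_retention_days"]:
        problems.append(f"retains data for {policy['retention_days']} days")
    if conditions["require_deletion_support"] and not policy["supports_deletion_requests"]:
        problems.append("does not support deletion requests")
    return problems

issues = violations(json.loads(policy_json), my_conditions)
print("OK to proceed" if not issues else "Blocked: " + "; ".join(issues))
```

The design point is that the policy becomes data a tool can check automatically, rather than prose a person has to wade through.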
Speaker 14 01:34:24 I'm thinking that right now we're always the ones who have to answer those questions and click the agreements, always the ones being asked to click. Could we reverse the request? So I would say: these are my conditions if you want to collect my data, and you agree or you don't.
Yeah.
So it's interesting. The comment was that right now we're the ones who have to agree to the conditions. What if we reversed it, so that we told the company what our conditions are for our data? And I love that idea.
One of the ideas I've heard put out there, which is great, is that you get to classify your data at certain levels: I'm okay with you taking this data until I call it back; I'm okay with you taking this data for this specific purpose, for this amount of time; and I'm not okay with you touching this data at all. Having that level of control, limiting not only who your data goes to but how it can be used, is, I think, brilliant. I think there are a lot of technical hurdles to overcome.
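One minimal way to model those per-item grants, with categories and field names invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum
from typing import Optional

class Grant(Enum):
    UNTIL_REVOKED = "until_revoked"      # use freely until I call it back
    PURPOSE_LIMITED = "purpose_limited"  # one purpose, for a fixed time
    FORBIDDEN = "forbidden"              # do not touch at all

@dataclass
class DataGrant:
    item: str
    grant: Grant
    purpose: Optional[str] = None
    expires: Optional[date] = None

    def allows(self, purpose: str, on: date) -> bool:
        """Check whether a given use of this item is permitted on a date."""
        if self.grant is Grant.FORBIDDEN:
            return False
        if self.grant is Grant.UNTIL_REVOKED:
            return True
        return purpose == self.purpose and (self.expires is None or on <= self.expires)

grants = [
    DataGrant("email", Grant.UNTIL_REVOKED),
    DataGrant("location", Grant.PURPOSE_LIMITED, purpose="delivery",
              expires=date.today() + timedelta(days=30)),
    DataGrant("browsing_history", Grant.FORBIDDEN),
]

for g in grants:
    print(f"{g.item}: usable for ads today? {g.allows('ads', date.today())}")
```

Revocation here would just flip a grant to FORBIDDEN; the hard hurdle, as the discussion notes, is enforcing any of this once the data has left your hands.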
We're almost at the end of our time.
Can we take two quick ones, and then we want to make a really quick introduction of Claudia?
Yep.
Do you want to go? Right over here, and then in the back.
Yeah, I was just gonna say that it complicates the role, because I don't trust people with choices, even informed ones, right? People would sell their kidneys if nothing stopped them, and that's not good for them. So there's a role for government to play. You asked about data, right? So that's why.
No, absolutely right.
So again, there's no one solution; it's gonna require a patchwork, and that's why we split things up this way. Sir, you had a question or a comment here?
[Inaudible comment about whether the general public cares how their data is used.]
Okay.
So the comment was that we're here, we're educated, and we care, but the general populace doesn't care. What I would suggest is that that's a really great place for education. I think if most people realized how their data can be used to uniquely track them, they'd be horrified. If you've not read Data and Goliath by Bruce Schneier, that's a fantastic read. I don't necessarily agree with everything he says in it, but if that book doesn't scare you, I don't know what will. Last one, sir.
Speaker 14 01:37:15 I think that, as a group, we're probably not aware of all the different ways people can be harmed by technology, and so people are not careful with the way they use it. You mentioned education; there are a lot more resources now, from many non-governmental organizations and from the United Nations, about the amount of harm. When I started reading about it, I found ways of being harmed that I had never considered. You think within your own frame of reference, but there are lots of other ways out there that you can be harmed.
Beyond regulation.
I'll say this: I very recently saw a documentary about kids who committed suicide as a result of TikTok videos. Investigators went to their feeds, and one video after another was about despair and throwing yourself in front of a train. What you're gonna see, I think, are lawsuits, and I think they're gonna be successful, quite frankly, against some of these companies, purveyors that are targeting children in particular, and vulnerable people in our society, with this kind of content.
And I think that's gonna be one of the ways we tackle this. Can we introduce Claudia?
Claudia, can you come up, please?
Yeah. Really quickly before we finish: first of all, thank you very, very much for two hours with us.
We really, really appreciate it. I'm losing my, sorry, my microphone is failing me here.
So, yes. Do you have a microphone? We'll get you one. Oh.
Thank you.
Speaker 15 01:39:19 Sorry, I didn't really prepare anything; I just wanted to say hi. If you're interested in contributing to the DAD-CDM open project at OASIS Open, please come and talk to me. I work for OASIS Open as a program manager, and I work with our open projects, which are open-source projects. If you don't know OASIS Open, we are a standards organization, globally active, a membership consortium, so anyone can join. That goes for DAD-CDM as well.
It's an open-source project. So please, I'll be here today and also on Friday; come and talk to me if you're interested.
Thanks. I'll turn off my microphone.
Thank you, everyone, for joining us. I really appreciate how you all dove in, asked hard questions, and spoke plainly. This is not a problem that's gonna go away overnight; it's gonna take all of us. So I would encourage you to keep asking hard questions and to keep looking at this problem.
If I can ever be of any help whatsoever, please feel free to write me an email or reach out to me on, dare I say it, social media. Thank you very much.
Thanks so much guys.