So, Max, thank you so much for joining me today for this interview. Your presentation was great, but there are still some concerns about privacy. You won big against Facebook, but the situation has changed a bit since then.
Yeah.
How do you see the future of privacy? Not just in Europe, but across the world?
Across the world is a big question. Generally, what we see is that the European system of the GDPR has been adopted in a lot of countries around the world, in different ways and variations, but the general structure seems to be somewhat similar. The big exception right now, at least among democratic countries, is obviously the US, which has a different approach. But even the US is squeezed in between Mexico, which has a similar law, and Canada to the north. So I think the system in general is going to prevail, at least for the time being, also because we need consistency across the world to even make compliance manageable in any way. That would be the interesting part globally. The bigger problem in Europe right now is that we do have all these laws, and they're all unified and so on, but we have a compliance issue: there is very little incentive to actually follow these laws, and very little consequence if you don't. So on the European side we have to ask ourselves whether we can just legislate further and further and further; if no one actually follows any of that, the whole point of legislation is not really there. That is where Europe has to live up to these promises a bit and actually deliver on people's phones, not just in law.
Well, now we come to a maybe controversial topic. As you probably know, Meta is using the data from their users to train their AI models. Now that everybody knows this, they can say, okay, I don't want it. Some influencers on the web said you can send an email to Meta saying, I don't want you to use my data, but Meta usually takes it more seriously when you mention the GDPR. What happens if you are not in this region? Sometimes the answer could be negative. How can we prevent that?
To a certain extent that’s democracy. In other countries the laws don't exist. It's a bit hard...
But even in Europe, you know, if you don't mention the GDPR, maybe you get a negative answer.
So the first problem in Europe is that Meta is basically saying it has a legitimate interest to take all your data for its own purposes. The Court of Justice has already said that that's not true, that they have to ask you for consent. So it's not that you have to run after them to say no; they have to run after you and ask for your consent. Meta is not doing that. We think that's illegal. I think that's just the next big breach of the GDPR that they have here. But we see that the regulator in Ireland has said okay. As always, the Irish regulator basically just sees itself as a consultancy firm for Meta, not as a regulator. So we would assume that this probably becomes the next big case, so to say, against Meta. Because the default should be that you can't just use other people's data and say, we use it for new technology. First of all, it's not even defined what this technology does. Is that going to be for self-driving cars, for a chatbot, for credit ranking or for, I don't know, killer drones? All of that can be AI; that's not defined. Secondly, the system, at least in Europe, is that you have to ask for consent. And last but not least, you also cannot say, I'm just going to use all the data for anything. You have to say, I need this specific piece of information, or at least that type of data, for that purpose, for example training an LLM. And all of that is reduced to, oh, we use all your data for technology. That is not really how anything works.
What you mentioned about the data is a very interesting topic. Do you think that synthetic data could be a solution to protect data privacy? It would still mimic the data they have from the user, so to a certain extent it's still connected.
Yeah. So, from a purely GDPR perspective, as a lawyer, once you cannot identify the individual anymore, you're out of the whole regulation anyway. So that is an option: that you anonymize the data properly.
Properly. That’s the word.
That's the word. Then you can do a lot of these things without having the personalized data. An intermediate step could be what they call pseudonymization, where the data may still be traceable to an individual person, but you at least don't slap the name and all the hardcore identifying stuff on it. For example, say you use health data. To a certain extent, on a regular basis you will be able to figure out from health data, let's say a CT scan, which person that is, because bodies are different, or whatever the reason is. So I think to a certain extent that will still be personal data, but you could de-identify everything around it. And [...] then say, for health purposes, for this specific thing. Then overall the balancing is right and we can do that. Last but not least, I think one option we also have to talk about more is the consent option, where you really say, okay, I'm happy with that. And you have to frame it well. The problem is that right now people oftentimes frame consent as, oh, I want this from you and there's no other option, and so on. If you frame it as, you can donate your data to actually help with the sickness that you're suffering from yourself, that could oftentimes convince people. If that's used for, I don't know, curing HIV, and I have HIV, probably a lot of people would even provide this very sensitive information and say, okay, guys, if you have a proposition that helps me in the future, or other people in the future, I'm happy. I think that is a way we have to reframe this consent narrative. Instead of having a company basically taking shit away from you and forcing you to agree, say, okay, what are the benefits for you as well? What are the benefits for society if you agree? But that takes a bit more effort than companies put in right now.
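To illustrate the pseudonymization step described above, here is a minimal sketch in Python. The keyed-hash approach, the field names, and the record layout are illustrative assumptions, not something prescribed by the GDPR or by the speaker:

```python
import hmac
import hashlib

# Secret key held separately from the data; without it, the pseudonyms
# cannot easily be linked back to the original identifiers.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and drop the rest.

    The record stays linkable across datasets (same person -> same
    pseudonym) but no longer carries the name or birthdate.
    """
    pseudonym = hmac.new(
        SECRET_KEY, record["name"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    return {
        "pseudonym": pseudonym,
        # Keep only the payload needed for the stated purpose,
        # e.g. health research on the scan itself.
        "scan_data": record["scan_data"],
    }

patient = {"name": "Jane Doe", "birthdate": "1980-01-01", "scan_data": "..."}
print(pseudonymize(patient))
```

As the answer above notes, data like this may still count as personal data where the payload itself (a CT scan, say) can identify someone; pseudonymization lowers the risk, but only proper anonymization takes you out of the regulation entirely.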
Yes. But you mentioned something very interesting in the session: sometimes when they give us options, the options are not really clear. People see, okay, you have to accept the cookies to read the article, so they accept them, and that's it, they're giving up their data. Do you think one of the problems could be that people don't really value their privacy much, or don't really understand what it means when companies use their data? Because Meta is not losing many users, even though they have had many data leaks on many occasions.
So, people have 10,000 other problems than privacy or security and all this other stuff. As people who work in a field, we all think the most important topic in life is the one thing we're working on, but there are 10,000 other topics as well. And I think we have to build a society, or build systems, where people do not have to worry about it all the time, but are protected by default. I usually say, you know, no one cares about hygiene rules or the technical specification of high-speed trains. They just hop on a train and think, okay, it runs 300 km an hour, and that's fine. And that is how we should also regulate the digital space. Not saying it's your duty, or you should really understand, or you're the bad person because you haven't really taken care of it. That's not how we would ever react after a train crash; we wouldn't say, oh, you should have really checked the train before you got on. That's not how anything works. And it's interesting, because in the digital space, or in this innovation space, suddenly that narrative is all over the place and quite accepted, which I fundamentally just don't accept. That is really where we have to rethink how we do regulation and who has the duty to actually comply with the law. And that is usually the person that creates a problem, or the person that has power over the problem or the issue. That has been true for 2,000 years, since Roman law, which I still had to study at university. If you create a problem, or if you have power over something, you are also liable for the consequences and for mitigating them, not the person that suffers the effect. That was true in Roman times: if your cow ran over to the pasture and damaged something on the other side, it's your cow, you're liable for the problem. And I think that's still true to a large extent for AI today.
There are many things, you know, that history has taught us, and we can see that they apply today, for example the Industrial Revolution. We cannot stop technology. The thing is, we have to adapt and maybe create the frameworks, and what they are doing with ethics and the EU AI Act is very helpful. My last question for you: some people say that privacy will become a luxury in the near future, because we are basically giving up our data for every app or website we use. How do you see this? Do you agree with this comment, and how do you think privacy activists, as you have been so far, will deal with it?
I mean, it's a fundamental right, so it can hardly be a luxury. It's like saying the right to live is a luxury, or the right to breathe is a luxury. So I think that, at least for the European Union, fundamentally cannot be true, because if it's a fundamental right, we would have to change it with all EU member states agreeing, which I don't see coming up. It has to be granted for everybody, somewhat. Where I saw that comment a bit is with the “Pay or Okay” approach that some of the newspapers started and that, for example, Meta then did as well, where it's basically: pay €10 a month or give us all your data. And that is quite alarming for us as well, because it gets you exactly into that situation where privacy is a luxury, only for the people that really care and really want to put money on it. If you do the math, Meta is basically charging 200 and something euros, 250 or so, per year for Instagram and Facebook together. But you have 36 apps on your phone on average, and if you have a family of four, we did a rough estimate, that's more than €35,000 per year that you would have to pay to still not agree to everything. Which is more than the European average income. So there is definitely a bit of a move in that direction, to say, we have the God-given right to take all your data, and if you don't want that, you have to pay us. And that's quite amazing, because you have a right to vote, you have a right to privacy, you have all these fundamental rights, and the narrative is now that you only get them if you pay for it. That is not how fundamental rights work.
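For the arithmetic behind that rough estimate, a minimal back-of-the-envelope sketch, assuming the roughly €250-per-year price applies per app (the figures are the ones quoted above, not official numbers):

```python
# Back-of-the-envelope check of the "Pay or Okay" estimate quoted above.
price_per_app = 250   # euros per year, roughly what Meta charges for its apps
apps_per_phone = 36   # average number of apps on a phone, per the interview
family_size = 4       # people in the household

total = price_per_app * apps_per_phone * family_size
print(f"~€{total:,} per year")  # ~€36,000, i.e. "more than €35,000"
```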
That’s the point: you get what you pay for. And in the end, it's not just Meta; for example, OpenAI is doing the same. If companies or organizations pay for a private service, then the provider cannot use their data. Max, thank you so much for your time. It was great meeting you. And, well, I hope I see you next year.
Yep. Happy to be there. Thank you so much. Thanks a lot.