Well, next up we have a panel discussion, and while the panel members are taking their seats, let me just tell you that we're sticking with privacy, but looking forward: more specifically, how privacy legislation will need to evolve in the age of AI. And here to share his insights and drive the conversation, please welcome fellow KuppingerCole Analyst, Dr. Scott David.
Thank you.
Thank you very much. What an exciting time we're in, and what a great time to have this specific conference and our track, which is going to extend over the next several days.
We'll be talking about a number of issues around AI, putting AI in a bunch of different contexts. We're going to start off here with this panel today, talking about privacy and some of the issues that are being aggravated, or maybe made easier to deal with, through AI. So we have a wonderful panel. You've already met Max, of course. We also have Kai Zenner, who is head of office for MEP Axel Voss, and Lynn Parker Dupree, partner at Finnegan Henderson and a former Chief Privacy Officer of the Department of Homeland Security.
I'll suggest that you go to their bios for any further detail, because we have limited time for our session, so we'll spend our time on the discussion. The first question I wanted to ask, and I'll direct this to you, Lynn: what are some big privacy and identity issues that you've encountered, at a high level, and do you think AI is going to make those more challenging or less challenging? What are your perspectives on that?
I think with respect to AI in particular, one of the biggest challenges relates to sort of data elements and re-identification.
I think when we, from a privacy professional perspective, when we're thinking about artificial intelligence, it opens up the possibility for drawing insights about individuals in ways that we probably haven't contemplated before. And so when we're thinking about the constructs of how we think about data, we think about sensitive data and non-sensitive data, but all of it is personal information. I think we're really going to have to shift that paradigm away from treating one data element as sensitive as opposed to another.
I think with AI, all data could be potentially sensitive and potentially revealing in ways that we haven't thought of before. And so we'll need to shift away from that model where, you know, sensitive data elements get specific protections or extra protections or extra notice, and really think about making privacy more contextual and more operational, so that in a particular circumstance your data is sensitive. My home address is maybe not particularly sensitive, but the address of a domestic violence shelter is.
And so thinking about data in context, I think, might be a new change coming out of this.
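To make that contextual-sensitivity point concrete, here is a minimal sketch of classifying the same data element differently depending on context. Everything in it, the categories, the rules, the labels, is invented for illustration and is not drawn from any statute or from the panel itself.

```python
# Hypothetical sketch: sensitivity as a property of (data element, context),
# not of the data element alone. Categories and rules are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataElement:
    kind: str   # e.g. "address", "name"
    value: str

def sensitivity(element: DataElement, context: dict) -> str:
    """Classify sensitivity based on context, not just the element type."""
    if element.kind == "address":
        # A home address in a retail-shipping context is routine...
        if context.get("subject_category") == "dv_shelter_resident":
            return "high"   # ...but the same field can identify a shelter.
        if context.get("purpose") == "shipping":
            return "low"
    # Default: treat unknown combinations cautiously.
    return "medium"

# Usage: identical data element, different contexts, different outcomes.
addr = DataElement("address", "12 Example Street")
print(sensitivity(addr, {"purpose": "shipping"}))                      # low
print(sensitivity(addr, {"subject_category": "dv_shelter_resident"}))  # high
```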
That's interesting.
You know, with the synthetic data ideas, the digital self kind of ideas, that seems like that'll really loom large and be a real challenge. Kai, how about you? What are your thoughts on that?
I think I would name one positive and two risky developments. The positive one: as many of you know, we in the European Union have recently adopted the AI Act. I will actually talk about it a little later and give you a general overview.
And if this new law really works, I think it could make the life of Max and others much easier, because it will provide a lot of additional transparency, documentation and so on. To give one example: right now, if I am rejected at a university because the person in charge doesn't think I'm qualified for a master's degree, I don't really know, as the failed or rejected applicant, why the decision went against me. With all the mechanisms that the AI Act is now bringing, you could at least identify certain biases much more easily.
In the same way, when we are talking about privacy violations, they will probably be better detectable. This, again, if the AI Act works as planned, could be a really positive point. The negatives, now connected to privacy and then to identity: first of all, with AI we see a similar situation when it comes to market concentration and market dominance. Those companies that currently stand for the biggest privacy violations are already extremely powerful in the market when it comes to the development and deployment of artificial intelligence, and many, many downstream companies now depend on them.
And again, this just makes an already existing problem even bigger, which is not really good for our European SMEs and startups. And the third point, as I said, linked to identity, is something that was integrated into the AI Act rather late, and not in a really sufficient way: identity theft, deepfakes and everything around artificially generated content. In the AI Act we have an obligation to disclose.
But a bad actor doesn't normally follow rules, and that is what is happening. We added a transparency, or traceability, obligation for model providers and also for other actors along the value chain. But coming back to your question, I think the challenge is not really addressed right now. And I think we all, and the broad population, really lack the AI literacy skills to be able to detect AI-generated content, which could pose a huge risk for our democracy, for our legal system and so on.
Thank you, Kai. Max, do you have thoughts on that?
Yeah, I mean, in our practice we haven't seen that much yet. We're basically on the litigation side, so at the end of the whole policy process, looking at what lands on the desk of an average person, and we haven't seen that much AI on the desk of an average person yet. But we also absolutely know that this year is probably when this really hits the market in real terms as well. Where we are starting to see issues from a purely GDPR perspective, let alone the AI Act, is first of all accuracy of data.
I mean, we know that a lot of these models are just not there yet, and that has to get better, but especially it has to get better when it applies to individual people. If you have a generated picture with four fingers, what's the harm? But if you generate information about an individual that is just wrong, that can quickly become a problem for that individual.
And I think the accuracy principle of the GDPR was brought up with exactly that idea: some murky algorithm does something weird, and then something comes out of it that is not accurate anymore.
So I think on that side it's quite interesting. But there's also the question of which data actually gets ingested into a lot of these systems, and how far that can go. Just a week ago we had the announcement by Meta, basically saying: we're just using all the customer data from all time that we ever had, even dormant accounts that no one has looked at for years, to train some artificial intelligence. It isn't even said which type or which system at all. So those are elements that may go very far.
What's also interesting: we had a case recently against OpenAI, and they first of all said, no, we cannot even comply with the access request, because we don't know where the data comes from, we don't know what we have about you, and we don't know who we have given it to. That's interesting. And then when we said, okay, the data is wrong, just delete it, they said: no, we can't delete it from the system, we just have no option to change it or do anything about it.
And that obviously already gets into strong tension with even the most baseline principles of the GDPR, let alone the AI Act, where I'm not an expert but other people on the panel are.
Well, it's interesting. In the context we talked about, you mentioned AI literacy, and there's that inability to trace causation. It's fundamental in law: when people talk about liability and responsibility, you're looking for the cause of something, whether there's multiple causation, et cetera.
There are so many black boxes that arise from the use of these systems that it really challenges traditional notions of causation. And one of the things some folks I've heard are considering is the idea of strict liability, like you have with inherently dangerous activities, and then maybe pooled funds, a kind of collective pooled resource for paying claims. But that's a fundamentally different idea from trying to trace liability and responsibility, and it would change a lot of legal regimes as a result.
So in that context, what are some of the ways that we can frame privacy, either how it has been framed or how it might be framed in the future, so that we can get more traction in those contexts where we have this amplification and aggravation of possible harms, and sometimes the inability to trace back and assign liability or responsibility?
What are some framings on that?
Lynn, do you have some thoughts on that?
You know, I worked in-house at a company prior to going back into government, and one of the things that became abundantly clear to me is that there must be a business incentive for any change to move forward. So when I talked about privacy or privacy compliance, I didn't talk about it in terms of what we need to do as a matter of individual rights. I talked about it in terms of brand impact and reputational harm, legal fines and compliance issues.
And that was something that really motivated people in the C-suite to focus on privacy compliance. And I'll say that in the United States, there are two laws that I think could have an impact in this way as well.
One is that the state of Colorado recently passed an AI act, and it basically lists out a series of requirements, including data provenance, that is, understanding where your data comes from, and being able to demonstrate that the data you trained on was fit for purpose. Meeting those, along with a series of other requirements, will give a company the presumption of reasonableness in the development and deployment of their model.
I think that is going to be something that incentivizes companies who are trying to operate in what is, at this point in time, basically a world without real rules.
Also in the United States, the Federal Trade Commission exercises jurisdiction over privacy and artificial intelligence through its authority to regulate unfair and deceptive trade practices. And one thing that I've seen this FTC do in particular is be very aggressive on the question of fairness.
Meaning, it is very clear that under this FTC it is not acceptable to take a vendor's word that their tool works. You, as a purchaser of the tool, must do your due diligence: ask questions about how the model was developed and what, in particular, it is designed to be used for, and try to do some independent testing to the extent that you can, given the evolving nature of AI and generative AI in particular.
It may not be possible for a purchaser to test a vendor's representations, but there is also an expectation that once you deploy this tool in your own environment, you do testing to monitor for bias and for disparate impacts. And the FTC is very clear that if you don't do that, relying on those assertions alone will be insufficient and you will be penalized. They have been issuing significant fines, and they have been instituting algorithmic disgorgement, meaning that you must purge the data collected for the algorithm and the algorithm itself.
So that can be financially costly to a company. And those are the kinds of things that I would say incentivize companies not to misuse data, if they feel that their time and their effort could be purged away from them.
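For a concrete picture of the kind of post-deployment bias testing described here, the following is a minimal sketch of one well-known screen, the "four-fifths rule" heuristic from US employment-selection guidance, computed over hypothetical monitoring data. The data and the threshold use are illustrative assumptions; a real bias audit goes far beyond a single ratio.

```python
# Minimal sketch of a disparate-impact screen (the "four-fifths rule").
# Illustrative only; real audits involve far more than one ratio.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from a deployed model."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        selected[group] += picked
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical monitoring data: (group, model said yes?)
data = [("A", True)] * 60 + [("A", False)] * 40 \
     + [("B", True)] * 35 + [("B", False)] * 65
rates = selection_rates(data)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)          # A: 0.60, B: 0.35, ratio ~ 0.58
if ratio < 0.8:              # the four-fifths threshold
    print("Flag for review: possible disparate impact")
```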
You know, it's interesting: when you have new technological innovations that change relationships, we as humans have always had to adapt to them.
One of the examples is eavesdropping. In the early periods of cities and villages being formed, eavesdropping was a violation of trespass, a violation of actually going in under the eaves of a house so that you could listen in at the window. And the reason trespass was applied is that there was no privacy law; they had to use real estate law to achieve privacy results. Here, with the FTC and things like that, we very often use the existing tools.
So it's interesting: some of the framings are existing framings that we bring forward, and some are new framings. Kai, what are your thoughts on some of the framings that might be helpful going forward?
Yeah, I can actually really well complement what our US American partners are currently doing. Of course, the EU always takes a much more prescriptive approach, while you are more principles-based. But now, not only with the AI Act but also with other recently adopted legislation, we really have key principles in all those laws, for example on data governance, and within data governance a kind of checklist of principles: that you should not have a discriminatory data set, making reference to the EU Charter of Fundamental Rights, and so on.
And in these laws, like the AI Act, it's already specified a little bit, via recitals and via paragraphs, how this should look. But the difference from earlier digital laws like the GDPR is that hopefully, this time, it will be complemented with a lot of secondary legislation that, as was already said, really specifies and gives a kind of checklist of what companies need to do.
Right now, probably no company really understands fully how to build privacy-compliant AI systems within the European Union, but in the future there will hopefully be, via standards, via delegated acts, via guidelines and so on, a clear list of points that you need to fulfill. And then, going back to my first statement, this addition of much stronger obligations to document everything should be executed all along the value chain.
A little bit like a blockchain, where everyone in turn adds points, making it in the end really traceable downstream and much better understandable. And I think this is again a really big step forward that we are seeing now in the European Union, but also in many other regions of the world, even though they of course have different policy or regulatory approaches.
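As a toy illustration of that blockchain analogy, actors along the value chain appending documentation so it stays traceable downstream, here is a sketch of a hash-chained, append-only log where each entry commits to everything before it. The actors and record fields are invented; the AI Act itself prescribes no such data structure.

```python
# Toy sketch of an append-only documentation trail: each actor's entry
# commits to all earlier entries, so downstream users can verify provenance.
# Entry fields and actors are invented for illustration.
import hashlib, json

def add_entry(chain: list[dict], actor: str, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"actor": actor, "record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering upstream breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = {"actor": entry["actor"], "record": entry["record"], "prev": prev}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
add_entry(chain, "model_provider", {"training_data": "corpus-v3", "bias_tested": True})
add_entry(chain, "deployer", {"use_case": "admissions screening"})
print(verify(chain))  # True; edit any upstream record and this becomes False
```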
Excellent.
And Max, you've been involved in reframing privacy globally, so thank you for that effort. What are some of the other framings that you think may be helpful moving forward, particularly in this AI-charged world?
Again, I can't speak all that much specifically on the AI part; we usually look at it separately on the consumer side. I think the reality is that we will get to a situation where people have a gut feeling of what is right or wrong, but I don't think we can really get the whole of society to a level where they fully understand all of that, and they shouldn't have to. I usually highlight this a hundred times: no one checks the building code or the hygiene law.
They just go to a restaurant and eat, and trust that the building doesn't collapse above them and that they won't have to throw up. That is also, I think, how digital policy ideally should work in the end: the average person in a developed democracy can simply trust that systems are not falling apart.
And that's what I think the general public should get to. If that does not happen, we have the good old trust problem of: can I really do that, should I really, blah blah blah. And I think that is, from a consumer perspective, what we want to achieve at some point.
And that means they have to have a general understanding of the problems, but it doesn't mean they will understand each detail and can make super-informed decisions after all.
On the other side, when it comes to the companies, as I mentioned a bit in the presentation before, we asked these 1,000 DPOs, for example, what would change anything within the companies. They're at an interesting point: they're basically independent from the C-suite, so to say, and should actually push for more privacy internally, but then they usually report back that whatever they do, the company doesn't act on it anyway.
That's a very common theme, and it was interesting, because what came out at rock bottom was more guidelines, which in Europe we love to produce; the EDPB is issuing new guidelines all the time. Turns out no one reads them, apparently. But what was very much at the top of the list is what you mentioned before: publicity. Is this going to be public? Are people going to know that we have a case or that we have a problem?
That was a very interesting factor. Germany, for example, is super interesting because the DPAs there do not even publish their decisions, even though a lot of them are in charge of freedom of information; they don't even publish their own decisions. So that would help. The second part is fines: is it really going to hurt financially or not?
And the third one on the list, and that goes a bit to the point you made before, is B2B relationships.
So if I have to sell my product to someone else and they get a headache because my compliance is not good enough, or I get 10,000 calls about it, or I can't even sell my product, that is a big factor. That was already in the GDPR a little bit, where basically the idea was that businesses watch each other. I was personally always a bit skeptical about whether that happens all that much, but at least if you look at the data, it does seem to happen. And generally, when we tried to look into this, I was amazed that we have almost no evidence of what drives company behavior.
We do have a lot of evidence on what drives criminal behavior, for example; we know why people commit all the different crimes. But what drives company behavior is something on which there is actually very little evidence. That would be interesting to have once we go ahead with policy, so that we push the right buttons, don't create some burden that doesn't actually produce anything, and make sure we have a law in that space that actually moves things forward. So I think overall that would be interesting to look into more.
You know, it is interesting. We talked before about business incentives and what drives them. The raw concentration of power in businesses now, not just in AI but in information flows generally, has global reach and global implications, and the ability to regulate or constrain the behavior of those companies is not uniform globally. Right?
So that is going to be an interesting source of liability going forward. Some discussions I've been involved in have raised corporate law reform as a way of getting at this, because it's not just about AI; many things are involved in corporate behavior. So let's talk about corporate behavior in Europe and the United States in terms of the GDPR and the AI Act.
Everyone's grumpy in a company; they always say they don't want to be regulated because they want to innovate, but really, being regulated forces innovation. Kai, I'll ask you first: what are some of the ways in which the GDPR and the AI Act might either help or hinder the accomplishment of privacy? Is it a good foundation from which to start, and what is missing that we might need going forward, so that both companies and people can have a kind of rational exchange monitored or structured by those laws?
Yeah,
So we had this discussion recently in the European Union, basically not only on the AI Act but on most digital laws: especially with these new technology waves, we were extremely worried that if we jumped in now with full regulation, we would hamper innovation and fall even further behind in the global race, for example on AI leadership. And with the AI Act, for example, we are really just focusing on risk prevention; there is almost nothing in the text that really promotes innovation.
That was done via the Coordinated Plan on AI, which is, however, not legally binding, and therefore depends again on the willingness of member states, and they often do not have the funding, and so on. So yes, in the end there is now tough new regulation.
However, and I'm working for a conservative parliamentarian, we were in the end quite surprised that most German and European companies, when the AI Act was a little bit on the line because of some last-minute drama in the negotiations, were asking for regulation.
So companies were actively saying please regulate us.
And the reason they gave, and you mentioned it already as one scenario, was that they felt so overwhelmed by all the new possibilities, all the potential risks, liability risks and so on, that they were saying: okay, even if it hampers us in certain ways, it's better to have a kind of baseline understanding of what should be there and what we should fulfill. Now, again, the big challenge for us will be to not over-interpret all those new rules in a way that makes them unfulfillable, unachievable and so on.
Rather, we should take a pragmatic approach, really understand them more as common principles, and have some public-private partnership in figuring out how to do this in a privacy-friendly but also innovation-friendly way, for example.
You know, it's interesting, because in the United States we have the HIPAA law for healthcare, and we have Gramm-Leach-Bliley, and both of those laws are very similar to the GDPR in structure. On HIPAA, I've talked to a number of health organizations: under HIPAA, if you strip out 18 identifiers, you can then circulate data, so the causation chain is broken as a result. So there is certainty, as you were alluding to, because they know that this behavior means that my organization is okay.
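As a rough illustration of that Safe Harbor idea, stripping enumerated identifier categories so the remaining data can circulate, here is a toy sketch that redacts a few of the 18 categories with deliberately simplified patterns. A compliant de-identification pipeline is far more involved than this; everything here is an illustrative assumption.

```python
# Toy sketch of the HIPAA Safe Harbor idea: strip enumerated identifier
# categories before data circulates. Only a few of the 18 categories are
# shown, with deliberately simplified regex patterns.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen 3/14/2024, call 555-867-5309, email jdoe@example.com, SSN 123-45-6789."
print(redact(note))
# Patient seen [DATE], call [PHONE], email [EMAIL], SSN [SSN].
```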
And so, Lynn, what are some of your observations on the GDPR and the AI laws? They're not directly applicable to United States operations, but they obviously affect United States companies, et cetera. Will that be helpful or more challenging for the future of privacy?
You know, I think it's going to be helpful, to that same point.
When I was in government, I had the CEO of a very well-known company say: I don't care what the law is, as long as I have some understanding of what to do. And so I think companies will try their best to comply, to the best of their ability, and to understand the bounds within which they can operate. When there's no sense of where they can operate, that's when you get the sort of chaos across the business landscape that has, you know, propagated over the past many years.
I'd also say that while it doesn't have direct applicability to the US, the GDPR absolutely changed the way US businesses conduct business, and the AI Act will do the same. So what you'll see in the US is companies building their internal policies and practices to meet these guidelines, particularly those operating globally, because, again, it makes no financial sense to have a US standard of practice and an EU standard of practice. So by default the EU standard of practice becomes the standard.
Can I? Yeah, please. Just very briefly, to complement that: what I think will help us quite a lot, and I will also talk about this very shortly in my presentation, is that on AI we are actually not so far from each other. Basically, we are all following the OECD principles, which the G20 basically also adopted. So this alignment that you were talking about will, I think, be much easier with AI, because the core principles are the same. Our regions are also already working quite well together on risk assessment, in the OECD as a forum, but also between NIST and the European Commission.
And therefore there is a lot of cooperation, and hopefully, well, we will not have the same situation or the same legal framework everywhere, but my hope, and there I'm really positive, is that in the end, on AI, the differences between regions will be much smaller.
I definitely agree with that.
The Fair Information Practice Principles are a common language that we all share, and honestly, I see so many privacy governance practices lifted and put into AI governance practices. And I think that, like you said, it will just make everything much easier.
Max, do you have some thoughts on the future?
Yeah, I mean, one thing that we see, at least on the privacy side, which is going to be interesting moving forward, is how much we can have exactly that interoperability. Because my fear is: Europe now has one law, but we may potentially have 50 state laws at some point, and that's not going to make anything operational. Back to what you talked about before, this kind of innovation versus legislation: I don't see that as a contradiction.
I think it's really about doing good law, about having quality law, a quality version of it. My usual joke on the GDPR side is that it's currently the least stupid privacy law we have. But we also have to be aware that we're very much at the beginning of a huge development. If you look back today at the first workers' laws from 150 years ago, with industrialization just rolling in, we would probably laugh about almost anything written there.
And we also have to be aware that that's not gonna be the last one.
We're going to have to move forward gradually; the more we can fix now, or do well now, the better it's going to be. And I would echo what you both said before on the legal certainty part. As an Austrian exchange student in California trying to get his car fixed for the first time, I realized how wonderful it is that in Austria you need a license for that, because suddenly you're at 20 car repair shops that all don't know how to fix your car.
So these transaction costs, for example, are just lower in a regulated market, where you know that anybody who opens one of these shops at least knows what a motor is. And this, I think, is something we have to be aware of and factor in: regulation can also do a lot of good and make a market more dynamic or more trustworthy, not just work against innovation.
So I think these two positions are often put very simplistically, and we need to break that up and say it's about how to have quality regulation, not about having regulation or not, so to say.
Yeah. And one thing, just as an empowerment to the folks in the audience: both contracts and laws and regulations create enforceable duties. We can't unilaterally control legislation and regulation, but we can form contracts; we could form a contract right now, by voice, with each other.
So one of the things I'd suggest is to get involved in some of the initiatives for trust frameworks, like the SIDI Hub initiative and others, where people have been working to create logic for enforceable duties, which can then be referenced in future legislation and regulation. Contracts are something that everybody in this room has the ability to affect, and you can do that immediately, next Tuesday, in your offices. So I really encourage that participation, because that's going to be the laboratory from which the practices are taken for the future laws.
So we all have a responsibility to help the legislators and regulators have the use cases to see what works and what doesn't, and we can all do that. Now, in the last minute, can each of you share a positive view of the future, 15 years out, on how privacy might be improved globally, locally, et cetera? Lynn?
You know, I certainly don't think that AI is the end of privacy. I think that in many ways it can be used to help facilitate privacy.
I know that from a privacy compliance perspective, we need a host of privacy-enhancing technologies to help us understand where data is, how it flows, and its provenance, and I think that AI can help with all of those things. So I think we are at an important time in history, where we get to decide who we want to be to each other, how we want to interact with the world, and what values we are going to build into technology.
And so I think that, you know, 10 to 15 years from now, if we do this carefully, we can get the maximum value out of the technology while minimizing its harms and disparate impacts.
Okay.
Yeah,
I would go back to the value chain points that I made, and I see our countdown ticking down, so I'll keep it short. I really hope, and this would be my positive outlook, that with all of our new approaches we really improve on the situation where basically one black-box actor has all the power, and in B2B relationships all the downstream actors don't really understand what they are using, what they are refining, what they are building on. Instead, everyone will be much more in cooperation, sharing all the knowledge and so on.
And I think then we will make a huge step forward in the whole field of privacy, but also in other fields.
Excellent. And Max, the last word on a favorable note?
Because we're at zero seconds: I think my positive outlook would be that I'm out of a job in 15 years, because everybody complies with all of that wonderfully, and I can go back to snowboarding. That would be great.
Well please join me in thanking our wonderful presenters.