Well, good evening to you all. I think it's now evening. I have a question for you. Who wants to declare something dead tonight? Raise your hand. Woo.
Alright, well you're participating in a grand tradition here at EIC. I think we've seen SAML declared dead and XACML declared dead.
So, but why pick on consent? And by this I mean digital consent and data privacy. Why pick on that specifically?
Well, I kind of have a stake in the matter. I really care, and I know I'm not unique in caring. Ten years ago I actually stood on the EIC stage; I was honored to receive an award for User-Managed Access.
So, you know, somebody might wanna ask me, well, what happened to it? And that's a good question. We know when things are not entirely well in the digital world. And as a crazed fangirl of Max Schrems, listening to him and the panel that followed, you know, we sense that there are things that are amiss.
And so I think we need to be clear-eyed about what the challenges are if we're gonna change things.
So do we all agree about what consent even is? Well, you know, GDPR has a really tidy definition, and I bet if I were to recite it, you would all be able to recite it with me. I'll just sort of show it here. It's great.
It's a really pretty solid definition, and it reflects some things I'll talk about in a moment. But I wanna point out that it's actually missing something really important about consent. It's missing the point of consent, kind of its superpower: the thing that companies really crave about consent.
Consent is about letting one party traverse the ethical or legal boundaries of another party with that second party's permission. Now that actually turns the impermissible into the permissible in some cases that we may have seen or experienced.
It turns the monstrous into the now suddenly acceptable. But like the nasty-looking prongs on this fence, consent always has three elements to be legally binding, no matter where on the globe you look. The first thing it needs is an act or manifestation of consent. We're used to things like clicking "I agree" or filling in the information; that is the act or manifestation. The second thing it needs is knowledge, so that you can be informed about the proposition that you're signing up for. The third thing is, and this is actually kind of a lawyer's word, voluntariness.
I would've said volition, but okay, it means that you have some free will in signing up.
Now, technically, without the act or manifestation there's no consent at all; without both of the others you have a case of defective consent. So tonight what I wanna do is critically examine some of the beliefs that seem very firm in our systems where we're trying to make consent possible and correct, see if they're limiting us, and see if we can transcend them. So I wanna take a look at the first of three beliefs.
Let's examine this proposition that we can force data hungry companies to sip tiny bits of data through a straw daintily.
Yes, I hear the laughs, I see where you're going with that. This is a belief that pervades the world right now. I mean, you look at all the regulations; we've had decades of experience, actually going back to the Data Protection Directive. So we've had some practice at working to limit the flow of personal data through these global ecosystems. Has all of this forcing had much effect? Let's take a market view on the question. Who here is familiar with Nick Jonas? Actually, no.
Who here is familiar with identity resolution? Raise your hand. Uh-huh.
Okay, I wanna talk to you about the identity resolution industry and why am I showing a pop idol coming into focus like this? Alright, that's enough of that.
Identity resolution is different from the IAM that we know and love. There are three ways that it's different.
First, it is typically handled on the backend by aggregators, data processors, and other third parties with no direct relationship to the users in question. Second, it's very heuristic in nature, not deterministic like we prefer our identity verification and our authentication to be.
And third, it is made up of massive aggregated data lakes and huge identity graphs that are about correlating each consumer out there. I know a lot of you are familiar with the identity strategy firm Liminal. They have this kind of famous honeycomb landscape where they look at identity writ large, and they place identity graphing and resolution on their honeycomb and identify it as a golden cog, meaning that it has a dominant position in the market. Now, they happen to place it directly underneath data privacy and consent management.
And I, for one, wonder if that was meant ironically. I don't know, let's delve a little bit deeper. Oh, I just heard that "delve" is a ChatGPT word, oops. Identity resolution is a companion to customer data management platforms. What it's doing is feeding them answers to figure out who's whom, correlating people and figuring out who they are. So what you see here is a list of partners of the company LiveRamp, formerly known as Acxiom,
for cookie and identity syncing in its French market alone. Now, last I heard, France had a pretty heavily regulated privacy regime. Their coverage worldwide is similar.
In fact, LiveRamp reaches, according to their own numbers, approximately 100% of the global online population. I don't know about you, but I'm a little offended that they use the word "identity" in this at all. And in case you think this is just about third-party data sharing: for consumer-facing organizations of sufficient size, these are massive ecosystems in and of themselves. And notice what a primary success metric is: collecting consumer data records.
Now, I think we've vaguely known that this was going on, but this is a whole other kind of subterranean industry as far as IAM is concerned. We now know a little bit more about consumer demands not to be tracked like this, because Apple's App Tracking Transparency went into effect in April of 2021. And I find it really interesting that social media was flat at 23% year over year, and still, 23% is pretty impressive, that's nearly a quarter, while everything else has been rising, in some cases dramatically.
So I don't know that it's doing that much good 'cause Facebook did have a stumble in its stock price and then recovered nicely in recent times.
So I gotta say, there seems to be something seriously wrong with any kind of consenting that people are doing and how it's being enforced. And this is after six solid years of GDPR enforcement. So I want to move on to a second belief: we can prevent identity correlation.
So if all that data is being used so heavily to market to consumers, can't we just share less, for goodness' sake? Can't we just take away from them the thing they're using to do what is inappropriate, according to GDPR, when it comes to singling out an individual? If we shared less, couldn't we sometimes achieve anonymity, or at least pseudonymity a lot more of the time? So, in the context of what I was just talking about with identity resolution and the massive ecosystems that are in play here, let's now talk about self-sovereign identity.
The theory behind self-sovereign identity, the self-sovereign part of it anyway, is that individuals can pick and choose what we want to share. As I mentioned, I'm quite invested in that proposition, and also in the idea that technology, like zero-knowledge proofs, can actually help us in our aims to do that.
So can't we simply hide our true selves from services, if we wish, by sharing more selectively? I wanna move from a market orientation to an academic research orientation here to examine this proposition. I'll get a little bit technical.
I want to describe to you a de-identification technique called k-anonymity. It involves transforming data records with personal data in them to take out some of the specificity, such that if a query is made, then some number k of records will be returned, greater than one, so that you remain kind of in the background and not identifiable through the search. K-anonymity is currently the way that most service providers de-identify data, and thus they can treat the data as not personal data and thus not have it regulated as personal data.
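To make that a bit more concrete, here's a minimal sketch of the generalization idea. The dataset, the field names, and the k = 3 threshold are all made up for illustration: quasi-identifiers like ZIP code and age get coarsened until every combination of them appears at least k times.

```python
# Minimal k-anonymity sketch: coarsen quasi-identifiers until every
# combination of them covers at least k records. Toy data; field names
# and the k=3 threshold are illustrative only.
from collections import Counter

records = [
    {"zip": "10115", "age": 34, "diagnosis": "flu"},
    {"zip": "10117", "age": 36, "diagnosis": "asthma"},
    {"zip": "10119", "age": 33, "diagnosis": "flu"},
    {"zip": "10243", "age": 52, "diagnosis": "diabetes"},
    {"zip": "10245", "age": 55, "diagnosis": "flu"},
    {"zip": "10247", "age": 58, "diagnosis": "asthma"},
]

def generalize(record):
    """Truncate the ZIP code and bucket the age by decade; keep the sensitive value."""
    decade = record["age"] // 10 * 10
    return {
        "zip": record["zip"][:3] + "**",
        "age_band": f"{decade}-{decade + 9}",
        "diagnosis": record["diagnosis"],
    }

def is_k_anonymous(rows, k):
    """True if every quasi-identifier combination appears in at least k rows."""
    groups = Counter((r["zip"], r["age_band"]) for r in rows)
    return all(count >= k for count in groups.values())

generalized = [generalize(r) for r in records]
print(is_k_anonymous(generalized, k=3))  # True for this toy data
```

A query for, say, a thirty-something in "101**" now returns three rows instead of one, which is the whole point. The problem is what happens once auxiliary data enters the picture.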
There's a problem though.
Wikipedia says the guarantees provided by k-anonymity are aspirational, not mathematical. Oops. There is a readily available attack called profiling re-identification. One study, looking at members of a social graph that was anonymized in this way, found that just by using deep learning techniques against the time, duration, and type of interaction that a person may have had, 52% of them were re-identifiable.
Okay, that's readily available. Now, there is a member of our community, actually he's sort of decentralized identity royalty, Dr. Sam Smith, who wrote an amazing paper called "Sustainable Privacy," where one of the things he says is that this is such a weak technique that we have to consider it privacy washing. His paper is really amazing and I recommend it to you; I have a link in the slides if you ask me for them.
Now here's the kicker. Selective disclosure as we see in the decentralized identity model, does not escape this trap.
In fact, it has some aspects that put it at more risk. So if I use the words discloser and recipient, those would be like the credential holder and the credential verifier. Everybody with me on that? It doesn't help. Whatever the holder, the discloser, decides to disclose, no matter how parsimonious they make that disclosure, it's going to be accompanied by auxiliary data surrounding the presentation of that credential. And if you remember what I said about time, duration, and type of interaction, there's also location, which is readily available.
Lots and lots of auxiliary contextual data, as you know, is available.
So this is a species of attempted k-anonymity where the discloser is doing it themselves. Unfortunately, what they're actually doing is kind of singling themselves out, and we're going to have to use techniques we know well to try and, you know, maybe regulatorily beat verifiers over the head not to design their presentation contexts to collect enough auxiliary data to re-identify the individuals in question. So the individual will disclose some data and feel like they're manifesting consent pretty directly.
And the verifier, the recipient, will be able to re-identify them. The technical protections will seem strong. The verifier will look like a privacy champ, but it's likely not the case.
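If you want a feel for why, here's a tiny illustrative sketch with entirely made-up presentation logs: the disclosed claim is identical across the board, yet the auxiliary context of each presentation, hour, location, channel, is enough to single each one out.

```python
# Illustrative only: minimal disclosure, maximal auxiliary context.
# The data and field names are invented for the sake of the example.
from collections import Counter

presentations = [
    {"claim": "over_18", "hour": 9,  "location": "Berlin",  "channel": "mobile"},
    {"claim": "over_18", "hour": 9,  "location": "Berlin",  "channel": "web"},
    {"claim": "over_18", "hour": 14, "location": "Munich",  "channel": "mobile"},
    {"claim": "over_18", "hour": 22, "location": "Hamburg", "channel": "mobile"},
]

# Every holder disclosed exactly the same thing...
assert len({p["claim"] for p in presentations}) == 1

# ...but the auxiliary tuple around each presentation is unique, so the
# verifier can still tell the presentations (and likely the people) apart.
aux = Counter((p["hour"], p["location"], p["channel"]) for p in presentations)
unique = [t for t, n in aux.items() if n == 1]
print(f"{len(unique)} of {len(presentations)} presentations are unique on context alone")
```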
So I think we really need to question whether we can, today, prevent identity correlation. There's no way a discloser will have enough knowledge about the surrounding context to prevent this sort of thing, and that's despite the trappings of cryptography and, you know, privacy that we're trying to give the situation. So let me move on to a third and final belief.
Can we empower people by asking them something at the point of service? This is the central conceit behind quite a lot of the regulatory move towards kind of giving data subjects more agency. I mean, I think GDPR really, you know, worked hard at that. So to discuss this, I suppose I'm going to law enforcement here: has anybody heard of the Miranda warning, US folks?
I know, yeah. If you've ever watched a police procedural TV show from the US, you will have been exposed to the Miranda warning. This is where, you know, a police officer who is arresting somebody will give them this warning to tell them their rights and say you don't really have to talk. I have heard that people who are not US-based have kind of started to expect to enjoy such a right just by pop-cultural assimilation.
Similarly, I sort of feel like US people are like, "I want me some of that GDPR," like we're sort of feeling like, well, we deserve it too. And there's plenty of websites that kind of give us that feeling.
So there's an important place where Miranda-style rights and the GDPR intersect, if you think about it: the digitization of passports and mobile driver's licenses.
You know, what can a policeman or a border guard force you to do? This is why I was fascinated to find a research paper called "The Voluntariness of Voluntary Consent." So you know it had lawyers involved in it somewhere, 'cause they said voluntariness. It investigated what's known as consent searches. This is where it's a warrantless search, but somebody said it was okay. In the US, 90% of warrantless searches are consent-based.
So if you were to come across, let's just say a random person, not a police officer, random person, and they say, Hey listen, could you just unlock your phone and give it to me? I'm just gonna go off into this other room. You sit there and I'll be back. Would you do it?
No.
Well, we now know from research how many people actually do it, and it's 97%. And yes, this image was AI-generated, so you don't have to wonder why there's a left hand on his right arm.
Oops.
So we already suspected that there was something wrong with the picture of asking somebody for something at the point where they're trying to go somewhere, get some service or something. But there's actually a structural reason for that, and this thinking comes from a research paper that I wrote with Lisa LeVasseur called "Beyond Consent."
The challenge is that you've got a consent seeker, somebody asking you, and it's not a symmetrical relationship, it's an asymmetrical relationship. They're sort of trying to pull something from you at the same time as offering you something, typically online. And it's frankly impossible to even the odds in these circumstances. And this is why we see numbers like 90%, 97% clicking "OK," or rather "I agree."
As my friend Amp puts it, when denial of consent means denial of service, is there a choice?
I love that one. So there's another legal option in the picture that we see frequently, one that also involves a user agreeing to something: terms of service and privacy policies. Contract is a different legal structure, and superficially it has some characteristics that would seem to make it better than sort of straight consent.
However, in practice we've all experienced why that doesn't work. Privacy researcher Daniel Solove recently published kind of a final version of a paper he called "Murky Consent," where he was talking about how opt-in and opt-out consent are fictions. So I think the penny is dropping for a lot of folks. Suffice it to say that the question of empowering people at that point is really suspect.
So I must observe the purpose of a system is what it does.
Have you ever heard this phrase? It comes from the cybernetician Stafford Beer, who developed systems thinking in the seventies. He also said it's pointless to say that a purpose of a system is something that it constantly fails to do, and that's a little bit more evocative, I think. So the system we have now, including technologies, including regulations, seems to have a purpose different than what we thought we were designing it for. And so I wanna ask: where can we go from here? How do we move the needle? And I wanna propose a few other beliefs we might wanna try on for size.
And I think I'm running out of time, so I'm gonna go through this quickly. But I want you to do a little thought experiment in your head around these: how would regulations look if we believed something different and acted on it differently? So the first one is: individuals have the right to determine their breakup... their breakup status.
Yeah, well that too. Their relationship status.
Human to human, yes. Why not human to organization, human to service? A couple of approaches that I believe fit this belief nicely: the first is the work of what was the Me2B Alliance, now Internet Safety Labs. They've done a relationship model that they base on behavioral economics. So that's one; again, going quickly here. The second approach that I believe fits this belief nicely is a third legal structure.
Licensing, right-to-use licensing. What if we had the structures that allowed people to offer a license, just like, I don't know, on Flickr and Unsplash you can offer a license to photos you've taken?
How would that look different? The second belief I'd like us to try on for size: permissions should be interoperable. When we think about how we want to create a system that works for the outcomes we seek, shouldn't permissions be a first-class object? I was talking to Ian about this backstage and he's got some thoughts on the matter; maybe he'll propose them next.
Very quickly, a couple of things I wanna expose you to that may fit this belief nicely. The first is something called the Consent Name System.
I would say these are tactical solutions, but they exist now and they should be looked at. The Consent Name System allows you to calculate precisely, based on laws, based on contracts, based on consent you've collected, whether right now, in this moment in time, you can actually do something with a piece of data that you've gotten from somebody. Second, consent receipts, which have been a feature of the landscape for a long time and were developed at Kantara. They change that "no" to a "yes" when it comes to whether the terms were recorded.
And this work has actually moved nicely into ISO/IEC 27560, the consent record information structure.
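To give you a flavor, here's a rough sketch of the kind of thing a machine-readable consent record can carry. The field names are my own shorthand, not the normative ISO/IEC 27560 schema, so treat it purely as an illustration of the idea.

```python
# Illustrative consent record in the spirit of consent receipts.
# Field names are a sketch, NOT the normative ISO/IEC 27560 structure.
import json
import uuid
from datetime import datetime, timezone

consent_record = {
    "record_id": str(uuid.uuid4()),
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "data_subject": "pseudonymous-subject-123",   # hypothetical identifier
    "controller": "Example GmbH",
    "purposes": [
        {"purpose": "newsletter", "lawful_basis": "consent", "opt_in": True},
        {"purpose": "ad_personalization", "lawful_basis": "consent", "opt_in": False},
    ],
    "terms_recorded": True,   # the "no" that becomes a "yes"
    "expires": "2026-05-01T00:00:00Z",
}
print(json.dumps(consent_record, indent=2))
```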
The last approach here is just an idea for now. What if there were a standardized API that allowed the do-not-sell elections that people make to be passed down to data processors, and from data processors to further third parties? And I invite you especially to think about regulations in this context.
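Since it's only an idea, here's an equally hypothetical sketch of the shape I have in mind; every name in it is invented, and no such standardized API exists today. The point is simply that an election made once should cascade automatically down the chain.

```python
# Hypothetical "do not sell" propagation API. There is no such standardized
# API today; all names here (DoNotSellSignal, Processor, receive) are invented.
from dataclasses import dataclass, field

@dataclass
class DoNotSellSignal:
    subject_id: str      # pseudonymous reference to the individual
    asserted_at: str     # when the election was made
    scope: str = "sale_and_sharing"

@dataclass
class Processor:
    name: str
    downstream: list = field(default_factory=list)
    received: list = field(default_factory=list)

    def receive(self, signal: DoNotSellSignal):
        """Honor the election locally, then pass it along to every third party."""
        self.received.append(signal)
        for party in self.downstream:
            party.receive(signal)

broker = Processor("data-broker")
ad_network = Processor("ad-network")
controller = Processor("controller", downstream=[broker, ad_network])

controller.receive(DoNotSellSignal("subject-123", "2024-06-05T18:00:00Z"))
print([p.name for p in (controller, broker, ad_network) if p.received])
```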
Finally, the last belief I'd like us to try on for size: data shielding requires potent solutions. And I mean really, really potent solutions. They need to be able to pave the way to new business models and enable the ripping up of old ones with no regrets. Apple's App Tracking Transparency apparently wasn't enough to achieve this.
So they need to be really strong. I've come across two things I think are really interesting that may fit this belief. The first one is fully homomorphic encryption, which is reaching a new stage about now.
Now, I know privacy does not equal encryption, and I'm no cryptographer, but I've been asking around and this idea seems to have legs. The idea is that you can do operations on encrypted data, with the results themselves encrypted, and that could unlock whole new business models where there are no privacy trade-offs. It has been computationally expensive, and thanks to AI chips it's getting more tractable.
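I'm told seeing it helps, so here's a toy sketch of the principle using the Paillier scheme. Be warned: Paillier is only additively homomorphic, not fully homomorphic, and this version is wildly insecure with its tiny hard-coded primes. It's just to show what "computing on ciphertexts" means.

```python
# Toy Paillier cryptosystem: additively homomorphic only (NOT full FHE) and
# NOT secure (tiny primes). It illustrates adding numbers without decrypting them.
import math
import secrets

p, q = 2003, 2011                  # toy primes; real deployments use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael's lambda of n
g = n + 1                          # common simplification for the generator
mu = pow(lam, -1, n)               # modular inverse of lambda mod n

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:     # blinding factor must be coprime to n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def add_encrypted(c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts (mod n)."""
    return (c1 * c2) % n2

total = add_encrypted(encrypt(12), encrypt(30))
print(decrypt(total))   # 42, computed without ever decrypting the inputs
```

Real fully homomorphic schemes go much further, supporting arbitrary computation over ciphertexts, which is what makes the new-business-model claim plausible.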
Now, the last thing is not a technological approach, but a legal one. In the US there is a new lawsuit related to a Facebook cease-and-desist order against somebody who was trying to do Unfollow Everything, where you can just unsubscribe from social media feeds. And I think that Unfollow Everything 2.0 is gonna be really interesting and may, again, unlock real change.
So I'd like to wrap up here, late, by saying: even though consent is dead, if we transcend our old beliefs about it, I think we can reach for success when it comes to the human capacity to make informed, uncoerced decisions. So with that, I wanna thank you for your kind attention.
Thanks so much, Eve. Entertaining, informative, enlightening, as usual. I had some questions, but we're out of time, my fault. Just briefly then, what's your one call to action to this audience, if they take one thing away to go and implement in what they do in their everyday jobs?
I would say question your assumptions and ask what it is you're really building, because the thing built will tell the truth about what you wanted to achieve.
Thank you very much. You may.