Well, thank you so much for having me again, and congratulations on the 15th EIC. I well remember meeting in the IMAX theater of the museum in Munich and how special that was at the time, and it's still special. So today I want to present to you some food for thought, and some possible next actions, around kind of a sticky subject.
You know, what does it mean to package ethics into a technology stack? So hopefully everyone's seeing my slides; someone will yell at me if not. There they are. I want to talk for a second about setting the context: the BOLTS of technology ethics. Why BOLTS? In the old days, we used to talk about the "BLT sandwich" of doing technology and designing protocols, and that was business, legal, and technical.
And I first heard this from Don Thibeau. I don't know if he coined the term BOLTS, but business, operational, legal, technical, and societal impacts and forces on identity have become so important in the modern era.
And that's why we're here, I think, talking about ethics. So I want to just take a look at where we're headed and talk about three of the macro trends that I'm seeing in this area. First of all, identity is actually used in a lot of things now, more things than ever, I would say.
And, you know, I actually put together this Venn diagram for the very first time at last EIC, so I thank everybody for the feedback on it. What we see here is protection and personalization, the outermost sets here, using alliterative language to refer to security and experience, really.
And those have been kind of our twin drivers for so very long, really 15 or 20 years now, and you can see there's an awful lot in that intersection. We have new drivers, though.
I think they've always been related to each other; you know, payment and identity go very well together. But we're seeing new trends in this area that I've given a name with a P at the front of it. Bessemer Venture Partners just recently noted that SaaS businesses see greater success when they add embedded fintech features out of the gate versus kind of as their second act, and the US White House, in its work on the executive order on cryptocurrency, noted that 16% of American adults (that's about 40 million people) have invested in, traded, or used cryptocurrency.
So these are new factors; it's not just sort of the e-commerce of old, if you will. And then the people set here has really always been important to us, right? But going back to when we said all the time that, you know, the internet was born without an identity layer and that's a problem: identity was sort of born without an individual layer, and we've tried many times to get user-centric, and it hasn't, to date, worked super great.
But it's becoming more important, because people have positive and proactive use cases in their lives for infusing identity. It's kind of like the reciprocal side of digital transformation, if you will. So that's one macro trend. Another macro trend, and I'm sure a lot of people have been talking about this in the conference already given the latest news: we really have a trust problem with a lot of identity-adjacent tech.
And so, you know, these are just some recent headlines, and all three of the technologies that I'm focusing on here relate to each other. Of course, the trust problem is not really just about privacy, and not just about a single view of privacy.
It's really about a lot of factors writ large, so I'm not going to belabor the point.
I think it's sort of pretty well established. I will say, though, that it's about kind of business models and incentives. So I want to talk about that word trust for a second, because it sort of famously maybe doesn't have a good definition, but I found two that I like. The very first one is from a TED talk by Dr. Rachel Botsman: trust is a confident relationship with the unknown, being able to confidently interact with what you might call assurance. And that's kind of the result that we want to see.
And I wanted to present to you a quote from one of my other favorite people, Allan Foster, who I think is around; there he is, looking his best here: trust is shared vulnerability to consequences. And I would say this is the method definition, of how to achieve trust, which means that it's about the incentives, the liability that may be shared, and the business model that's undertaken. So with that, I want to just talk about the third macro trend, which nobody here will be surprised about: that decentralized technology is actually impacting the personal data equation.
And you might say, why a jellyfish? It's because it keeps undulating, swinging back out to the edge and then kind of recentralizing, and it's all connected. I'll stop the movement of it here so you won't have to keep looking at that. But, you know, we've got a lot of different kinds of decentralized technologies: the kind of wider web3 effort, cryptocurrency, which I just mentioned, end-to-end encrypted applications, and decentralized identity itself.
And I think there's some success; a lot of it, I would say, is in the pending realm.
You know, certainly when it comes to cryptocurrency, NFTs, things like that, we're at what I call the toys-and-risk stage. And that's necessary for progress to be made: people have to take a risk, and people have to build toys, like pixelated apes. The danger here that we see growing, that I think people have been recognizing, is the danger of insidious recentralization: when you think you've been decentralized and you haven't really achieved it.
So those are a few trends. Now I want to do a little bit of diagnosis: what's really not working well in our attempts to do better at technology ethics? The first area is about compliance, regulatory compliance as a whole. And right here I have a quote from Steve Wilson, who may be there.
I don't know if you're there. Hi!
This is from 2015, talking about the concept of needing to be exposed to information to do your job as a business, and then having to treat it with restraint, meaning deliberately not sharing what you know. How are we doing on that, you know, working on the basis of regulations and restraint? It's tough. It's a tough row to hoe, and regulations are getting more ambitious right now. Ironically, that kind of makes it more expensive to do the right thing; you kind of get squeezed in the middle.
So aligning incentives, which regulations try to do, for one, is hard work.
A second thing that isn't working too well is the effort to outsource relationship accountability. Why do I have a BlendJet on the screen? I'll tell you: it's because I just found this product, and I began to love this product before I even owned one. Once again, I will stop this so you won't have to look at the spinning. I wish I got a commission, because I love this so much, and I have begun a relationship with BlendJet, and they take a very personal approach to it.
And I have, I would say, standard expectations and very clear expectations about what it would be okay for them to do with knowledge about me, with my data, with anything about me, versus what it's really not okay to do. And I would say 90% of the way there, I'm pretty satisfied.
And for the other 10%, I use some of these kind of decentralized technologies in the world to help with that.
So, you know, GDPR says you can't outsource liability for how you handle personal data, but outsourcing relationship accountability between the service provider and the individual happens anyway; ad tech is kind of the result. Building on the above, look at cookie consent as being how we're outsourcing, and look at the panic that companies are in around the impending cookie apocalypse.
So when it comes to thinking about first-party data and mutual relationship responsibility, I'll just recommend to you the work of the Me2B Alliance (disclosure: I work with them; I'm on their board), where they have been developing literally safety standards for the internet based on the behavioral economics of the Me-s and the B-s at particular decision points in time. And then finally, oh, consent, one of my favorite topics: is consent working well, or is it not working well?
Well, let's take a look. You know, express affirmative consent is a superpower in terms of handling personal data. It's got complications, though, and it's no surprise to anyone to see this little excerpt from research on how we go beyond consent, which Lisa LeVasseur, the director of the Me2B Alliance, and I did together.
It just shows that people are in a one-down position with respect to the service provider when they're asked. And here are a couple more pieces of really very new, modern information.
You know, we've got these new attacks that I'm thinking of as inattention attacks, because we're pushing so much. The previous speaker was talking about some of the inconveniences of a lot of the MFA methods; anything that you do by rote, anything you sort of do constantly, leads you to sometimes overdo it.
So you start getting false positives.
And that's what we saw in some recent prompt-bombing attacks, where if you just, every once in a while, get into the right position and send somebody a false notification, you're able to get them to approve something they shouldn't have approved. It's kind of browbeating them into agreeing. So it's a new species.
And then there's a third example here. Prompt bombing can be done as a social attack, but in the payments world, speaking of that third set, there are authorized push payments, which Dave Birch was recently writing about: they're being taken by the banks as actual approval, absolving them of liability. And so we really ought to question this in-the-moment, synchronous asking: Are you there? Is it okay to do something? Are you you? And we ought to question overusing that channel for just that, because it doesn't really mean what we think it means.
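To make the mitigation side concrete, here is a minimal sketch, entirely hypothetical and not any vendor's API, of "number matching" for push approval, one common way to blunt prompt bombing: the login screen shows a short code that the user must re-enter on the phone, so a surprise push can't be blindly approved.

```python
import secrets

def start_push_challenge() -> dict:
    """Create a challenge; the login screen displays `display_code`."""
    return {
        "challenge_id": secrets.token_hex(8),
        "display_code": f"{secrets.randbelow(100):02d}",  # shown only at login
    }

def approve_on_device(challenge: dict, code_typed_on_phone: str) -> bool:
    """The phone app asks the user to type the code they see on the login
    screen. A prompt-bomber never sees that screen, so the push fails closed."""
    return secrets.compare_digest(challenge["display_code"], code_typed_on_phone)

challenge = start_push_challenge()
print("Login screen shows:", challenge["display_code"])
print("Approved:", approve_on_device(challenge, challenge["display_code"]))
```

The design point is that approval requires information that only flows through the legitimate session, turning a rote tap into a deliberate act.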
So that's a diagnosis of some of the things that just really aren't working well when it comes to actually acting ethically and having good-faith relationships with individuals. So what should we try next? Why Skee-Ball? I'm a big fan. It's a matter of trying, and it's very hard to get the ball into the most valuable little spot, but you can win if you can do it.
So the very first thing is to lean in to the equation of mutual value: of convenience, value, fun, and profit. That's where we actually see the most innovation.
So here are just some examples of tech, some of which is from our own area, right there on the screen.
I put that diagram together to describe OAuth 10 years ago, I believe, when I was doing some Forrester research.
And so, you know, that pattern has been with us for a very long time. I think until the era of open banking we didn't really see regulations requiring it, and we didn't see a lot of understanding about how GDPR even applied to it; I was asking those questions and not finding a lot of answers at the time. But OAuth was a major piece of new functionality, capturing something really important for business.
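As a refresher on the pattern that diagram captures, here's a minimal sketch of the OAuth 2.0 authorization-code flow; the endpoint URLs, client credentials, and scope below are placeholders, not any real service.

```python
import secrets
import urllib.parse

import requests

AUTHZ_ENDPOINT = "https://as.example.com/authorize"  # placeholder
TOKEN_ENDPOINT = "https://as.example.com/token"      # placeholder
CLIENT_ID, CLIENT_SECRET = "demo-client", "demo-secret"
REDIRECT_URI = "https://app.example.com/callback"

# 1. Send the user to the authorization server to approve a *scoped* grant,
#    instead of ever sharing their password with the client app.
state = secrets.token_urlsafe(16)  # anti-CSRF value, checked on return
print(AUTHZ_ENDPOINT + "?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "read:profile",
    "state": state,
}))

# 2. Back at REDIRECT_URI, exchange the one-time code for an access token.
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]  # bearer token with limited scope
```

The business value it captured is delegation with consent: the user grants a narrow capability, and the token, not the password, is what travels.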
You know, other examples from maybe outside of our normal world might include the Signal protocol and all the apps it's used in, including the Signal app itself (I'm a Signal person), and DuckDuckGo.
DuckDuckGo is a profitable company on the basis of, you know, a growing wish to do things differently.
So, speaking of wishes, I want to talk about this notion. My pal Gilad Rosner reminded me of my own formulation of positive privacy versus negative privacy. So let me build this out. I may have shared with the EIC audience in past years this notion of the modern data privacy 2.0 pyramid; I've got sort of a new version of it here. The very base of the pyramid is data protection: don't accidentally let data get out. Everybody is obligated to do that.
And I've got some examples of technologies that maybe apply roughly at this layer. That's really the harms view of privacy, and all by itself it's a very disempowering way to see a data subject; it's not complete. Next there's data transparency: telling people what you know about them, telling them what you want to know about them, and then ensuring that what you've done has an audit trail. That's valuable too.
It starts to move us just a little bit away from the harms world and towards, I'm going to say, a world of wishes. And this is where data control becomes so important.
And there you see some of the approaches and technologies that are so relevant there. Here's the thing: privacy does not exist in a vacuum. Here are some of the adjacent societal forces, thinking about BOLTS, and some of the relevant standards attached to some of those societal forces. They're all swirling around right now.
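To make the three layers of the pyramid concrete, here's a hedged sketch, with purely illustrative names, of how protection (default deny), transparency (an audit trail), and control (the individual's stated wishes) might stack on one personal data store.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PersonalDataStore:
    data: dict                                        # the subject's data
    permissions: dict = field(default_factory=dict)   # control: purpose -> allowed?
    audit: list = field(default_factory=list)         # transparency: every access

    def set_wish(self, purpose: str, allowed: bool) -> None:
        """Data control: the individual expresses a wish, not just a harm shield."""
        self.permissions[purpose] = allowed

    def read(self, requester: str, field_name: str, purpose: str):
        allowed = self.permissions.get(purpose, False)  # protection: default deny
        self.audit.append({                             # transparency: log it all
            "when": datetime.now(timezone.utc).isoformat(),
            "who": requester,
            "field": field_name,
            "purpose": purpose,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"no consent for purpose '{purpose}'")
        return self.data[field_name]

store = PersonalDataStore({"email": "me@example.com"})
store.set_wish("service-delivery", True)
store.read("blender-shop.example", "email", "service-delivery")  # allowed, audited
```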
And yes, we can push more of these into regulations, but as we've seen, it doesn't always mean that the right thing will get done. So the opportunity around really meaning "no data about you without you," including you in that process in a good-faith fashion, has to come down to maybe something other than technology itself and other than regulation itself.
Secondly, or thirdly I should say, we need to look at the AI equation: make AI explainable and transparent. And this is just such a hot topic, because AI is basically a way of handling data and making decisions at scale.
And today ForgeRock happened to announce something called ForgeRock Autonomous Access, which enables the application, in the design of the runtime user journey, of insights about the entire environment, identity-related signals and non-uniquely identifying signals, and turns them into actionable insights. I think of them as signals for both risk and delight, because you can delight people better when you take this approach. The key, though, in order for AI to survive, I think, is to look at how explainable it can be made, and I know that there are trade-offs there.
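This isn't ForgeRock's implementation, just a minimal sketch of what "explainable" can mean for an access decision: score a login from named signals and return the per-signal contributions alongside the verdict, so you can always answer "where did you get that inference?" The signal names and weights are invented for illustration.

```python
WEIGHTS = {  # illustrative weights, not tuned against any real data
    "impossible_travel": 0.5,
    "new_device": 0.2,
    "known_botnet_ip": 0.6,
    "typing_cadence_anomaly": 0.3,
}

def score_login(signals: dict, threshold: float = 0.5) -> dict:
    """Return a risk score plus the explanation that produced it."""
    contributions = {name: WEIGHTS[name] for name, seen in signals.items() if seen}
    risk = min(sum(contributions.values()), 1.0)
    return {
        "risk": risk,
        "decision": "step_up" if risk >= threshold else "allow",
        "because": contributions,  # the explanation travels with the decision
    }

print(score_login({"impossible_travel": True, "new_device": True,
                   "known_botnet_ip": False, "typing_cadence_anomaly": False}))
# {'risk': 0.7, 'decision': 'step_up', 'because': {'impossible_travel': 0.5, 'new_device': 0.2}}
```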
You know, we have to think about it; as someone once said, we worked on whether we could do something, and we didn't stop to think if we should. This is the way I believe we should do this. And then the last thing that I want to say about what to try next is this: we've been putting together some future-world scenarios, doing some kind of long-term visioning work. And one of the really interesting use cases, I think, is simply knowing whether the entity in front of you is human or not.
And we've seen this come to the fore in the Twitter conversation, in social media writ large, and in other contexts as well, because the bots are killing us. So what you see on the screen here is a very high-level architecture of something we've been working on with a number of partners. We think of it as personal data brokering with built-in respect. What do we mean by that?
Well, one of the things it can benefit from is what I think of as the killer verifiable credential: are you human?
And then you can discretionarily share more data after that. When you have this on the signal side of the equation, you can prove you're human without jeopardizing the privacy, and even the anonymity, of individuals. And we surely have those use cases in the world; we surely could see societal benefits from that.
You can start to have continuous and data-minimized proof of that singular fact, and then you can drive proportional mutual value, liability, confidence, relationships, and I think ultimately trust. So it's a method for achieving the kind of trust I was defining before.
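Here's a hedged sketch of what that one-bit credential could look like. The credential type, issuer DID, and the HMAC stand-in for a real signature suite are all illustrative; a production version would follow the W3C Verifiable Credentials data model with a proper signature suite, and ideally selective disclosure or zero-knowledge proofs for unlinkability.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # placeholder; a real issuer signs with a DID key

def issue_humanity_credential() -> dict:
    """Issue a credential that discloses exactly one claim: isHuman."""
    claim = {
        "@context": "https://www.w3.org/2018/credentials/v1",
        "type": ["VerifiableCredential", "ProofOfHumanity"],  # hypothetical type
        "issuer": "did:example:trusted-verifier",             # illustrative DID
        "credentialSubject": {"isHuman": True},               # one bit, no identity
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {**claim, "proof": proof}

def verify(credential: dict) -> bool:
    """Check the proof and the single fact; nothing else is ever requested."""
    unsigned = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["proof"])
            and unsigned["credentialSubject"]["isHuman"] is True)

vc = issue_humanity_credential()
print(verify(vc))  # True: "are you human?" answered with data minimized to one bit
```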
And I truly believe that passwordless and this kind of approach are made for each other. So yeah, I didn't build out how this scenario actually works; let me build it out here for a second. This is basically kind of the Twitter use case, with a mock ForgeRock mobile app. Knowing that I'm pretty short on time,
I just want to try to answer the question that I posed in the title of this talk: what does it mean to package ethics into a technology stack?
So, you know, can we sense it? Can we see it? Can we touch it? Can we measure ethics in technology per se?
And we really can't; it's kind of turtles all the way down, as we saw, for those of you who have read Moxie Marlinspike's piece on web3 and the recentralization challenges, on the facts of market consolidation and things like that. You have to create the conditions for alignment and incentives so that all the parties can act in the direction of their interest. So, speaking of sensibility, what is the perception?
You know, I went and bothered to define trust a couple of times; you need to have kind of unconditional delivery of features and benefits that are transparent.
And, you know, in the case of AI, "explainable" is kind of a term of art for being able to say where you got that inference.
Where did you get that insight? You would need to unconditionally deliver things like proper secrets management, not count on just purchasing or outsourcing something. In the same fashion as we say zero trust isn't a product, ethics cannot be a product. Can we say "ethical by design"? You have to look at the entire BOLTS stack, if you will, to know if you've achieved those aims. And ultimately, keep in mind that people need to be able to act kind of like peers to organizations, which means giving them meaningful choice and autonomy. So I'm going to close right there. Thank you very much.
If you are curious about some of the things I mentioned, do visit our booth, where my colleague on my team, Steve Enma, is. And thank you.