Fantastic. Well, thank you. This is great.
Well, following that discussion, this discussion is going to be somewhat more familiar for folks in terms of the identity world, but we have a lot of folks here and we have limited time, so let's jump right into it. I'm going to ask people to refer to the agenda for the introductions. When you first speak, if each person could just say your name and the institution you're with, or very briefly your background, we can self-introduce and save some time there.
So I wanted to start off with a general question: how has the digital revolution, broadly speaking, affected human identity, and how should we start to think about the challenges we're encountering around identity as we engage digitally? Joni?
Hi, I'm Joni Brennan, president of DIACC, a nonprofit organization in Canada. Wow, I was just listening to the last panel and learning so much.
It made me want to run into the forest and take a hike. But something I keep coming back to, on many different levels, as I contemplate the interaction between digital, AI, and humans, is what creativity means: how we use tools for creativity, and how those tools then shape the way we create. I like to draw that thread across the whole discussion. Yeah.
Hi, my name is Lynn Parker Dupree. I'm the privacy practice lead at Finnegan, and I think about everything through a privacy lens. So when I think about the digital revolution, I think about the failure of this concept called practical obscurity: data about us was held in hard copy in one courthouse or another, and even though it was publicly available, for all intents and purposes it was practically obscure, and some measure of privacy was granted around that.
With the ability to grab disparate pieces of data from all over the place, there's less ability to remain anonymous in the ways you might have been able to before.
I'm Anil John, a technical director at the Department of Homeland Security in the US. Digitization, I think, speeds up everything in general, whether it is fraud, whether it is bad behavior, whether it is the magnification of things you are able to do, for good or evil.
I also think there is a significant amount of bright-and-shiny hype around AI right now, just as there was around blockchain, around cloud, around the internet. Every hype cycle leaves a residue that is usable for the next thing to move forward, so I'm hoping we can figure out: what is the residue that will be left at the end of the AI hype cycle? Is it going to be usable? Is it going to be beneficial, or not?
So I hope that we will get into all of that.
Hi, I'm Chu.
I'm a senior director for technology strategy. I'll just add a very quick thought on how AI and this conference relate to each other. I often think of the word identity in the sense of "I have an identity crisis": what does identity mean there? Most of the things we talk about as identity are not that. I think AI is fundamentally a technology that brings us from bits and bytes into meaning; that's the breakthrough of what we today call generative AI.
The breakthrough is that it now turns those bits into something much more meaningful. So I think the outcome will be a different kind of identity: we're no longer looking at a number, we're not only looking at a token; we're looking at how we view life, work, and our society and institutions. Hopefully that's the right way to approach this.
Awesome.
I'm Joe Andrieu, of Legendary Requirements, a consultancy focused entirely on requirements for decentralized identity.
This moment has led me to seriously deconstruct what identity is. There was a gentleman on a panel yesterday who said, "Identity is given to you by the government, let's face it." And I'm like: identity predated government. It predated technology. Institutional identity came from bureaucracies trying to manage their own services and the collaborators they work with. That's a natural form of centralization and centrality for that institution.
The challenge decentralization is trying to deal with here is that we don't need to externalize those institutional identifiers for everything else, right? That's what happened with Social Security numbers: created for a good reason, a natural point of centrality, and then it became ubiquitous throughout the financial system in the United States.
So to advance identity beyond this legacy thinking of "hey, it's a bureaucracy, we have an identity," we really need to deconstruct and understand, independent of technology, how identity works and how we use it. I think that's the opportunity: a conversation that can shift us beyond some of the technologies and perspectives that have locked us into how we're doing it today.
Thanks. Drummond Reed. At Gen, I'm director of trust services.
I'm also a steering committee member of the Trust over IP Foundation and on the governing board of the OpenWallet Foundation. I think the biggest impact of digitalization on identity was that it took our highly contextual relationships in the real world and shrunk them down to an email address, a login, a cell phone number, probably the most tracked identifier we have for ourselves today.
One of the reasons I'm such a giant proponent of decentralized identity, which you're probably getting tired of hearing about at this conference, is that it will allow us to widen that lens back up and have very contextual relationships again. When you think about the set of credentials you will have in your wallet and what you might share in any particular relationship where you have to prove your identity, our digital wallets will be far broader in that ability. Our ability to share just what's needed in any particular context will be much greater.
We still have to finish all of what's going to be necessary, but that will allow us, I think, to start to recontextualize our relationships. And that's badly needed, because right now digitalization has really narrowed them down.
Yes, we actually have a question for Joe.
It says: would you consider that digital identity and identity are the same?
And Joe, before you answer, the timer is not running, so you can reset that.
Thank you. Thank you.
No.
No? Okay, why?
Because I think digital identity is digital tools to manage identity: every tool we might use to manage how we recognize, remember, and respond to specific people and things. Username and password is one way we do that. Biometrics is another, right? These are all mechanisms, but digital identity deals with how we do that digitally. So it's a subset of the human necessity of identity. Part of my thinking is that we have no evidence of language without identifiers.
The earliest evidence of written language had names in it. So this identity thing is way bigger than the digital context; they have to be different.
Thank you. Do other folks have thoughts on that?
Yeah, I would echo that sentiment. Digital identity, for most people most of the time, will be your social media. That's your digital identity: that's where you're really seen and how you socialize with other people, right?
Maybe most of us here, in our professional lives, are talking about a very small subset of identity. I think AI breaks the boundary between the two; it's going to make them harder and harder to separate. But any identity system or handle we build is built for a specific purpose, and we should be very clear about what those purposes are and why we use them. It's always a trade-off: any identity system we build will be a trade-off, and I think we should make it so that it's worth our benefit, right?
I see that as well, particularly in the research that we do. When we ask Canadians coast to coast to coast what they think about digital identity, the solid answer is: they don't. They don't understand it. They don't know what it means.
At worst it's scary; at best it's confusing. Joe, we had a conversation over a decade ago, because I was upset that the HTML tag changed from b to strong, and I said that's a lot more to type. That was you. But the point was, in communication, at least for me, what matters is: does the person I'm communicating with understand what I'm saying? Are they gathering meaning from it? "Digital identity" works for us, for our profession and what we do.
When we say "digital identity" to others who are not in rooms with us, they don't understand what those words mean. So I think we need to be more targeted and specific about whether we're talking about verification or credentials or wallets. So yeah.
Lynn, did you want to jump in?
Yeah.
Well, I kind of wanted to echo what Joe said, and a little bit what Drummond said. When we're talking about identity, it is, again, beyond digital: it is about who you are at your essence in various contexts. We don't have the same identity in every circumstance.
You have to think about your identity functionally, you have to think about your privacy functionally; all of these things are contextual. So when you're thinking about digital identity, again, it's just a subset of who you are as a whole.
Do any other folks have thoughts on it? Thank you.
So it's interesting that we have that necessity of interaction. We're interacting with communities, with organizations, with institutions, and there's an observer and an observed in those contexts.
Part of the challenge is to align the expectations of the observer and the observed. Where I'm going with this, and I'll be there in a minute, is the institutional view of a person versus a person. If Joni and I are talking, it's two people talking. If you're communicating with an institution, that's a different kind of observer-observed relationship; it's not a person. Joe and I always have an argument about whether there are any qualia inside an institution; we'll take that offline.
What I wanted to explore here: the painter Kandinsky once said that violent societies yield abstract art. One thing I've always wondered is whether the reverse is also true. Is abstraction, for instance the abstraction that institutions need to have of identity, a form of violence? I don't mean physical violence, but violence in the form of intrusions on privacy and things like that. So what are some of the things we can do from an institutional perspective to optimize for human identity, for human expectations about their identity?
Anybody want to try that one? Anil, please.
Since my organization is in the business of delivering services to our citizens, I figured that would be a pretty good context for that question, right? One of the rationales for why we are looking at what currently goes by the name of decentralized identity is that it actually brings the locus of control over who is sharing the information back to the person. I work for the Department of Homeland Security, within the US context.
When we are delivering a capability that requires the identification of a person in order to deliver a benefit, whether it is an immigration benefit, a citizen benefit, or a travel benefit, we have to anchor our implementation in a set of principles that are acceptable to our customers. For us, it starts with something very basic: digital is an option, it is not a requirement, because there is a set of people for whom technology may not be comfortable.
They may not appreciate technology delivered by the government.
I know the European context might be different from the US context, but within the US context there is a distrust of centralized government entities delivering services. So you have to lean into that, recognize it, and deliver services accordingly. The other piece is that when you're delivering identity services, you have to make sure they are providing a benefit to the person.
But it also requires us to deliver the services in a manner that prevents the deliverer of the service, in our case my agency, from having any way to track or correlate where that identity is used in the broader context. There should not be any type of phone-home capability that informs us how those credentials and attestations are being used, and things like that.
It is also important to ensure that people have agency and control over information that does not belong to us as an agency; we are vouching for it on behalf of our citizens when they conduct transactions both in person and online. We also ensure that those capabilities are delivered in a manner that allows them to selectively share that information based on the context.
Again, Joe, Lynn, and everybody else have mentioned that identity is contextual, and that is absolutely true. It means that everything you deliver as part of an identity credential or attestation does not need to be shared with everybody in every context. The ability to selectively share information in a manner that is contextually relevant, and under the control of the person, is really, really important.
So from the perspective of an organization delivering services with a very strong identity component: because we are delivering services to a specific citizen, we do need to know who that citizen is, but we also want to be mindful and respectful of their agency, their privacy, and their security. All of those things need to be part of the foundation, the core principles we use to deliver them.
I just wanted to chime in here, and in full disclosure, Anil and I used to work together quite a lot at the Department of Homeland Security.
So everything he just said is absolutely correct.
Good answer.
But I just want to put a little finer point on it, because I know this audience isn't necessarily a privacy audience, and I want to bridge that gap: what he just talked about from a technical design and choice standpoint has absolutely real-world privacy implications. The ability for an individual to not have to disclose every single facet of their identity in every single context is a privacy-enhancing choice.
That is one of the things I really liked about decentralized identity when the privacy office was thinking about what we could do to create a better, more privacy-focused experience for individuals: allow more control over the data to be shared. Literally, it's the concept of data minimization within the organization. These are ways institutions can think about delivering services that are the least invasive.
And again, it's a matter of adoption, a matter of education. But I think everything Anil says has a corollary privacy-protective aspect.
Can I add one note related to AI? This concept is true whether it is government services or any service where we interact with other human beings or organizations: our identity lives in our friends' minds. That's what identity is; any kind of our identity physically resides in their minds.
You know, if we know some celebrity very well, that's because you literally allocated a piece of your brain to build a model of that person. Knowing a friend means: I'm going to make some room in my head for you. That's what friendship is. Knowing somebody really well literally requires more and more information. It's always a trade-off; that's why we give more of the space in our heads to our friends.
Well, hopefully. But as we all see with the attention economy, we are allocating that space to things we may not think are very important, and the same applies to the systems we build. For a system to give you a very customized, personalized service, it will have to know a lot about you. I think we have to think about these ethical dilemmas and trade-offs in all the systems we build.
I love that.
I am very giddy about this contextual and subjective perspective, because I hold it dear to my heart. The advice for corporations, the aha I would like organizations to realize, is that they are managing their own sense of the individuals whose identity they are working with, and most harms, not all, come from the externalization of this identity information. Some of it is exporting it out, selling it, resharing it; we know that's a privacy violation. But taking in more data than you need for the context is also a huge problem.
In my head, I now have a sense of everyone in this audience. I don't know most of your names, but your identities are now in my head, to your point, and you have no control over that, and you shouldn't: if you could erase yourself from my mind, that would be the grossest violation of my privacy I can imagine. I recommend Eternal Sunshine of the Spotless Mind for a wonderful example of someone who did exactly that.
So, I love all the comments and fully agree, and I want to pull the thread a little bit on optionality.
The research we've done in our ecosystem shows that optionality, knowing that using this out in public will be voluntary, is a very important feature for building public trust, so I fully support it. But in the reality of the ecosystem: how many of us have run to a new service to claim our namespace immediately, before somebody else did, even though we never used the service? I have.
So there is this aspect of institutions and services making that digital credential, that digital authentication, optional, and continuing to offer the traditional channels. But in the reality of the ecosystem, I don't know how optional who you are really is.
If you don't go out and claim your space and put yourself out there, someone else can claim your space and put something out for you. That's already happening with AI, where people play the imitation game: putting enough information on the internet to fool the AI into saying a person has won a Nobel Peace Prize, which they have not.
So I just want to remind us to always think about the inherent tensions with privacy. For us to get the vision of AI that we want, it needs data about us, and at the same time we need optionality. But out in the ecosystem, how people perceive you, or put your name or your personality forward, is not in your control. If you do not participate, you lose a degree of your ability to control your persona in that online ecosystem.
Drummond,
This is the point at which I do what I do on every panel: I hold up my wallet.
But I'm going to say something that, at this conference, is probably going to sound a little strange. We're here talking about how we're going to use this for identity. If you went to another conference last week in Austin, Texas, they talked all about digital wallets for cryptocurrency and payments, right? That's the Web3 world.
When Scott asked his question about the impact on organizations: I don't think either one of those is actually going to be the defining feature of digital wallets going into the future. This, when it moves onto a phone or a tablet or any other device, is going to be a communications tool. And we're going to have the biggest breakthrough in communication since the start of the internet: the personal channel. Every single one of your relationships with another person or an organization is going to be just you and them.
You're not going to have to worry about phishing, you're not going to have to worry about spam. You're going to have that personal channel, with every message signed and encrypted when it needs to be. And all the context you build up in that relationship is going to make today's CRM systems look like a horse and buggy. So: rich contextual relationships.
Yes, we actually have so many questions; this topic is very interesting, so it's hard to pick. But here are a couple that are very interesting. One of them is: do you believe that zero-knowledge technology will play a role in the future of digital identities?
So this is interesting; it comes up on a regular basis within the digital ID ecosystem. In a past life, prior to the current generation, I was in the business of delivering attribute services, in fact deploying attribute oracles, right?
These are basically things that operate on data and give yes/no answers. What is important to realize within the context of zero-knowledge proofs is that if you do not have a liability model around whose throat to choke when something goes wrong, nobody is going to trust the answers from a zero-knowledge proof mechanism, right?
What I mean by that phrase is that the beautiful math will occasionally go wrong. When it comes to zero-knowledge proofs, when that happens, who is liable for ensuring that the counterparty relying on it is actually able to make a decision on it?
Is it the vendor that deployed the zero-knowledge technology? Is it the original data source that the zero-knowledge proof is operating on in order to generate an answer?
In the current generation of the technology, none of the vendors of zero-knowledge proofs want to be liable for their answers. They want to move the liability back to the issuer. And the issuer is going to say: but I am giving you the actual information; your magic math is what is giving you the yes/no answer.
So until that problem is solved at scale, zero-knowledge proofs are wonderful in theory: you will do proofs of concept, you will do a lot of implementation, but at the high-value-transaction level you need that problem solved in order for this to work.
A follow-up from the person who asked the question. May I have a microphone to give to him? Otherwise it won't be heard. Thank you.
Speaker 10 00:27:19 So thank you.
I asked the question, and there actually is a vendor where you are both liable, where you can be de-anonymized, and you also can have zero-knowledge commitments. That's where I came from.
So that's why I asked that question. So yeah.
So that is awesome. Yes, I think you should absolutely be selling that technology to many, many people.
Speaker 10 00:27:41 Well, we are trying; nobody's buying, though. But yeah, Concordium blockchain, by the way.
Thank you. I think zero-knowledge proofs have a lot of problems, mostly in their academic idealism.
The truth is, with a ZKP you only get the attribute you ask for, nothing else, not even the issuer, right? So that's really weird. There's been a shift in the verifiable credentials conversation away from "age over 21" as the best example of minimal disclosure, the zero-knowledge-proof kind. If all I know is that this person is over 21, and they're interacting in a digital medium, we know that can be proxied to a complicit partner. So you cannot know that the person in front of the computer is in fact of age, because the electronic intermediation can allow any other proxy to compromise it.
If you do not have the photo of the individual, or some biometric check, "age over 21" isn't sufficient. But out of the ZKP work we got selective disclosure mechanisms, and selective disclosure is incredibly powerful. We don't need the idealistic zero knowledge outside of the attribute; we actually need some of the metadata so that we can correlate, at the point of use, whether or not this person gets that ZKP about them.
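To make selective disclosure concrete, here is a minimal sketch of the salted-hash pattern used by SD-JWT-style credentials: the issuer signs only per-attribute digests, and the holder reveals the salt-and-value openings for just the attributes a given context needs, say a birth date and a portrait hash, but not the legal name. The field names and credential structure here are illustrative assumptions, not any specific standard's wire format.

```python
import hashlib
import secrets

def commit(name: str, value: str):
    """Issuer commits to one attribute under a fresh random salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()
    return digest, {"salt": salt, "name": name, "value": value}

# Issuer: sign only the digests; hand the openings to the holder's wallet.
attributes = {"name": "Alice", "birth_date": "1990-01-01", "portrait_hash": "0xabc123"}
signed_digests, openings = {}, {}
for attr, value in attributes.items():
    signed_digests[attr], openings[attr] = commit(attr, value)

# Holder: this verifier needs age and photo binding, but not the legal name.
disclosed = {k: openings[k] for k in ("birth_date", "portrait_hash")}

# Verifier: recompute each disclosed digest against the issuer-signed set.
for attr, opening in disclosed.items():
    recomputed = hashlib.sha256(
        f"{opening['salt']}|{opening['name']}|{opening['value']}".encode()
    ).hexdigest()
    assert recomputed == signed_digests[attr], "disclosure does not match credential"
print("verified attributes:", sorted(disclosed))
```

The withheld attributes stay hidden behind their digests, while the disclosed ones, including the correlating portrait metadata Joe mentions, remain verifiable against the issuer's signature.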
So just one additional point, to get a really good understanding of what Joe just said: I would highly recommend looking up a paper called "What if Alice Is Evil?"
Yeah, the zero-knowledge proof problem, or something.
Yeah, yeah. It's actually by Dave Longley, in IEEE Spectrum. Yeah.
Yeah. Those comments speak very specifically to the question; I want to bring this to a bigger question.
The so-called zero-knowledge proof is not zero knowledge; it just minimizes knowledge. And the knowledge we're talking about is symbolic knowledge. This is, after all, an AI track, and in the AI world the knowledge you think you are not disclosing is all over the place. So trying to control all of that knowledge is, I think, a bad choice in the long run. It might work in some specific areas.
There might be a product that does a very good job with it, and it might even be useful, but I don't think, if you look at the long term, that these are the right approach. What we really should care about is what humanity has always cared about: who you tell the information to, and whether you can trust them.
And if we tell information to a large organization, will they eventually delete it, for example? Forgetting. I think those are much more useful concepts than purely zero-knowledge proofs.
We have so many questions, like I said, so I'm picking the most relevant ones for the conversation we're having here. Here's an interesting one: how would you solve the power imbalance issue when dealing with user consent to disclosure? I believe this one is very relevant in this context.
So, I can go ahead. Okay, I'll start with this one.
So, like Drummond, I'm on the steering committee for Trust over IP, and we have a task force for AI and the metaverse; we talk about this quite a bit. One thing to start with: confidentiality, or privacy, and authenticity are always in conflict. That's why we have to choose, right?
And again, I want to go back to our simple human needs. How do we choose? How do we choose a friend? Remember, for every friend we know, we are allocating a part of our brain to them, so choose carefully. I think that's fundamentally what we need to do, not trying to sort out these questions with one single simple answer. They are mathematically in conflict, so we will have to deal with that in some way. And AI simply brings the delusion we are in out into the open, all of a sudden. It's not just a piece of information anymore.
We can no longer lock it up somehow and feel safe. Now we have all this ambient information; everything we do can potentially be recognized, digested, and permanently enriched in some system, and we don't know where that will be. Those are, I think, much more interesting and more fundamental questions, whether for business or for our personal lives. Those will be the harder questions for us to contemplate.
So I think this concept of choice, and choosing better, is practically difficult for individuals in the world we live in. In many ways choice is almost a fiction, because if you want to buy a plane ticket or use a phone, your information gets shared; the choice is not a real choice.
So it may be that we have to think again and reshape the paradigm of how we approach data and data use; maybe all data isn't for everything. Again, it goes back to whether we have a shared understanding of how we want other entities to recognize, respond to, and engage with us contextually. It's not simply a matter of saying you just make a choice about who you share your data with or who has access to it. For so many people, it's not in their control.
I would like to weigh in here regarding a session we had yesterday. Max Schrems talked about this and said: when you go to a website and you have the choice to accept or deny sharing your information with third parties, often the only option if you deny is to pay and subscribe, et cetera. So maybe people don't really understand what the policies are here, and that might be an issue as well, exactly as you said. Yes.
I mean, when you deny, sometimes you lose the features. So it's like making a choice: do I use the technology that allows me to work in the world and live in the world, or not? Do I drive my car that has GPS built in that I cannot turn off, or do I only bike? In some ways it's a false choice.
You know, the technology on its own is not going to deal with the power imbalances. The best we can do is substitutability, not just interoperability.
One form of that which I have the benefit of: I own a domain name, and I can host the website for that domain name anywhere. I can substitute any provider. That really helps with the power imbalance we used to have, when if I wanted to reach AOL customers, I had to pay AOL or convince them to give me a form on AOL. That was a power imbalance we fixed with the web, by making it possible for anyone to put up a website. But the reality is that even though you can choose not to go to Google or Facebook or Twitter, so many of us do.
We enjoy the social dynamics, and there are winner-takes-all social effects we just can't get rid of. So we're still going to have these points of centrality, because we all like a particular service and our friends are on it. We can't make that go away. But at least if we have substitutability at different layers, we can have more choice, so that when we get tired of MySpace, we can go to Facebook, right?
Many of us lived through that transition. Not that it was a big deal, but we did replace the biggest social network with a different, bigger social network.
Yeah.
Can I add one more?
Go ahead.
Yeah, so I'll make a very quick point. I do think that technology, even the most powerful AI tools: we are making them, right? We are the people making them. We can make them into something that helps us solve these problems rather than make them worse.
We are designing these systems, and I think this comes down to the fundamental question of agency: what kind of system do we want to build? If we don't have any consent, or our consent is meaningless, that's because the system was built to that effect. We can build an alternative system that makes our consent meaningful, and all the technology should be in service of that purpose. That, I think, is how we get the benefit of technology without losing our agency in it.
Yeah, I appreciate the comments on the false choice, I'll call it: do we really have a choice or not? I appreciate the comments that we can't tech our way out of these problems alone.
I think it's combinations of tech and intersections with culture and policy. In that session yesterday, one of the things he raised as well was the early web idea that we would all run our own mail servers. There is a degree of separation there too, because you need the requisite knowledge and skill and tools to do a thing like that, which quite frankly not everyone has, not everyone will have, and nor do I expect them to.
So, having lived through multiple hype cycles of technologies: there is always something new that comes out of them, which is good. It's not always what was promised, but something comes out.
One of my hopes for this cycle around AI is that it will bring forward the inherent tension between the amount of data you are required to share and the vision that tech is trying to promise you. Because as we know, the phone does a lot of tracking; it won't work the way we want it to unless we share that data.
One of the hungriest tools we use every day, sitting quietly on our computers, is the browser, and we've just learned Google is collecting a lot more data through the browser than it had spoken to in the past. So I hope that through this AI revolution that's coming, the tension between the data you share and what you get, or don't get, becomes more visible to people, and they start to pay attention to it more.
And I think we will see some very interesting legal ways forward. Look at what's just happened with OpenAI, "Her," and Scarlett Johansson's voice: the tension over how much of her voice is in that model and how much of her is present in it. I think that legal debate will draw out some tensions we're already feeling around data sharing and our control, or lack of it.
Drummond, something on that?
Anil can go.
I think the question was about power imbalances, and I think it is important to recognize that power imbalances exist and they are real.
Power imbalances exist; because I work for a government agency, there is a power imbalance when we're dealing with our citizens. So the question we need to ask ourselves is: is there also accountability built in?
If there is a power imbalance, you need to look at structures of checks and balances, structures of accountability, so that the people and entities that have the power are also accountable for using it in a manner that meets the expectations of the people they are encountering. More than anything else, I think we need to have an accountability conversation whenever we talk about power imbalance.
Speaker 11 00:40:29 I'm sorry, go ahead. If you wanna say something.
Well, I just wanted to touch on that. Anil, in your talk yesterday you spoke wonderfully about vendor lock-in as a source of power imbalance, and you've been a stalwart at trying to make sure that, at least as far as the federal government engaging with its citizens, you don't have vendor lock-in. That is part of what we need to do. And as a startup, you should not be thinking that you can make yourself the one provider that everyone in some region or country has to use. That is a harm, a violence against the public, and I would ask you to find a different business model.
There you go.
So again, I think decentralized identity brings us another tool to address power imbalances. One of the reasons we started Trust over IP was that we said: when it comes to trust on the internet, technology at most can be half of the solution. You have to have governance. Governance is ultimately agreement about how you're going to operate in a community of any size. And I'm a huge proponent of governance frameworks as a new tool.
We hear that at conferences like this, in the discussions around eIDAS, right? Governance for the whole of the EU. Oh, actually, it's 27 member states. Well, those 27 member states have smaller and smaller municipalities and segments. I encourage you to think about small governance, micro governance, governance of a community. I live not just in Seattle; I live in a small community in north Seattle.
We could literally have governance for the Haller Lake community there, okay?
You could have governance for the university my son went to, or even for one dorm in that university, right? With governance frameworks, when all the participants have digital wallets and can verify themselves, you can have much more liquid democracy, much more dynamic mechanisms for the members of the community to get involved and address things like power imbalances.
Okay, class-action lawsuits. Scott, you're a lawyer; you know the difference they started to make. I think we can start to move some of those tools into our online lives.
I think that actually dovetails quite nicely; you made me think of something when you were talking about what the law is going to say. This intersects with accountability, and again, it will be cultural, it will be regional, it will be values-based per region. But in the United States we are beginning to see accountability put into laws.
We just saw this, and I talked about it a little yesterday, with the state of Colorado passing a comprehensive AI act. That state law does a couple of things. It provides a presumption of reasonableness if you do all the things that follow in the law. But it also creates accountability for model deployers and developers to show that the data used was necessary for the model.
That it was fit for the model, and you have to prove that.
So again, that's one area where, in that state, people have said: this is what we need in order to feel comfortable when you deploy a tool that impacts our citizens. And I think it's that kind of governance that will begin to permeate, through the EU's AI Act or other frameworks, and require developers to think: what am I collecting? Do I really need it? Am I taking advantage of people? Am I just doing a data grab? Am I being an irresponsible steward of data? That's really what it comes down to.
Can you be a responsible steward of data and accomplish the goals you want to accomplish?
Speaker 12 00:44:39 Let me make a comment and we'll go to another question.
Okay,
Let me make one more comment here. Yes, please. One of the things that's really interesting with the Colorado law in particular is that we have these traditional bodies we put together, governments, companies, groups of people, to represent groups. When we woke up this morning, there were institutions already in place around the world. Are those the only institutions we can imagine going forward? We have these layers that have been produced as artifacts of the past.
For us, Colorado defending its citizens is part of the US constitutional structure; the US defends its citizens more broadly, and there's a lot of dynamic tension between the states and the federal government in the United States. You get that in every jurisdiction. So what might we contemplate, not just commercially but governmentally? Are there fiduciary layers that need to be included, new mediations?
On the governance side, we talk about media as a technology, but there's media in governance as well, mediating the ability of individuals to have a voice. With new risks and new interactions, what might we add? Not taking away the old, because that's tough given how much we depend on it, but what might we add going forward as layers? Some may be human governance, some human-AI hybrid governance, some standards and technologies. And here's the key: reliability. Reliability and predictability are the pathway to trust.
If I say that I step on the brake of my car and it stops my car, I trust that it'll stop my car. That doesn't sound like a weird sentence, but it's mechanistic trust. How can we create mechanisms, technological and governance mechanisms, that are sufficiently reliable and accountable that we can trust them, so that they become part of the apparatus through which we interact more reliably? So let's go to another question.
Sure.
Here's another question that is interesting, and I think it's related to what Joni mentioned about the browser. When a lot of a person's data is used by the algorithm, the person will see more advertising related to their searches. Do you think this goes against the right to privacy?
The first question that I have about this is, is it even effective?
I mean, the ads I usually get tell me to buy a ticket to EIC. I'm speaking at EIC; I'm already coming to EIC. Has this ever actually worked? So it's either not working, or... Look, I use social media, and when I do, I'm forever entertained by who it thinks I am.
It cannot figure out who I am. Maybe I'm a mom, maybe I play sports, maybe I'm a DJ; it has no clue. So I'm entertained by who it's trying to think I am versus who I know I am.
I just don't see it as being effective. But what this advertising model has been arguably effective at is being the foundational funding model of everything we do on the internet.
It's been effective for that ecosystem. I don't know that it's been effective for our society, with the attention economy. We know that in the way we use the thing called the internet today, websites are built to conform to, and to try to guess, the magic box of whatever Google is doing.
Which is why, when we go looking for a recipe, we have to scroll and scroll and scroll to figure out where the actual recipe is. So I would question it: I think this model is terrible for privacy, number one. Number two, I think it's ineffective at what it's actually trying to do. And the whole web has now taken that shape, which has affected the way we see the value of the web and how we use it. I think it does need to change.
I hope that with the accountability and governance being pushed through, we will see new iterations of the web, which is arguably broken in the way we use it today versus the way we'd like to use it. Sometimes I miss Web 1.0, to be honest.
Yeah, I want to add that more data does not equal better, and I think we are learning that systemically: data hoarding. The most important thing to your Google search is not who you are but the query you entered; that's the pivot.
The rest is just ancillary information that may or may not help. So the question really is: how do you get the right data, the most salient data, so that you understand what the user's attention wants to be on right now?
Not what, in some perfect reality, might be the best thing for them. Right now they're looking for a specific result, and you need to help them find that specific result. That doesn't require a holistic context of who that person is.
I also think that in the current generation of the AI conversation, we are being distracted. The entire AI topic would not have been a topic of conversation over the last year, year and a half, had large language models not come onto the stage.
It is also important to understand that the only way large language models could be developed is with massive amounts of data used to train them, and each of those training runs typically costs many millions of dollars. The only entities able to do that type of scalable training run are very few, maybe a handful of very large language-model vendors. So we are being distracted by all the magic and beauty and wonder of AI while the aggregation of information and power still resides within specific ecosystems and specific vendors.
And that is the question we need to wrestle with as we look at the benefits of AI. At the end of the last session, Scott called these existential questions: I think there is a lot of fear, uncertainty, and doubt being thrown into the ecosystem.
Oh my god, AI is going to become intelligent and kill us all. Well, don't connect it to weapons systems, don't allow it to drive; don't do the stupid things that turn AI into Skynet, right?
So there are concrete actions people can take. And a lot of it comes back to: are we, as a people, actually holding the entities that have these capabilities accountable, so that those capabilities are deployed and used in a manner that benefits us, not just them?
So I think we need to get past the distraction of AI and actually see what the heck is going on there.
I think we are getting distracted too, and this is not entirely about identity, but I'm very fascinated by the intersections between data economics and governance in the US. You talked about the throat to choke: who am I going after? And there's this model of collecting data for advertising, surveillance capitalism, commercial surveillance.
With Google now putting forward a summary of what it believes the websites say: the Telecommunications Act in the US is woefully out of date. Does Google now become responsible? Before, it was just, hey, I'm serving you a bunch of websites, go find it. Now that it's summarizing data and putting that forward, does that change the model of accountability for Google, from a search engine to being interpreted as a reliable source? Is there liability there? I don't have the answer; maybe the lawyers are keen. But this is fascinating.
Someone here wants to say something; can you pass the microphone?
Oh yeah. How many folks have heard the term zero-party data?
Not very many.
As I understand it, it's actually quite the hot topic in personalized, or hyper-personalized, advertising right now. It's defined like this: first-party data is the data a site has about you when you visit it directly, the information it tracks and the things you submit. But zero-party data is data that you explicitly share with a business or organization in order for them to serve you better, okay?
There's no other party involved, and it's explicitly volunteered, over something like that personal channel I talked about earlier. The fascinating thing about zero-party data is that a business cannot buy it from anyone; there's no broker in the world that can give you zero-party data. Only you can give it, voluntarily. It's a completely trust-based thing. The Salesforces and Forresters and such are all a-twitter about zero-party data.
They get it: it's by far the most valuable data you could have, or advertise against, or have in a personal channel. I think it's going to change the game. In five to ten years we're going to be at this conference not even talking about the current web-based surveillance model.
So, in our last few minutes: there's a lot of anxiety associated with these issues. I would ask each of you, can you give us one sentence on your positive view 15 years out, on what identity is going to look like in the year 2040? Just one sentence each, or a few words.
In my positive view: number one, I hope that in the future there continue to be unauthenticated spaces that are anonymous, because that's an important feature and it needs to continue to exist. I hope as well that there will be verified spaces.
And that power, the ability for me to prove that I'm me, or for someone else to know that it is me, will be an important tool for exercising accountability and having control over that data in the ecosystem. So I hope that in the future we have usable verification that is in our control and respects our privacy, while we continue to have unauthenticated spaces for the people who want and need them.
You took the words out of my mouth.
In an ideal world, choice and control are actually meaningful, and that means shifting from the paradigm we are existing in right now.
I'm going to echo Lynn here: agency and control. Hopefully, when we're having this conversation in the future, we'll have a set of systems that provide agency and control for the individual, while those systems are also accountable, whether accountability resides with the individual or with the counterparties, as appropriate.
I would say something even more positive. Today, a lot of the time, we need identity, we need IDs, for access because we have very limited supply. I hope that by the time we get there, 15 years out, we'll have much more abundance of the things we need, and the need for those kinds of ID controls will be much less.
I think we're going to see identity systems that are entirely anchored by cryptographic secrets that cannot be forged by AI or other humans or other systems.
And we need to figure out how to use that to build these meta-notions of who people are, because we don't know that part yet. But I know how to prove that I have a secret without sharing the secret; that's what cryptography gives us. It's amazing. I think we're going to spend the next 30 years figuring out how to restructure society around that anchor and the presumption that people have this cryptographic wallet, just like we can think about society with the assumption that people have cars in Los Angeles. Maybe not New York, but in LA you can.
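As a hedged illustration of "proving you have a secret without sharing it," here is a toy sketch of the classic Schnorr identification protocol, made non-interactive with the Fiat-Shamir heuristic. The group parameters are deliberately tiny so the arithmetic is readable; a real wallet would use a standardized elliptic-curve group, and nothing here is any particular product's scheme.

```python
import hashlib
import secrets

# Tiny demo group: p prime, q = (p - 1) / 2 prime, g generates the order-q subgroup.
p, q, g = 467, 233, 4

def challenge(*values: int) -> int:
    """Hash the public transcript down to a challenge in [0, q)."""
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover's long-term secret x and public key y (think: keys held in a wallet).
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# Prove: commit to a fresh nonce, derive the challenge, respond.
k = secrets.randbelow(q - 1) + 1
R = pow(g, k, p)          # commitment
c = challenge(R, y)       # Fiat-Shamir: challenge derived from the transcript
s = (k + c * x) % q       # response; on its own it reveals nothing about x

# Verify: anyone holding only (y, R, s) can check g^s == R * y^c (mod p).
assert pow(g, s, p) == (R * pow(y, c, p)) % p
print("verified knowledge of the secret without learning it")
```

The check works because g^s = g^(k + c·x) = R · y^c in the group, yet the verifier never sees x, which is the anchor property Joe is pointing at.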
I also want to say: I think the time of moving fast and breaking things is over. What I want to see, and what I think we will see, is thinking deeply and making sustainable things.
I'll build on what Joe said, in three words: personal channels everywhere. Wallet-to-wallet communication is going to change the web, and hopefully society as we know it, in the way you've just heard from all of these panelists. What an amazing group.
Thank you so much. This panel was very interesting. I would say that the world has changed and we will definitely change with it.
And well, thank you so much for your participation. It was great being here and moderating this panel with you. Please join me in thanking the panelists.