Thanks for being here. My name's Elizabeth Garber, and I was the lead editor of a paper last year called Human-Centric Digital Identity for Government Officials. Many of the panelists here contributed a great deal to the writing of that paper, and so I was really excited to pull this panel together. I'll just briefly give an introduction to who's on the stage and why they're on the stage. We have an amazing panel here, and I do encourage you to go look them up on LinkedIn and reach out and speak to them.
But this is also gonna give you a sort of idea of what the arc of the discussion we're going to have is gonna be. So we have Francesca Morgo here from the Cyber Ethics Lab in the city of Rome. She's a real expert in ethical research and also in applied ethics, as I've put it here, because they've actually tested things out in the real world.
Sanjay DKA here is from UNHCR, and he's here because he has real insight into human rights on the ground through his work with UNHCR. And Mr. Schoemaker is there, sorry, you're not in the right order, people. He comes to us from Caribou Digital and he's a senior advisor to UNDP, and he's here with real insight into digital transformation and governance at the national level. And then Nishant Kaushik, you are well known in this industry for challenging folks like us to make it all practical in industry. And then, just 'cause I couldn't fit me and Hank on the slide, we brought everyone together. We're there.
See, look, we have our own special slide. It's okay. It's okay. So Hank is an interdisciplinary researcher and thought leader. I liked that; I wanted to steal it. But yeah, it's good. It's good. I'm working on it.
So that's the group you have in front of you today. Get your questions ready, but I have a lot to get us started. So I'm gonna start with Francesca.
Francesca, can you talk to us a little bit about some of the tensions that are emerging as technology evolves, and how they relate to digital identity?
Yeah, of course.
Well, if you think of the broad tech landscape, I think you can spot mainly three huge problems that dominate it. The first one is access. When you think about a technology, you first have to think about who is granted the right to access it and who is not. Who is left out? And this is a fundamental aspect, of course, because it defines your possibilities, what you can do.
Then there is the problem of fairness, as was also explained in the presentation before us, because there is the possibility that there are hidden discriminations, hidden biases, in each technology you deliver to the market. And of course there is the connected problem of transparency, of opacity: how can you really know what is inside that technology, and what are you conveying with it?
And then finally there is the huge problem of privacy and autonomy, of course.
How are the personal data of users protected, and how is their agency preserved, so that they are not reduced to mere points in a tech landscape, flattened into a digital imposition? And when you think of digital identity, I think the problems are pretty much the same, because in a certain sense, when you assign an identity, you are exercising a power. You are making a definition: you say who a person is and what they can do. And so you are defining their possibilities; it's really an important act that you're exercising. In this sense, this morning someone used the image of a fortress when speaking about access. And I found that presentation really engaging, because it's true that you can let someone in or you can leave them out. So it's basically a problem of rights: you are giving them rights, or you are denying those rights.
So in the end, when you design a digital identity, you are also conveying a definite value system into society and onto the users. But the problem is that this is not always made explicit. And if you do not define this value system attentively, there is a strong risk that you end up denying people fundamental human rights without even realizing it. So it's really something that should be attentively considered.
Thank you.
Yeah, I think that's crucially important. One of the things you said there was that when you assign a person an identity, you are really assigning their future, in a way, their future access.
Hank, that reminds me of some of the examples that you brought up last year in your EIC talk. I hope it brings those to mind for you as well, because I'm bouncing it to you. Do you wanna...
Yeah, that's actually what got me started. So I've been working on digital identity a lot on the tech side, and reading about the possibilities that digital identity solutions offer to society. The benefits they can have in opening up services to people are really communicated quite clearly in certain areas. But then if you dive deeper into it, you also see where things go wrong.
And indeed, in the normal world, where people are entitled to food rations or pensions or allowances, there are many ways to get those. And as soon as you start to introduce a new technology, that affects their position of rights. So last year there was the case I used of Uganda, where people were issued a digital ID solution. And the problem there was that the rollout wasn't complete, but people were already required to use it for some services. So that led to exclusion. That access part.
I think you had said it was only 30% of the population at the time.
Yeah,
So they were still rolling it out, but services were starting to lock down unless you had this solution. And recently in the Netherlands we had an AI example: the use of an algorithm to determine whether people are fraudsters or not. You can get a lot of allowances in the Netherlands: if you have kids and you bring them to daycare, you get an allowance; if you rent a house, you can get an allowance if you're below a certain income level.
But the algorithm went haywire in a really bad direction by labeling people as frauds based on, for example, their ethnicity or the area they lived in, which should not be the case. And then it's not just the technology that goes haywire. Beyond the results, which were quite terrible for lots of families, you also see that the system around it was not organized in such a way that the technology could be checked by human intervention.
And that's what I've seen a lot in these technological solutions: if you don't have the human check in place, or you don't have the redress mechanism, that's when people start to be excluded, or treated unfairly, or basically lose control over their data.
Thank you.
Nice job. Sanjay, I was wondering if you could help us draw the connection between ethics, thinking about these ethical tensions and defining value systems, and how it all relates to the world of human rights on the ground.
Yeah, sorry, is that working? Yeah, thank you. Thanks. Wonderful to be part of such a distinguished panel.
Yeah, I thought I'll just take one step back and go to, you know, the whole basis. We have, I think, three elements: identity, ethics, human rights. And how do they really get interconnected? Somehow it's very tempting, and with the limited time I'm going to try it: we go back to our Plato and Aristotle.
We'll start there.
Yes, 30 seconds. Just kidding. Very traditionally, philosophy has, you know, four or five branches, often they say six, but three of them are very closely associated with identity and human rights.
And these are ontology, or, you know, out of metaphysics comes the idea of ontology: what does it mean to be a being? But then today we find all our definitions are not really drawn out of ontology but out of our epistemology: what is our knowledge? How do we end up defining ourselves in different contexts, in different ways, and for different purposes? And, you know, just to bring in a bit of comic relief in this very heavy topic: Descartes famously said, I think therefore I am.
And I've always wondered why none of the companies here have used that as a tagline: I think therefore I am, or, I think therefore AI, or whatever it is.
But that's really it. More seriously, again, just bringing the connection back to why we are talking about all this: I think it was much later that Spinoza was the first to propound the philosophical basis of how an identity could be constructed, bringing those elements of epistemology and ontology together. But that is for another time; I can be outside and give you long lectures on this.
But finally, let's see how it became concretized: 1948, the Universal Declaration of Human Rights. If you go to Article 6, one single line made it all concrete as we see it today: that each human being should be able to stand before the law and ask for what he wants, or what he wants to be. That single line of the Universal Declaration of Human Rights by the United Nations. So I'm also trying to make the point that the United Nations does do good work once in a while, and this was certainly one of those times.
So that is perhaps the starting point for identity as we understand it today. Two quick points. Francesca has already said it, and she works hard, along with a lot of other intellectuals, at bringing the concepts of ethics into the realm of what we today call digital ethics.
So, you know, there are names like Floridi, and places like Oxford and Yale, which are doing a lot of work along these lines. And before coming to my conclusion, let me bring in that point, which again came up in the previous discussion: values are not universal, you know, like languages. So I'd just like to give three examples. Go to Iceland, where there is nothing private, you could say.
The ID number is known to everybody. They know the day you paid your tax and how much you paid; they know the day you got married and to whom. It's all public; there's nothing private. And that is the societal value, that is the constitution of the country, and an individual fully lives by it. Then there are countries, and sorry, I wouldn't like to name them here, where it becomes a question of price.
You know, where I have privacy, so I decide what price someone must pay me if my privacy is violated; or if I need to disclose something, can I charge a price for it? And then of course there is the traditional view, which personally, that's my belief, is the basis of the social contract. And I'm glad that many countries still do live by that principle, where the country and the individual are linked to each other through the idea of having an identity, ensuring human rights, and maintaining the right level of privacy and things like that.
So, coming to the conclusion I would like to make: ethics is not just about moral judgments, you know, which is fine. We will all have personal judgments about what is good and what is bad. But what becomes important from the work of the ethics intellectuals is that it helps society build up notions of what is serious. If I put it dramatically: what is the crime, and the punishment, involved with, let's say, what is happening in the digital world? If my digital double is used to create a fake film, how serious is that crime, and therefore what is the punishment? So what are the laws I must design around that? And with that conclusion, I'll stop there.
That's fine.
I think it connects really nicely to the concept of operationalizing and defining the value system that...
Sorry, I had it listed out so I didn't go on too long.
Oh, that's absolutely fine. You did wonderfully. Nishant, you wanted to jump in?
Yeah, so
The challenge for the folks at this conference is how do you take that and put it into code, right? Like, the only person in this room I know who can probably do that is Justin, 'cause he can put anything into code.
But it actually brings up a very interesting point, which has been a pretty interesting challenge to see. I brought this point up last week at Identiverse, and it's become very apparent this week as well, 'cause there's literally a vendor here who is saying that they're providing ethically compliant software. And I was like, wait, compliance is about rules. So you can be compliant with GDPR, you can be privacy preserving, but ethically compliant is a big leap.
So there's a lot of ethics washing happening in identity now, because all of a sudden everybody's picking up on the buzz because of the rise of AI. All of a sudden it's like, well, if we want to have AI, we also have to add ethics to what we do, because otherwise people are gonna doubt what we're doing. And so there's a lot of ethics washing going on. Because what you just described, as people building products, we don't know what to do with that, right?
If somebody came to us and said, build human rights compliant, or human rights preserving, ah, there's an actual list of human rights; I can evaluate what we're building against those. There is a process. But when it comes to ethics, because it's so contextual, it becomes really hard to do anything. And then how do you take that and build it into a product, which you then go and sell to a customer, and you and the customer look at each other and say, well, you know, you're gonna use this ethically, right? And the customer's like, well, you've given me an ethical product, right? And they're both looking at each other and nobody's answering. So that's what I've been struggling with, and I think a lot of people here will be struggling with it pretty soon.
Absolutely. Emrys, I'd love to come to you real quick if that's okay, because you've been grappling with how to translate this, certainly from a human rights perspective. And maybe you can speak a little bit more to some of the broader ethical questions that feed into the work on DPI. You know what I'm asking. Go.
I will.
And I think Nishant's point is really important: yes, how do we translate some of these often quite abstract concepts into lived experience? How do the technologies that we use every day reinforce or contravene the things that we hold most important, the values, the principles, the laws, and so on? Code is clearly a critical part of that.
But I think there's also an environment in which code lives, and that's the work that I do, which is really around the governance layers, the governance dimensions of a lot of these technologies. The work we've been doing is supporting the UN Development Programme, particularly thinking about their work around what's an emerging term in the development sector called digital public infrastructure. I can see some eyebrows going up in the audience, and there's obviously some resonance with what that means, as well as skepticism about what it might imply.
But I think the core point is much like what we've been talking about. So much of what I've been hearing over the last day or so here has been about digital wallets, about the infrastructure around certain core functions and services: transactions that we want to make, ways that we need to prove who we are, the infrastructure that enables that. And there's a big question about whose interests, whose values, that's actually serving. And so the idea of digital public infrastructure is that it really delivers value to the public interest.
Now why is that important? And what's the connection between that and governance and ethics and human rights?
Now, I believe very strongly that tech is never good, it's never bad, but it's never neutral either. There are always, as Francesca was saying, hidden values, biases, interests at play, realized through technology. I could see nods in the audience to that. And I think that principle, Kranzberg's first law, that tech is neither good nor bad but never neutral, is a critical rationale for why governance is so important, regardless of the technology.
We need to make sure that the laws, the regulations, the policies, the management of those technologies are handled in such a way that the interests and values that we think are important are maintained. So the work we've been doing with UNDP is part of a broader effort to introduce safeguards into building out this digital infrastructure, and a particular part of realizing what those guardrails look like is tools such as the model governance framework for legal identity, which we helped UNDP develop.
And a critical part of that is recognizing that in many different contexts there is a variety of different elements that govern things like digital identification.
So the framework, which you can find online at governance for id.org, contains a very holistic set of elements that UNDP, that the UN, stands behind as a way of upholding and realizing human rights in that governance layer. It includes things like laws, policies, institutional capacity, but critically, things that we might not normally think about.
Things like justice, equity, inclusion, and critically, things like participation and accountability: what are the mechanisms for recourse against these systems? Hank's example of the AI-driven welfare platform was only stopped because of recourse to human rights law. So it was that governance layer that was able to establish that this system, arguably an ID system, was actually impinging on people's rights. It was causing harm, and it was able to be withdrawn because of an effective governance layer.
So that's what we've been doing with UNDP: thinking about the different elements that need to be in place so that a governance layer can ensure that the digital infrastructure on which our lives are increasingly going to run is actually maintaining principles of safety, inclusion and human rights.
Thank you. I wonder if either Hank or you, Emrys, want to expand on the case of South Africa that recently went to the Supreme Court. I think that's a wonderful example of the governance layer. And do we think the amount of time that it took to get that result was an indicator of an effective or a not effective governance layer?
So the story in South Africa, as many of you I'm sure know, is that one of the civil registries had, I believe, 2 million entries deactivated, essentially. So people's ID cards were no longer valid. They weren't able to access certain services, they weren't able to prove who they were. And critically, this was just before an election: they weren't able to vote.
And critically, these were particular individuals who were regular cross-border migrants. So they were a very defined demographic. All of the talk about ID systems as tools for surveillance, tools for targeting, horribly realized. But what happened was that people were able to turn to the Supreme Court in South Africa. They were able to say, look, this is against the constitution, this breaks the laws of the land.
And those identification registries were restored.
So in many cases, also in India and in Kenya, where there have been huge debates around ID systems, it's turning to legal frameworks that has been the most effective way of countering perceived injustices. And I don't think it's necessarily in the code; I think it's in the way those systems are being misused. But one of the things I'd be interested to hear others on the panel talk to is thinking about ways in which it's possible to specify what kind of systems we want to see in place.
And particularly the role of procurement in specifying what kind of standards, what kind of components, what kind of APIs, the different elements that need to be in place to ensure a system actually does what a particular procuring party would want to see it do. Because I think there's a degree to which you can have a governance layer which manages a system or set of systems, but how do you ensure that the systems you get in the first place are the ones that you want to see, and are doing the things that you want them to do?
And I know UNDP's work around procurement is incredibly complex, because procurement agents aren't always the most technically savvy people. I think I can say that without offending too many people.
Procurement is the best friend of every product vendor out there, right? We love our procurement guys.
You know, let me try to think about the last RFP I read where there was a section regarding human rights or ethics. I can't remember one. Crickets. Never there. But it goes back to the fact that it's not considered a system that impacts human rights. And I think that's one of the challenges we have, which is explaining, as you were mentioning, ultimately getting people to recognize that the systems being built or put in place using our products affect somebody's possibilities, affect somebody's future. And those are not codified anywhere.
There are no guidelines anywhere for anybody who is building a system, unless they're doing something highly visible like a national ID system, in which case there's a whole different set of issues that come up. So what you really find, what I find myself often being, is in the uncomfortable position of telling the customer: you're not thinking about the things you should be thinking about, and you're doing it wrong.
Always a good pitch.
No boss wants to hear their CTO telling a customer they're wrong. But unfortunately, that is the position we get put in.
It's interesting how some of those things then play out. There's a lot of technology-driven, technology-first thinking, engineering-first thinking. And with that engineering-first thinking come very narrow views. And those narrow views are limited because they're driven by very narrow experiences.
So, as was talked about earlier: when you don't recognize context, you end up with systems that are very biased. Not because there was intention, but because of ignorance in how you're building things. And over time, I've unfortunately been in identity long enough that I've seen the evolution. You used to be able to look at an identity system and say, explain to me why something happened, and you could point to it exactly: there's the ACL, there's the group, there's the user. And now you're like, why did this happen? Well, I have no idea. Nobody can answer why.
So, and this is the point I would love to get to: this is where governance comes in. Last week I was talking about this with Michelle Dennedy on a panel at Identiverse, and she gave me a really interesting insight. She said, when things like that happen, look at what happened with privacy. You have to shift away from understanding how and understanding why, 'cause you're not gonna be able to understand why anymore. You have to shift to an outcomes-oriented approach, where you're measuring outcomes, you're looking at outcomes, and you're using that to figure out whether the systems are defined correctly. And outcomes come from governance.
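To make that outcomes-oriented idea concrete, here is a minimal sketch of what such an audit could look like. It is an illustration, not anything the panel specified: the field names, the groups, and the 0.8 threshold (borrowed from the common four-fifths rule of thumb) are all assumptions.

```python
# Minimal sketch of an outcome audit: instead of explaining *why* each
# decision was made, measure *what* the system produced across groups.
# All names and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def flag_rates_by_group(decisions):
    """decisions: iterable of dicts like {"group": "A", "flagged": True}."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        flagged[d["group"]] += int(d["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_alert(decisions, threshold=0.8):
    """True when the least- and most-flagged groups diverge too far."""
    rates = flag_rates_by_group(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    # A ratio near 1.0 means similar outcomes across groups. Falling below
    # the threshold is a signal to escalate to the human review and redress
    # mechanisms discussed earlier, not an automatic verdict of bias.
    return highest > 0 and (lowest / highest) < threshold

decisions = [
    {"group": "district_a", "flagged": True},
    {"group": "district_a", "flagged": True},
    {"group": "district_b", "flagged": True},
    {"group": "district_b", "flagged": False},
]
print(flag_rates_by_group(decisions))     # {'district_a': 1.0, 'district_b': 0.5}
print(disparate_impact_alert(decisions))  # True: flag for human review
```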
Yeah.
Sorry, if I could just continue on that. I work for the United Nations High Commissioner for Refugees, and I think we have a very, very complex situation: people turn up without identities, and you have to create, recreate, their identities. We have a process with every project, unavoidably, because that is how the mandate goes. We call it, simply or crudely, the data protection impact assessment. So every time a refugee has to be registered, or a new class of refugees has to be registered, we have to go through a DPIA.
And one of my recommendations, and some people have started agreeing, is that we also involve the vendors in that data protection impact assessment, so they start understanding what issues go into it.
I mean, we can give you lovely examples. I don't know if you read the headlines in the newspapers, but for example there's that group of 28,000 refugees stuck in Lampedusa, and it took two months for the data protection impact assessment to be completed. Those people had to be put up in temporary camps while we were just deciding: can I capture his name? Can I capture his, you know... what constituted privacy, and what constituted something which could be done acceptably. Thank you.
That's interesting, 'cause on the flip side of that: it takes time to do a proper registration, but what digital technology enables is that you set the policy and the execution is immediate. So I've been studying some population registries and how they're built up. They had to register male and female, and now we can change our sex; population registers couldn't do that. So they need to adjust, and then law comes in, case law, and it changes. But if you want to do it digitally, there's almost no time lag.
So the outcome-based one is interesting, and the other factor that ties into it is liability. Is it responsibility or liability? And if you go off on the liability angle: a long, long time ago in ancient Mesopotamia you had King Hammurabi, and he had a great set of laws. One of them was that if a house caved in, the builder was held responsible. And now we see a lot of vendors whose liability basically ends when they sign the contract and it ticks all the boxes. But indeed, that outcome question is one of the challenges: how fast can you see the outcome, and how fast can you adjust it? In that ongoing process, keep an open eye for that discussion.
So determining whether the process is right or wrong can be done based on human rights, I don't know, number four or five, on autonomy or privacy. But that value discussion also plays a role, 'cause we do have the Universal Declaration of Human Rights, but the execution in various geographies can be very different. So you still need to engage in that conversation.
Thank you.
Yeah, yeah. I would love to turn back to Francesca now, actually, 'cause you've done some real practical work on the IMPULSE project in Italy. Talk to us about your lessons learned, like how that project helped you understand how the ethical landscape needs to be navigated.
Yeah, thank you. Well, IMPULSE was a Horizon 2020 funded project, so of course there was a consortium of many countries, and IMPULSE stands for Identity Management in Public Services. So it was a project exactly about digital identity. And it was quite interesting because it wasn't focused on developing a real wallet, even though we ended up with a prototype. It was more like an experiment about the introduction of a digital identity wallet in different countries, so in different social contexts.
And in the end, we also conducted many workshops and round tables. In one of those we had Hank, and it was really insightful, because what emerged is that the differences between social contexts really count when you try to introduce a digital identity into a certain society.
The social structures, what already exists in a certain country, whether people are already used to certain technologies and what experiences they have had with them: these make a huge difference.
So even though you try to have a uniform ethical framework, and that is of course very similar to the EUDI wallet, the one mandated by eIDAS, which is also at the heart of the IMPULSE project, you realize that if you don't really delve into the differences that are typical of each context, you end up with basically nothing, because your project, your identity wallet, your identity management system will not be accepted. So it was really something useful, and we learned a lot doing it.
It's sort of a classic case of really needing to know and understand your audience and the value systems they uphold.
Yeah, absolutely. And for example, this morning there was a talk about the German identity wallet, and it was really fascinating, because they are putting into practice, of course not because they had contact with us, some of the insights that also emerged from the IMPULSE project. For example, they're running workshops with stakeholders, and they are taking their insights and trying to embed them into the wallet design.
And it's like seeing put into practice what we saw in this small experiment that we conducted. So yes, it's really not only the broad ethical values but also the values that are typical of a certain society or certain context that are fundamental in this. Of course. Yeah.
So this definitely doesn't solve your problem of, you know, how ethical outcomes get baked into contracts or governance or anything like that. But does that lead you to any thoughts, Nishant, about how companies can be designing towards a value system? Or is that even relevant?
First of all, no smart contracts. Okay.
I think the challenge is that there is a massive gap between what we do and what we need to be doing. And it's because we've been such a technology-driven industry, right? One of the reasons I love being in identity is because we are not like security, where it can be boiled down to math. Identity isn't math, and it's not data either, as much as Steve Wilson, who took us through that earlier, would like us to believe it is. It's not, because identity brings some of these considerations in: it's not just about the data, it's about the processes and systems you're building and how they're being used. And it's really hard to build products and systems generically for ethical situations that are contextual, right?
And ethics often ends up being contradictory. One of my favorite examples, well, this is another example I'm gonna give in my talk, so I can use it here: MOSIP is a really interesting idea. It's a really good concept. The fact that it's open source: really good. One of the core values in its architecture is that there should be no vendor lock-in. Now, everybody would agree that vendor lock-in isn't a good idea. As a vendor, I would like it; we'd like to be sticky. We don't call it lock-in, we call it sticky. But vendor lock-in is a bad idea. But MOSIP is about deploying national identity systems. How do you avoid vendor lock-in in certain contexts, when the data is so sensitive and has very proprietary needs?
So one of the requirements that MOSIP puts on biometric vendors is: you must store the raw data, the photograph, instead of just the template. Why? Because if we switch vendors, the raw photograph is there so that we can generate new templates, 'cause templates are not interchangeable. Anybody remember the OPM hack, right? You're forcing somebody who's building a deployment at a national ID level to keep raw images around, because you're sacrificing one value for a different value, right?
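To make the template point concrete, here is a minimal sketch of why that requirement exists, with entirely hypothetical record and SDK names: proprietary templates can't be converted between vendors, so avoiding lock-in means re-enrolling every record from the stored raw image.

```python
# Hypothetical sketch: biometric templates are vendor-proprietary and not
# interchangeable, so a no-lock-in migration must go back to the raw photo.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BiometricRecord:
    person_id: str
    raw_photo: bytes   # kept ONLY so templates can be regenerated later;
                       # this is the privacy cost of the no-lock-in value
    template: bytes    # proprietary, vendor-specific format
    vendor: str

def migrate_to_new_vendor(records: List[BiometricRecord],
                          new_extractor: Callable[[bytes], bytes],
                          new_vendor: str) -> List[BiometricRecord]:
    """Re-derive every template from the stored raw image.

    new_extractor stands in for the new vendor's SDK: any function that
    turns a raw photo into that vendor's template format.
    """
    for rec in records:
        # Vendor A's template is useless to vendor B's matcher, so the
        # only migration path is re-extraction from the raw photograph.
        rec.template = new_extractor(rec.raw_photo)
        rec.vendor = new_vendor
    return records
```

If the raw photo were not retained, switching vendors would mean physically re-enrolling the whole population, which is exactly the lock-in being avoided; the trade-off is that a breach now leaks raw images rather than replaceable templates, which is the OPM worry.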
And that's sort of the moral quandary, the ethical quandary, that you end up in. Because MOSIP is catering to organizations and governments that just can't handle the need to build a system like Aadhaar themselves. India could throw resources and money at the problem because they had that scale; the majority of the world cannot. So they're looking for a solution like this, but they're being forced to take this kind of burden on. And that's the challenge, right? These are the kinds of issues we run into.
Such great examples, Nishant. And I think, to me, one of the things I've taken away from the conversation, particularly around MOSIP and the developing conversation around standards within the specifications for particular systems: one of MOSIP's great achievements has been to promote the idea of interoperability as a value, and as something that the industry more widely should accept. Which I think is generally a good thing.
Although I do think things like interoperability then introduce their own risks to certain values like privacy, which need further engagement and support in order to protect.
But one of the things I've been really struck by around the conversation, particularly about MOSIP, is the demands it puts on procurement to understand what they're actually asking for: to know what outcomes they're looking to achieve, both technical and pragmatic as well as values-based, and to be able to translate those values into detailed specifications that go into an RFP, that actually make sense, and that can deliver the, ideally deconflicted, values they're looking to see.
And one of the things I've been struck by here, and I overheard this, so I don't know the detail well enough to know how deep it goes, is that the digital wallet the EU is developing has, as one of the great things people talk about, this idea of selective disclosure: that the individual should have greater control over how much of their data is revealed to a relying party. But actually, the protocol that enables that selective disclosure privileges the relying party rather than the individual in terms of specifying what information is actually disclosed.
So we talk about these technological systems, we talk about these concepts, a digital wallet, and we attach certain values to them: individual agency and control. And when you drill down into some of the detail of the technology, that may not necessarily be realized in the specification of the system itself. And I think that's a real demand on procurement particularly, and on policy and policymakers: to work together and develop both the capacity and the vision to ensure that what they're asking for will deliver the outcomes they're looking to see.
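To show where that power balance lives in the mechanics, here is a toy sketch of hash-based selective disclosure, the general idea behind formats such as SD-JWT. It is heavily simplified (no signatures, encodings, or replay protection), and every name in it is illustrative. The point is that whoever fills in the `requested` list below, the holder or the relying party's presentation request, holds exactly the power being described.

```python
# Toy sketch of salted-hash selective disclosure (simplified; real formats
# such as SD-JWT add signatures, base64 encodings and replay protection).
import hashlib, json, secrets

def issue(claims: dict):
    """Issuer: create per-claim (salt, value) disclosures and the digests
    that would go inside the signed credential."""
    disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    digests = {
        k: hashlib.sha256(f"{salt}{json.dumps(v)}".encode()).hexdigest()
        for k, (salt, v) in disclosures.items()
    }
    return disclosures, digests

def present(disclosures: dict, requested: list):
    """Holder: reveal only the requested claims' salts and values."""
    return {k: disclosures[k] for k in requested if k in disclosures}

def verify(presented: dict, digests: dict) -> bool:
    """Verifier: check each revealed claim against the signed digests."""
    return all(
        hashlib.sha256(f"{salt}{json.dumps(v)}".encode()).hexdigest() == digests[k]
        for k, (salt, v) in presented.items()
    )

disclosures, digests = issue({"name": "Ada", "age_over_18": True, "address": "10 Main St"})
shown = present(disclosures, requested=["age_over_18"])  # nothing else leaves the wallet
assert verify(shown, digests)
```

The cryptography guarantees that unrevealed claims stay hidden; it says nothing about who decides what goes into `requested`, which is why the governance question survives the technical one.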
Yeah, and just to add to that: you have these really super smart techie people who come up with this product, but indeed, on the procurement side there needs to be an equally smart process person or governance person who understands what this digital wallet will enable for the citizen. But indeed: why can I only respond to a request for data? Why can't I push some data proactively to a provider? And if I get a full request, can I do only half, or is it a one-package deal? How does that work? And that's when we get these value conversations going as well, to see how that will work in society.
And one of the things that may help there is to flip the framing, and it's good for solution providers to be aware of this too: we give out a wallet, and that, unfortunately, will not save the world. What it will do is open, like, a sixth channel to a service that people need. So flip the framing: it's not about a wallet, it's about this person who wants something. He can do it on paper, he can go physically to the office, he can write an email, and now he can also use a wallet. So how are we going to orchestrate that service delivery, now that we've got a really cool, fancy new tool, the wallet, a digital version? And that kind of thinking also helps: if this tool is super easy and the others are super difficult, is that fair? Do we want to push that out to people? There are many examples, but I won't go on down that line.
No, you must not; we have to stop. So I think there are still lots of unanswered questions here about how we take the great ethical research and thinking that's out there, and the human rights instruments, and the wonderful work of the UNDP in helping to provide a legal identity governance framework, and bridge all of this into product design, into contract negotiations, into RFP processes. There are lots of unanswered questions, and I look forward to working with all of you on projects that will fix it in future. Thank you.