Thanks for being here. My name is Elizabeth Garber, and I was the lead editor of a paper last year called Human-Centric Digital Identity for Government Officials. Many of the panelists here contributed a great amount to the writing of that paper, so I was really excited to pull this panel together. I'll briefly introduce who's on the stage and why they're here.
We have an amazing panel here, and I do encourage you to look them up on LinkedIn and reach out and speak to them. This will also give you an idea of the arc of the discussion we're going to have. We have Francesca Morpurgo here from the CyberEthics Lab in Rome. She's a real expert in ethical research and also applied ethics, as I've put it here, because they've actually tested things out in the real world.
Sanjay Dharwadkar is from UNHCR, and he's here because he has real insight into human rights on the ground through his work with UNHCR. Emrys Schoemaker is there.
Sorry, you're not in the right order, people. He comes to us from Caribou Digital and is a senior advisor to the UNDP, and he's here with real insight into digital transformation and governance at a national level. And then Nishant Kaushik, you are well known in this industry for challenging folks like us to make it all practicable in industry. And then, just because I couldn't fit me and Henk on the slide, we brought everyone together. We're there.
See, look, we have our own special slide. It's okay. So Henk is an interdisciplinary researcher and thought leader. I liked that; I wanted to steal it. I'm working on it. So that's the group you have in front of you today. Get your questions ready, but I have a lot to get us started. So I'm going to start with Francesca.
Francesca, can you talk to us a little about some of the tensions that are emerging as technology evolves, and how they relate to digital identity? Of course.
Well, if you look at the broad tech landscape, I think you can spot three huge problems that dominate it. The first is access. When you think about a technology, you first have to ask who is granted the right to access it and who is not. Who is left out? This is a fundamental aspect, of course, because it defines your possibilities, what you can do.
Then there is the problem of fairness, as the presentation before us also explained, because there may be hidden discriminations, hidden biases, in any technology you deliver to the market. And connected to that is the problem of transparency, of opacity: how can you really know what is inside a technology, and what are you conveying with it?
And finally there is the huge problem of privacy and autonomy. How are users' personal data protected, and how is their agency preserved? Are they not reduced to mere points in a tech landscape, flattened into a digital imposition? When you think of digital identity, the problems are much the same, because in a certain sense, when you assign an identity, you are exercising power, no? You are making a definition: you are saying who a person is and what they can do.
So you are defining their possibilities, and that is a really important act you are exercising. This morning someone used the image of a fortress when speaking about access, no? I found that presentation really engaging, because it's true: you can let someone in or you can keep them out. So it's basically a problem of rights.
You are granting rights or you are denying them. In the end, when you design a digital identity, you are also conveying a definite value system into society and onto the users. But the problem is that this is not always made explicit. And if you do not define that value system attentively, there is a strong risk that you end up denying people fundamental human rights without even realizing it. So it is really something that should be carefully thought through. Thank you.
Yeah, I think that that's crucially important. One of the things you said there was about when you assign a person an identity, you're really assigning their future in a way, their future access.
Henk, that reminds me of some of the examples that you brought up last year in your EIC talk. I hope it brings those to mind for you as well, because I'm bouncing it to you. Do you want to?
Yeah, and that's actually what got me started. I've been working on digital identity a lot on the tech side, and the possibilities that digital identity solutions offer to society, the benefits of opening up services to people, are communicated quite clearly in certain areas. But if you dive deeper into it, you also see where things go wrong. In the normal world, where people are entitled to food rations or pensions or allowances, there are many ways to get them.
As soon as you introduce a new technology, that affects their position of rights. Last year I used the case of Uganda, where people were issued a digital ID solution. The problem was that some services were not yet open to it but were required to use it. That led to exclusion, that access part. Only about 30% of the population had it at the time; they were still rolling it out, but services were starting to lock down unless you had this solution.
And recently in the Netherlands we had an AI example: the use of an algorithm to determine whether people are fraudsters or not. You can get a lot of allowances in the Netherlands. If you have kids and bring them to daycare, you get an allowance. If you rent a house, you can get an allowance if you're below a certain income level.
But the algorithm went haywire in a really bad direction by labeling people as fraudsters based on, for example, their ethnicity or the area they lived in, which should not be the case. And it's not just the technology that went haywire; the results were quite terrible for lots of families. The system around it was also not organized in such a way that the technology could be checked by human intervention.
And that's what I've seen a lot in these technological solutions: if you don't have the human check in place, or you don't have a redressal mechanism, that's when people start to be excluded, treated unfairly, or basically lose control over their data. Thank you.
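To make that concrete, here is a minimal sketch of the two guardrails Henk describes, assuming an invented fraud-scoring service; the attribute names, threshold, and functions are hypothetical, not the actual Dutch system. Protected attributes are refused as model inputs, and an automated flag only refers a case to a human reviewer instead of triggering action by itself.

```python
# Illustrative guardrails only; attributes, threshold, and scoring are invented.
PROTECTED_ATTRIBUTES = {"ethnicity", "nationality", "postcode"}

def validate_features(features: dict) -> dict:
    """Refuse to score on attributes that encode protected characteristics."""
    leaked = PROTECTED_ATTRIBUTES & features.keys()
    if leaked:
        raise ValueError(f"protected attributes in model input: {leaked}")
    return features

def score_fraud_risk(features: dict) -> float:
    """Stand-in for model inference; a real system would call its model here."""
    features = validate_features(features)
    return min(1.0, 0.1 * len(features))  # placeholder score

def handle_case(case_id: str, features: dict, threshold: float = 0.8) -> str:
    """The human check: an automated flag is a referral, never a verdict."""
    if score_fraud_risk(features) >= threshold:
        return f"case {case_id}: queued for human review; decision must be explainable and appealable"
    return f"case {case_id}: no action"

print(handle_case("A-123", {"declared_income": 24000, "allowance_type": "childcare"}))
```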
Nicely done. Sanjay, I was wondering if you could help us draw the connection between ethics, thinking about these ethical tensions and defining value systems, and how it all relates to the world of human rights on the ground.
Sorry, is that working? Yeah. Thank you. Thanks.
Wonderful to be part of such a distinguished panel. I thought I'd take one step back and go back to the whole basis.
We have, I think, three elements: identity, ethics, human rights. How do they really get interconnected? It's very tempting, and with the limited time I'm going to try it, to go back to our Plato and Aristotle to see what the basis is. Yes, 30 seconds.
Very traditionally, philosophy has four or five branches (often they say six), but three of them are very closely associated with identity and human rights.
The first is ontology: out of metaphysics comes the idea of ontology, what it means to be a being. But today we find that all of our definitions are drawn not out of ontology but out of our epistemology.
What is our knowledge? How do we end up defining ourselves in different contexts, in different ways, and for different purposes?
And, just to bring a bit of comic relief into this very heavy topic, Descartes famously said, I think, therefore I am. I've always wondered why none of the companies here have used that as a tagline: I think, therefore I am, or I think, therefore AI. But more seriously, bringing the connection back to why we are talking about all this.
I think it was much later that Spinoza was the first to propound the philosophical basis of how an identity could be constructed, bringing those elements of epistemology and ontology together. But that is for another time; I can give you long lectures on this outside.
But finally, let's see how it became concretized. In 1948, the Universal Declaration of Human Rights: if you go to Article 6, one single line made it all concrete as we see it today, that every human being has the right to be recognized as a person before the law. That single line of the Universal Declaration of Human Rights by the United Nations. So I'm also trying to make the point that the United Nations does do good work once in a while, and this was certainly one of those times.
So that is perhaps the starting point for identity as we understand it today. Two quick points. The first, Francesca has already touched on: she works hard, along with a lot of other intellectuals, at bringing the concepts of ethics into the realm of what we today call digital ethics.
There are names like Floridi, and places like Oxford and Yale, doing a lot of work along these lines. And before coming to my conclusion, let me bring back a point that came up in the previous discussion: values are not universal; they vary, like languages. I'd like to highlight three examples. Go to Iceland, where nothing is private.
Your ID number is known to everybody. People know the day you paid your tax and how much you paid. They know the day you got married and to whom.
It's all public; nothing is private. That is the societal value, that is the constitution of the country, and the individual fully lives by it. Then there are countries, and sorry, I wouldn't like to name them here, where it becomes a question of price: I have privacy, so I decide what price someone must pay me if my privacy is violated, or, if I need to disclose something, whether I can charge for it.
And of course there is the traditional view, which I personally believe is the basis of the social contract, and I'm glad that many countries still live by that principle: the state and the individual are linked by the idea of having an identity, ensuring human rights, and maintaining the right level of privacy. So, coming to the conclusion I would like to make: ethics is not just about moral judgments, which is fine.
We all have personal judgments about what is good and what is bad. But what becomes important in the work of the ethics intellectuals is that it helps society build up notions of what is serious: if I put it dramatically, what is the crime and the punishment involved with what is happening in the digital world. If my digital double is used to create a fake film, how serious is that crime, and therefore what is the punishment? What are the laws I must design around that?
So with that conclusion, I've come to... That's fine. I think it connects really nicely to the concept of the value system, to operationalizing and defining the value system.
Sorry, I had it listed out, so I didn't go on forever. That's absolutely fine. You did wonderfully.
Nishant, you wanted to jump in. Yeah. So the challenge for the folks at this conference is: how do you take that and put it into code, right? The only person in this room who I know can probably do that is Justin, because he can put anything into code.
But it actually brings up a very interesting point, which has been a pretty interesting challenge to watch. I brought this up last week at Identiverse, and it's become very apparent this week as well, because there's literally a vendor here saying they provide ethically compliant software. And I was like, wait, compliance is about rules. You can be compliant with GDPR. You can be privacy preserving. But ethically compliant is a big leap.
So there's a lot of ethics washing happening in identity now, because all of a sudden everybody's picking up on the buzz around the rise of AI. It's like: if we're going to have AI, we also have to add ethics to what we do, because otherwise people are going to doubt what we're doing. And there's a lot of ethics washing going on, because what you just described, we, the people building products, don't know what to do with it, right?
If somebody came to us and said, build human-rights compliant, or human-rights preserving: ah, there's a human rights charter. There's an actual list of human rights. I can evaluate what we're building against those. There's a process. But when it comes to ethics, because it's so contextual, it becomes really hard to do anything. And then how do you take that, build it into a product, and sell it to a customer, with you and the customer looking at each other and saying, well, you know, you're going to use this ethically, right?
And the customer's like, well, you've given me an ethical product, right? They're both looking at each other and nobody's answering. That's what I've been struggling with, and I think a lot of people here have been struggling with it recently. Absolutely.
Emrys, I'd love to come to you real quick, if that's okay, because you've been grappling with how to translate, certainly from a human rights perspective. Maybe you can speak a little more to some of the broader ethical questions that feed into the work on DPI. You know what I'm asking, go. I will. And I think Nishant's point is really important: how do we translate some of these often quite abstract concepts into lived experience?
How do the technologies that we use every day reinforce or contravene the things we hold most important: the values, the principles, the laws, and so on? Code is clearly a critical part of that.
But I think there's also an environment in which code lives, and that's the work I do, which is really around the governance layers, the governance dimensions, of a lot of these technologies. The work we've been doing supports the UN Development Programme, particularly their work around an emerging term in the development sector: digital public infrastructure. I can see some eyebrows going up in the audience, and there's obviously some resonance with what that means, as well as skepticism about what it might imply.
But I think the core point is this. So much of what I've been hearing over the last day or so here has been about digital wallets, about the infrastructure that enables certain core functions and services: transactions we want to make, ways we need to prove who we are. And there's a big question about whose interests, whose values, that infrastructure is actually serving. The idea of digital public infrastructure is that it really delivers value to the public, to the public interest.
Now, why is that important? And what's the connection between that and governance and ethics and human rights?
Now, I believe very strongly that tech is never good, it's never bad, but it's never neutral either. There are always, as Francesca was saying, hidden values, biases, interests at play, realized through technology. I can see nods in the audience. And I think that principle, Kranzberg's first law, that technology is neither good nor bad, nor is it neutral, is a critical rationale for why governance is so important.
Regardless of the technology, we need to make sure that the laws, the regulations, the policies, the management of those technologies ensure that the interests and values we think are important are maintained. The work we've been doing with UNDP is part of a broader effort to introduce safeguards into the building out of this digital infrastructure. A particular part of realizing what those guardrails look like is tools such as the model governance framework for legal identity, which we helped UNDP develop.
A critical part of that is recognizing that in many different contexts, a variety of different elements govern things like digital identification. The framework, which you can find online at governanceforid.org, contains a very holistic set of elements that UNDP, that the UN, stands behind as a way of upholding and realizing human rights in that governance layer.
It includes things like laws, policies, and institutional capacity, but critically also things we might not normally think about: justice and equity, inclusion, and, critically, participation and accountability. What are the mechanisms for recourse against these systems? Henk's example of the AI-driven welfare platform was only stopped because of recourse to human rights law.
So it was that governance layer that was able to establish that this system, arguably an ID system, was impinging on people's rights and causing harm, and it was withdrawn because of an effective governance layer. That's what we've been doing with UNDP: thinking about the different elements that need to be in place so that a governance layer can ensure that the digital infrastructure on which our lives are increasingly running actually maintains principles of safety, inclusion, and human rights. Thank you.
I wonder if either Henk or you, Emrys, want to expand on the case of South Africa that recently went to the Supreme Court. I think that's a wonderful example of the governance layer. And do we think the amount of time it took to get that result indicates an effective or an ineffective governance layer? So the story in South Africa, as many of you I'm sure know, is that one of the civil registries had, I believe, two million entries, people's records, deactivated, essentially. People's ID cards were no longer valid.
They weren't able to access certain services, they weren't able to prove who they were, and critically, this was just before an election, so they weren't able to vote. And critically, these were individuals who were regular cross-border migrants, a very defined demographic. All the talk about ID systems as tools for surveillance, tools for targeting: horribly realized.
But what happened was that people were able to turn to the Supreme Court in South Africa and argue that this was against the constitution, that it broke the laws of the land, and those identification entries were restored. In many cases, also in India and in Kenya, where there have been huge debates around ID systems, it is turning to legal frameworks that has been the most effective way of countering perceived injustices. And I don't think the problem is necessarily in the code; I think it's in the way those systems are being misused.
But one of the things I'd be interested to hear others on the panel talk to, particularly Nishant, is the ways in which it's possible to specify what kind of systems we want to see in place, and particularly the role of procurement in specifying the standards, the components, the APIs, the different elements that need to be in place to ensure a system actually does what the procuring party wants it to do.
Because there's a degree to which you can have a governance layer that manages a system or set of systems, but how do you ensure that the systems you get in the first place are the ones you want, doing the things you want? And I know UNDP's work around procurement is incredibly complex, because procurement agents aren't always the most technically savvy people. I think I can say that without upsetting too many people. Procurement is the best, for every product vendor out there, right? We love our procurement guys.
Let me try to think of the last RFP I read that had a section on human rights or ethics. I can't remember one. Crickets. Never there. It goes back to the fact that these are not considered systems that impact human rights, and I think that's one of the challenges we have: explaining, as you're saying, and ultimately recognizing, that the systems being built or put in place using our products affect somebody's possibilities, affect somebody's future.
And those are not codified anywhere; there are no guidelines for anybody building a system, unless they're doing something highly visible, like a national ID system, in which case a whole different set of issues comes up. So what I find myself often being is in the uncomfortable position of telling the customer: you're not thinking about the things you should be thinking about, and you're doing it wrong. No boss wants to hear their CTO telling a customer they're wrong. But unfortunately, that is the position we get put in.
It's interesting how some of those things then play out. There's a lot of technology-first, engineering-first thinking, and with it come very narrow views, limited because they're driven by very narrow experiences. As was said earlier, when you don't recognize context, you end up with systems that are very biased, not because there was intent, but because of ignorance in how you're building things.
And over time, I've unfortunately been in identity long enough to have seen the evolution: you used to be able to look at an identity system and say, explain to me why something happened, and you could point to it exactly: there's the ACL, there's the group, there's the user. And now you ask, why did this happen?
Well, I have no idea. Nobody can answer why. And so, to that point, this is where governance comes in. Last week I was talking about this with Michelle Dennedy on a panel at Identiverse, and she gave me a really interesting insight.
She said: when things like that happen, look at what happened with privacy. You have to shift away from understanding how and understanding why, because you're not going to be able to understand why anymore. You have to shift to an outcomes-oriented approach, where you're measuring outcomes, looking at outcomes, and using them to figure out whether the systems are defined correctly. And outcomes come from governance. If I could just continue on that: I work for the United Nations High Commissioner for Refugees, and I think we have that very, very complex situation.
People turn up without identities, and you have to create, or recreate, their identities. We have a process with every project, unfortunately, because that is how the mandate goes; we call it, simply or crudely, the data protection impact assessment. Every time a refugee, or a new class of refugees, has to be registered, we have to go through a DPIA. And one of my recommendations, which some people have started agreeing with, is that we also involve the vendors in the data protection impact assessment, so they start understanding the issues going into it.
We can give you lovely examples; you do read the headlines in the newspapers. For example, there was that group of 28,000 refugees stuck in Lampedusa, and it took two months for the data protection impact assessment to be completed. Those people had to be put up in temporary camps while it was decided: can I capture his name, can I capture this attribute, what constitutes privacy, and what can acceptably be done? Thank you.
And that's interesting, because on the flip side, a proper registration takes time, but what digital technology enables is that you set the policy and the execution is immediate. I've been studying some population registries and how they are built up. They had to register male or female; now people can change their registered sex. Paper population registers couldn't do that, so they need to adjust, and then the law comes into the case and changes.
But if you do it digitally, there's almost no delay. So the outcomes-based approach is interesting, because the other factor that ties into it is liability. Is it responsibility or liability? And on liability: a long time ago in ancient Babylon you had King Hammurabi, and he had a great set of laws. One of them was that if a house caved in, the builder was responsible. And now we see a lot of vendors whose liability basically ends when they sign the contract and it ticks all the boxes. But indeed, that outcome question is one of the challenges.
How fast can you see the outcome, and how fast can you adjust it? And in that ongoing process, keep an open eye for the values discussion. Determining whether a process is right or wrong can be done based on human rights (I don't know, article four or five, on autonomy or privacy), but the values discussion also plays a role, because we do have the Universal Declaration of Human Rights, yet its execution in various geographies can be very different. So you still need to engage in that conversation. Thank you.
I would love to turn back to Francesca now, because you've done some real practical work on the Impulse project in Italy. Talk to us about your lessons learned: how did that project help you understand how the ethical landscape needs to be navigated? Thank you.
Well, Impulse was a Horizon 2020 funded project, so there was a consortium of many countries, and Italy was just one of them. Impulse stands for Identity Management in Public Services, so it was a project precisely about digital identity. It was quite interesting because it wasn't focused on developing a production wallet, even though we ended up with a prototype; it was more an experiment about introducing a digital identity wallet in different countries, in different social contexts.
In the end we also conducted many workshops and roundtables (in one of those we had Henk), and it was really insightful, because it emerged that the differences between social contexts really count when you try to introduce a digital identity into a society. The existing social structures, whether people are already used to a certain technology, and what experiences they have had with it, make a huge difference.
So even though you try to have a uniform ethical framework, very similar to the one at the heart of the EUDI wallet, the one stated by eIDAS, which was also at the heart of the Impulse project, you realize that if you don't really delve into the differences typical of each context, you end up with basically nothing, because your identity wallet and identity management system will not be accepted. So it was really useful, and we learned a lot doing it.
So it's the classic case of needing to know and understand your audience and the value systems they uphold. Yes, absolutely. For example, this morning there was a talk about the German identity wallet, and it was really fascinating, because they are putting into practice (not because they had contact with us) some of the insights that also emerged from the Impulse project. For example, they are running workshops with stakeholders and trying to embed their insights into the wallet design.
It's like seeing what we saw in our small experiment put into practice. So yes, it's not only the broad ethical values but also the values typical of a certain society or a certain context that are fundamental here. So this definitely doesn't solve your problem of how ethical outcomes get baked into contracts or governance or anything like that. But does that lead you to any thoughts, Nishant, about how companies can design towards a value system? Or is it even relevant?
First of all, no smart contracts. Okay. So I think the challenge is that there is a massive gap between what we do and what we need to be doing, and it's because we've been such a technology-driven industry, right? One of the reasons I love being in identity is that we are not like security, which can be boiled down to math.
Identity isn't math, and it's not just data, as much as Steve Wilson would like us to believe it is, because identity brings in considerations beyond the data: the processes and systems you're building and how they're being used. And it's really hard to build products and systems generically for ethical situations that are contextual. Ethics often end up being contradictory.
One of my favorite examples (this is not an example I'm going to give in my talk, so I can use it here) is MOSIP. It's a really interesting idea, a really good concept, and the fact that it's open is also really good. One of the core values in its architecture is that there should be no vendor lock-in.
Now, everybody would agree that vendor lock-in isn't a good idea. As a vendor, mind you, I'd like it; we like to be sticky. We don't call it lock-in, we call it sticky. But vendor lock-in is a bad idea. MOSIP, though, is about deploying national identity systems. How do you avoid vendor lock-in in that context, when the data is so sensitive and has very proprietary needs? One of the requirements MOSIP puts on biometric vendors is that you must store the raw data, the photograph, instead of just the template. Why?
Because if we switch vendors, the raw photograph is there so that we can generate new templates; templates are not interchangeable. Anybody remember the OPM hack? You're forcing somebody building a deployment at a national ID level to keep raw images around, because you're sacrificing one value for a different value. And that's the moral quandary, the ethical quandary, you end up in.
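To make the tradeoff concrete, here's a minimal sketch of why proprietary templates force that choice. The vendor SDKs, encodings, and matcher interfaces below are invented for illustration, not MOSIP's actual API.

```python
from dataclasses import dataclass

@dataclass
class Template:
    vendor: str   # a template is only meaningful to the vendor that produced it
    data: bytes

class VendorA:
    """Hypothetical biometric SDK A with a proprietary feature encoding."""
    @staticmethod
    def extract(raw_image: bytes) -> Template:
        return Template(vendor="A", data=raw_image[::2])  # stand-in for real feature extraction

    @staticmethod
    def match(probe: Template, reference: Template) -> bool:
        if reference.vendor != "A":
            raise ValueError("Vendor A cannot read another vendor's template")  # the lock-in
        return probe.data == reference.data

class VendorB:
    """Hypothetical biometric SDK B with its own incompatible encoding."""
    @staticmethod
    def extract(raw_image: bytes) -> Template:
        return Template(vendor="B", data=raw_image[1::2])

# Enrollment under Vendor A.
raw = b"raw biometric capture"   # the sensitive raw image MOSIP requires you to keep
reference_a = VendorA.extract(raw)
probe = VendorA.extract(raw)
print(VendorA.match(probe, reference_a))   # True within one vendor's ecosystem
# VendorA.match(probe, VendorB.extract(raw)) would raise: templates don't interoperate.

# If the country later switches to Vendor B, A's templates are useless to B,
# but because the raw image was retained, every record can be re-enrolled:
reference_b = VendorB.extract(raw)

# The price of that portability is a database of raw biometric images,
# exactly the kind of honeypot the OPM breach demonstrated.
```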
And remember, MOSIP is catering to organizations and governments that just can't handle building a system like Aadhaar. India could throw resources and money at the problem because it had that scale; the majority of the world cannot. So they're looking for a solution like this, but they're being forced to take on this kind of burden, and that's a challenge. These are the kinds of issues we run into.
Those are such great examples, Nishant. To me, one of the things I've taken away from the conversation, particularly around MOSIP and the developing conversation around standards within the specifications for particular systems, is that one of MOSIP's great achievements has been to promote interoperability as a value, as something the wider industry should accept, which I think is generally a good thing.
Although I do think things like interoperability then introduce their own risks to certain values like privacy, which need further engagement and support to protect.
But one of the things I've been really struck by in the conversation, particularly around MOSIP, is the demands it puts on procurement: to understand what they're actually asking for, to know what outcomes they're looking to achieve, technical and pragmatic as well as values-based, and to be able to translate those values into detailed specifications that go into an RFP, make sense, and can deliver the values, ideally de-conflicted, that they're looking to see.
And one of the things I've been struck by here (I overheard this, and I don't know the detail well enough to know how deep it goes) is that the digital wallet the EU is developing has, as one of the great things people talk about, this idea of selective disclosure: the individual should have greater control over how much of their data is revealed to a relying party. But the protocol that enables that selective disclosure actually privileges the relying party rather than the individual in specifying what information is disclosed.
So we talk about these technological systems, these concepts of a digital wallet; we attach certain values to them, individual agency and control; and when you drill down into some of the detail of the technology, those values may not actually be realized in the specification within the system itself. And I think that's a real demand on procurement particularly, and on policy and policymakers, to work together and develop both the capacity and the vision to ensure that what they're asking for will deliver the outcomes they're looking to see.
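A toy request/response exchange makes that point visible. This is an illustrative sketch only, not the actual OpenID4VP or SD-JWT wire format the EU wallet builds on; notice who authors the list of disclosed claims.

```python
# Toy selective disclosure. The wallet holds a credential; the relying
# party writes the request. The holder can refuse the whole request, but
# the *shape* of the disclosure is authored by the verifier.

credential = {
    "given_name": "Ada",
    "family_name": "Lovelace",
    "birth_date": "1815-12-10",
    "nationality": "GB",
}

presentation_request = {"requested_claims": ["given_name", "birth_date", "nationality"]}

def build_presentation(request: dict, credential: dict) -> dict:
    """Disclose only the requested claims; everything else stays hidden."""
    return {k: credential[k] for k in request["requested_claims"] if k in credential}

print(build_presentation(presentation_request, credential))
# {'given_name': 'Ada', 'birth_date': '1815-12-10', 'nationality': 'GB'}
```

The holder's control here is a veto: take the request or leave it. Whether a wallet lets the user counter-offer with fewer claims is a design and governance choice, exactly the kind of detail a procuring party would need to specify.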
And just to add to that: we have these really super smart techie people who come up with the product, but on the procurement side there needs to be an equally smart process or governance person who understands that this digital wallet will enable uses for the citizen, and who asks: why can I only respond to a request for data? Why can't I push some data proactively to a provider? And if I get a full request, can I answer only half, or is it a one-package deal? How does that work?
That's when we get these value conversations going as well, to see how it will work in society. And one thing that may help, and it's good for solution providers to be aware of this, is to flip the framing. We hand out a wallet, and unfortunately that will not save the world. What it will do is open, say, a sixth channel to a service that people need. So flip the framing: it's not about a wallet, it's about a person who wants something, and they can do it on paper, they can visit the office, they can write an email, and now they can also use a wallet.
So how are we going to orchestrate that service delivery, now that we've got a really cool, fancy new tool, the wallet, a digital channel? That framing also helps the thinking on fairness: if this tool is super easy and the others are super difficult, is that fair? Do we want to push that onto people? There are many examples, but I won't go on; we're out of time.
No, you must not, you must stop. So I think there are still lots of unanswered questions here about how we take the great ethical research and thinking that's out there, the human rights instruments, and the wonderful work of the UNDP in providing a legal identity governance framework, and bridge all of it into product design, into contract negotiations, into RFP processes. How do we do this? Lots of unanswered questions, and I look forward to working with all of you on the projects that will fix it in future.