Thank you, everyone, for joining me. We're going to be talking about a topic that is very personal to me, because it derives from a lot of the work I've been doing as CTO of Unan, delivering the concept of verified identities in different contexts, where we essentially help organizations build secure systems and enable verified identities.
And in doing that for a variety of organizations, ranging from regulated industries like banking to governments to organizations dealing with financial inclusion, what I've been able to find is some really interesting issues. One key point: there will be no talk of wallets, passkeys, or capital-T Trust in this session at all.
As I said, a lot of this is opinion derived from my experiences, and I'm not going to provide any answers. Just like Mike, I'm here to ask you to think, recognize some of the signs, and see what you can do about it.
I love this quote by Maya Angelou: the needs of a society determine its ethics. And the role that digital identity plays in society, as both a security perimeter and a business enabler, puts it at the center of a lot of interesting contradictions, especially as it pertains to ethical considerations. So I wanted to give a few examples.
We've seen that digitization and digital identity have helped fuel tremendous growth and transformed society in many different ways, especially when it comes to bringing people who had previously been excluded from the system into the market and giving them access and opportunity. As those schemes succeeded, they became increasingly attractive for fraud. And combating that fraud drove an increasing need for identity proofing, and digital identity required more identity documents.
The consequence was that those same people were once again being excluded from the system, because they couldn't fulfill the new requirements that the system brought.
So digital identity empowered, and then took away, right? The creation of national digital identity is an increasingly popular initiative around the world because it's aimed at guaranteeing access, creating economic opportunities for people, giving governments a way to stimulate growth in their economies, and many other things.
But what we find is that in a lot of those same areas, the sorry state of identity proofing and foundational documents, which is something Heather talked about yesterday, actually creates a serious consequence: oftentimes these governments are being forced to deploy immature, untested technologies on the general population, and are often having to open up and connect their internal systems to third-party solutions, creating privacy concerns, security concerns, and very real concerns about accuracy and authenticity.
But this isn't just a problem in developing nations. Let's look at what it's like to open an account for a service, and let's look at what happens in the EU. Under the GDPR, you have these two really interesting principles that are mandated. Data minimization means that you should only collect the information that is required to provide the service, and nothing more. And purpose limitation means that the data you're collecting should be used for that purpose only; the purpose must be disclosed and you must get explicit consent from individuals, obviously.
Great. GDPR also creates this obligation, which is that anybody can ask the service, what information do you have about me? And they can ask them not just to disclose it, but sometimes to correct it or even revoke it. Now those two create an interesting contradiction, which is: how do you do that in a proper and secure manner?
How do you know that the person asking for that information, or asking for that correction, is genuinely the person the data pertains to? How do you verify that identity?
So what this forces services to do, as Andy Ndra pointed out in the Body of Knowledge article, is collect more data than they actually want to, because without that additional data they can't verify the identity of the person making the request. So you end up in this vicious cycle. Identity is foundational, but digital identity is not. Digital identity is actually a facility built on top of that foundation. And as such, the foundation it is building on has faults and weakened parts.
And what digital identity does is often amplify and exacerbate the issues that exist in that foundation.
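To make that verification paradox concrete, here is a minimal, purely illustrative sketch. The function and field names are hypothetical, not any real service's API; it just shows how honoring a subject access request can force a data-minimizing service to collect and retain more data than its original purpose ever required.

```python
# Purely illustrative (hypothetical names): how a data-minimizing service
# ends up collecting *more* data just to verify a GDPR subject access
# request, feeding the vicious cycle described above.

DATA_ON_FILE = {"email", "display_name"}          # all the service ever needed
STRONG_EVIDENCE = {"government_id", "selfie", "proof_of_address"}

def handle_access_request(evidence_presented: set) -> str:
    """Decide what the service must do with a 'what do you hold about me?' request."""
    if DATA_ON_FILE & STRONG_EVIDENCE:
        # If we already held strong evidence, we could verify against it...
        return "verify requester against data on file, then disclose"
    if evidence_presented & STRONG_EVIDENCE:
        # ...but we don't, so we have to collect and retain NEW identity
        # evidence, holding more data than the original purpose required.
        return "collect and store new identity evidence, then disclose"
    return "reject: cannot prove the requester is the data subject"

print(handle_access_request({"email"}))                     # rejected
print(handle_access_request({"government_id", "selfie"}))   # more data collected
```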
And that amplification creates some really interesting problems that force you to look at how to handle them. One of the solutions people have gravitated towards over time, including myself, is this concept of human-centric design, where you try to address the problem by focusing on the humans that are involved in the cycle. Which in theory sounds like a good idea, right?
But what you find is that digital ID systems, oftentimes even when taking human-centric design into consideration, end up in this really weird Gordian knot, where the problem they're trying to solve forces you to face contradictions between the ethical side and the human side of the systems you're trying to build. So over time, what we've done and seen is a few interesting patterns that have emerged, and I want to bring some of those to you, and hopefully you can see some of this in your own work.
One is this notion that when you try to provide human-centered design, what you're trying to do is be inclusive. You're trying to make sure you can accommodate diverse user needs. Now, what that often runs into is budget limitations and the technical constraints of the stack you're building on, and that by definition constrains the design itself. And what you often find is that in order to provide access to people, you make compromises on the design which end up excluding people from the system itself.
One of the benefits of digital identity is how it really improves efficiency, going back to that notion of being a business enabler, because it increases scale, access, and reach. This is easily seen in how digital identity has been the linchpin in transforming in-person, assisted workflows into remote self-service experiences, right? Especially during COVID. But that efficiency often makes things more challenging for those who have disabilities or other disadvantages.
One of the interesting twists we see, when people actually try to take that into account through human-centric design, is that in order to maintain that efficiency, they will often reuse those same technologies as part of the assisted workflows, bringing them back into those remediation cycles.
And an interesting phenomenon that shows up there is this thing called automation bias, where the people operating those systems will often override their own judgment calls and go with what the machine is suggesting, because they believe the output from a machine is neutral and free from bias. And because this is oftentimes the fallback option, the people who get excluded because of it often end up having no other means of remediation to solve their problem.
Human-centered design also puts an emphasis on obtaining informed consent. Now, Eve already killed informed consent, so that's a whole different issue. Maybe I should skip the slide.
However, one of the things informed consent requires is that you expose and disclose the data that is being collected. But with digital identity, there is a whole lot of data that's going to be collected for very legitimate purposes that is really hard to explain to individuals. Think about behavioral biometrics. How do you explain the data collected in order to enable a behavioral-biometrics-based system? It's really complicated, and it's really hard to explain to an end user who's a layman. So that creates some really interesting challenges as well.
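To see why this is so hard to disclose meaningfully, here's a small, purely illustrative sketch of the kind of low-level signals a behavioral-biometrics system might capture. The field names are invented for the example, not any vendor's actual schema.

```python
# Illustrative only: the kind of raw signals behavioral biometrics relies on.
# Explaining these to a layperson in a consent dialog is genuinely hard.
import time
from dataclasses import dataclass, field

@dataclass
class BehavioralSample:
    key_dwell_ms: list = field(default_factory=list)    # how long each key is held down
    key_flight_ms: list = field(default_factory=list)   # gaps between successive keystrokes
    pointer_path: list = field(default_factory=list)    # mouse/touch trajectory points (x, y)
    device_tilt: list = field(default_factory=list)     # accelerometer readings (x, y, z)

def on_key_event(sample: BehavioralSample, pressed_at: float, released_at: float) -> None:
    """Record timing for one keystroke; a real system streams thousands of these."""
    sample.key_dwell_ms.append((released_at - pressed_at) * 1000)

sample = BehavioralSample()
on_key_event(sample, pressed_at=time.time(), released_at=time.time() + 0.08)
print(sample.key_dwell_ms)
```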
Usability is obviously a key requirement in human-centric design, and creating user-friendly and personalized journeys often involves collecting vast amounts of personal data. Doing that obviously conflicts, oftentimes, with privacy and autonomy. And when you think about, for example, this brave new passwordless world we're going through, it actually ends up running into very interesting privacy regulations.
So for example, with passwordless designs, we've gone from, as Ian puts it, the two-box problem, which is username and password, to a one-box problem: enter your username, and then I will guide you through the process. I will tell you what your authentication method is.
Well, in many security regulations around the world, disclosing the next step is actually a privacy breach, because it's data leakage. So how do you create a user-friendly passwordless journey if you're not allowed to tell the user what they're supposed to do next? These are real considerations that show up when you're dealing with security regulations in different countries.
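Here's a minimal sketch of what the non-leaking side of that trade-off can look like: a hypothetical "one box" login start step that returns the same shape of response whether or not the account exists, so nothing about the user or their authentication method leaks, at the cost of a less guided experience. Endpoint and field names are assumptions for illustration.

```python
# Minimal sketch (hypothetical names): an enumeration-safe passwordless start
# step. The server deliberately does NOT look up or reveal whether the account
# exists or which authenticator the user has registered.
import hashlib
import hmac
import os
import secrets

SERVER_SECRET = os.urandom(32)

def start_login(username: str) -> dict:
    """Always return the same response shape, regardless of account existence."""
    session_id = secrets.token_urlsafe(16)  # opaque, per-request handle
    # Deterministic but meaningless-to-the-client tag so the server can
    # correlate the follow-up step without disclosing anything here.
    tag = hmac.new(SERVER_SECRET, username.lower().encode(), hashlib.sha256).hexdigest()[:16]
    return {"session": session_id, "state": tag, "next": "follow the prompt on your device"}

# Both responses are indistinguishable in shape to the client:
print(start_login("alice@example.com"))
print(start_login("nobody@example.com"))
```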
Obviously in human-centered design, autonomy is a key requirement. People want autonomy, and that is the key goal, right? Empowering users and giving them the ability to make choices.
But giving autonomy to users oftentimes compromises their security, because the average person is not informed enough to understand their risks, to understand their threat model. So giving them autonomy oftentimes puts them at risk. Think about the number of people you have heard of who have been locked out of their crypto wallets because they couldn't secure their keys. I think there was a story recently about a crypto wallet that was hacked, after many years, to recover 3 million in crypto, which is a whole different problem, because it was a very interesting mathematical flaw that came into play there.
But that's the kind of situation you're dealing with, literally locking people out. And there are obviously solutions to that, but those solutions oftentimes end up taking away autonomy.
Again, think passkeys. Hey, I know, I said I was not going to talk about passkeys. I'm bringing up passkeys too much. But passkeys actually try to solve this problem by creating a sync fabric. And that sync fabric by definition takes a little bit of autonomy away from the individual, because they're now tied to that sync fabric. They're tied to that platform, oftentimes the OS platform, and switching costs become really high. And when switching costs become high, you're removing autonomy from individuals, right?
So these are just a few patterns we've seen play out over and over again in these places. It's really important to understand the power of the space that we play in. Digital identity can be a wonderful tool for inclusion and a ruthless weapon for exclusion. So if you're working in digital identity, you are by definition working in the field of ethics.
You have to deal with the ethical considerations of what you're building. You're in the mud, and it's your responsibility to keep moving forward while keeping the crowd working and somehow not too much the worse for wear.
But as digital identity practitioners, do we really have the tools to be able to deal with this? When you're engaging with scenarios that have ethical challenges, what you really need as a designer is an applied ethics tool. Applied ethics is the application of ethical theory in practice for the purpose of solving very thorny ethical questions. It actually gives you a tool for making a decision and moving forward. But what you find is that when organizations are trying to deal with this and solve this, there's nothing out there.
It's really hard to find an applied ethics tool that applies to digital identity specifically. There are generic tools, but those generic tools end up being way too abstract. And if you really go looking for an applied ethics tool for identity, you often find things that are very narrowly focused on privacy or data, but not general-purpose for identity. And that's because ethics tools tend to be highly contextual.
One of the tools that I have found, which isn't necessarily a direct tool, but is a very useful starting point, is this notion of value sensitive design.
What value sensitive design does is provide theory and method to proactively consider human values in a principled and systematic manner. Going through the details of VSD is going to be a little bit too much for the short amount of time we have; at the end, I have a page with a bunch of links that you can use. But what I believe is that any ethical framework for digital identity has to integrate value sensitive design into its general design principles. This is essential.
And one of the things that allows designers to do is create a framework that helps guide them through the process of going from the abstract, which is principles, through translation of those principles into values, then establishment of guidelines and guardrails that maximize those values.
And then that finally leads to the definition of requirements, which at the end of the day is everything that we're trying to do. As a designer, you don't want to be dealing in the world of the abstract. You want concrete requirements.
That tool essentially allows you to traverse the map from the abstract down to the real, incorporated into the design process. VSD can help surface relevant ethical concerns, because it helps identify where you're going to run into ethical challenges, which otherwise you wouldn't; most of us are not actually trained in how to do that.
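As a rough illustration of that traversal, here is a tiny sketch that records the principle-to-value-to-guideline-to-requirement chain for a hypothetical remote identity-proofing feature. All the entries are invented for the example; the point is only the shape of the mapping, not the specific content.

```python
# Illustrative sketch of a VSD-style traversal: abstract principle -> human
# value -> guideline/guardrail -> concrete, testable requirement.
from dataclasses import dataclass

@dataclass
class Requirement:
    principle: str    # the abstract starting point
    value: str        # the human value it translates into
    guideline: str    # the guardrail that maximizes that value
    requirement: str  # the concrete design requirement

BACKLOG = [
    Requirement(
        principle="Inclusion",
        value="Access for users without smartphones or foundational documents",
        guideline="Every automated step must have a non-digital fallback",
        requirement="Support assisted enrollment at a physical location",
    ),
    Requirement(
        principle="Autonomy",
        value="Freedom to leave the platform",
        guideline="Keep switching costs low",
        requirement="Provide credential export in a documented, open format",
    ),
]

for r in BACKLOG:
    print(f"{r.principle} -> {r.value} -> {r.requirement}")
```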
Now, one thing to note is that even with VSD, or maybe sometimes because VSD principles are being applied, you will still find conflicts without really easy answers. So what you really need, and what VSD provides, is some guidelines around how to resolve those conflicts. For example, using a direct trade-off when the values that are in conflict are commensurable, or using innovation to eliminate the conflict altogether, because the VSD process can show you where the conflict is and even suggest what innovation you might bring to the table to solve it.
And also satisficing conflicting moral obligations, which essentially means making compromises, right? Where you can actually define thresholds for how much compromise you're able to make on a certain value.
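One way to picture that satisficing idea, purely as a sketch with invented values and scores, is to give each value a minimum acceptable floor: a design may trade values off against each other, but never push any value below its floor.

```python
# Illustrative sketch of threshold-based satisficing: every value has a floor,
# and a proposed design is only acceptable if no value drops below it.
# The value names and numbers are invented for the example.
VALUE_FLOORS = {"privacy": 0.6, "usability": 0.5, "inclusion": 0.7}

def is_acceptable(design_scores: dict) -> bool:
    """A design may compromise on a value, but never below its agreed floor."""
    return all(design_scores.get(value, 0.0) >= floor
               for value, floor in VALUE_FLOORS.items())

print(is_acceptable({"privacy": 0.9, "usability": 0.8, "inclusion": 0.65}))  # False: inclusion below floor
print(is_acceptable({"privacy": 0.7, "usability": 0.6, "inclusion": 0.75}))  # True: all floors met
```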
Technological and societal values change over time; they are never set in stone. Any overly rigid application of ethics will often end up blinding you to emerging ethical concerns. We see this play out over and over again as technology evolves. Gen AI is certainly bringing this to the fore in terms of how technologies are being used.
So what VSD promotes is this notion of ongoing investigation, so that you're continuously challenging and evolving your solutions over time, instead of assuming beforehand that you understand what the risks are. Because, as time immemorial has taught us, we're not really very good at figuring out risks and issues ahead of time. So you really need that iterative process, and it has to be done in conjunction with getting input from your stakeholders. Now, that's all obvious to a certain extent.
But one key point here is that when you're talking about stakeholder inputs, ethical considerations very often are not one of the inputs, especially when we are talking about things like national IDs, et cetera.
When you talk about requirements, you will not get ethical requirements. You might get regulatory requirements, you will get privacy requirements, but not necessarily ethical requirements.
And so it's really important to be aware that your stakeholders and your customer aren't always giving you a complete set of requirements, and sometimes part of the process has to be going back to the customer, challenging them on some of those requirements, and having that conversation with them. This happens very often for me when I'm dealing with regulators. So what comes out of this is talking to stakeholders, measuring impact, continuous refinement.
This sounds a lot like user experience research. And since having a sort of applied ethics board as part of a company is pretty much never going to happen, because nobody has the budget, it's pretty important that we bring the concept of ethical design and VSD into our existing systems.
So what that means is building it into UX research and building it into product management. If you think of product requirement documents, anybody who has a good one knows there's almost always a section for security concerns and considerations.
So consider adding a section called Ethical Concerns and Considerations that is driven by this notion of VSD. This is important enough work that I believe we as a community have to come together to work on it. We need to push to bring in the best practices and the knowledge we have gained through our experiences. A lot of that happens at venues like this, but that's not enough. We really need to build in this notion of ethical AI.
I mean, not ethical AI. The ethical AI folks can't have all the fun. We have to have ethical identity built in. Women in Identity has been working on a Code of Conduct. It's a really important first step. There's a lot in there that we can start with and expand on, but I'm really looking at Heather to drive something under the IDPro umbrella, because if Heather puts her mind to it, we'll have the best systems built out by tomorrow.
So in closing, I want to say that being an identity practitioner means becoming skilled in the art of balancing concerns, whether it's security and user experience, or personalization and privacy. You really have to do that balancing act. But it starts with becoming aware of the challenges that exist. We live in a world of digital abundance, but as a very wise man once said, more identity, more problems. So take it upon yourself. I've compiled a page, a live page that I'm continuously adding resources to. Some of it is related to ethics and identity.
Some of it is more generic, like links to things like value sensitive design and more. So I encourage everybody to look at that and start contributing on this topic in whatever venue you can, whether it's talking at conferences or through IDPro. And with that, thank you very much.
Thank you. Thank you.