This is quite a high stage. It reminds me of being back at school, with the headmaster addressing the assembly. So if you're all sitting comfortably, and please stop messing around at the back there, we'll make a start. I'm going to talk to you today about protocol-independent data standards. Why is this important? We can look at interoperability of identities in two ways. One is the federation of identities within a single ecosystem.
Within a trust framework, within a country, say, if we're going to have multiple identities, a choice of identities for consumers, then they need to operate with consistent data formats. If a relying party, an organization receiving data, is going to receive data from lots of different identities, be they centralized or distributed identities (because the user has a choice), then the data needs to come from all of them in a consistent format.
If it doesn't, that organization is going to have to deal with the problem of getting slightly different data from every identity out there, and that will not be tolerable; digital identity will not be a success if that is the case. We've got to deliver data in a consistent way.
Moreover, if we're looking at interoperability across trust frameworks, having data laid out in a consistent format, with consistent naming, consistent typologies and consistent enumerations, will make global interoperability much smoother. We won't need somebody translating data when it arrives in one trust framework having been delivered from another. As soon as somebody is fiddling with data and translating it, we've got a risk, we've got a role in processing it, we've got a data protection problem that we don't want to have. If we can get consistency, we avoid all of that.
This is the reason we formed a working group to look at this at OIX: the Architecture and Interoperability Working Group. It's a standing working group, and it's currently looking at data standards. It's exploring two questions. First, can we define a common data schema for what we ended up terming core ID information, and could that data schema be recommended for use across many trust frameworks on a global basis? And second, can that work for both verifiable credentials and OIDC for Identity Assurance (IDA)? In other words, can we define something here that is essentially protocol independent?
Those are the two questions we're exploring. On the screen are some of the members of the working group who have been working on this with me at the Open Identity Exchange. The working group has not concluded, so I'm sharing our current thinking; we've probably got another two or three months, maybe more, to go before we come out with a paper with some recommendations. This is very much an interim update, and we are looking for continued input.
So if what you see today interests you and you have thoughts and opinions, do give me a shout and get involved. The other thing to note: if you saw the keynote that Gail and I did earlier on the main stage, data standards are part of what we're doing on GAIN. I've already touched on the global interoperability need for data standards, and we interleave this work with what we're doing on the GAIN project. Let's kick off with the common data schema side. We started off with this picture; it was quite simple and we stuck with it.
We have an envelope, which is the protocol: OIDC, SAML, DIDComm, verifiable credentials, or whatever hasn't been invented yet. All of these protocols securely deliver data. They deliver it in a way that's traceable and encrypted, and that's what those protocols are primarily about. They're not about the data content, the contents of the envelope. That's what we've been exploring in this working group. We've been looking at whether we can do consistent naming and consistent types.
And for the metadata that describes all of this, the evidence and equally the proofing and levels of assurance, can we put some standardization in place so that we get smoother interoperability? We ended up with the term core ID information for this.
We looked at a whole range of different attributes and decided that if we try to go too far with this, we'll never complete our work. So we divided it between core ID information and everything else, which is to do with eligibility. Within the scope of our trust framework we deal with eligibility, but for this piece of work we concentrated on the core ID information. We divided that up into claims, evidence, proofing and assurance.
The claims are pretty obvious: the core claims about an individual (name, address, contact details, nationalities, personal identifiers), a small amount of core data that is standard for people all around the globe. Then there's the evidence that goes behind that. When we're working out who a person is from an ID proofing point of view, we gather evidence of three types: ID documents, electronic records and vouches. Most of what we do around the world today is based on ID documents, or a direct vouch into the system, where someone is put into the system via direct interaction, face to face.
That may take the form of a government interaction. In many countries though, the US and the UK among them, electronic records are also used as part of the process. But how are they used? That's the proofing methods: the approaches used to leverage that data for validation (does the person exist?), verification (is this that person?), activity (is the person actually active in society?) and ID fraud checks. Those then roll up into assurance: we take all of that, the evidence and the proofing methods we've applied to it, and come up with some kind of level of assurance.
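To make that breakdown concrete, here is a rough sketch of how core ID information could be grouped. The structure and field names are purely illustrative, my own rather than the working group's schema:

```json
{
  "core_id_information": {
    "claims": ["name", "address", "contact_details", "nationalities", "personal_identifiers"],
    "evidence": ["id_document", "electronic_record", "vouch"],
    "proofing": ["validation", "verification", "activity", "id_fraud_checks"],
    "assurance": ["level_of_assurance", "assurance_process"]
  }
}
```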
The level of assurance is an arbitrary concept, a concept that arguably we at OIX have had a big hand in creating, in terms of a level of trust, a level of assurance in a person. And generally it's done in a fairly consistent way, in the sense that levels are used across different frameworks.
But that's about as far as the consistency goes: some frameworks have three levels, some have five, some use numbers, some use letters, some use words, and how they're derived is different. So we need to find a way of communicating that and tracing it back through this whole process; we need some standards around it. And when we look at what standards exist today, we have a pretty mixed bag.
This is now a game of two halves, to use a football analogy. On one side we're looking at the claims and the evidence, and there it's very much a mixed bag of standards. In some areas there are too many standards; name and address, for instance: go out and search on the internet and there are numerous standards for a name and an address. In some areas there are pretty much global standards, and in others there are no standards yet. So it's a difficult area, and it's one where we've got huge amounts of information behind this in our analysis.
So on that side of things we've got a mixed bag. On the other side, proofing and assurance, we haven't really got any standards, either for naming how we do this (other than some emerging standards in OIDC for IDA) or, underneath it, for how we actually do it. There isn't a standard for how you scan and proof a passport; there are oodles of companies doing that, but there isn't a standard yet. There isn't a standard for how I do liveness checks; there are emerging standards, but there isn't a global standard yet. There isn't a standard for biometric matching.
There are some standards around false positive rates and measures, but there isn't a global standard yet. So this whole area is a bit more greenfield, but if we're going to achieve interoperability, we need to approach it in a standardized way. Let me canter through very quickly where our thinking is at the moment. For name: there are standards for names, but what we think is needed is some kind of globally extensible name format standard. We are exploring that in the working group.
I'm not going to dig into that any further, but we're coming up with a JSON structure that enables you to describe names: how they break down, and how they then reassemble into different name formats in other countries. There's a theme here: lots of these things can have more than one value. I can have more than one name, my current name, previous names, a professional name, a stage name, and each has a duration for which I hold it.
When I change my name, the old one ceases and the new one starts. That's the theme: types and durations. Quite a lot of these attributes need types and durations, and on that basis it would be useful if we had global types for interoperability purposes, and if everybody adopted durations, from and to dates, communicated in a consistent way. So we make that recommendation for name; a rough sketch of what that could look like follows.
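As a purely illustrative sketch (my own field names, not the working group's draft schema), a set of names described with types and durations might look like this:

```json
{
  "names": [
    {
      "type": "current_legal",
      "given_name": "Jane",
      "family_name": "Smith",
      "valid_from": "2015-06-01",
      "valid_to": null
    },
    {
      "type": "previous_legal",
      "given_name": "Jane",
      "family_name": "Jones",
      "valid_from": "1980-03-12",
      "valid_to": "2015-06-01"
    }
  ]
}
```

The point is the pattern, a globally agreed type vocabulary plus from and to dates, not these particular field names.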
We make the same recommendation for most of these attributes: that the approach of typology, hopefully globally defined, plus durations is adopted. Address is a nightmare on one hand, but on the other hand addressing is all about quality, and there are lots of in-country quality solutions around addresses. So the recommendation we're landing on is: go with your in-country address format, the best quality you can come up with, and globally we need a way to translate that into other formats, or to communicate it as a one-line or a five-line address, something like the sketch below.
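A hedged illustration of that idea: an in-country structured address alongside simple one-line and five-line renderings. The field names are invented for the example, not a proposed standard:

```json
{
  "address": {
    "country": "GB",
    "local_format": {
      "building_number": "10",
      "street": "High Street",
      "town": "Exampletown",
      "postcode": "EX1 2AB"
    },
    "one_line": "10 High Street, Exampletown, Exampleshire, EX1 2AB, United Kingdom",
    "five_line": [
      "10 High Street",
      "Exampletown",
      "Exampleshire",
      "EX1 2AB",
      "United Kingdom"
    ]
  }
}
```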
That's our thinking there at the moment, and we're looking at ISO 19166 as the mechanism to do that translation. Date of birth: it's a date-time, so pick a format and use it. There are various formats; we need to pick one, and we'll make a recommendation. Nationality: there are ICAO and ISO standards, and we're recommending ICAO because it's the one used for international travel, and we're looking at global interoperability.
That's where our thinking is on that. But equally, you can have more than one nationality; we need to capture that, and we need to capture durations. Then personal identifiers (I think my slides are messed up a bit here). Our recommendation is that personal identifiers are set locally. Let's have a type-value pair: the type is a local type set by the framework, and the issuer of that type then defines the rules for it. So if my NINO (National Insurance number) in the UK is a type, the rules for a NINO, such as having two letters at the front, are set by the issuer of that type; a sketch of what that might look like follows.
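As a rough, illustrative sketch only (the type name, issuer value and structure here are hypothetical, not the working group's proposal), a locally set personal identifier expressed as a type-value pair might look like this:

```json
{
  "personal_identifiers": [
    {
      "type": "uk_nino",
      "value": "QQ123456C",
      "issuer": "https://www.gov.uk/",
      "valid_from": "1996-04-06",
      "valid_to": null
    }
  ]
}
```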
And we need an ecosystem that lets us expose that type, navigate it and allow validation. That ecosystem, how the type-value pair and the navigation work, should operate on a global basis, so we can trace these identifiers and resolve them, because they are vital national identifiers.
Moving on to evidence. ID documents: driving licenses have the mDL standard, passports the ICAO standard, national ID cards local standards. Done; we work with those standards. There are some gaps in mDL at the moment in terms of how you proof someone, but the data is defined. So this is a good area; solid standards are there already.
When we get into electronic records and vouches, though, there are no standards. So how do we describe an electronic record and a vouch? Our recommendation is that we go with a defined set of fields to describe them. These are being mapped into OIDC for IDA, and I'll come on to how we think that's actually just a translatable claims-and-evidence metadata description format that can work in any protocol; a hedged sketch of such an evidence description follows.
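As an indicative sketch only, loosely shaped like the evidence entries in OIDC for IDA but with illustrative rather than recommended field names, an electronic record and a vouch might be described like this:

```json
{
  "evidence": [
    {
      "type": "electronic_record",
      "record_type": "bank_account",
      "source": "Example Bank",
      "created_at": "2019-04-01",
      "checked_at": "2022-05-10"
    },
    {
      "type": "vouch",
      "vouch_type": "face_to_face_government_interaction",
      "voucher": "Example Government Agency",
      "date": "2022-05-10"
    }
  ]
}
```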
Then proofing: the proofing types. We've got some debates still going on around this. The golden type of proofing is direct issuance, the SSI model, where the issuer gives you a credential because they know who you are and stands behind it. Then there are the various things that are done to validate real-world credentials: scanning, selfie-taking, cross-checking with biometrics, hitting electronic records for footprints. We want to categorize all of these, try to do it in a standard way, get existing frameworks to categorize what they've already done against that standard, and get new frameworks to pick the standard up, so we can interoperate.
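Purely as an illustration of what a standardized categorization of proofing checks could look like (these category names are invented for the example, not an agreed taxonomy):

```json
{
  "proofing": [
    { "category": "direct_issuance", "performed_by": "issuer" },
    { "category": "document_scan", "document": "passport", "performed_at": "2022-05-09" },
    { "category": "biometric_match_with_liveness", "performed_at": "2022-05-09" },
    { "category": "electronic_record_footprint_check", "performed_at": "2022-05-09" }
  ]
}
```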
If you come and see what I'm talking about this afternoon: we think this may be the currency of identity that we need to trade in, and that will only work if it's done in a consistent way. So that's our recommendation there. On the assurance processes: the framework of assurance levels is there and it's well adopted. We need something that enables it to be described and traced back to the proofing behind it and the methods that have been used. We've got a proposal to do that, covering level, policy, procedure, elements and scores, that links back to the evidence; that's already in OIDC for IDA.
We recommend that this is used across the board, in all protocols. We need to test that it works in more frameworks, and we will do that as part of our work on GAIN. Then, moving on to working across protocols: all of this defines the standard for that core ID information, the claims, the evidence, the proofing that goes along with the evidence, and the assurance that derives from it. Does that work across all protocols? Does this content work in multiple envelopes?
The idea would be that the content we've got in OIDC would be the same as in a verifiable credential, so I could literally lift it from one to the other. Now, I'm probably being massively over-simplistic in what I'm about to show you. Please feel free to shoot it down, and I will defend my simplicity: at the very least, the data naming needs to stay the same, even if it's shuffled around a bit. This is from the OIDC for Identity Assurance implementer's draft 3: an example of part of an identity token.
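As a simplified, illustrative sketch of the kind of ID token payload being described (the shape follows OIDC for IDA's verified_claims structure, but the values and the exact evidence fields are illustrative rather than copied from the spec):

```json
{
  "iss": "https://idp.example.com",
  "sub": "24400320",
  "aud": "client-12345",
  "nonce": "n-0S6_WzA2Mj",
  "iat": 1652100000,
  "verified_claims": {
    "verification": {
      "trust_framework": "example_framework",
      "assurance_level": "medium",
      "evidence": [
        {
          "type": "document",
          "document_details": {
            "type": "passport",
            "issuer": { "country": "GB" }
          }
        }
      ]
    },
    "claims": {
      "given_name": "Jane",
      "family_name": "Smith",
      "birthdate": "1980-03-12",
      "nationalities": ["GBR"]
    }
  }
}
```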
At this point we have the verified claims object. That is the common content we're talking about: the claims, the evidence and the proofing that goes behind it. The bit at the top is about who is involved: who the issuer was, the nonce that makes it unique, who the audiences are, when it was done. All of that is the core element of OIDC, the security elements between the parties involved. The verified claims object is the data packet that goes with it; that's the common content. In a verifiable credential, the common content sits in the equivalent place: take that object out and drop it in again.
If you're familiar with this graphic, we've got all the elements over here describing the same things as the top of OIDC: the issuance date, the issuer it came from, the cryptographic proof and the signatures that came with it, the same stuff you get in the core of OIDC. The claims bit, which is already nicely dotted out for me, is where we see the common content going. So what I've done here is take a verifiable credential example from the W3C spec and shove the verified claims object that I had in OIDC into the credential subject. I've literally taken the same object and put it in a verifiable credential. Why not? Why can it not be that simple: common data in a different envelope? Now, there's an argument that some of what I've got in there should actually go further up, at other levels within the verifiable credential.
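To make that concrete, here is a hedged sketch of what lifting the same verified_claims object into a W3C verifiable credential's credentialSubject might look like; whether it really belongs at this level, or whether parts of it map better to the credential's own evidence and proof structures, is exactly the open question:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential"],
  "issuer": "https://idp.example.com",
  "issuanceDate": "2022-05-09T12:00:00Z",
  "credentialSubject": {
    "verified_claims": {
      "verification": {
        "trust_framework": "example_framework",
        "assurance_level": "medium"
      },
      "claims": {
        "given_name": "Jane",
        "family_name": "Smith",
        "birthdate": "1980-03-12",
        "nationalities": ["GBR"]
      }
    }
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "created": "2022-05-09T12:00:00Z",
    "verificationMethod": "https://idp.example.com/keys/1",
    "proofPurpose": "assertionMethod",
    "proofValue": "zExamplePlaceholderSignatureValue"
  }
}
```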
So that's where you can shoot this down; there's already an evidence structure in the verifiable credential model that some of this could be lifted up into. What I'd like is for us to explore this further, and I want some SSI input into it, please. We'd love it if those of you with an SSI view would join us on this and help us position where these elements go. But I'd like to keep the segments together: the data naming must stay the same, that's the objective here, even if it's shuffled about a bit in terms of presentation. So, to summarize, we're exploring these two questions. Can we define a common data schema?
Yes, we must, is the answer there. We need global standards for various things: attribute naming not least, types for things like names and addresses, and common ways to describe check methods and assurance processes, where it's greenfield at the moment. Let's do that now, while we can, and get some standards in place. They're simply naming standards.
This is not big stuff, but if we get it right and we get people working to it, we're making interoperability possible. And can it work in both verifiable credentials and OIDC for Identity Assurance? Yes, in my most simple view of the world.
Yes, it can, though I'm sure it's not that simple; we continue to explore that in the working group, and it's one of the main areas I want to explore next. So that's where we're at with this working group, and I hope that's a useful view of our thinking. It isn't concluded yet; we've got at least two to three months to run, and we will then formally launch this with a paper and recommendations at upcoming conferences. And then, what next?
We're making recommendations that somebody sets and manages these data standards. Who's going to do that? We may make recommendations on that too. But that's it from me this morning. Thanks for your time, and thanks for listening.