Am I audible?
Yes, yes you are.
All right. We actually have a presentation that Oxana had uploaded.
Okay, perfect. So
This is going to be sort of hybrid. We're going to have some presentations and then we will have probably 20 minutes of panelist discussion at the end. So I don't know if we can pull up the deck.
We have the deck. The deck is already displayed. I will hand over this one to you. Thank you. Enjoy. Yes.
So let's talk about selective disclosure. What we plan to do today is: I'll do some introductions. Dr. Daniel Fett will talk about Selective Disclosure JWTs (SD-JWTs). Christina will talk about the ISO mdoc format. Tobias Looker, remotely from New Zealand,
will talk about zero-knowledge proofs and the BBS algorithms, including the standardization of them. David Waite of Ping Identity will talk about the JSON Web Proofs work in the IETF. And then we will have a rousing discussion among all of us. So there's a lot of foundational work happening in the selective disclosure area right now. It is certainly not new, and the notion of releasing the minimum amount of information fit for a purpose is a great privacy principle and one that has been talked about for well more than a decade.
One of the new things, though, is that some data formats and algorithms are being used to enable selective disclosure in more practical systems without having to build everything yourself. Sometimes this happens via zero-knowledge proofs, which Tobias will talk about, but there are lots of cases, which the lady and gentleman to my right will talk about, where you don't need advanced crypto at all.
So you've seen a variant of this slide a number of times already during the conference.
The notion of having an issuer, the notion of having a wallet or holder that holds a set of claims for you, and then the notion of releasing or presenting a subset of those claims to the verifier — that's what we're trying to enable. For instance, the driver's license or national ID card is an experience in the physical world that you're used to, where you don't have to go back to your driver's license department. I don't have to go back to the Washington Department of Motor Vehicles to use my driver's license, and that's a feature.
I can just present it at a time and place of my choosing to enable interactions that I want to have. And this is an analogy of what we're trying to do in the online world together. And with that, I will turn it over.
Thank you. Yep.
So I'm going to talk about SD-JWT, or Selective Disclosure JWT, which is an IETF draft that I am co-editing together with Christina, who is on the panel today, and Brian Campbell. And the idea behind SD-JWT is that we really want to keep things simple. Mike already told us in his talk this morning that simple is good.
So we hope that SD-JWT is good because it's simple. What we try to do with SD-JWT is really to find the shortest path between A and B. So if you want to do, say, credentials and you want to have selective disclosure, what is the easiest way to achieve that? Especially reminding ourselves that many people are already familiar with formats such as JWT, which themselves are based on relatively simple principles, and we want to reuse as much as possible of the experience that developers, for example, already have.
So SD-JWT is based on standard cryptography; we are just using JSON Web Signatures plus a hash function. There's no black magic going on here. You don't need to be a cryptographer to implement this. It's based on JWS and JSON and some separator characters that we need as well. It's very simple.
The idea is to create a format that is secure by design as far as possible, that is easy to understand, and also easy to verify. So you don't need to analyze a cryptographic library to ensure that this is secure.
You just need to analyze a relatively simple algorithm that can easily be implemented. We also want to enable hardware binding. So if you're on a mobile device, you might want to bind the credential to some hardware-backed key. But we also enable cryptographic agility: whatever algorithms you want to use in your use case, maybe also dictated by regulation, you hopefully can use them with SD-JWT as well.
And essentially you only need a JWT library and then you can implement the algorithms of SD-JWT, which also brings us to the point that we already have more than six independent implementations. This number is not up to date here.
We have more in the meantime. And SD-JWT is not limited to identity use cases. Wherever you want to create documents that enable selective disclosure, you can do that with SD-JWT.
In fact, SD-JWT is so simple that in the following slides I will show you how it works, to a degree that you'll be able to implement a first prototype if you've ever touched a JWT library before. So say you want to package up the data that you see here on the slide. That's standard JSON — there's some identity data in here, but whatever data you want to put in, we don't tell you what the format is. You can use your JSON, your format. So you want to package that up, and you want to make some of these claims selectively disclosable. In the first step,
what you need to do, for each claim that you want to make selectively disclosable, is create a so-called disclosure. A disclosure is very simple JSON — essentially just a JSON array.
The arrows here on the slide are not pointing at the right position, but I hope you can figure out where they are supposed to point. Each JSON array starts with a so-called salt or nonce, a random value chosen freshly for each claim. That value is there to prevent guessing attacks. The second element is just the name of the claim, for example given_name. And the third value is just the claim value, which can be of any JSON-supported type. So this can be a string, as you can see here, but it can also be a more complex object, it can be an array, whatever.
So you create a disclosure for each claim.
And then in the next step you hash the disclosures using a hash function, by default SHA-256 — you can use other hash functions if you like. So you hash the disclosures, each individually, and put the hashes into the original document at the same level where the claim used to live, under a container element called _sd. So you collect all the hashes there, and of course, a hash being a hash, you cannot deduce the value on the right from the hash on the left. That's the idea here. So in the document, the values are gone.
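As a rough illustration of the two steps just described — creating salted disclosures and collecting their hashes under _sd — here is a minimal Python sketch. It follows the general shape of the SD-JWT draft (SHA-256, base64url without padding), but the claim names and values are purely illustrative, not taken from the slides.

```python
# Minimal sketch of the disclosure + digest step (illustrative, not the exact
# wire format of the SD-JWT draft).
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_disclosure(claim_name, claim_value) -> str:
    salt = b64url(secrets.token_bytes(16))        # fresh random salt per claim
    disclosure = [salt, claim_name, claim_value]  # [salt, name, value]
    return b64url(json.dumps(disclosure).encode("utf-8"))

def digest(disclosure_b64: str) -> str:
    return b64url(hashlib.sha256(disclosure_b64.encode("ascii")).digest())

claims = {"given_name": "Erika", "family_name": "Mustermann", "birthdate": "1973-01-01"}
disclosures = {name: make_disclosure(name, value) for name, value in claims.items()}

# The issuer-signed payload no longer contains the plaintext claims,
# only the digests collected under "_sd".
payload = {
    "iss": "https://issuer.example.com",
    "_sd": sorted(digest(d) for d in disclosures.values()),
    "_sd_alg": "sha-256",
}
```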
You can't read them from the document any longer. Now in the next step, you take the thing on the left and you just create a signed JWT out of that. So the issuer of the credential, if it is a credential, just signs the thing — a standard JWT. You don't need to read all the characters here; that's just a standard JWT, believe me.
And then we also have the disclosures, right? There are no values in the signed part, just the hashes here on the left side. So we need to do something with the disclosures as well. What you do is you base64-encode them. So everything's still there.
You just base64-encode them, just for transport, for special characters and so on. You use the tilde character, put it in front of each disclosure, and then you mash the whole thing together. What we have now is the signed JWT — the document signed by the issuer — plus the disclosures. If you have the whole thing, that means you can reconstruct the original structure, but the disclosures themselves are not part of the signed part here.
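A minimal sketch of the issuance step, continuing the previous snippet: sign the payload as an ordinary JWT and append the encoded disclosures separated by tildes. It assumes the PyJWT and cryptography libraries and an EC P-256 key; the exact combined serialization (including the trailing tilde) is defined by the draft, so treat this only as an approximation.

```python
# Sketch of issuance: sign the payload as a regular JWT, then append the
# base64url-encoded disclosures, separated by "~".
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

issuer_key = ec.generate_private_key(ec.SECP256R1())
issuer_pem = issuer_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)

signed_jwt = jwt.encode(payload, issuer_pem, algorithm="ES256")

# Combined form: <issuer-signed JWT>~<disclosure 1>~...~<disclosure N>~
# (current drafts end with a "~" before the optional holder-binding JWT).
sd_jwt = "~".join([signed_jwt, *disclosures.values()]) + "~"
```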
So what you can do as a holder is remove some of the disclosures, thereby taking information out of the whole thing.
So only if you have a disclosure for a certain claim will you be able to actually deduce the original claim value. In the grand scheme of things, this looks as follows. Here we have the path from the issuer to the end user — holder or wallet — at the top. At issuance, the issuer creates the SD-JWT, that is, the signed JWT plus the disclosures, and sends all of that to the end user, to the holder or wallet.
Now the wallet, when the user wants to use the credential, takes the SD-JWT. It cannot touch the signed JWT because that's signed by the issuer, so it just sends it straight on to the verifier, and from the disclosures it removes all the disclosures that it doesn't want to disclose to the verifier. So if you just want to disclose the given name and your birthdate, you remove everything except for the disclosures for given name and birthdate.
Really simple. So there's no complex algorithm that you need to run on the wallet side here. It's really easy.
But you probably also want to do something that is called holder binding. So you probably want to prove that you are actually the legitimate holder of the credential. And there's a relatively standard mechanism for that: you have a key whose public key was signed by the issuer in the SD-JWT, and with that key you create a so-called holder binding JWT. That's just another very simple JWT document — another signature over some transaction-bound data, like an audience value (the receiver of the thing) or a nonce, to prove the freshness of the signature.
So you create another very simple JWT, put that together with the rest in one long string, and send all of that to the verifier. The verifier can now verify the SD-JWT's signature. The verifier will have to hash over the disclosures to find out where the hashes lived in the original document. That's very important, because it means that without doing the hash part, you cannot figure out where the claim lived in the original document, so you will not be able to reconstruct the structure. That's good, because that's a security feature — that's security by design; you cannot skip the hash part.
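Continuing the sketches above, the holder side might look roughly like this: keep only the disclosures the user agreed to reveal and append a holder binding JWT over an audience and nonce. The claim names used in the binding JWT here are illustrative; the draft defines the exact ones.

```python
# Sketch of the holder's presentation step (continues the earlier snippets).
import base64
import json
import time

import jwt  # PyJWT

def present(sd_jwt: str, reveal: set, holder_pem: bytes, audience: str, nonce: str) -> str:
    parts = sd_jwt.split("~")
    issuer_jwt, all_disclosures = parts[0], [d for d in parts[1:] if d]

    # Keep only disclosures whose claim name the user agreed to reveal.
    selected = []
    for d in all_disclosures:
        padded = d + "=" * (-len(d) % 4)
        _salt, name, _value = json.loads(base64.urlsafe_b64decode(padded))
        if name in reveal:
            selected.append(d)

    # Holder binding JWT: a fresh signature over transaction-bound data.
    kb_jwt = jwt.encode(
        {"aud": audience, "nonce": nonce, "iat": int(time.time())},
        holder_pem,
        algorithm="ES256",
    )
    return "~".join([issuer_jwt, *selected, kb_jwt])
```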
So you reconstruct the original structure as far as it was disclosed to you, then you can check the holder binding, and then you're good to go. Whatever comes out of this process is JSON. That's the JSON that was sent to you as a verifier; you can just pass that on to your application to do whatever you want with it.
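And a rough sketch of the verifier side: check the issuer's signature, recompute the digest of each presented disclosure, match it against the signed _sd array, and rebuild the disclosed claims. Holder binding verification is omitted for brevity.

```python
# Sketch of verification: the digest recomputation is the security-critical
# step that ties each disclosure back to the issuer-signed document.
import base64
import hashlib
import json

import jwt  # PyJWT

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify(presentation: str, issuer_public_pem: bytes) -> dict:
    issuer_jwt, *rest = presentation.split("~")
    presented_disclosures, _kb_jwt = rest[:-1], rest[-1]

    payload = jwt.decode(issuer_jwt, issuer_public_pem, algorithms=["ES256"])
    allowed_digests = set(payload["_sd"])

    claims = {}
    for d in presented_disclosures:
        digest = base64.urlsafe_b64encode(
            hashlib.sha256(d.encode("ascii")).digest()).rstrip(b"=").decode("ascii")
        if digest not in allowed_digests:
            raise ValueError("disclosure does not match any signed digest")
        _salt, name, value = json.loads(b64url_decode(d))
        claims[name] = value
    return claims  # pass the reconstructed JSON on to the application
```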
That's it — so you can now implement it. Thank you. We do, of course, have some more things in the specification. If you're interested, you should read the IETF draft that we have. For example, we also define a JSON serialization, which is just a different way of packaging the same things up, because a JSON serialization is used in some contexts.
So that's a good thing to have, especially where this format was used before for normal signatures — you can now do signatures with some selectively disclosable attributes. Just a bit on cryptographic agility; I think I already said that.
We don't really prescribe the algorithms. You can use different hash functions. You can sign the thing with whatever you want to sign it with. If you want to do post-quantum crypto, go do it.
So that's really cool. It can be used with any JSON-based format. You can put any JSON in there. If you are so inclined, you can use JSON-LD and put it in there; that's fine. You can use things like the W3C VC data model; you can do that. You can use the OpenID Connect for Identity Assurance syntax, which allows you to be more expressive about where the data came from and how it was verified and so on. It's JSON — just put it in there. You can do holder binding, if you like, in completely different ways; we don't tell you exactly how to do that.
And you can transport the whole thing over whatever protocol you want. You can use OpenID4VC, you can use avian carriers, you can use HTTPS, that's fine. Just go ahead, take the string, put it somewhere. It's agnostic to the transport protocol.
Yeah, that's SD-JWT. It's available right now. You can go to our GitHub repository and you will find a Python implementation there that we are also trying to move to the Open Wallet Foundation. If you were here on the first day, we talked about the fact that there are open source implementations — and there are more than these; we just got info about a Golang implementation. So that's really cool, and I expect that there will be even more than that.
So that's it. Thank you.
Christina, take us away.
All right, let's talk about mdocs. And one small caveat to have on the screen:
I'm directly involved in SD-JWT as an editor. When it comes to mdocs, this work is being done in ISO, the International Standards Organization, and I'm a member of the working group working on it. But mdoc had already been defined before I joined. So I will give you a digest so you don't have to read 160 pages of ISO 18013-5.
But note that, you know, I'm not the editor; I'm giving a digest based on my involvement in the discussions in the ISO working group. And maybe another kind of framing that might be helpful: the four mechanisms you're hearing about today are SD-JWT, mdocs, and next, BBS signatures and JWP, JSON Web Proofs. I'm not a hundred percent sure you can compare them apple to apple to apple to apple. So when you listen to us talk, try to map them — so one is how you, you know, sign it, how you secure it; it's a format with some disclosures, whatnot.
mdocs are close to that, but a bit more inclusive of other elements of credentialing. BBS is more at the cryptography layer, and JWP is kind of more at the same layer as a JWT, right? The same.
But you'll hear more about them; I'll leave it to my dear, dear friends. But let's dive in. So usually you will hear "mdoc," right? But what we are actually talking about is this thing called the MSO, which stands for mobile security object. And at a high level it is like SD-JWT. The reason why we put SD-JWT and mdoc next to each other is that the fundamental mechanism they use — selective disclosure based on a salted-hash mechanism — is common to both of them. And what that means is that the object signed by the issuer does not contain the claims in plain text, but it contains the hashes, right?
So you can't play with that, right? Just like with usual signatures, you can't tamper with the issuer's signature. So that's the object. And alongside that, you would have, you know, the disclosures that resolve to the actual plaintext claims. Fundamentally, that mechanism itself is common between SD-JWT and mdoc. And that mechanism has existed in the crypto literature for a while. So it's literally building up on that, really defining the mechanism so that it can be implemented in an interoperable manner, securely, in a privacy-preserving manner.
Sorry, you're probably not following everything I'm saying, right? So this thing itself is defined in ISO; there is a standard; you probably have to pay around a hundred euros. So I'm not just, you know, summarizing the 180 pages, but hopefully also helping you — although if you have to implement it, you do have to buy it.
But that's the ISO model. And technically, if you read the standard that defines mdocs, it defines them for the specific use case of the mobile driving license, right? But theoretically nothing prevents you from, you know, using the same mechanism to express claims about anything else.
Like, I think there's work happening on vehicle registration and so on that touches the same mechanism. And this mdoc MSO is expressed in CBOR. So far we've talked about everything JSON, everything JWT; here it's CBOR. And one rationale is that this structure is supposed to be sent across NFC and Bluetooth, so they wanted it to be smaller. How much smaller it actually is compared to JSON, I've never seen the data, and whenever I complain about CBOR in those working groups, they tell me, Christina, be happy, it's not ASN.1.
And the first point is actually very interesting: mdoc in the standard itself is defined as a document or application that resides on a mobile device, or requires a mobile device as part of the process to gain access to the mdoc.
It's honestly pretty confusing that they're using the same word to express pretty much a wallet-like mdoc application and the actual, you know, data structure sitting in that document, right?
So that's just one caveat, and that's why I started by saying that mdoc itself is not directly, you know, the format. The equivalent of what Daniel talked about in his SD-JWTs — the issuer-signed object with hashes — is actually called the mobile security object. But for simplicity, I think you're fine to keep calling it mdoc; just keep in mind that mdoc can mean different things. And, yeah, the way mdocs are defined in the specification is pretty tightly coupled with the rest of the mechanism. They were not originally intended to be used just as a standalone credential format.
Really, you can't do it easily. We went to extensive lengths to figure out how to do it when we defined how to send mdocs over the OpenID for Verifiable Presentations protocol, for example.
But that's kind of another thing. So in compiling the slides I kind of, you know, had a bit of trouble: which part of the spec do I cut out to explain just the mdocs, right?
And another piece of context, which is actually pretty important, is the context that mdocs were defined in, in 18013-5. You know how at this conference there's been a narrative that verifiable credentials — slash the issuer-holder-verifier model — is about kind of reproducing the experience we have with plastic cards in the digital world, right? That was actually what the people defining mdocs tried to do in the beginning. That's why the people who were sitting at the table defining this are mainly the manufacturers of those plastic chip cards, right?
So those are the companies who were used to this concept of having a chip on a plastic card and issuing things into that chip.
So that mental model — it took me a while to grasp it, but once you understand it, the design actually starts to make much more sense, because if you come from an only-web world, a lot of things originally for me were a bit like, hang on, why? But this mdoc space is interesting, because it's kind of trying to bring together those two worlds.
All right, finally I'm showing you what it actually is. So it's CDDL — meaning, yes, it's CBOR; CDDL is the language used to describe it, and it can be used to describe both CBOR and JSON. But long story short, the mobile security object contains — what's relevant for the purpose of this talk — the digest algorithm and the value digests.
So, because — you know, now it makes sense when I said this was not meant to be just a standalone format — that's why it has this device key, which is used for proof of possession, which was the holder binding JWT, a separate artifact, in the SD-JWT example. And the validity info is pretty much, you know, the exp and iat claims in a JWT. And now, hopefully, it makes sense: the digest algorithm is just the algorithm used to hash, and the value digests are an array of those hashes, right? The ones you already saw in the SD-JWT example.
Yeah.
And one probably notable difference is that here we also have this thing called digest ID, which kind of blinds the claim names, but it's a minor difference. And when you actually send that MSO during presentation, you send it alongside this mapping of digest ID, salt, claim name, claim value. And what's fascinating to me is that the standard where mdoc is defined only talks about presentation; issuance is completely out of scope. So how you communicate this mapping of the digests in the MSO to the plaintext claim values during issuance is not defined anywhere.
So I don't know how those large-scale implementations actually interpret it. Well, I know what's happening realistically: it's the same company building the system for the issuer and the wallet, so they kind of control how the mapping happens, you know, by doing that. But realistically, if we would want to interoperate at a larger scale, I think that would have to be defined at some point, probably.
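To make the parallel with SD-JWT concrete, here is a rough Python illustration of the same salted-hash idea in MSO terms: each claim becomes an issuer-signed item with a digest ID and a random salt, and the MSO carries only the digests. This glosses over the exact CBOR tagging and field names in ISO 18013-5 and uses the cbor2 library purely as an example encoder, so treat it as a conceptual sketch only.

```python
# Conceptual sketch of MSO value digests (not the exact 18013-5 encoding).
import hashlib
import secrets

import cbor2

namespace = "org.iso.18013.5.1"
claims = {"family_name": "Mustermann", "age_over_18": True}

issuer_signed_items = []
value_digests = {}
for digest_id, (name, value) in enumerate(claims.items()):
    item = {
        "digestID": digest_id,
        "random": secrets.token_bytes(16),     # salt, prevents guessing
        "elementIdentifier": name,
        "elementValue": value,
    }
    issuer_signed_items.append(item)           # sent alongside the MSO at presentation
    value_digests[digest_id] = hashlib.sha256(cbor2.dumps(item)).digest()

mso = {                                        # signed by the issuer (COSE)
    "digestAlgorithm": "SHA-256",
    "valueDigests": {namespace: value_digests},
    # deviceKeyInfo, validityInfo, docType ... omitted in this sketch
}
```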
Yeah, and maybe a fun fact — not a fun fact, but just kind of a warm-up for the ZK piece coming up. When people talk about these credential formats, there are usually three features people want to achieve: selective disclosure, predicates, and unlinkability. Selective disclosure: you don't have to release everything. Predicates: I don't want to give you my actual birthdate or, let's say, the actual salary number; I just want to answer "age over 18: true" or "I earn over this amount: true."
I mean, how many use cases there are that actually need predicates — those two are the main ones I've heard so far. And unlinkability — and I will say that you have to specify unlinkability between whom and whom. Here we're talking about two verifiers being able to correlate the user because they receive the same signature. So on one side you can achieve it using, you know, advanced cryptography.
On the other hand, the mdoc approach is really: can we do it without doing that?
And this kind of applies to SD-JWT too. For predicates, the idea is to use this "age over 18: true" — the issuer literally just issues that claim as a plain claim, right? So you kind of precompute it. For unlinkability,
one way to do it is to just issue a hundred credentials or a thousand credentials into the wallet, and the wallet will have to use a different one per verifier. So yeah, just kind of warming up to give you, you know, a sense of how this philosophy is different. So over to you, Tobias.
Welcome, Tobias Looker, from New Zealand where it's 10:30 at night. So thank you for staying up to be here with us, and you're up, Tobias.
Thanks Mike.
I think the next presenter actually got the rougher end of the time zones, but glad to be here. Can you hear me okay?
Yes.
Thanks. I just might have to request slide changes. So next slide, please. So I'm going to give a brief overview of ZKPs and then dive into a bit of a description of one of the algorithms in this space. ZKP, for those unfamiliar, stands for zero-knowledge proof.
And really, in a general sense, it refers to what I would describe as a family of cryptographic algorithms and techniques, and they allow for various things, but in the most general sense, they allow a proving party — so, you know, that's a person assuming a role in the protocol — to prove a given statement is true without revealing any additional information other than the statement itself. So that sounds very general, but properties like unlinkability and the like will make it somewhat clearer how that actually comes to be.
Zero-knowledge proofs have a variety of possible applications. Verifiable credentials is one such ecosystem or industry that these techniques have been applied to and used in, and the BBS signature scheme is effectively one such algorithm that meets these properties of zero-knowledge proofs — it all kind of lives within this family, for lack of a better word. Next slide please, Mike. So the most general overview of BBS is that it is a cryptographic algorithm.
I think what Christina said was a really important point to come back to: there are different layers of technology here. We've got cryptographic representation formats like JWT, or the JOSE family in general. We've also got COSE, and that's in part what ISO mDL and mdocs use. But below that there's some fancy math that goes on that makes all these algorithms function in general.
And BBS is a form of algorithm.
It is bytes in, bytes out, in the sense that it deals in data you would like to create integrity over, in the signing phase and the proving phase, and you're dealing in cryptographic structures similar to conventional signatures.
However, the properties of some of these algorithms are different, or add additional properties that can be beneficial in certain use cases. So in this protocol you've got three roles — roles that should probably look pretty familiar when mapped into use cases like verifiable credentials. You've got an issuer, who has effectively just a key pair — a public-private key pair, similar to asymmetric cryptography, elliptic curve based as well; it just uses a different family of elliptic curves — and fundamentally they sign a set of information.
So they sign a set of messages. The BBS protocol also has what's known as a header, and the header is a must-disclose field. So the issuer can put what they want in there; usually that is public, common information like an algorithm identifier, to ensure that the prover cannot hide or obfuscate that piece of information. The prover, who then receives the signature and the set of messages including the header, can then perform selective disclosure within the set of messages that were signed.
They can choose which ones to reveal. Importantly — and this is different from how the hash-and-salt style selective disclosure schemes work — they do what is known as deriving a proof, rather than simply handing over the signature.
And this comes back to one of the key properties that BBS enables that can't be done in conventional cryptography today. Effectively there's this derived-proof phase, which is similar to, I guess, producing a signature, like for holder binding.
Fundamentally, the proof proves that the proving party is in possession of the signature from the issuer and that the messages they're revealing are protected by the integrity of that signature.
And that is then transmitted to a verifier, and the verifier can validate that proof, just like they can validate basically a normal signature: they need the public key of the issuer and they can validate the integrity.
However, the most important property of these proofs is that they are indistinguishable from random. So every proof you generate, even from the same signature and the same set of messages, is entirely random. So you get the property of unlinkability without having to issue multiple signatures or multiple instances of the credential.
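Conceptually, the sign / derive-proof / verify flow described above looks something like the following sketch. The bbs module here is hypothetical — its operation names mirror those in the CFRG draft (Sign, ProofGen, ProofVerify) — and real libraries expose different APIs, so this is only an illustration of the roles and data flow.

```python
# Conceptual sketch of the BBS flow; "bbs" is a hypothetical binding used
# only for illustration.
import bbs  # hypothetical module

messages = [b"given_name=Erika", b"family_name=Mustermann", b"age_over_18=true"]
header = b"suite-and-schema-identifier"        # always-disclosed header

# Issuer: one signature over the whole set of messages.
sk, pk = bbs.key_gen()
signature = bbs.sign(sk, header=header, messages=messages)

# Holder/prover: derive a fresh, unlinkable proof revealing only message 2.
disclosed = [2]
proof = bbs.proof_gen(pk, signature, header=header,
                      presentation_header=b"verifier-nonce",
                      messages=messages, disclosed_indexes=disclosed)

# Verifier: checks the proof against the issuer's public key and only the
# revealed messages; each derived proof is indistinguishable from random.
assert bbs.proof_verify(pk, proof, header=header,
                        presentation_header=b"verifier-nonce",
                        disclosed_messages=[messages[i] for i in disclosed],
                        disclosed_indexes=disclosed)
```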
Next slide please. So this is really just some deeper detail on BBS. In terms of how it works as an algorithm, it uses a kind of sub-family of elliptic curve cryptography.
Elliptic curves are widely used for a variety of algorithms we know of today — ECDSA, the Edwards curves, and the like. This uses another family of elliptic curves that are known as pairing-based, and in particular the BBS scheme, in the definition that we have worked on and that is currently going through the CFRG standardization process, can use any pairing-friendly curve. The cipher suites we have defined today are based on the BLS12-381 curve, which is the most common and popular in the industry for other applications of pairing-based cryptography.
And effectively, where we're at in the standardization process is that we recently published draft 02 and presented that to the CFRG at IETF 116, and we expect to do some updates and present a further revision at the upcoming IETF 117.
In the meantime, we currently have multiple independent interoperable implementations that run off the extensive set of test fixtures that we have for this scheme.
The list there is just of the current interoperable implementations that are aligned to draft 02; there are at least five or six, maybe more, implementations that are aligned to various older versions of the draft. So BBS itself, as an algorithm as it was originally defined in academia, has been around for a significant amount of time. In many ways pairing cryptography and zero-knowledge proofs and group signatures — depending on what terminology you use — are not really new cryptography; in many senses they have in fact been well documented in academia and around for a long time.
It's more a matter of formally documenting it in such a cryptographic-standard manner to promote the sort of independent interoperable implementations that we would like to see. So with that I will hand over — oh, sorry, the last slide is a summary of the key properties, I guess, from zero-knowledge proofs in general, and BBS meets these criteria that are of interest in verifiable credentials applications. We've got selective disclosure, which we've talked about quite significantly and which is the framing of our presentation today.
This can be accomplished with conventional cryptographic algorithms; there are multiple ways to do that, and SD-JWTs and mdocs both share the same technique that does that.
However, the bottom two properties are properties that are fundamentally missing from the kind of standard crypto systems existing today. So applications that want to be able to create unlinkable proofs — the ability to issue a single credential or a single signature to an intermediate party and have them be able to generate an arbitrary number of unlinkable proofs for whatever purpose — fundamentally can't be done with existing cryptographic schemes.
And there are techniques that can be used to circumvent that sort of limitation, such as issuing lots of them, but that also forces other trade-offs, like requiring the wallet or the holder in the verifiable credentials use case to be more online, or more in contact with the issuer than would be ideal — it makes that traffic more chatty.
There are privacy considerations with that, and refresh periods, and so algorithms like BBS provide a different possibility in terms of how those problem statements can be framed and what solutions can be brought about because of that.
The last one is related to unlinkable proofs, but I wanted to highlight it separately: the ability to effectively bind a credential to a key pair managed by the holder or the prover — the intermediate party — in a way that the verifier knows that the signature or the proof that has been generated involves an unrevealed key pair that the proving party must be in possession of. So it gives the same confidence as the confirmation claim, or the device key, or the holder binding in SD-JWT, but it accomplishes it in a different way cryptographically.
And the last property that I would probably highlight, that I haven't put on this slide but that is commonly talked about, is predicate proofs. So algorithms like BBS do ultimately support different predicate operations: the ability to prove things about attributes indirectly, such as inequality proofs — greater than or less than, the age-old age-over-18 or salary-range use cases, as Christina spoke to — are possibilities with zero-knowledge-proof-style algorithms that are otherwise not supported in the existing kind of cryptographic algorithms we have available today.
And so with that I will hand over to my colleague David,
Welcome, David, from Denver where it's 4:40 in the morning. Thank you for making the sacrifice to be here with us. If you can try to keep it to 10 minutes, then we'll have 10 minutes for audience interaction. Thank you.
Yeah,
I will try to rush without rushing too much, and I will also try not to speak in my 4:45 AM voice. So this is about JSON Web Proofs, which is some new work in the IETF; it's part of the JOSE working group in the IETF. Next slide. JOSE has been around for a while. It actually paused and has been reanimated, partially because of this work. It is short for JSON Object Signing and Encryption.
Many people are familiar with JOSE, or systems using it, because it was basically created to, you know, create non-ASN.1 versions of a lot of the binary artifacts that existed for the various cryptographic primitives — so easier tools for web developers to deal with digital signatures, encryption, and message authentication codes (HMACs), as well as how to represent keys. So this was defined for applications to then extend it and say: how do I know this is the right party? How do I know which key they used for signing?
You know, what defines a valid message in terms of what needs to be sent? And that includes supporting content that doesn't have to be JSON. JOSE defined JSON-based formats; they defined the compact format that was used for the JWTs you saw in the earlier slides, but you can pretty much protect anything with the JOSE stack. Next slide.
So, some places it's used that everyone is probably familiar with: it's used for cross-domain single sign-on, which is profiled under OpenID Connect and further profiled under FAPI and FAPI 2. It's also used as part of ACME v2,
the next generation of protocols standardized based on the work that Let's Encrypt did. It's used as a signaling layer — so, for instance, your email provider, if you report your account was compromised, may send a signal to participating companies where that email was used as a username and as an account recovery mechanism. It's used for cross-network interoperability with voice systems. I believe it's also used as part of 5G. And there's a serialization of W3C Verifiable Credentials, VC-JWTs, using this as well. Next slide.
So that last one though is a little bit different than the other ones.
The other ones are really more often business-to-business kinds of use cases where there are two parties. But as we all know, identity credentials have a three-party model, because you have a user agent — you have a wallet — that's actively participating and representing your desires, giving you as the user consent, and then, the topic of this panel, giving you the ability to control privacy with things like selective disclosure. These may hold more sensitive information.
With something like federation, the types of data that are released — the message is purpose-built as part of that transaction, so they're only going to have that information. But when you start having these credentials, it could be a full college transcript, it could be my full medical records for a decade with a particular doctor, and you don't want to have to hand all that over unless it's absolutely needed.
Also, for something like a transcript or a medical record, you may have that for a very long period of time; you may be using it a decade or more later.
And even if you can do selective disclosure, you start to worry about the risk of correlation — that just the cryptographic fingerprinting, the linkability as Tobias mentioned, will gradually provide correlation, and could mean that information leaks between parties even if you only disclose bits and pieces to them. So all this comes down to: the wallet is a very important stakeholder in the overall security system, and we know that there are additional capabilities needed beyond just signing and encryption for it to be able to control and limit the information being shared. Next slide.
So, JSON Web Proofs. This is a new work item. There's an initial draft published — there'll be a QR code coming up for people who want to read it. The goal is to support newer cryptographic techniques for controlling the information being shared, things like BBS, and to support features such as selective disclosure; unlinkability, the ability to use a message multiple times and not have information leak; and anonymity — all the cases where people want to know that you're a particular party with the issuer, but you actually want to preserve the identifiers.
You don't want to give out your particular email address; you want to give out something specific for that relying party that's correlatable for them, but not across other verifiers.
And then predicates — computed answers to questions. The common one would be age; other ones could be: do you live in one of this list of 10 states, or was this credential issued by one of these dozen parties? Can I actually give that kind of information rather than giving the attributes themselves, if all people need is a particular answer for their business process?
Some of these, like selective disclosure, are achievable using existing techniques, while others require new technology such as zero-knowledge proofs and verifiable compute. Next slide. So there are kind of three pieces of work. There's the core JSON Web Proof, which defines new containers. As Tobias illustrated, a lot of these new algorithms require, or gain benefit from, being able to work on multiple messages.
So dividing the pieces of personally identifiable information into individual messages allows the wallet to declare which of those it wants to share and which of those it wants to keep secret.
So the core spec defines that issued form. It also defines a presented form, and this might carry additional information — additional terms of use, audience restrictions, things like that; things that are part of the application protocol that are needed. And this is analogous to, say, verifiable credentials and verifiable presentations.
Then there are proof algorithms. This is kind of the body of work to take algorithms such as BBS, as well as existing ones like various Merkle-tree-based systems, and define how they can work interoperably, define which capabilities they provide and which ones they don't, and how to represent them using something like JSON Web Keys. And finally, JSON Proof Tokens.
This is meant to be analogous to JWT, where, you know, these are all meant for applications and protocols to define usage, but giving a few more really valuable tools, just like JWTs do, for people to be able to use this when they're thinking in terms of claims. Next slide. So this is really meant to be part of the next generation of privacy-critical use cases: those long-lived credentials, really rich records. It's very early stage; we welcome comments and assistance.
We do have a couple of early prototypes, but implementations are welcome. Implementations, interoperability, people trying to figure out how to get this applied to their particular use case are how we're going to create a very robust set of specs. So with that, I'll turn it back over to Mike.
Well thank you all. I found that informative. I hope that many of you did as well. I would love to hear from many of you in particular that have use cases that you wanna describe for some of these technologies.
Or if you have questions for any of the participants have at it
In the room, we would be happy to hand you a microphone. If not, we have some questions online, actually, that it would be great if you could answer. So one of the questions is — let me just read it here: How strong is the unlinkable proof guarantee with advanced computing? Will it ensure privacy?
Tobias, do you wanna take that?
Yeah, I assume that is about the unlinkability guarantee that BBS makes, right? So, zero-knowledge proofs — the threat model that they're effectively tested under is a pretty strong assumption. And I'm going to make an assumption about where this question is going: maybe post-quantum cryptography, and, you know, what happens to one of these proofs in a post-quantum world — could the verifying party crack the proof and, you know, reveal additional information that was unrevealed previously in the proof?
And the understanding that we have today as a working group is that in a post-quantum world, the proofs remain unaffected. There are effects on the signatures' integrity, but that applies to the entire elliptic curve family — the existing crypto space of elliptic curves. The proofs remain unaffected in both unlinkability and information revealed.
Did you wanna say anything?
Yeah, I'll add a comment more from the application space. Now that people are familiar with JWTs, there are things like expiry times, and if you have a credential and your policy is that the credential expires on your birthdate, and that's a field required for processing for people to know the credential is valid, that's going to cause issues — greatly limiting just how anonymous those credentials could be as part of processing.
So within JSON Web Proofs, we've made an effort to try to eliminate, or guide people away from, decisions where the structure of the message or things like expiry times might lead people to, you know, partition off their users. But it's also one of the reasons that things like predicates are so valuable, because I can actually prove just that as of this time — the time that I'm requesting the credential — it's not expired. I don't know if it's valid for a day or a year, but I know that, you know, that particular processing step was able to be passed.
Great point.
Great.
Another question that we have here. Thanks for your answers. If you need to selectively disclose multiple attributes from different issuers — for example passport, onboarding card — is it possible to combine them into one JWT?
No — well, if you don't want to overcomplicate things and want to have one JWS with one signature, you would have different credentials, right? So different issuer, different credential. And then it becomes more a question for the protocol layer: how do you request presentation of multiple credentials, how do you specify which data element you want from which credential,
and how do you return it in the response, right? But if you want to get into multiple signatures, signature chaining and all that — it's not impossible to have multiple signatures on one JWT. But that would still, I think, usually mean multiple issuers, and you can't make a distinction there: if there are multiple signatures on the same JWS, you can't make a distinction as to which issuers are attesting to which claims. So I would still say the first option is the way to go in the JWT world, if you agree.
I think another important point is that it's not guaranteed that we will ever settle on one credential format. Oh yeah.
So
It might be desirable to put everything in a JWT, but maybe you will have a JWT and an mdoc or whatever — so different documents, and the protocols need to cater for it.
Maybe a miracle will happen.
I think, just to the use case there, though: if you have issuers who are authorities on different claims, you're going to have a signature from each of them, right?
Because, you know, you're going to have the passport issuer sign the attributes they know you by, and the medical record facility theirs. So as Christina said, you need a protocol that can carry both those artifacts and prove them as kind of independent things.
Great. Another question that we have here — we have an audience that is very engaged online; thank you for that. If the BBS scheme requires that the prover always share the header, why does that not serve as a correlator for the holder?
Yeah, it's a great question.
Yeah, it's a great question. And as a feature in the protocol, it depends on what goes in there — just like, you know, any structure; mdocs and SD-JWTs also have areas that are effectively not subject to selective disclosure and that can be abused by the issuer. We have extensive privacy considerations where we talk about that feature, and it's intended only to be used for information that is global to the whole herd of people being issued signatures — so, an algorithm identifier — which means it is in effect not correlating.
Thank you so much. And, well, the last question before we close the session: How can the user understand what information is really necessary?
Well, this is my perspective, but when we say selective disclosure — even with this issuer-holder-verifier model, the fact that the user needs to give the claims that the verifier is requesting in order to receive the service from the verifier doesn't change, right? So it's not like the user can say, oh, I don't want to, you know, share those claims, and get away with that if the verifier says, hey, but I still need those claims to give you the service.
So if you're saying you don't want to disclose, then sorry, we can't give you the service, right? So, at least in our product, the first big use case we're seeing where selective disclosure is explicitly demanded by the customers is the portrait image. The issuer has put a portrait image in the credential, either because, when you're presenting it in person, you can compare the portrait image against the physical human being, or because you can do a liveness check, you know, somehow comparing it to your live image. But the verifier doesn't want that biometric itself.
It's okay trusting the wallet saying, I did the check based on that image, or whatnot. So that's when the verifier says, don't give it to me — that's one example, right?
So yeah — long story short, I don't think the user needs to know which claims need to be selectively disclosed to answer the question itself.
So I'll make the observation that most of what we've been talking about here is plumbing. It is bits and pieces that you would use to assemble into larger systems that solve problems and do things that are useful.
And having advanced plumbing doesn't change the fact that applications are going to need to know what they need to do, or what information they need, in order to make trust decisions and provide a service for you. None of that changes. We're giving them more advanced tools that may be more privacy-preserving than they had in their toolboxes before, but it's still up to the applications what data they need and how to use it.
Great. Thank you so much. Yes. David? You want to say something right?
Yeah.
The other thing — again, some of this is non-technical solutions, or larger application-level considerations — but one of the reasons that social logins got accepted, got rolled out by so many different sites, is that it reduced user friction for signup and authentication. It changed the identity and access management, it outsourced recovery, all that. But really what it came down to is: I'm losing users because I'm making them register, I'm losing users because they don't remember their username and password.
So in some cases, like selective disclosure, there's a concern that people will require you to give more information than you really should have to. And I suspect the wallets or regulations will control that. The wallets can do it as easily as just telling the user: hey, they're asking for a lot of sensitive information, are you sure? And as asking for that information reduces conversion rates, where people decide no, relying parties will start to reconsider whether they actually need that information.
But the flip side is also regulation such as GDPR: once I get the information, I have to protect it. And mdoc has a higher-level protocol feature, intent to retain. So that's a whole orthogonal part of selective disclosure: I'm giving them my picture, and they say they're not going to save it. So that matters as well.
It's a very complicated discussion.
Thank you very much for this. Before we wrap this session, I would love to give everybody the chance to give the audience your final statement — like one sentence. What should they take away from this session?
What do you want them to really have in their minds when they leave the room?
Oh, sorry, I was talking to her.
Feel free to start.
Yes, I don't mind who starts.
So I think there's probably a place for more than one credential format in this world. We will see use cases where the types of unlinkable proofs that you get, and the types of predicates and computed information that you can get from some of these credential formats, will be extremely useful. But I also think that with SD-JWT, we have a good solution for, let's say, 80 or 90% of the use cases which don't need these features.
So yeah, I hope that we can settle on one or, like, two formats — not expecting it to be one format. So yeah.
Thank you very much.
I'll point out that SD-JWT is referenced from the European architecture reference framework — the beautifully named ARF document — as one of the credential formats that is going to be supported by the EU wallet projects.
I think we need to prove the value of selective disclosure first, bottom line. So the vision I have is: in the short term, the reason why many kind of big implementations jumped on SD-JWT so excitedly was that it was kind of a simple on-ramp to implement selective disclosure, to prove the value of it, right?
Like, it's a fact that Microsoft has been running their verifiable credential system in production without selective disclosure for over a year, and customers did not ask for it. It took customers a certain amount of time to understand the value of the whole system.
And now, you know, they're starting to ask for additional features. So the next step is probably to implement SD-JWT to just prove the value of selective disclosure itself. And then once that happens, I anticipate more, you know, customer demand for unlinkability, you know, predicates, whatnot.
And hopefully that will give time not just for technologies like BBS or JWP to mature, but also for regulators to understand what they need to require to be really privacy-preserving, whatnot.
Because David touched upon intent to retain, but there are certain features that would be useless without, you know, regulations requiring them, right? Like even selective disclosure — if you don't have data minimization principles built in through things like GDPR or CCPA, you know, there's no incentive to implement it, right? Even though it's technologically possible. So I really liked — and I think the order we had today in the presentation was also really good —
that it kind of gives you a spectrum of what's possible in the short term, what's coming, you know, what should be possible in the long term, what we should strive for. But bottom line, we have to prove the value of these underlying assumptions: that selective disclosure is needed and unlinkability is a must, you know.
And our speakers online — do you want to add a sentence, maybe, you know, a final line?
Yeah, I think the main point that I would make is, you know, what we're talking about today is tools, and they're a piece of the larger puzzle and industry. To kind of come right back to David's point as well: these technologies need to go hand in hand with other factors to make this ultimately successful, right? Regulation, education on what these properties mean, and the opportunities they create in industry to solve problems in new ways, or new problems altogether.
So, you know, just calling out that it's one technical piece in a larger puzzle that needs to fit together.
Thanks, Tobias. And David, is there something else that you want to add?
And I will say it is hard to go last with this set of speakers. I changed what I wanted people to take away from this three times. But I agree there will be multiple credential formats. For, you know, public records — things like business licenses —
there's no reason for selective disclosure; it could be a simple signed JWT. For more personal information where you have an active relationship, like an employer-issued credential or a driver's license, SD-JWT
is just about perfect. And then there will be use cases where kind of the newness and complexity of something like BBS or JSON Web Proofs will have to be justified. And I think that's okay, and I think that's a sign of a healthy market.
Thank you so much. Thank you very much. Thanks for the very nice presentations and discussions to all of you. Before we wrap the session finally and jump to the lunch break, I would highly recommend and invite you to our next session at 2:30, same room, different moderators. We will have Mike Small and John Bun on privacy trends.
So I highly invite you to join the session at 2:30, same room, and enjoy the well-earned lunch break.
And please applaud for these speakers. Thank you so much.