Well, thank you for spending your time with me. Standards are about making choices, but beware: there may be some assembly required. I'm Mike Jones. I've worked on a number of standards, some of which I will critique today. I hope you find this discussion, the journey we're gonna go on, both entertaining and potentially useful. So Phillip Hallam-Baker, who was very involved in setting up the PKI system used on the web, said a very strange thing to me at an IETF meeting.
He said, "Standards make the choices that don't matter." It made me think. It's an odd thing to say, but there's actually a deep truth there. So here's a quick tour of some choices that don't matter. 0x0800 is the EtherType for IPv4 packets. 6 is the IP protocol number for TCP packets. 443 is the TCP port number for SSL packets. GET is an identifier for an HTTP request. HTTP 200 OK indicates that your HTTP request worked. application/json is a content type for JSON.
65 is the number used to identify the letter uppercase A in both ASCII and Unicode. Curly braces delimit JSON objects. Issuer and subject are the claim names for some JWT claims. I could go on. So could you.
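To make a few of these choices concrete, here's a small Python sketch (the issuer URL is just an illustrative value, not from the talk):

```python
import json

# 65 identifies uppercase "A" in both ASCII and Unicode
assert ord("A") == 65

# Curly braces delimit JSON objects; "iss" and "sub" are the JWT
# claim names for issuer and subject
claims = json.loads('{"iss": "https://issuer.example", "sub": "alice"}')
assert claims["sub"] == "alice"

# 443 is the TCP port everyone agrees to use for HTTPS
HTTPS_PORT = 443
```

Each constant is arbitrary on its own; the value comes entirely from everyone using the same one.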
However, as you've also discerned already, I'm sure, making these choices deeply matters. Interoperability requires that people make the same choices. So text can be input and displayed because everyone uses 65 for uppercase A. HTTPS works because everybody uses port 443. And so it goes.
Standards are where we write these choices down, and it's our job as identity professionals and standards professionals to make these choices. So standards are the nuts and bolts, if you will, of the identity and security industry. By analogy, you know, nuts, bolts, light bulbs, wires, and countless other parts have standards, all conforming, and therefore there's a marketplace in interoperable parts. I don't have to buy my screws and my bolts from only one supplier. Now if you think about it, without these standards, every part and every machine would have to be custom manufactured.
The same is true of the identity and security standards we use for digital identity. But different standards make different degrees of choices, enabling more or less interoperability. So depending upon your standard, your mileage may vary. So when I proposed this session, I said that I was going to name names and take prisoners. So hold onto your hat because here we go. I'm going to critique about a dozen existing and emerging standards in terms of the degree to which they made choices to enable interoperability.
And I will give them letter grades from A, the best (Giuseppe's laughing), to F, the worst.
X.509. Let's go. It's historical; it existed before there was a World Wide Web. There's a lot of choices that can be made with X.509, but there are interoperable profiles, particularly for TLS certificates. Now, the choices have evolved over time. The certificate that you got 10 years ago had the domain name in a different place than it is now. There's multiple revocation mechanisms. Nonetheless, we've managed to make it interoperable, at least for some use cases.
So I give X.509 a B. SAML, which is not dead, is the original single sign-on protocol. There are interoperable SAML profiles used in business, academic, and research environments. But you know, there's a bunch of choices you gotta make. SAML NameIDs can take a lot of different formats. There's multiple protocol flows: you can use the browser profile, the artifact binding profile, the enhanced client profile. And there's multiple logout mechanisms. Furthermore, it's dependent upon brittle XML canonicalization. But you know, it's been made to work. So I'll give it a B.
This is the first one of them that I worked on, and this is also the first one that I'm gonna talk about that won an award from the European Identity Conference once upon a time: OAuth 2.0. It's widely used, but it's not interoperable without a profile. Occasionally people ask, well, can't we do SAML or OAuth interop testing? And the answer is largely no. You have to have profiles for scope values and all kinds of other things for it to actually work together correctly. There's different response type values with different security properties.
There's different scope values for different contexts. There's multiple token type values. Furthermore, the Bearer Token spec, which I worked on, defines three different ways to pass the bearer token, one of which it says don't use. So I only give this one a C. Some assembly required. OpenID Connect, a successor to SAML, if you will, is a widely used single sign-on standard. And it did make a number of choices that improved things over the OAuth that it's built on. For instance, it said that you must do exact redirect URI matching. And there's lots of evidence that OpenID Connect can be interoperable.
The evidence I like the most is from the OpenID certification program, where at the time I wrote this, there were 754 successful OpenID Connect certifications. So there's good confidence those are gonna work together. But building on OAuth introduced more choices than we necessarily need to have. In particular, there's six response types, each with different security characteristics. And there's three different IdP-initiated logout mechanisms, two of which use the browser and one of which uses the back channel.
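To illustrate a couple of the choices just mentioned, here's a sketch of an OpenID Connect authorization request. The endpoint, client ID, and other values are hypothetical placeholders; the parameter names come from OAuth and OpenID Connect:

```python
from urllib.parse import urlencode

# Hypothetical client and server values for illustration only
params = {
    "response_type": "code",  # one of the multiple response types, each
                              # with different security characteristics
    "client_id": "s6BhdRkqt3",
    "redirect_uri": "https://client.example.org/cb",  # must match the
                              # registered value exactly, per OpenID Connect
    "scope": "openid profile",
    "state": "af0ifjsldkj",
}
request_url = "https://server.example.com/authorize?" + urlencode(params)
```

The point of the sketch: even with the protocol standardized, a deployment profile still has to pin down response types, scopes, and the like before two implementations actually interoperate.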
So, you know, I will give this a B, because while there's open things that you can still choose, there's interoperable ecosystems that have taken off beyond our wildest expectations. One of the things used by OpenID Connect is JSON Web Signature. And that's actually used by a lot more things than I ever imagined it would be. You know that a standard is successful when it's used in ways that you never anticipated. However, there's two serialization formats.
There's the compact serialization, with base64url-encoded things separated by dots, and there's a JSON serialization, which was added late in the game 'cause some people wanted it. But other than the serialization choice, most choices are made, and therefore most implementations are gonna interoperate well. Now, as an asterisk, the choice of algorithm is intentionally left up to the deployment. You have to have that to support cryptographic agility, and I'll talk more about that in my closing remarks. So other than the multiple serializations, I would've given this an A, but it's a B.
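For the curious, here's a sketch of the compact serialization's shape: base64url-encoded segments separated by dots. The signature here is a zeroed placeholder, not a real cryptographic signature:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as the compact serialization requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"iss": "https://issuer.example"}).encode())
signature = b64url(b"\x00" * 32)  # placeholder; a real JWS signs header.payload

compact = f"{header}.{payload}.{signature}"
assert compact.count(".") == 2  # three segments, two dots
```

Because base64url uses only letters, digits, hyphen, and underscore, the dots unambiguously delimit the three parts.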
When I talked about this last week, some people said, well, give it a B plus.
JSON Web Token is built on top of JSON Web Signature, and it's an incredibly widely used token format.
And again, while we designed it with OpenID Connect in mind, it's also used for verified caller ID, at least in the United States, to try to combat call fraud. Who knew that it would be used for that? But that's great. It does narrow JSON Web Signature by requiring the use of the compact serialization. And you know, interestingly, all the claims are optional. Is that a failure to make a choice? I argue not. It's a choice to allow profiles, such as the ID Token, to say what's optional, what's required, and why. Furthermore, the JWT Security BCP further tightens the choices made.
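Here's a sketch of what that profiling looks like in practice: the base JWT spec leaves every claim optional, and a profile such as the ID Token makes a specific set required. The required-claim set below follows the ID Token's list; the validation helper itself is illustrative:

```python
import time

# ID-Token-style profile: these claims are required there, even though
# the JWT spec itself leaves every claim optional
REQUIRED_CLAIMS = {"iss", "sub", "aud", "exp", "iat"}

def validate(claims: dict) -> None:
    missing = REQUIRED_CLAIMS - claims.keys()
    if missing:
        raise ValueError(f"missing required claims: {sorted(missing)}")
    if claims["exp"] <= time.time():
        raise ValueError("token expired")

# A claims set that satisfies the profile passes without error
validate({
    "iss": "https://issuer.example", "sub": "alice",
    "aud": "client-123", "exp": time.time() + 300, "iat": time.time(),
})
```

The same JWT machinery serves both a profile like this and a looser one; the choice of what's required lives in the profile, not the token format.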
There's many interoperable implementations in essentially all programming languages. So this is the first one that I will actually give an A. There's one more A coming; think about what you think it's going to be. Now, the COSE standards for binary signing and encryption are a lot like JOSE and a lot like JSON Web Signature, but COSE includes both protected and unprotected headers, and it has some bells and whistles that JOSE doesn't, such as countersignatures. Nonetheless, it makes enough choices that implementations for the most part are gonna interoperate. So this gets a B.
CBOR Web Token is like JSON Web Token, which got an A, but it doesn't narrow some of the COSE features in the same way that JWT did. So for instance, it doesn't say whether to use COSE_Sign1 or COSE_Sign, which is a multi-signature format. And the thing that makes me most queasy is it doesn't mandate that only the protected headers be used, but that's a security discussion. It has the same claims model as JWTs. Because it didn't make as many choices, I'll give this a B.
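A quick sketch of that shared claims model, assuming the standard CWT claim-key registrations (integer keys standing in for the JWT claim names):

```python
# CWT uses the same claims model as JWT, but with compact integer keys
# per the CWT claim registrations
CWT_CLAIM_KEYS = {1: "iss", 2: "sub", 3: "aud", 4: "exp",
                  5: "nbf", 6: "iat", 7: "cti"}

# Illustrative CWT-style claims, translated to their JWT names
cwt_claims = {1: "https://issuer.example", 2: "alice"}
as_jwt_names = {CWT_CLAIM_KEYS[k]: v for k, v in cwt_claims.items()}
assert as_jwt_names == {"iss": "https://issuer.example", "sub": "alice"}
```

Same semantics as JWT claims; the integer keys just suit CBOR's compact binary encoding.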
Web Cryptography has been around for most of a decade at this point.
It's a standard for being able to do cryptographic operations in browsers at native platform speed. And it defines only one way to perform each operation. It does support multiple key formats, but only ones that were already in widespread use; it didn't invent any new ones. Furthermore, you know, while I was in the working group, some people argued to exclude stuff that others wanted. In particular, you can only use browser-held keys; you can't use platform keys. That may be a deficiency, but all the implementations are gonna interoperate. So what do you think?
It gets an A. WebAuthn and FIDO2 enable passwordless login and are supported by all modern browsers. It evolved from and replaced U2F, which used X.509 signatures, while some other formats use native raw signatures. It has multiple attestation formats, which are continuing to evolve, and there's numerous extensions, some of which are ubiquitously supported, some of which aren't supported at all. And it continues to be a work in progress to see which extensions are gonna be supported. So while there's a number of open choices, it does have interoperable implementations.
So I'll give this a B.
Verifiable credentials.
Some of you have heard that term many times today already. Now, here I'm only going to talk about the W3C version. I mean, there's other forms, like SD-JWTs and ISO mdocs and what have you, but we'll talk about the W3C Verifiable Credentials work. There's been 1.0, 1.1, and we're working on 2.0. All of them make somewhat different choices that aren't necessarily backwards compatible. There's two different families of ways to sign verifiable credentials.
There's the VC-JOSE-COSE family that I've worked on, where you can use JSON Web Signature, or you can use COSE, or you can use SD-JWTs. And then there's the VC Data Integrity family, which canonicalizes JSON-LD in one of two ways, either into RDF or using the JSON canonicalization standard. So a lot of open choices. What would you give this? I give it a C. Decentralized identifiers.
Okay, good. You get it; you know it. The spec does say what operations a DID method has to define or implement, but it doesn't say how; that's left up to the DID methods. When I wrote this a couple weeks ago, there were 193 DID methods. I bet anybody a dollar that there's more now.
None are mandatory to implement, so there's no interoperability guarantee at all. You can have a DID implementation that's conforming, I can have a different one, and they needn't work together. Furthermore, they re-chartered the working group, and DID methods are still out of scope, so this is not gonna get better.
So somebody tell me, what grade would you give this? Call it out.
D,
Correct, D. But it does get worse. There's a thing called multiformats that has been proposed in the W3C and the IETF, where there's a spec called multibase that defines 23 different equivalent and non-interoperable representations for a binary string of data. You can encode it as various forms of base64, base58, base36, base32, hexadecimal, base8, base2, or leave it as binary. Interoperability therefore requires implementing all 23 or having a profile that narrows down which ones you're gonna actually use.
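To see the problem concretely, here's a sketch of the same five bytes in three of those encodings, using Python's standard library. The single-character prefixes follow the multibase convention of tagging each string with its encoding, as best I recall the assignments; treat them as illustrative:

```python
import base64
import binascii

data = b"hello"

# Three of the many equivalent, mutually non-interoperable encodings,
# each tagged with a multibase-style prefix character (illustrative)
encodings = {
    "base16 (hex)": "f" + binascii.hexlify(data).decode(),
    "base32":       "b" + base64.b32encode(data).decode().rstrip("=").lower(),
    "base64":       "m" + base64.b64encode(data).decode().rstrip("="),
}
# Same five bytes, three incompatible strings; a conforming reader
# has to handle every encoding the spec allows
```

A reader that only implements one of these simply cannot consume data produced with another, which is exactly the failure to choose being criticized.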
So in my mind, multiformats institutionalizes the failure to make a choice. And you know, this isn't an academic exercise, unfortunately. This is used by many DID methods, and it's used by VC Data Integrity. This is so bad that, while I normally never spend my time trying to stop bad ideas from happening, I wrote a blog post called "Multiformat Considered Harmful" and sent it to the relevant people at the IETF. So, a few closing remarks on making choices in standards. Enabling layered protocols is a choice.
We've talked about a number of different kinds of layerings, from Ethernet packets to IP packets to TCP ports to JWT types, et cetera. This is not a failure to make a choice. This is a choice to enable protocols to be layered one upon another in a constructive way.
Planning for evolution is a choice. Sometimes choices must change over time, and building in the ability to do that is a good choice. In particular, I already talked about how in JSON Web Signature, you have to be able to change the algorithms over time.
Indeed, because the security properties of different cryptographic operations will change. And in fact, we're, you know, possibly entering the post-quantum apocalypse at some point, and we'll have to change our algorithms again. Only supporting a fixed algorithm would be a poor choice.
Finally, extensibility is a choice. All the specifications I've talked about do enable extensions; living, thriving ecosystems require that. So you know, we can add new features such as DPoP to OAuth. We could add ID Tokens to OAuth. And my recommendation is to use extension mechanisms that don't break existing implementations when you add an extension. The classic "if you don't understand it, you must ignore it" language has served us well. So I hope you'll agree with me, having gone on this tour, that standards are about making choices. So if you're writing standards, please make good ones.
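That must-ignore extension rule can be sketched in a few lines: a consumer keeps the members it understands and silently skips the rest, so newer producers don't break it. The claim names here are illustrative:

```python
import json

# The claims this (deliberately old) implementation understands
KNOWN_CLAIMS = {"iss", "sub", "exp"}

def process(token_json: str) -> dict:
    claims = json.loads(token_json)
    # Must-ignore extensibility: skip members we don't understand, so
    # extensions added later don't break existing implementations
    return {k: v for k, v in claims.items() if k in KNOWN_CLAIMS}

# A newer token carrying a claim this implementation predates
result = process('{"iss": "https://issuer.example", "new_claim": 42}')
assert "new_claim" not in result
```

The alternative, rejecting anything unrecognized, would freeze the ecosystem: no one could deploy an extension until every consumer had been upgraded.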
Finally,
I'd like to thank MATTR for both supporting this work and providing a designer, so that the design values are higher than black text on a white background, and Coture Call for also supporting this work.
Before you go, I've gotta ask you: in your experience with standards, how do you expect AI to influence the development and adoption of security and identity standards?
It may help people write code that uses the standards.
Maybe I'm old school, but I expect human beings to be the ones making the choices to enable the interoperability, to build the protocols and to write the data formats.
Thank you so much. You're one of the very few people I know who can make standards interesting, engaging, and even entertaining. Thank you very much, Dr. Michael Jones. Thank y'all.