We went through something called Token Binding a few years ago, where we were trying to bind, at the TLS layer, the tokens issued to the channel on which they were transported. And it kind of worked, but if you had inspecting proxies, it didn't work; it counted on things being end to end. And so when some of the browsers, as it was finishing, didn't deploy it, we had to start over. In some sense, we went back to thinking about, okay, how do you issue a token at the application level which you can prove was issued to you? And that's what we called DPoP: Demonstrating Proof of Possession.
And the way you do that is you have a cryptographic proof, signed with a private key that you hold, that's sent along with the token. So how would that work?
I give a statement, in this case a JSON Web Token, a JWT, which we call the DPoP proof, to the issuer. The issuer includes that in the token. And then when, for instance, a resource gets that token, it checks that the DPoP proof was signed by the correct party. And if you try to use that token from a place you took it to that's not the correct holder, it fails, provided you've correctly secured the DPoP key.
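To make the shape of this concrete, here is a minimal, standard-library-only Python sketch of a DPoP-style proof JWT and its check. The claim names (htm, htu, iat, jti) follow the DPoP spec, but everything else is illustrative: a real proof is signed with an asymmetric key such as ES256 and carries the public key in the jwk header, whereas this sketch stands in an HMAC so it can run without a crypto library.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Stand-in for a real asymmetric signature (DPoP uses e.g. ES256);
# HMAC is used here only so the sketch runs with the standard library.
def sign(signing_input: bytes, key: bytes) -> str:
    return b64url(hmac.new(key, signing_input, hashlib.sha256).digest())

def make_dpop_proof(method: str, uri: str, key: bytes) -> str:
    # "jwk" would normally hold the holder's public key; placeholder here.
    header = {"typ": "dpop+jwt", "alg": "HS256-standin", "jwk": {"kid": "demo"}}
    claims = {
        "htm": method,            # HTTP method the proof covers
        "htu": uri,               # target URI the proof covers
        "iat": int(time.time()),  # issued-at time
        "jti": str(uuid.uuid4()), # unique ID so proofs can't be replayed
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode())).encode()
    return signing_input.decode() + "." + sign(signing_input, key)

def verify_dpop_proof(proof: str, method: str, uri: str, key: bytes) -> bool:
    head_b64, claims_b64, sig = proof.split(".")
    expected = sign(f"{head_b64}.{claims_b64}".encode(), key)
    if not hmac.compare_digest(sig, expected):
        return False  # not signed by the legitimate holder's key
    pad = lambda s: s + "=" * (-len(s) % 4)
    claims = json.loads(base64.urlsafe_b64decode(pad(claims_b64)))
    # The proof is only good for the exact method and URI it names.
    return claims["htm"] == method and claims["htu"] == uri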
For instance, if you kept it in a secure element, or just kept it in secure parts of your operating system software, then even if I'm the insider attacker, well, I can steal the token, but I can't steal the private key that signs the DPoP proof, which proves that I am the legitimate holder of this token. So this was an idea that came up at the OAuth Security Workshop in Stuttgart three years ago.
And the OAuth working group has been working through this. I happen to be fortunate to be at Microsoft, where we have a product team that had perceived needs from customers to do proof of possession.
We didn't have the standard ready, so they're not yet using it, but, you know, they cobbled something together for Office 365 using parts that we had lying around: one of them an old HTTP signing draft to sign the request, and the other RFC 7800, the confirmation claim, which is used to represent the DPoP key in the proof.
Now, why am I talking about my employer when this is actually an industry venture? It's because my colleague Pieter Kasselman and I had asked our product team, the Azure Active Directory protocols team, to look at DPoP, and they told us, oh, well, it doesn't do everything we need.
In particular, they described an attack, which they had seen in the wild against our proprietary proof-of-possession mechanism, where the attacker, again the insider, would pre-generate DPoP proofs, but with dates of their choosing.
So they could generate proofs into the future, which were signed with the correct private key. And if you had these proofs and you could take them off the machine, they were effectively bearer tokens, because they had been signed with the correct proof key. How do we mitigate that? The way we do that is you have the issuing server or the resource contribute material to put in the DPoP proof. This is material we call a nonce, and this is a standard pattern in identity protocols.
So for instance, an OpenID Connect ID Token has a nonce contributed by the relying party. When the relying party gets the ID Token back, it knows whether this is the nonce it gave, and it knows to reject the token if the nonce doesn't match. Well, if the server controls the lifetime of the nonces that get put into the DPoP proof, then the server can control the lifetime of the DPoP proofs that are used to generate access tokens and refresh tokens.
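A toy sketch of the server-nonce idea: by putting an expiry on each nonce it hands out, the server bounds the useful lifetime of any DPoP proof that embeds one, which defeats pre-generated proofs. The class and method names here are hypothetical, not from the spec.

```python
import secrets
import time

class NonceManager:
    """Illustrative server-side nonce store. Because each nonce expires,
    a pre-generated proof embedding an old nonce stops being redeemable,
    no matter what future date the attacker put inside the proof."""

    def __init__(self, lifetime_seconds: float):
        self.lifetime = lifetime_seconds
        self.issued = {}  # nonce -> expiry timestamp

    def issue(self) -> str:
        nonce = secrets.token_urlsafe(16)  # unguessable server contribution
        self.issued[nonce] = time.time() + self.lifetime
        return nonce

    def accept(self, nonce: str) -> bool:
        # A proof is only accepted while the nonce it carries is still live.
        expiry = self.issued.get(nonce)
        return expiry is not None and time.time() < expiry
```

A single-use policy would simply delete the nonce in `accept`; the time-based window is the lighter-weight variant the speakers expect in practice.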
And so there have been a couple of late additions to the DPoP spec. The server nonce is the primary one.
And that way you can't have a pre-generation attack. It's optional, but we think that at least the higher-security deployments are going to implement it. And Pieter, what was the other thing we added?
Authorization code binding?
So one of the other attacks that we discovered was log-file attacks. The pattern of that attack is that authorization codes end up being logged somewhere along the line.
And even though authorization codes are, you know, meant to be one-time use, et cetera, sometimes there are small time windows, and sometimes attackers are particularly sophisticated.
They can actually prevent you from presenting the authorization code during the authorization code flow. And so this authorization code ends up in a log file, which the attackers can then exfiltrate.
And again, these are things that we observed in the wild. And so one of the additional mitigations that we added was to allow the DPoP public key to be registered during the initiation phase, in the initial authorization request, so that it is not possible to present the authorization code and redeem it for an access token by executing this attack once the attackers have the log files. So that was the second addition that we made to the DPoP proof.
So the layman's description of that is that it lets the entire OAuth flow be bound to the DPoP key. Again, originally the DPoP key was applicable to token issuance, and in the OAuth code flow, which is the predominant flow, token issuance happens on the second leg of the protocol.
First, you do an authorization request and you get back an authorization code. Then you redeem that authorization code at the token endpoint. And it was at that point that we were sending the DPoP proof and getting it included in the access tokens and the refresh tokens.
But if it's possible to reuse an authorization code, which again, OAuth says you're not supposed to be able to do, but in practice, if you have a distributed implementation where you have many different servers implementing your authorization server, and they're not tightly synchronized, it's possible for you to get a code from one instance and redeem it at another,
and the system doesn't notice. So if you could, as an attacker, get a copy of the authorization code, and it wasn't bound to the DPoP key,
then, if you did it before the other party, or if multiple redemptions were possible, you could have gotten access tokens which were legitimate and bound to the key of your choice. As the attacker, you could switch keys before redeeming the code. Whereas by giving the option to send a thumbprint of the key in the initial request, the authorization request, the authorization code is cryptographically bound to the key. And therefore, when you redeem it, you can check that an attacker hasn't switched keys on you.
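A sketch of the thumbprint binding just described. The canonicalize-then-hash step follows the RFC 7638 JWK thumbprint pattern (shown for an EC key), and the spec's parameter for carrying the thumbprint in the authorization request is dpop_jkt; the function names and example key values here are invented for illustration.

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638-style thumbprint: SHA-256 over the canonical JSON form of
    the key's required members, in lexicographic order (EC: crv, kty, x, y)."""
    canonical = json.dumps(
        {k: jwk[k] for k in ("crv", "kty", "x", "y")},
        separators=(",", ":"), sort_keys=True).encode()
    digest = hashlib.sha256(canonical).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# At the authorization endpoint, the server stores the dpop_jkt thumbprint
# sent with the request alongside the issued code. At the token endpoint,
# it recomputes the thumbprint of the key in the presented DPoP proof and
# refuses redemption if an attacker has switched keys.
def redeem_allowed(stored_thumbprint: str, proof_jwk: dict) -> bool:
    return stored_thumbprint == jwk_thumbprint(proof_jwk)
```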
So I'll give you a bit of status.
The DPoP spec has gone through working group last call at this point in the OAuth working group, which is a big deal, because once you're through working group last call in the IETF process, at that point you are close to becoming an RFC. You send it on for IETF-wide review, so that's in process. And following that, you send it to the RFC Editor and it gets published.
So after a long and winding journey, which in some sense starts with OAuth 1, moves through Eran Hammer-Lahav's attempt at a proof-of-possession bound token type for OAuth 2, which at the time there wasn't enough interest in finishing, and so it languished, and through Justin Richer's HTTP signing work, which the OAuth working group again decided to drop because they thought it was too complicated, or some people did,
we've finally, finally come full circle. We are very close to an industry standard for doing proof of possession for access tokens and refresh tokens that's end to end.
Not everybody will want this. Bearer tokens are still probably fine for some use cases, but we anticipate DPoP being used pretty widely for higher-security applications. And you might not think of email or, you know, Office or whatnot as a high-security application, until you think about: what if an adversary had access to all your corporate email? What would they know about you, and what could they do with it? So we've got a lot of important customers that view, for instance, Office 365 as mission critical.
It's not a Microsoft-only protocol; I'm describing Microsoft's intended use of it for illustrative purposes. What else would people like to know? Or what would people like to say? Aaron, do you want to say anything? I'm going to go interactive now. Or Pieter, do you want to show people the diagram, if there are no questions right now?
I saw a hand come up here. Thank you.
I was just interested in getting to know the actual use case. I mean, is it for, say, file upload, or is it for messaging use cases? What sort of use cases do you envision, if you say it's for higher-security scenarios?
Anything where you think that an attacker, which might be your own employee if you're an enterprise, could get unauthorized access to the data and do some form of harm. I mean, there are plenty of cases too where the attacker is not an insider, where there are vectors by which they could find an access token in a log.
There are a number of ways documented in the forthcoming OAuth 2.0 Security Best Current Practice document that tokens get stolen, and ways to try to prevent that token theft. But, you know, again, I'm going to use the Microsoft example because I know it: you might not think of your Outlook email or your Excel or your PowerPoint as highly sensitive applications.
But if you're in the military, if you're an intelligence agency, if you're a bank, if there is sensitive customer data, including PII, that's accessible if you have the bearer token, particularly in the age of GDPR, all of a sudden those applications are a big deal, and for your customers and for yourselves, you owe it to yourself to protect them.
We have one more question here.
Yeah.
Hi, I'm Aaron Parecki, also part of the OAuth working group, so I've been around these discussions quite a bit. I first want to say I'm very excited about this, because I think it's a very near-term solution to this problem that Mike has described: doing something better than bearer tokens for proof of possession. I do want to clarify a point on HTTP signatures, which I'm also very excited about, which is solving the same problem of proof of possession.
And my personal opinion is that HTTP signatures is a better long-term solution to the problem, because it solves it at the lower layer of HTTP, whereas this is something you do in your application; you build this into your app code. The trade-off here is that HTTP signatures has a very long and complicated history that has gone through many challenging times itself.
And it probably has a similar road ahead, but it is now adopted by the HTTP working group, so I think it's finally in the right place to go ahead.
I think it's going to be years before that is practical in any way to deploy. So I think that's why this is a very good approach now: we can actually use this today. What is described, you can now build into your apps, and it does solve the problem. But I also don't want to dismiss the HTTP signatures work at the same time; it's more that it'll be a couple of years before that spec is done, and then several more years before it's baked into libraries to the point where it's usable.
Right.
And I should add that the DPoP spec effectively signs a very small, fixed set of the HTTP request parameters. And you need to do that, or the attacker could generate a different message and still have it processed, or change the message. So for instance, we do sign the HTTP method, whether it's GET or PUT or POST or what have you, and we sign the URI the message is sent to. And so I am not saying that we're not doing a form of HTTP signing; we absolutely are. But we fixed the set of fields that are signed, and we tried to sign as few of them as possible.
I mean, part of why HTTP signing has a tortured history, and Aaron or Justin Richer, who's also here, could talk with you about it, is that HTTP goes through all kinds of gateways and proxies and whatnot. And along the way they add headers, sometimes they rewrite headers, sometimes they remove headers, and that's all part of the way the real web works today. It was never designed so that the headers you sent were exactly the headers that were received; some of them are added along the way to provide context about what happened in transit.
And again, I'm not an expert in this, but I do know that one of the reasons that OAuth 1 didn't take the world by storm is that it had the requirement to sign a number of HTTP headers, and sometimes they got rewritten. And so sometimes OAuth 1 just didn't work.
Do we have any more questions? We still have a few minutes.
Hello.
This might be a very basic question, but is DPoP expected to be used with authorization tokens that are valid only for a very short time? How long could the validity be expected to be?
That's a great question. It is up to the server how long to accept the DPoP-bound tokens. So you could, again with the nonces, every time that a token is used, say in the reply: next time, use this new nonce. And so you could even have them be single use. We think in practice, because we don't want to add huge server load, there will be time-based windows,
where you can reuse a DPoP-bound access token or refresh token for a limited period of time, until the server decides to issue a new nonce, at which point you might try to reuse it and it fails. But the good news is that the failure situations are by design self-correcting, because the failure response can say: try again using this new nonce. You generate a new DPoP proof, you ask for token issuance again,
and you can get a new token, and you're all good. So it is completely in the control of the issuing server how long to let the token be valid.
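The self-correcting retry just described might look like the following sketch. The FakeAuthServer, its method names, and the response shape are invented for illustration; the "use_dpop_nonce" error and the server-supplied fresh nonce follow the mechanism the spec defines (where the nonce travels in a DPoP-Nonce header).

```python
class FakeAuthServer:
    """Toy stand-in for a token endpoint that insists on its current nonce."""

    def __init__(self):
        self.current_nonce = "nonce-1"

    def token_request(self, proof_nonce):
        if proof_nonce != self.current_nonce:
            # Failure is self-correcting: the rejection carries a fresh
            # nonce (the spec delivers it in a DPoP-Nonce response header).
            return {"error": "use_dpop_nonce",
                    "dpop_nonce": self.current_nonce}
        return {"access_token": "token-bound-to-key"}

def request_token(server, last_known_nonce=None):
    """Client side: try with whatever nonce we last saw, and if the server
    rejects it, build a fresh DPoP proof around the new nonce and retry."""
    response = server.token_request(last_known_nonce)
    if response.get("error") == "use_dpop_nonce":
        response = server.token_request(response["dpop_nonce"])
    return response
```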
Oh, I see. Thank you.
Well, if there are no further questions, we'll have a break now, and we'll be back here at 5:30. Thank you.