Some of the top trends that we're seeing today. So this is the slide we usually use to give a bit of background on the OpenID Foundation. Our mission and our vision is to help people assert their identity wherever they choose, and to lead the global community in creating identity standards that are secure, interoperable and privacy preserving. We are an open standards body. All of our standards are freely available, they're developed in the open, anyone can come and observe our working group sessions, and there's no cost to contribute to the work of the foundation.
So we're very much an open standards body. We operate by consensus and with a lot of transparency, following the principles of the World Trade Organization's best practices. We have billions of users using our standards, predominantly for login and single sign-on; that's OpenID Connect, which you'd be most familiar with. But we're also very commonly selected for open banking and open data.
That's our FAPI family of standards. And we're selected for digital identity, such as our OpenID for Verifiable Credentials family of standards, which has been picked up for the European Digital Wallet work.
So you'll hear a lot about that during EIC. We're also used for shared signals and other use cases like health verticals and so forth. More recently, in the last few years, we've started working very closely with governments, because they're often the decision maker on the selection of standards, and in some cases they even regulate the use of our standards, as in open banking and open data, which we'll talk about in a bit. And our board is comprised of a lot of very impressive experts in the world of identity.
Most recently, Mastercard just joined our board, and there are many, many well-known players there. A couple of the icons are quite small, so I'll call out that Chicago Advisory Partners represents the Central Bank of Brazil for their open banking and open data work.
That one's a little bit tricky to read, and ConnectID, also a little tricky to read, is a bank-led consortium for identity based in Australia. So it's great to have two ecosystem representatives on our board. Just a quick note on definitions for conduit, payload and digital identity.
I'll let you read the definitions on your own, but it's helpful to distinguish between the conduit and the payload, because some of our standards comprise both of those, and in other cases it's just the conduit. Alright, so my group hug picture; I like my little meerkat friends here. You'll see a couple of fun pictures as we go through the themes.
You might know that meerkats are particularly good at raising an alarm when they see anything from a scorpion to a bird. For any sort of predator, the alarm goes up and they alert their family to get to safety.
So the analogy here is to our work on shared signals, which is helping ecosystems, often across different entities, to share signals with each other, predominantly for account or session changes and issues. But it can also be adapted for the OpenID for Verifiable Credentials family of standards.
There could be a class of events around lifecycle management of credentials. That's a category being explored between the Shared Signals and DCP working groups right now, to see if another family of standards would be worthwhile. On the right-hand side, you can see there's been some great take-up in the last couple of years, with great work and advocacy by Cisco in that shared signals guide. Apple has adopted it for enterprise use cases in partnership with Okta and others.
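To make the mechanism concrete: shared signals are delivered as Security Event Tokens (SETs) carrying event payloads such as the CAEP "session revoked" event. The sketch below is illustrative only; the issuer, audience, subject and timestamp values are invented, and the exact claim layout depends on the SSF draft version in use.

```python
# A minimal sketch of a CAEP "session revoked" event as it might appear
# inside a Security Event Token (SET), before being signed as a JWT.
# All values here are hypothetical; consult the SSF/CAEP drafts for the
# authoritative claim names.
caep_session_revoked = {
    "iss": "https://idp.example.com",   # transmitter of the signal
    "aud": "https://rp.example.com",    # receiver of the signal
    "iat": 1716200000,                  # when the SET was issued
    "jti": "756E69717565",              # unique token identifier
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {
                "format": "opaque",     # subject identifier format
                "id": "session-1234",   # the session being revoked
            },
            "event_timestamp": 1716199000,
        }
    },
}
```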
Gartner has been a great strategic partner in advocating for the work of shared signals. This past March we conducted an interop event in London, and we're working with Gartner on a replicate and next generation of that interop event in Dallas. So we're looking forward to progressing the approach that we take there. We were pleased to see the NSA and CISA reference the work of shared signals as an interesting emerging piece of standards work in their developer and vendor challenges guide.
And we're also progressing a shared signals white paper under the leadership of Sean O'Dell from Disney. He's helping to pull together a white paper in partnership with the working group that will make it easy to understand the key benefits of shared signals, and help cross the chasm in the adoption of that family of work.
So that's shared signals. Next fun picture. This one introduces the theme around lack of interoperability, using London as our guide. You can see the train stations that were built for London.
Each one had a different owner back in the 1800s when the train stations were being developed, so they're not automatically connected to each other. The Circle line had to be built to connect the stations and allow people to move between them and across the city of London. That's my analogy for the world of open banking and open data: trying to connect the data that's held, often by banks, with other fintechs and other banks, and allow user-consent-based movement of the information. We have the standard here for the conduit.
The OpenID Foundation's FAPI security profile is what allows for the secure and interoperable movement of data between many ecosystems.
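As a rough sketch of what that conduit involves in practice, here are two mechanisms commonly associated with FAPI 2.0: strong client authentication via private_key_jwt, and pushed authorization requests (PAR). This is not taken from the spec text; the endpoints, client ID, scope and redirect URI are invented for illustration.

```python
import time
import uuid

import jwt        # PyJWT, used to sign the client assertion
import requests

# Hypothetical endpoints; a real FAPI deployment publishes these in its
# discovery document.
PAR_ENDPOINT = "https://bank.example.com/par"
CLIENT_ID = "my-fintech-client"

def client_assertion(private_key_pem: str) -> str:
    # FAPI profiles require strong client authentication; private_key_jwt
    # means the client signs a short-lived JWT instead of sending a secret.
    now = int(time.time())
    return jwt.encode(
        {
            "iss": CLIENT_ID,
            "sub": CLIENT_ID,
            "aud": PAR_ENDPOINT,
            "jti": str(uuid.uuid4()),
            "iat": now,
            "exp": now + 60,
        },
        private_key_pem,
        algorithm="PS256",  # FAPI restricts signing to strong algorithms
    )

def push_authorization_request(private_key_pem: str) -> str:
    # With PAR, the authorization request parameters go directly to the
    # authorization server over TLS rather than through the browser,
    # where they could be tampered with.
    resp = requests.post(
        PAR_ENDPOINT,
        data={
            "client_id": CLIENT_ID,
            "response_type": "code",
            "redirect_uri": "https://my-fintech.example.com/cb",
            "scope": "accounts",
            "client_assertion_type":
                "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
            "client_assertion": client_assertion(private_key_pem),
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["request_uri"]  # then used in the front-channel redirect
```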
These are the flags of the countries that have chosen to use FAPI in the last couple of years. Really great adoption, starting in the UK and expanding into Brazil and Australia. And you can see that in the last year, SAMA, the Saudi Arabian Monetary Authority, launched as well. In Norway, there have been presentations here at EIC on the work of scaling it for the Norwegian Health Network.
There's been an implementation with a bank in Japan, and you can see the rest of the list at the bottom of the page. There's a lot of movement going on in the US with the CFPB, the Consumer Financial Protection Bureau, which is going through a rulemaking process right now. They plan to move to a final rule, which will oblige the first major banks to conform to the law by next spring.
And then that will cascade to other banks in the US that will need to adopt it. So there are a lot of questions in play.
We've also shared an open letter with the CFPB, because we're concerned they might not achieve some of their objectives unless they tweak their course. So there's some active work we're doing with the US government to try and help nudge them in a good direction.
And the Canadian government is moving in parallel, maybe slightly behind the US government, in their adoption. Then for the UK and Europe, I'll come on to this page. On the left-hand side are the jurisdictions that have chosen to use FAPI. On the top right-hand side are other jurisdictions using other approaches: the Berlin Group has an approach for the European Union,
India has their India Stack, and Singapore has their own stack as well.
But we're seeing a lot of appetite from some of these multinational organizations to move towards deploying open banking and open data, to enable and empower individuals to have control over their own data; and entities like the Bank for International Settlements are also interested in seeing cross-border use cases start to come about.
So we're expecting to see an ongoing wave of adoption cascading around the world as more of those multinationals exert influence through the UN and the G20's focus on digital public infrastructure, with more adoption of open banking and open data, and ultimately it crossing borders. So, next theme. A lot of conversations will happen here at EIC around the more traditional model, the more centralized digital identity ecosystem.
I could also have put "IdP" there instead of "government" in the middle of that trio.
And then on the right-hand side is the wallet-based enablement of digital identity ecosystems. On the left-hand side, our conduit of OpenID Connect is one of the key ways to enable those systems. On the right-hand side, our OpenID for Verifiable Credentials family is one of the key standards for wallet-based technologies, and there are others. So this is a simple diagram of the model for the European Digital Wallet, where one mechanism for issuing the credential is used by the EU and also by California.
The California DMV and the Japanese government are using OpenID for Verifiable Credential Issuance, with SD-JWT and W3C VC as data formats.
And then for presentation, OpenID for Verifiable Presentations and SIOPv2 are among the modes that the California DMV, Japan and the European Union have selected. There's also ISO 18013-5, the mobile driving licence standard, which I know for sure is being used by California and the EU.
I'm not sure where Japan is on their track, but I know they've just announced in the last couple of days a plan to enable Apple Wallet for ID. So most likely they would be using ISO 18013-5 for in-person presentation of mobile driving licences.
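For a sense of what the issuance leg looks like on the wire, here is a rough sketch of an OpenID for Verifiable Credential Issuance (OID4VCI) credential request, as a wallet might send it after completing the OAuth flow. The endpoint, format identifier and credential type vary across draft versions; everything here is illustrative rather than normative.

```python
import requests

# Hypothetical issuer endpoint; a real issuer advertises this in its
# credential issuer metadata.
CREDENTIAL_ENDPOINT = "https://issuer.example.gov/credential"

def request_credential(access_token: str, proof_jwt: str) -> dict:
    resp = requests.post(
        CREDENTIAL_ENDPOINT,
        headers={"Authorization": f"Bearer {access_token}"},
        json={
            # SD-JWT VC is one of the data formats mentioned above; mdoc
            # (ISO 18013-5) and W3C VC formats follow the same pattern.
            "format": "vc+sd-jwt",
            "vct": "https://credentials.example.gov/mobile_driving_license",
            # A proof of possession binds the credential to the wallet's key.
            "proof": {"proof_type": "jwt", "jwt": proof_jwt},
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains the issued credential
```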
So moving on to the next trend: digital identity is not yet globally interoperable. We're seeing this emergence of different digital identity implementations. We have well over a billion users of digital identity in India with Aadhaar; I think it's up to maybe 1.4 billion right now.
But what can easily be missed is that many other jurisdictions are actively deploying digital identity as well. Nigeria has a deployment, Bhutan has a deployment, Ethiopia, South Sudan, Morocco, many, many countries. Brazil is very notable.
They've actually had incredible adoption of their more centralized digital identity ecosystem. But those deployments are not automatically interoperable with each other. So that's the problem; but what is our goal? What should we be trying to achieve? From my perspective, we should be seeing digital identity that's as easy for someone to present, or for a relying party to consume, as it is to exchange an email address, a phone number or a payment card, and you should have the option of interoperability.
So it's completely normal that one would start with a domestic focus on how to deploy digital identity, but it's more the exception than the rule to have something like the European architectural reference framework, where there's interoperation across European countries. The left-hand side here refers to a paper that we spoke to last year; we held a lot of listening sessions over 18 months to develop a paper on human-centric digital identity for government.
The OpenID Foundation co-branded this paper with 13 other nonprofits, including MOSIP, which does open source software for the global south, and UNHCR, which is particularly keen on human-centric digital identity deployments and on interoperability across the jurisdictions that UNHCR serves for refugees. But we have quite a spectrum of different digital identity deployments. You have those that skew more centralized, those that skew more decentralized or wallet-based, public-sector-led deployments and private-sector-led deployments.
And those are just a few flags of implementations that were studied by the lead editors, Elizabeth Garber and Mark Haine, to gather that feedback. But the problem is, of course, they're not automatically interoperable with each other. There are questions on how to achieve that interoperation in terms of issuer discovery, relying party policing, mapping of trust frameworks, and quite a few challenges to overcome. So how do we solve for that question?
We acknowledged that there is at least a problem, and so we started a journey called SIDI Hub, the Sustainable and Interoperable Digital Identity Hub, to bring people together to first ask: are we agreed we have a problem around cross-border interoperability? And then, how on earth do we solve it? Because no standards body, no government, no multinational player is going to be able to solve these problems on their own.
So we posed the question. In September we invited people to an event in November in Paris, and we were quite shocked by the amount of take-up and traction we had: 24 countries, 24-plus different standards bodies and nonprofits, and a lot of thought leaders who were willing to vote with their feet, show up in Paris and have a conversation. And many of the people who arrived from African countries were actually the ones running their digital identity programs, senior leaders in Africa.
So it was great to have that global north and global south conversation, because there's actually a massive divide. There are not that many people, literally human beings, who cross over between the global north conversations working on standards and the global south conversations, where things are driven more by development dollars and national interests than by private sector competition and competitive vendor offerings.
It's a very different universe, and again a reason why you could end up with countries or continents left behind that don't interoperate with each other.
So we had our first meeting in Paris, and 92% of the people said we should keep going. We said we should have five conversations this year, and we've just had two: Cape Town was two weeks ago, with a lot of the development players again, and just yesterday we had a SIDI Hub event in Berlin with a smaller group of countries, some multinationals and a lot of the thought leaders from the standards world. So this is our goal: to find what we need to achieve digital identity interoperability.
And we have a bunch of work streams to support the decision-making bodies: to help the G7 and G20 with policies that can flow all the way down into protocols, and to help governments nudge towards a path where they can take their existing implementations and converge towards interoperable approaches.
We're making sure that if there are gaps in the interfaces between standards, we're supporting the standards bodies to solve them, and certainly making sure private entities and vendors are on a course to help national bodies converge as well.
There's also a gap in research. We see a lot of research coming out of academic institutions, but it's not necessarily directly impacting what we see on the ground as implementers. So developing more constructive collaboration with academia is an area of opportunity.
So across the five work streams, our key goals are first to develop the champion use cases, just as one might do in a private company: what are the first use cases that need to come off the production line to build the foundations that allow for interoperability? We're doing the analysis to make that a clear and transparent process.
Not just putting a finger in the air and saying, I like these three use cases, let's do them.
But really having a criteria-based and transparent process to select use cases that will serve the global north and the global south.
We want to set foundations we can build upon and layer on additional use cases as we scale; determine the minimum requirements for digital identity interoperability; map trust frameworks between jurisdictions in partnership with OIX, Trust Over IP, DIACC and other experts in trust frameworks; determine metrics of success for this work; and work on governance, both how SIDI Hub operates and identifying the gaps and coming up with pragmatic ways to close them. So there's a lot going on in SIDI Hub. Happy to talk more about that.
That's sidi-hub.community if you would like to get involved in the work online.
Our last theme for today is convergence; a pretty picture of Hawaii here for the convergence trend in 2024. This is looking at just some of the government partners that the OpenID Foundation is working with. We see these spheres of digital identity, open finance and data, and faster payments on a kind of convergence course.
Some countries like Brazil have built their tech stacks with intention: they picked their faster payments capability, their open finance and open insurance initiatives, and their digital identity plan, all connected together with a government-led approach. Others are still in very different places. You might have digital identity in the US with mobile driving licences in individual states, and then a separate national open finance effort led by the government through the CFPB regulation.
So two very different spheres of stakeholders on the US side, which is why they're on two sides of the coin.
And then the next page expands that diagram. If you add in digital government services, and I should have civil registry up here as another sphere, another circle, when you add those four or five circles together, that adds up to digital public infrastructure.
What we're seeing right now is a lot of that discussion at the G7 and G20 level. The UNDP is actually using the terminology of digital public infrastructure, particularly coming out of India's leadership of the G20; they brought that term, and how they think about it within the India Stack, into the global discourse.
But it can be tricky for us as identity professionals to realize that digital identity is one central piece of that wider set of spheres, especially when you add in civil registry and the foundational identity role that governments play. At the OpenID Foundation we're in the middle of several of those conversations, on the basis of the standards we have: OpenID Connect and the OpenID for Verifiable Credentials family of specs for digital identity, and FAPI for the open banking and open data world.
So we're seeing a lot of this opportunity, and we also feel it behooves us, as part of our mission and our vision, to help those governments make sure they're deploying in a way that is interoperable and secure, and that they use our standards effectively. If they choose our standards, we want to be of service to them. And that pretty much brings me to the end of the story.
This is a set of the initiatives: a combination of our working groups as well as some areas where we're listening and learning, like post-quantum cryptography and AI, and a few white papers that we're working on. For the rest of today's session, I'll be calling up some of my friends. First we'll have Alan talking about the AuthZEN working group and some of the interoperability event insights the group has learned.
That's one of our newest working groups, just formed last year.
Later, at the end, we're going to have Giuseppe talking about some of the work within the AB/Connect working group on federation, which is really exciting, so delighted to have him. Joseph will talk about our certification program, which is expanding rapidly to support our different working groups as they take specs through to Implementer's Draft and to Final; the certification team works hard to keep up with our working groups. And then security analysis.
Fabian, from the University of Stuttgart, will talk about why security analysis is important and what we're doing together to progress security analysis on our protocols. So that is it. If you are not already a member, please feel free; you're welcome to contribute at no cost. And of course we are very happy to have additional members join the foundation to help us deliver on our mission and our vision. Any quick questions before we move on to Alan? Alright.
All right, everyone, drink your coffees. We'll come back to questions later on. Alan?
Thank you.
Thank you. No, no, I need that piece too. Thank you, everybody.
So wow, that was quite a walkthrough of a whole bunch of different standards and different working groups and all sorts of things. I want to talk... oh, I don't want that one.
Oh, sorry. No, keep going back.
I'm just going back to where we were. Yep, next page. There we go.
Okay, good. I'm okay now. So I want to talk just a little bit about the AuthZEN working group.
As Gail said, it's one of the youngest, most recently formed working groups that we have within the OIDF. A little bit about who we are, why we are, and some of the things we've achieved. So first of all, why? Authorization has been one of my passion items for several years, and David Brossard of Axiomatics and I have been XACML fans since, I think, the turn of the century.
But the challenge we've had with authorization is that standardization just hasn't happened. Everybody has their own silos in terms of authorization. And part of the reason for that is that we needed to work out the authentication part of the problem first; we needed to agree on who we were dealing with before we could really get to standardization of authorization.
When we started talking about this, two years ago at Identiverse, I got a group of people together, ten people or so, and I was pretty excited because it was a lot of the big players in the authorization space. We got into, I believe, the OIDF conference room, had some lunch, started talking about this and said, wow, that would be a really good idea.
Let's do that. So we did, and then about nine months later Gerry Gebel called me and said, hey, remember that meeting we had? Should we do it again?
So last year at Identiverse we all got together again, kickstarted things and actually formed the AuthZEN working group. The challenge we're looking at is that when you start looking at authorization, there are so many different areas where we don't work together that it's an open greenfield for standardization. Almost everything needs some level of agreement. We broke it down into essentially three tracks or work streams.
I'll go into some of those in a moment, where we tried to break it down and said, okay, these are some particular areas we're going into. One of the reasons for this was that we started talking about authorization not from a practitioner's perspective, not from a software perspective, but from a consumer perspective.
And actually, I see Hutch sitting right here in the corner, and he was the one who passionately made the claim: I have 150 different apps in my organization, and each and every one of them does authorization differently.
So when I'm trying to invoke a policy and enforce it consistently across all of those apps, I basically have to have 150 experts, one for each of those products, to ensure we're doing that. And that was one of the goals we're trying to get to: how can we get away from each of those silos? Note to self: there's a big flag here for SaaS apps, and bringing SaaS apps into this picture. There's a really big flag on this.
So yes, people will say that XACML is a standard. We did have that; we had the NIST ABAC work done, I don't know, 20 years ago, and there's been a fair bit of work around authorization. But when you look at the product vendors, we don't work together, right?
And so that was the space we started working towards. We formed the working group a year ago, almost exactly a year ago.
We started off with a whole bunch of the vendors, obviously including Amazon with Cedar, and all of the different standards. We started talking about what they offer; we did technical presentations from each one of them and started diving in to see what was there. And for the first work stream, we thought: what can we do that's easy? What can we do that's achievable, and that the world can look at and say, hey, these guys are at least able to talk to each other? That was our low bar of success.
So we started looking at essentially the communication between the enforcement point and the decision point.
I'm using XACML terms here: the PEP and the PDP. The observation was that, almost without fail, no matter which vendor or implementation you look at, there is something asking a question that says "can this user do this thing?" and expecting an answer that is basically yes or no. Is there a way we could at least agree on that, and standardize that piece?
That was our low-hanging fruit. Around November last year we said, let's put ourselves on a timeframe: let's have an interop event. We did the interop last week at Identiverse, and we've got another one scheduled in one of these rooms here this week for any of the European folks who want to come to that.
I'm going to talk about that interop; it was actually quite a successful event.
I've got some slides we'll go into. Is there more work to be done? Oh hell yes. There is enough work to keep us all going for the next 20 years. One of the things that's actually really important, one of the work streams we've got, is use cases and what we actually need to solve. The little secret behind this is that most of us on the working group are technical guys who work with product, and we like standardizing really difficult things and stuff that's fun to work on. Very often,
those are not the things that the market actually wants and needs. So working out the use cases we want to get to is one of the work streams we've got.
This is my plug to encourage you: if you have input on those, come and get involved in the group and start putting those in and working with them.
Okay, so here's basically what we did with the interop. We said, what about if we built a simple web app? It was a to-do application; it doesn't really do terribly much. We defined a rough API. For anybody who remembers XACML, or the intermediate step we took with the JSON-based profile, we agreed that a request is going to be about a subject, an action and a resource. There's a lot of discussion in the working group about the names of these attributes, what we need, whether they should be optional or compulsory, et cetera.
But that's what happens in standards work.
Essentially, we set up an API where that kind of request gets that kind of response. All of the API definitions are up on GitHub, so you can go and have a look at them; a sketch of the shape follows below.
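Here is a minimal sketch of the request/response shape the interop agreed on, following the AuthZEN evaluation API. The endpoint path follows the published draft, but the base URL, subject, action and resource values are invented, and, as noted above, the exact attribute names were (and are) debated in the working group.

```python
import requests

PDP_BASE_URL = "https://pdp.example.com"  # each vendor supplied one URL

def check_access(subject_id: str, action: str, resource_id: str) -> bool:
    # The PEP asks "can this user do this thing?" and gets back
    # essentially a yes or a no.
    resp = requests.post(
        f"{PDP_BASE_URL}/access/v1/evaluation",
        json={
            "subject": {"type": "user", "id": subject_id},
            "action": {"name": action},
            "resource": {"type": "todo", "id": resource_id},
        },
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["decision"] is True

# Example (hypothetical values):
#   check_access("alice@example.com", "can_delete_todo", "42")
```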
Then we built a very simple web app. It was a JavaScript-based web app, probably liberally borrowed from some other to-do app we had. But essentially, we had a few different roles, an owner, a viewer, an admin and so on, and various kinds of actions they could do. We set it up so that each of those was an authorization point, and each authorization point issued the request we just looked at.
We pushed that out from the backend of the application, and we abstracted out the authorization service. We said: for anybody who wants to work within the interop, give us a single URL that will be the destination for your authorization service, and you get to consume that request and generate the response. So that's what we did. At Identiverse we had a whole bunch of different implementations. You can go and have a look at the web app itself; it's got a pop-up menu of each and every one of these implementations, so you can choose the backend.
And one of the things I think is pretty cool about it is that you can choose it mid-use, meaning you can create the note using, I don't know, the Axiomatics backend, and then switch over and use Open Policy Agent to check the reading of the note, and things like that, because we all agree and interoperate on the request, the understanding of the subject, the understanding of the resource.
And so everybody came into it.
Needless to say, we had a very successful interop; all of the implementations were actually working and we proved the right things, which to my mind was the first time we had multiple, very different authorization servers or services all working from the same request. Why is that particularly interesting? At just the base level, it absolutely does not solve all of the problems.
But at the base level, it means an application developer doesn't have to decide who their authorization service is when they're working on their app. That means the resistance to using an external authorization service goes down, right? If you can just switch who's going to provide it, you can say, okay, we can go off to Ping (I guess Ping and ForgeRock are the same animal now), or to Axiomatics, or any one of the other ones, and use that for your authorization service.
That means that from a developer's perspective, the commitment to use an external authorization service becomes a much easier decision to make. Is it there yet? Absolutely not. This was simply an implementer's draft and a very simple use case. There's a lot we want to be able to put into it, but there were definitely some cool takeaways. From any working group's perspective: don't boil the ocean. Take something simple, something easy that we can mostly all agree on, focus on that particular scenario, and do an interop.
We specced it out; Atul did a whole lot of work to actually define the spec, and we have a pretty reasonable spec that we could all work with. Needless to say, whenever we have a spec like that, every single conference call involves changing this or changing that or discussing this point, but that's what it's there for.
And then we said: let's do an interop on the simplest possible scenario. And that was a list of requests and responses.
Forget about things like multiple authorization requests in the same physical request, or boxcarring. We did not have any search APIs, so we don't have anything at the moment that says "show me all of the users that can access this resource" or "show me all of the resources this user can access". We will get there, but that's not where we started. We went for a very simple API and built the payload to match.
Most of the implementations ended up being some form of proxy: each of the vendors built something that could take the request in, convert it to call their own API, and send back the JSON response.
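As a sketch of that proxy pattern: accept the agreed request shape, translate it into the vendor's native authorization API, and map the vendor's answer back to the common response. The vendor call below is deliberately stubbed out; it is purely illustrative, not any particular vendor's adapter.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def call_vendor_pdp(subject: dict, action: dict, resource: dict) -> bool:
    # In the interop, this is where each vendor invoked their own engine
    # (policy evaluation, graph walk, etc.).
    raise NotImplementedError("replace with your vendor's native API call")

@app.post("/access/v1/evaluation")
def evaluate():
    # Consume the common AuthZEN-shaped request...
    body = request.get_json()
    allowed = call_vendor_pdp(body["subject"], body["action"], body["resource"])
    # ...and return the common yes/no decision shape.
    return jsonify({"decision": allowed})
```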
What was really interesting is that, for that whole list of logos I put up a few minutes ago, the actual work involved was measured in the realm of a couple of weeks to get them up and running. It was a relatively quick process.
Now, on the one hand that probably points to the fact that most of us are doing exactly the same thing when calling out on a request, but it was a relatively simple one. We got the implementations built; several of them were actually able to leverage each other's implementations because they were doing very similar kinds of things, and we got that momentum going with the interop. So where do we go from here? Well, we'll have another interop next year and try to do something else. One of the things we want to be able to do is look at more complex requests.
The interesting part here, coming back to the use cases: even if you're not a deep authorization person looking at the APIs, come and talk about the use cases. Do we really need to do these? We've got strong proponents and we've got strong naysayers. So come in and let's put the use cases together. But things like putting multiple requests into the actual outgoing request, or the search API: how do we do those things?
One of the challenges I've got with the search API is that sometimes it's really hard to make that sort of evaluation, and then we've got to worry about things like exceptions, or "I can't tell you that", and various pieces like that, like finding all of the subjects to return from the search API. In addition to this, there is another work stream we want to work on that we haven't even started yet.
And that is the idea of whether there is some way we can standardize at the policy administration level, rather than the request/response level. And Hutch is jumping up and down now.
The challenge there is that you pretty much know Salesforce is not going to call out to an externalized authorization server; none of the major SaaS apps can. They can't hold their own application hostage by calling out to things. But is there some way we can agree on some form of policy at the top end that they can all agree on, and then enforce it consistently?
My dream is that policy is defined by the CISO. What would be great is for the CISO to say something like: contractors cannot access HR information, that's our company policy.
Make sure that everything we have enforces that. That means giving it to Workday or Office 365 or Salesforce or ServiceNow or any of the others, and having the SaaS apps join the standardization. The win for the consumers is really, really big there.
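To make that dream concrete, here is one purely hypothetical rendering of the CISO's rule as vendor-neutral data that each SaaS app could ingest and enforce natively. No such policy-administration standard exists yet; every field name here is invented.

```python
# Hypothetical only: there is no agreed policy-administration standard.
# This is one way the CISO's rule might be written once and distributed
# to Workday, Office 365, Salesforce, ServiceNow, etc. for native
# enforcement by each app.
company_policy = {
    "id": "policy-hr-001",
    "description": "Contractors cannot access HR information",
    "effect": "deny",
    "subject": {"attribute": "employment_type", "equals": "contractor"},
    "resource": {"attribute": "data_classification", "equals": "hr"},
    "action": {"name": "*"},  # applies to every action
}
```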
However, that's a really big job, and there's a lot of work involved. One of the things we desperately do not want to do there is: we've got 15 different languages, you know what we need? We need a standard.
And now we have 16 different standards, right? So just throwing another standard onto the pile is not the right way to deal with it. There's also some work with other kinds of frameworks. We have both graph-based and more traditional policy frameworks working in the current interop, but there are other ones, ReBAC systems and things like that.
But we really want to move towards this model of being able to externalize authorization decisions. This is where you take your pictures; take a picture of that and all of the places you can come into.
We generally have the meetings on Tuesdays, West Coast time, Tuesday morning around 11 o'clock or something.
We've now also started doing a Tuesday morning slot at a more APAC-friendly time. But these are all of the places where we are, and there's quite an active discussion in Slack. Absolutely, if you have any kind of interest in the authorization space, come in and join us and work with that. And I think that's me.
Great,
Thank you, Alan. Any quick questions for Alan?
I haven't even said anything yet.
No, but seriously: when you were talking about the PAP/PDP coordination, what kind of timeline, like how far in the future, are you looking to start something?
As soon as we can, right? I mean, we have the interop running, which was nine months after we started the group, and we have that base interop piece running across the different products. That request/response, the whole process of it, is probably about 90% the same.
So getting to somewhere where we can have a standard is probably not months, but it's not many years either. I would imagine that probably 12 to 18 months out we would at least have a sort of 1.0 standard that we can start working towards. But that's just my guess; I could be wrong.
Alright,
One more quick question.
So I think you've more or less said it yourself: the PDP/PEP stuff is something we can standardize on, but where we really get lost in the weeds is when we get into policy definition and administration. In my observation, it seems to mean different things to different people. Is there any way we'll ever standardize on that? I know you mentioned the 15 and 16 languages, but isn't it just a lost cause?
I think the answer to that comes down to how many beers you've had to drink.
I think the policy expression itself is probably a lost cause. However, even then, if you can standardize at the request level, how you come up with the decision is not something we're defining at this point. For example, the graph databases, the graph ones, don't even have a policy; it's just walking through the graph. So we deliberately avoided stepping into policy definition, and rather said: here's a request, give us a response.
Yeah, I think it might be a lost cause if we're looking at standardizing policy itself.
So, join Alan for drinks later is how I heard the answer there. So thank you, Alan.
I'll be around if you want to carry on the discussion. Absolutely, yeah.
Alright, so happy to introduce our next speaker.
So Fabian, a PhD student at the University of Stuttgart, is part of the core team that has been leading security analysis on a range of OpenID Foundation specs. Sorry, the clicker.
I keep holding onto that clicker for dear life.
So yes, delighted to introduce Fabian, who personally led some of the security analysis on the OpenID for Verifiable Credentials family of specs last year. Thank you very much for that, Fabian. We look forward to hearing from you about the role of security analysis and how Stuttgart approaches it. Thank you.
Yeah, thank you very much for the opportunity to speak here. I'm very excited to talk about formal security analysis today.
I'm Fabian Hoke, and I'm a PhD student at the University of Stuttgart. My group cooperates with the OpenID Foundation to analyze the security of standards. In our research we are developing a model called the Web Infrastructure Model, which is used to analyze the security of standards. In this talk I first want to motivate why we need formal security analysis, and then I will give you some insights into how such a security analysis is carried out, so that it becomes less of a black box for all of you.
Okay, my animations are apparently not working here; that's a bit sad. But okay: why do we need formal security analysis? On the internet we rely heavily on open standards to guarantee interoperability. So what we usually have is a specification, and from this specification we build implementations in common languages like Rust or Python. As we all know, there are a lot of different attack vectors on web protocols, so it is safe to assume that a specification is not just automatically secure.
So an implementation can have different vulnerabilities; we assume there is a set of vulnerabilities the application has. These can be, for example, SQL injection, bearer token leakage, access token injection, or buffer overflows and so on. But these are only the known types of attacks, and there's also a huge set which we probably have not even discovered yet.
What one usually does now is penetration testing on the implementation.
But the problem with pen testing is that it mostly looks for known types of attacks, and since it's done by humans, it will probably not find all the known attack types either. On the other hand, because it's done by humans, it will sometimes find new types of attacks. So penetration testing does not cover all of the known attacks, but it does cover some of the unknown types. Still, we will never find all of the vulnerabilities in an application this way. The formal security analysis we are doing in my group takes a different approach.
We also look at the specification, but not from an implementer's viewpoint; we look at it specifically to find security vulnerabilities. What we first do is look at the specs, filter out all the functional, security-relevant parts, and build a formal model from them. Such a formal model can be imagined as pseudocode, and it is always an abstraction of the real world; we cannot capture all the details that are in an actual implementation.
For example, things like encodings, JSON encoding or URL encoding, are not contained in such a formal model.
And since the specification is probably not secure, we assume the formal model also admits some attacks. All of the attacks we find in the formal model are also attacks on implementations: whatever we find in our research and the analysis of the formal model is applicable to implementations, and can then be fixed. But the inverse is not necessarily true.
An attack on an implementation is not necessarily contained in the model, because the model is an abstraction. For that reason it is really important that we have comprehensive models of the real world, and our Web Infrastructure Model is the most comprehensive model of the web to date. The advantage of such a formal model compared to pen testing is that we can find all possible attacks within the model, as you can see here on the slide as well.
This approach looks at the protocol as a whole, defines security properties on it, and then proves the absence of all possible attacks within the limits of the formal model.
This was maybe a bit complicated, so to boil it all down into one meme: even if pen testing did not find any attack, there can still be that one test case that was not tested that is still vulnerable.
If we compare this to the formal approach, where we can prove the absence of all possible attacks within the limits of the formal model, it's easy to see why formal security analysis is an effective way to exclude vulnerabilities, or to find them. And we can also do formal security analysis before a specification is even implemented, basically early in the process, to fix security flaws as early as possible.
So what have we analyzed so far? The formal analysis in my group basically started with OAuth 2.0.
That was among the first protocols we analyzed, and there had been a lot of research on OAuth 2.0 before; even so, when we started, our approach uncovered new, unknown attacks on OAuth 2.0. Since the approach proved its effectiveness, we continued to analyze other standards like OpenID Connect, FAPI 1.0 and FAPI 2.0, and others.
We also looked into the more recent standards of the OpenID for Verifiable Credentials family, but those results have not been peer reviewed yet. All of these analyses led to the discovery of many different attacks.
As I said at the beginning, my group cooperates with the OpenID Foundation to analyze standards. Right now there are two analyses going on. The first is the Shared Signals and Events framework, and this analysis uncovered some attacks which will lead to some normative changes in the draft.
Then there's another one going on for the OpenID Federation specification, but that one just started. Furthermore, there are also some planned analyses for this year: first, Grant Management for OAuth 2.0, and the other is OpenID for Verifiable Credentials. We already looked at OpenID for Verifiable Credentials, but that was last year, and the specifications have changed since then, so it makes sense to look at them again once they are more mature.
Okay, so now let's dive into more details of the Web Infrastructure Model. The Web Infrastructure Model is a tool to formally reason about web protocols; we call it the WIM. The WIM is a pen-and-paper model: it is not mechanized in any way, everything is basically written down on paper. The WIM closely follows many existing web standards, and it can be divided into five different layers.
On the lowest layer we have the generic Dolev-Yao model, and based on this generic Dolev-Yao model we define the WIM, which models web technologies like HTTP, HTML and so on; we have a model of a web browser and a web server in there. Then we can use the WIM to model a protocol: the application-specific model is then the protocol, for example OAuth or OpenID Connect.
Then we need to define what security means in this context; these are the security properties, which can be, for example, authentication.
Intuitively, if we have an OpenID Connect implementation, we want that an attacker cannot log in as an honest user; that would be an authentication security property. The final but most comprehensive part of such an analysis is the proof. In the proof we prove that the model is secure with respect to the security properties. If the proof does not work out, then we usually have an attack, and then we need to fix the model and redo the proof. Depending on the complexity, this can take several iterations.
Okay, so let's go into more detail on these different layers. The Dolev-Yao model was first introduced in 1983 by Danny Dolev and Andrew Yao. It consists of a network and processes that can send and receive messages through the network. The network is typically controlled by a network attacker that can read, send and withhold messages. So let's say we have Alice and Bob in this network and they want to communicate with each other. Since the network is not secure, they will typically want to encrypt their messages.
In this case, Alice encrypts the plaintext with the public key of Bob and then sends the message to Bob. Bob can then use the decryption function with his private key to get the plaintext out of it. In this model we do not care about the details of the cryptographic functions; we just define that the ciphertext is secure unless the attacker knows the private key to decrypt the message. This is called an equational theory.
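In simplified notation (the paper's own symbols may differ), the equational rule for public-key encryption just described can be written as:

```latex
% Decryption succeeds only with the matching private key:
\mathsf{dec}\bigl(\mathsf{enc}(m,\ \mathsf{pub}(k)),\ k\bigr) = m
```

Nothing else is assumed about the cryptography: the attacker can apply these equations like any other process, but cannot recover $m$ from the ciphertext without $k$.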
On top of this generic Dolev-Yao model, we defined, for the Web Infrastructure Model, specific processes for browsers, servers and DNS servers.
Of course, there cannot be only one of each in such a system; we can have an arbitrary but fixed number of each of them in an analysis. The browser implements concepts like multiple windows and documents, scripts, cookies, redirects and HTTP headers, so it's quite close to a real browser implementation. And there is a generic HTTPS server that is a blueprint for an actual server, on which we can then model the protocol; it helps us receive and send HTTPS messages and so on.
And in this model, the attacker cannot only read messages on the network; he can also take over honest processes. We call this corruption.
If the attacker corrupts a server, then he learns all the secrets that the server knows, for example the private key, and also the session data that clients stored there.
Okay, so then on top of the WIM we can model the actual protocol, which can be imagined as pseudocode; this is quite close to an actual implementation of such a protocol. On this slide you can see a section of the message handler from the FAPI 2.0 analysis; it's a client implementation. You don't need to understand everything that's displayed here, but I want to highlight two things.
As we see, we have a function that is called every time a message is received by the client, and then we can differentiate, based on the path it was sent to, what should happen. That's what actually happens in a web server. And as you can see in the box below, the first thing that happens here is that the server looks into the header for the session cookie and then checks whether the session ID is contained in the server state. If it is there, it continues running the code; if the session is unknown to the server, it stops there.
This is actually how you would implement it in an actual programming language as well, I guess.
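For instance, a loose Python rendering of the session check just described might look like the following. All names here are invented for the sketch; they are not taken from the WIM or from the slide.

```python
# Illustrative only: a handler that dispatches on the request path and
# refuses to proceed for sessions unknown to the server, mirroring the
# behavior of the model's message handler described above.
def handle_request(server_state: dict, path: str, cookies: dict):
    if path == "/callback":
        # Look for the session cookie in the headers...
        session_id = cookies.get("session_id")
        # ...and only continue if that session is known to the server;
        # otherwise stop, exactly as the model's handler does.
        if session_id not in server_state.get("sessions", {}):
            return None
        session = server_state["sessions"][session_id]
        return f"continuing flow for user {session['user']}"
    return None
```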
Okay, so as I said before, we need to define what security means in the application-specific model, and these are the security properties. There are a few examples of different properties we can prove with the WIM on this slide. We have secrecy properties, meaning that a value is unknown to the attacker. An example of this is the authorization security property for OAuth 2.0,
because there we intuitively want that the attacker cannot access resources belonging to an honest user. Then there are integrity properties; an example is session integrity for OpenID Connect. This intuitively means two things.
First, the user explicitly expressed the wish to log in, and second, the user is logged in under their own identity at the end. But with the WIM we can in principle also prove other properties, for example privacy properties.
An example is a privacy property of the Mozilla BrowserID protocol, which says that the IdP does not learn which relying party a client uses. All of these security properties on the slide are only rough intuitions; if we want to actually prove them, we have to write mathematical definitions of these properties.
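Very loosely, and purely as an illustration of what such a mathematical definition looks like, the two-part session integrity intuition above might be sketched as:

```latex
% A loose sketch only; the real definitions quantify over traces of the
% WIM and are considerably more involved.
\forall\ \text{runs}\ \rho:\quad
\mathsf{loggedIn}(\rho,\, b,\, r,\, id)\ \Rightarrow\
\mathsf{startedLogin}(\rho,\, b,\, r)\ \wedge\ id \in \mathsf{identitiesOf}(b)
```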
But I will not go into more detail on that. The final thing we do is the proof. It's also the most comprehensive part of such a security analysis: over many pages we argue that the model fulfills the security properties, or that it doesn't. If it doesn't fulfill them, we typically have an attack and need to fix the protocol. The problem with pen-and-paper proofs is that they are not easy for people from outside to verify.
The other problem is that if the protocol changes, we have to redo the proof, or at least identify exactly which sections of the proof we have to change, which is a lot of work for us. And that's a kind of clash with what's needed by working groups, I guess, because for working groups it makes sense to include such a security analysis early in the process. For these reasons we are working on mechanized formal security analysis to improve the situation.
Our approach there is called DY*, which is a Dolev-Yao model implemented in the programming language F*. F* is a dependently typed functional programming language and proof assistant developed by Microsoft Research. This is joint work by the University of Stuttgart, Lancaster University, INRIA Paris and the Indian Institute of Technology. DY* allows, similar to the WIM, a fine-grained analysis of protocols down to the implementation level.
The corresponding security proofs are mechanized, which means in this context that the F* code verifies successfully. The advantage of that is that everybody who can run the F* compiler can basically check that the proof works out. These proofs are only partly automated, so we have to give F* certain proof steps and additional lemmas to prove the security properties.
But one of the most interesting things about this approach is probably that we can extract executable code from such a model, which can then interact with real deployments; this increases confidence in the correctness of the model. One could also use such a formal model as a reference to test production implementations. You have to note that this approach is not yet on the same level as the WIM.
What we are doing right now is mainly modeling cryptographic protocols that do not use web technologies, because we do not have a model of a web browser in DY* yet. Okay, so what have we analyzed so far with DY*? One is the Signal messaging protocol, and then also the ACME protocol, the Automated Certificate Management Environment used by Let's Encrypt to issue certificates. An interesting fact about that analysis is that we extracted code from the model that was able to actually interact with the Let's Encrypt servers to issue a certificate.
And then there were also some smaller protocols that were modeled, like ISO-DH and ISO-KEM.
So what's next with DY* specifically? In the near term, we want to improve the proof automation and separate the proof from the protocol. This has two advantages: it makes it easier to write the actual models, and it also makes them easier to verify. And second, it potentially allows us in the future to generate such a model from a production implementation.
There's some experimental work on compiling Rust code to F* and then proving properties about the F* model. And in the long term, as I already said, we want to have something like WIM*. This would probably be a layer on top of DY* which models web technologies like HTTP, a generic web server and the browser.
Okay. So to summarize my talk: formal security analysis proves the absence of any attack within the limits of a formal model.
Penetration testing, on the other hand, attempts to find attacks by searching for common attack patterns. Our approach using the Web Infrastructure Model has so far focused on analyzing specifications, not implementations or libraries. The WIM has proven effective in uncovering security vulnerabilities and also in finding fixes for those problems. This is not complete work but an active area of research.
One example is obviously the mechanized WIM, so WIM* basically. And the pen-and-paper WIM that we typically use to analyze standards, also in our work for the OpenID Foundation, is sometimes extended with more details to better capture the real world, because as I said at the beginning, it is crucial to have a really detailed model of the real world to get meaningful results. That concludes my talk; thank you very much for your attention.
Thank you.
Thank you so much, Fabian, for an excellent talk.
I feel like we're in an academic conference hearing your talk, so thank you for that really exciting progression on the WIM* work. I'm delighted that the OpenID Foundation board has invested in this relationship with Stuttgart, to apply this level of sophistication and due process to our standards as they go through the lifecycle. So a big thanks to the Stuttgart team. I think I saw a quick hand or two in the audience.
Alright, let's take two questions, then we'll move right on to our next talk. I got it. I'll start here.
Thank you. This is very interesting. I remember working on formal methods in the early nineties, and the whole idea was to specify at a high level, then derive models at lower and lower levels, and eventually derive an implementation. Do you see a path here where, instead of just modeling specifications, we could model specifications and through this derive an actual implementation?
Are you looking at going in that direction as well? Because then you would be able to derive a formally correct implementation.
You mean whether we can extract an actual production implementation from the formal model?
Yes. Say I want an implementation in some tech stack; theoretically, I guess, from your specification, your model, you might be able to derive a piece of running code that implements the protocol, and by that you could be assured that it works correctly.
Yeah, I think that's what happened in the ACME analysis, where we analyzed the Let's Encrypt protocol and extracted an implementation. So that's certainly possible, but we have some limitations there, because we can extract the logic of the protocol, but the model does not, for example, model cryptography. So if we extract such an implementation, we always have to plug an external crypto library into it.
But the good news here is that there are verified crypto libraries in F* which can be used to extend the extracted code with a crypto library that actually does the crypto.
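To make the shape of that concrete, here is a minimal sketch of the idea: extracted protocol logic that only talks to an abstract crypto interface, with a concrete provider plugged in afterwards. This is not the DY*/F* toolchain itself; it is an illustration in Python, and all names in it are hypothetical.

```python
# Sketch: extracted protocol logic + pluggable crypto provider.
# The real DY* pipeline extracts F*/OCaml code and can link it against a
# verified library such as HACL*; here we mimic the split in Python.
import hashlib
import hmac
from typing import Protocol


class CryptoProvider(Protocol):
    """Interface the extracted protocol logic depends on."""
    def mac(self, key: bytes, message: bytes) -> bytes: ...
    def verify(self, key: bytes, message: bytes, tag: bytes) -> bool: ...


class StdlibHmac:
    """Concrete provider plugged in 'at link time' (stdlib HMAC-SHA256)."""
    def mac(self, key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(self, key: bytes, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.mac(key, message), tag)


def protocol_step(crypto: CryptoProvider, key: bytes, payload: bytes):
    """Extracted logic: authenticate a message via the abstract interface."""
    return payload, crypto.mac(key, payload)


if __name__ == "__main__":
    crypto = StdlibHmac()
    payload, tag = protocol_step(crypto, b"shared-key", b"hello")
    assert crypto.verify(b"shared-key", payload, tag)
```

The point of the split is exactly what the answer describes: the protocol logic never touches concrete cryptography, so the crypto module can be swapped for a verified one without changing the extracted code.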
Great.
Speaker 10 01:01:25 I think, just to expand on that a bit: within the OpenID Foundation we are actually looking at going slightly the other direction. Can we take the formal analysis and a production implementation and prove that the implementation matches what was modeled in the analysis?
That's still quite early days, but if anybody is interested in that discussion, there's a working group you could potentially contribute to.
Great. I think I saw... nope.
Alright, thank you again, Fabian, for an excellent talk. Appreciate it. We can click forward one there. Here we go. So certification is our next talk, by Joseph. We might be running over just a little bit in our timing, so we're probably going to have about 15 minutes for each of the next two talks. Let's see how we go. Go ahead, Joseph.
Speaker 10 01:02:14 Thanks. Go.
Yeah, I should be fairly quick with these. I'm just going to give a bit of an update on where we are with certification. So, yep. I think a lot of people in the room will already know the OpenID certification program has been going for quite a while now. It started with OpenID Connect, but we're now doing tests for all the different versions of FAPI and FAPI-CIBA, and we've got more tests under development for some of the verifiable credential specs.
In particular, we've already got OpenID for Verifiable Presentations tests available in beta, and we've had a whole bunch of wallets run those tests. We found some bugs in the tests and some bugs in the wallets, which have pretty much all been fixed. We're also working on tests for verifiable credential issuance, and we've already got tests that people have tried for OpenID Connect for Identity Assurance.
Speaker 10 01:03:06 They're against a slightly older version of the spec, but with that spec hopefully going to final pretty soon, we're looking to update those tests. We're also actively developing tests for FAPI 2.0. We've already got some tests there, but we're still building out the full set; there are a few nuances in that spec that we have to make sure we cover before we actually launch a certification program.
And again, the OpenID certification tools are based on an automated test harness. It's a test suite you run yourself against your implementation; there's no need to involve expensive consultants and so on. It's all open source.
The tests themselves are free to run, but there is a fee if you want to have your certification formally listed on the OpenID Foundation website and have the right to use the OpenID Certified logo.
Speaker 10 01:04:05 Some of the things we've done in the last six months: we've developed a new, simplified set of tests for a new profile that Open Finance Brazil came up with. They initially launched with a protocol that had quite a few choices in it, and they saw that causing interoperability problems in the ecosystem.
So they simplified their profile, and we've updated the conformance tests to match. We're expecting Brazil Open Insurance to switch onto that profile later this year.
So we're supporting them in that. We've generally been supporting quite a few ecosystems as they look to adopt FAPI, and the test suite in due course, and we've been tackling some technical debt as well, making sure we bring our dependencies up to date. As Gail mentioned earlier, we're potentially expecting some big activity in the US and Canada starting later in the year as various regulatory rules drop. It's still not entirely clear what's going to be said by the regulators in either of those ecosystems, but we're hoping to see FAPI and/or certification endorsed in some way.
The foundation recently sent an open letter to the CFPB to try to make that situation clearer to them. So we just have to wait and see what they actually come up with in the final rules.
Speaker 10 01:05:31 We've also just hired a new team member, because there's a lot going on and we needed more effort in the team to keep up with all the output from the working groups and develop all these new tests. He's a great guy called Thomas, from Germany, with a wealth of experience in this space.
So I think he's going to be a great addition to the team. In terms of what we're looking at later this year: we need to build out tests for the OpenID Federation spec, which Giuseppe is going to explain shortly. We've had some directed funding from ConnectID in Australia, who look like they're going to be one of the early adopters of that spec, so thank you to them. We're continuing to finish off those FAPI 2.0 tests, so we have to implement the HTTP signatures that are part of the FAPI 2.0 Message Signing spec, and also the FAPI 2.0 variant of CIBA.
We're waiting on a spec from the United Arab Emirates, because they're adopting FAPI, so we will need to create a profile of the certification tests for that ecosystem once they know what their profile looks like.
Speaker 10 01:06:47 And as I mentioned, the OpenID for Identity Assurance specs should be going to final later this year, so we'll be updating the tests to cover the various specs there. They've actually split their spec up into a few different parts, so it's going to be an interesting challenge to figure out how we test some of the bits they've split out.
And we're going to continue work on the OpenID for Verifiable Presentations and OpenID for Verifiable Credential Issuance tests, and hopefully get some traction with those in the EU, who are obviously pushing on verifiable credentials quite hard at the moment.
Speaker 10 01:07:26 And we've also started some initial discussions with the Shared Signals working group to see how we might test their specs.
They did some interop work earlier in the year, and they're working on an interop profile based on that. Because it's not a typical OAuth flow, it's not entirely clear yet how we might actually test it, but there's an active discussion with that working group to figure out what we do.
And a note that we recently had a discussion with the board of the OpenID Foundation to set some initial pricing for some of those new tests, which we've set at the same level as the OpenID Connect certification tests: three and a half thousand dollars for non-members and $7,000 for members.
Sorry, $700 for members. Yes, thank you, Gail. My brain's still not quite there after a week at another conference last week and coming straight here.
And as usual, we'll probably launch some of those profiles in a kind of testing, proving mode when we initially have the tests, and do a few free certifications initially as people help prove out the tests and shake out any bugs. So if there's any of those specs you're particularly interested in being the first to certify for, please get in touch with me, and we'll share the details when we're ready. So that's me. Unless there are any questions, I shall pass over to Giuseppe.
Any questions for Joseph on the certification program?
It's definitely a very rigorous pipeline of work, and there are a lot of eager folks keen to have things like the OpenID family of certification tests ready. With the large-scale pilots there's a lot of collaboration going on, and the California DMV is looking for collaboration for their hackathon coming in September, so the team is super busy. Thank you, Joseph, for all your hard work, and obviously follow up with Joseph offline if you have any questions.
Alright, last but not least, delighted to have Giuseppe DeMarco from the Italian government, wearing his hat as a contributor to the AB/Connect working group and the specification on federation. So Giuseppe, take us away.
Speaker 12 01:09:51 Hey, good morning to everybody. I am Giuseppe DeMarco.
I work for the Presidency of the Council of Ministers of the Italian government, and today I'm very grateful to all of you here, and to the OpenID Foundation, because I have the opportunity to talk about the OpenID Federation technical specification, which I studied in a first stage, then contributed to, and then implemented in a large-scale deployment. OpenID Federation is a technical specification that shows us how to evaluate trust from a technical perspective, because the term trust can mean many things.
And OpenID Federation is a technology that shows us how to build a trust infrastructure. More in detail, the trust infrastructure we can build with Federation is distributed, hierarchical, and also decentralized. Hierarchical because, when we talk about a federation, we talk about a single federation represented by a trust anchor.
Speaker 12 01:11:03 That is, an authority that defines the trust framework, holds all the rules, and decides the policies. Decentralized because a federation authority may have intermediates, and a participant may join multiple federations without changing its configuration, unlike our previous experience with SAML 2.0, where different federations and different trust frameworks required implementers to change the metadata and also the implementation.
In this representation we have the old world on the left and the Federation design on the right. On the left we have a typical star topology, where the trust anchor is at the center and accredits and registers all the entities, the participants, within the federation. On the right we have multiple trust anchors, whether or not they share the same trust framework and the same rules, and each trust anchor may have more than a single intermediate.
Speaker 12 01:12:13 This allows the onboarding system to scale across multiple intermediates and thereby register all the leaves, the entities that implement authentication, authorization, and all the required protocols. An important note is that Federation only covers the trust evaluation mechanism, not authentication or authorization; trust comes before authentication and authorization.
OpenID Federation also gives a lot of freedom to its participants, because an entity can publish its metadata on its own and join multiple ecosystems with a single deployment, so we have this kind of scalability for a single entity. Each participant has to publish its entity configuration at a well-known endpoint, and in this configuration it can carry metadata for many protocol roles. So within an entity configuration, an entity can publish OAuth client metadata, OpenID relying party metadata, OpenID provider metadata, or authorization server metadata, even multiple at once.
Speaker 12 01:13:35 There's also a particular flexibility, because OpenID Federation is agnostic, neutral, in relation to the metadata we can configure in an entity configuration; we can configure any kind of metadata, because OpenID Federation is not limited to OpenID and OAuth 2.0. In this slide we have a representation of an entity configuration, shown decoded for human readability. We have the JWT header at the top and then the payload. Issuer and subject are the same, because the entity configuration is self-issued and self-signed.
There we have the JWKS, the JSON Web Key Set related to the federation operations, that is, to the trust evaluation mechanisms. And then we have the metadata, a JSON object that can contain multiple metadata types; this implementation contains OpenID relying party metadata, a credential issuer, a resource server, and an OAuth authorization server.
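For readers following along without the slide, a decoded entity configuration payload has roughly this shape. This is a minimal sketch; the URLs, key material, and values are invented, not taken from the Italian deployment.

```python
# Hypothetical decoded entity configuration payload (all values invented).
# It is served as a signed JWT from
# https://<entity>/.well-known/openid-federation.
entity_configuration = {
    "iss": "https://rp.example.it",   # self-issued, so issuer ...
    "sub": "https://rp.example.it",   # ... equals subject
    "iat": 1718000000,
    "exp": 1718086400,
    # Federation keys: used only for the trust evaluation mechanisms.
    "jwks": {"keys": [{"kty": "EC", "crv": "P-256", "kid": "fed-1",
                       "x": "...", "y": "..."}]},
    "authority_hints": ["https://trust-anchor.example.it"],
    "metadata": {
        "federation_entity": {"organization_name": "Example Org"},
        "openid_relying_party": {
            "client_name": "Example RP",
            "redirect_uris": ["https://rp.example.it/callback"],
            "jwks": {"keys": []},  # protocol keys, distinct from trust keys
        },
    },
}
```

Note how the `metadata` object can hold several protocol roles side by side, which is the single-deployment, multiple-ecosystems point made above.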
Speaker 12 01:14:49 And there's another particular mechanism, another particular artifact we have in Federation, that I would call a verifiable assertion: the trust mark.
A trust mark allows an entity to prove its compliance with specific security or implementation profiles. But trust marks can be very creative. Let's imagine that in a federation an entity wants to vote, or express a position in a survey; it would be possible to publish a self-asserted trust mark that says, hey, I like the color green. And then we have the authority hints, the list of the superior authorities that are able to issue signed statements about this subject.
Through its federation API, OpenID Federation therefore allows checking whether a participant is still active, protecting against any kind of revocation, and, as mentioned, whether a participant supports one or more types of protocol and whether it is compliant with specific profiles, thanks to the trust marks.
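As a rough illustration of what such a mark looks like on the wire, here is a hypothetical decoded trust mark. Claim names have shifted across OpenID Federation drafts, so treat the field names as illustrative rather than normative.

```python
# Hypothetical decoded trust mark JWT payload (values invented; exact claim
# names vary between OpenID Federation draft versions).
decoded_trust_mark = {
    "iss": "https://trust-mark-issuer.example.it",  # authorized mark issuer
    "sub": "https://rp.example.it",                 # entity the mark is about
    "id": "https://trust-anchor.example.it/tm/profile-a",  # mark identifier
    "iat": 1718000000,
    "exp": 1749536000,
}

# In the entity configuration, it would appear alongside its signed JWT:
trust_marks_claim = [
    {"id": decoded_trust_mark["id"], "trust_mark": "<signed trust mark JWT>"}
]
```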
Speaker 12 01:16:05 Here we have a representation of a federation trust chain. We have already seen the leaf's entity configuration; in the middle we have a subordinate statement.
This trust chain is composed only of the direct relation between the trust anchor and the leaf; we don't include intermediates because that would be too verbose. In the subordinate statement we have bindings that are both attribute-based and cryptographic: the issuer is the trust anchor and the subject is the leaf, and the subordinate statement contains the public key needed to validate the signature of the leaf's entity configuration. And we have two special devices there: metadata policies and constraints.
Metadata policies are a device to enforce, or dynamically change and adapt, the leaf's metadata according to the shared rules.
Speaker 12 01:17:21 In Federation we have given participants the freedom to change their metadata in any way they want. But let's suppose that in our federation we have some rules specific to the implementation, or let's suppose that one day a signature algorithm becomes weak or vulnerable. In the old world, the federation authority was forced to send thousands of emails, without any assurance about when the change would be applied or how it would propagate.
With Federation, the trust anchor and its intermediates only have to publish a metadata policy, and when the trust chains related to all the entities are refreshed, the change is applied; we will soon see how. And then we have the constraints: this subordinate statement says that this subject, the leaf, can only play the role of OpenID relying party in this context, in this federation. And with this slide, I want to represent what is happening here.
Speaker 12 01:18:34 I'm Giuseppe DeMarco, and I was introduced by Gail, the workshop chair. Everybody knows that Gail belongs to OpenID Federation, OpenID Foundation, sorry. And all of us are here thanks to EIC. We have a representation of all the links that allowed me to be here with you, and this allows you to establish a sort of confidence in, or at least to listen to, what I'm going to say. There are many other ways to establish trust with me: you may use other trust anchors; you may use my local university.
You can use the GARR consortium and go on and on until you find another trust anchor, like the one for the research and education community. You can find my name in the Department for Digital Transformation, a department of the Presidency of the Council of Ministers, and you can find my name in the eIDAS expert group.
Speaker 12 01:19:39 And so on. The meaning of this is that it's up to you to decide which trust anchor you recognize as trustworthy for establishing trust with me.
Let's enlarge the picture, bringing in other subjects. The question is how Gustav will be able to interact with Mike Jones, or Mike Jones with Gustav, because they find themselves within the same context. And today it's up to you to decide whether the OpenID Foundation is the trust anchor, or EIC, because, I don't know, some of us may not have known the OpenID Foundation before but today have the opportunity to attend this meeting and listen to all of this, and this allows Gustav and Mike Jones to have some good conversations.
Something that wouldn't have happened in a bus station or a train station, because how would Mike Jones suppose that Gustav has something to do with digital identity, the cloud, and technical stuff?
Speaker 12 01:20:51 This reduces the ecological cost and allows us to have good conversations. We are all here; the trust anchor is EIC, the conference.
This concept of delegation of trust that scales across multiple intermediates, this transitive property of trust (you trust me because you trust the OpenID Foundation and EIC, and so on), is something well established in the literature. For the last two and a half decades we have dealt with X.509 certificate chains, so the first question that comes to mind is: why should I implement OpenID Federation if the mechanism is equivalent, and an X.509 certificate chain already lets us establish trust with this transitive property?
The answer can be summarized in this slide. Yes, we have a different format: OpenID Federation allows us to build an infrastructure using multiple endpoints specialized for different tasks, exposed through a RESTful API.
Speaker 12 01:22:08 So you have transparency: you can navigate the federation in real time. In the old world, using SAML 2.0, we had a sort of proxy in the Italian deployments, and every time the ministry wanted to know how many service providers were registered behind that proxy...
Yeah, you can imagine the emails and all the double-checking with the federation authority and so on. We no longer have this problem with Federation, because the federation is transparent: you can navigate it in real time. Another difference is that the federation trust chain allows us to carry multiple cryptographic materials, specialized for different scopes. In the federation trust chain it is well established that the keys used for the trust operations must be different,
or, Roland, should be different, let's keep this kind of flexibility, from the ones used for protocol-specific operations.
Speaker 12 01:23:20 So the JWKS used for each protocol-specific operation must be included in the related metadata. And we can carry these verifiable assertions as well.
And it's funny, because in the federation trust chain we can also publish X.509 certificates. What else? Here I want to share a slide from my friend Takahiko Kawasaki, where we can see the endpoints: the well-known OpenID Federation endpoint where the entity configuration is provided, and the fetch endpoints where the intermediates and the trust anchor publish the subordinate statements about their subordinates, down to the leaf.
And here I want to show you how the metadata policies are published in the subordinate statements related to each subject, and therefore how the final metadata is processed. Everything that is not supported or not allowed in the metadata of the leaf will be changed.
Speaker 12 01:24:34 In Federation we have this concept of final metadata: the leaf publishes its metadata, but the final metadata is what comes out compliant with the federation rules.
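To make the policy mechanics concrete, here is a minimal sketch of applying a subordinate statement's metadata policy to a leaf's metadata. It implements only a few of the spec's operators, with invented values and no error handling, so treat it as an illustration of the idea rather than the normative processing rules.

```python
# Sketch of metadata policy processing (subset of operators: value, default,
# add, subset_of, one_of; the normative rules are in OpenID Federation).
def apply_policy(metadata: dict, policy: dict) -> dict:
    result = dict(metadata)
    for param, ops in policy.items():
        if "value" in ops:                        # force a fixed value
            result[param] = ops["value"]
        if "default" in ops and param not in result:
            result[param] = ops["default"]        # fill only if absent
        if "add" in ops:                          # ensure values are present
            result[param] = sorted({*result.get(param, []), *ops["add"]})
        if "subset_of" in ops:                    # strip disallowed values
            result[param] = [v for v in result.get(param, [])
                             if v in ops["subset_of"]]
        if "one_of" in ops and result.get(param) not in ops["one_of"]:
            raise ValueError(f"{param} violates one_of policy")
    return result


leaf_rp = {"id_token_signed_response_alg": "RS256",
           "grant_types": ["authorization_code", "implicit"]}
policy = {"id_token_signed_response_alg": {"one_of": ["ES256", "RS256"]},
          "grant_types": {"subset_of": ["authorization_code"]}}

final_metadata = apply_policy(leaf_rp, policy)
# final_metadata["grant_types"] == ["authorization_code"]:
# the 'implicit' grant the leaf declared is stripped by the federation rules.
```

This is also the answer to the weak-algorithm anecdote above: instead of thousands of emails, the trust anchor changes one policy entry, and every leaf's final metadata updates as trust chains are refreshed.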
In Italy we have worked a lot on this. Personally, I have worked in the wallet field since 2021, and I supposed that, well, something new would come. We had implemented Federation for the legacy infrastructure, and let's see what happened after three years: nothing has shown us that the policies, the constraints, and all the features we already have in Federation are defined in other specifications or available in other implementations.
Therefore we decided to implement the national wallet infrastructure using OpenID Federation, due to its flexibility. It also works for offline use cases, because the federation trust chain can be provided offline exactly like an X.509 certificate chain.
Speaker 12 01:25:48 And as mentioned before, Federation allows us to solve many problems.
I try to summarize some of them here. How do we establish whether a credential issuer is allowed to issue a given credential? Applying metadata policies and trust marks, Federation solves the problem: a relying party, when evaluating trust with a credential issuer, is sure that that credential issuer is allowed to issue that specific credential. And another interesting thing pertaining to trust marks: we can establish very complex policies with them.
For example, ones answering the question: is that relying party allowed to request data from underage users, disabled users, and so on? And another example that I love: is that coffee machine allowed to ask me for my email, username, and telephone number?
Yeah, you've probably experienced something like that. The answer I once got was: well, you can decide whether you want to buy the coffee from that coffee machine or change coffee machine. Okay, but what if all the coffee machines start over-asking for data?
Speaker 12 01:27:21 So we have to control this, and the concept of federation is something that was born for this.
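One hedged sketch of how an ecosystem could encode such a rule: gate the release of sensitive data on a specific trust mark in the relying party's trust chain. The mark identifier and function below are hypothetical, not taken from the Italian specification.

```python
# Hypothetical policy check: only release restricted data if the relying
# party's trust chain carries the required trust mark (identifier invented).
REQUIRED_MARK = "https://trust-anchor.example.it/tm/underage-data-allowed"

def may_request_underage_data(rp_trust_marks: list[dict]) -> bool:
    """rp_trust_marks: the 'trust_marks' entries from the RP's trust chain."""
    return any(tm.get("id") == REQUIRED_MARK for tm in rp_trust_marks)

# The over-asking coffee machine would simply lack the mark, so a provider
# evaluating its trust chain would refuse to release the extra attributes.
```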
Technically, we have a national draft defining the particular roles within the wallet ecosystem. We have the wallet provider, which is a resource server; we have an OpenID credential issuer alongside an authorization server; and we have the wallet relying party, because we have decided that the legacy OpenID relying party is something different from the wallet relying party, which we are still implementing with OpenID for Verifiable Presentations. In this representation, in green, we have the federation trust chain, and multiple trust anchors come into play in a single ecosystem.
Also, thanks to the work done with eIDAS, we have a well-established, come on guys, concept: the trusted list. We suppose that in the end our trusted list will contain multiple trust anchors, with their subject identifiers and public keys.
Speaker 12 01:28:41 And to be sure that the trust anchor is reliable, trustworthy in publishing its own entity configuration, this double check represents our security enforcement for that. And then we have the trust chain down to the subject, a wallet provider.
The wallet instance is something outside the trust chain, because the wallet instance is a personal device, not an organizational entity, and a personal device cannot be registered in a federation. Here the concept of transitive trust comes to our help: we establish trust with the wallet provider; the wallet provider issues a wallet instance attestation; and having established trust with the wallet provider, we can prove the validity of the wallet instance attestation and therefore establish trust with the personal device.
So in Federation we only deal with organizational entities. Personal devices are the things in our pockets, and we are not forced to get them accredited or registered with a federation entity; we just have to decide which will be our wallet solution.
Speaker 12 01:29:54 In this sequence diagram there's a representation of trust discovery. The subject is a credential issuer, and the use-case flow is the credential offer.
The wallet obtains the entity configuration of the credential issuer, then obtains the entity configurations of the superior authorities and the subordinate statements. In the end it obtains the trust chain, validates it, applies the policies, and obtains the final metadata; therefore it knows whether that credential issuer can issue the credential for it. And this is discovery: using Federation we can do discovery on anything we want, credential types, grants, a relying party's compliance with particular trust frameworks.
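A minimal sketch of that walk, under strong simplifying assumptions: hypothetical URLs, a single superior per entity, no caching or loop detection, and, crucially, no signature verification. A real resolver must verify each statement with the keys carried by the statement above it in the chain, and then apply the metadata policies to compute the final metadata.

```python
# Sketch of trust chain discovery (illustrative only; signature validation,
# caching, loop detection, and multi-superior handling are all omitted).
import requests  # assumes the 'requests' package is available
import jwt       # assumes PyJWT; used here for decoding WITHOUT verification

WELL_KNOWN = "/.well-known/openid-federation"

def get_statement(url: str) -> dict:
    # Entity statements are signed JWTs; we decode them unverified,
    # purely to show the shape of the walk.
    return jwt.decode(requests.get(url).text,
                      options={"verify_signature": False})

def resolve_chain(leaf: str, trust_anchor: str) -> list[dict]:
    chain = [get_statement(leaf + WELL_KNOWN)]           # leaf entity config
    entity = leaf
    while entity != trust_anchor:
        superior = get_statement(entity + WELL_KNOWN)["authority_hints"][0]
        sup_conf = get_statement(superior + WELL_KNOWN)  # superior's config
        fetch = sup_conf["metadata"]["federation_entity"][
            "federation_fetch_endpoint"]
        # Subordinate statement about `entity`, issued by its superior.
        chain.append(get_statement(f"{fetch}?sub={entity}"))
        entity = superior
    return chain
```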
And I'm finished. But hey, Mike, would you like to say something here? Because I think you are the right guy for that.
Speaker 13 01:30:59 Thank you. So no more slides.
Okay, so I realize I'm the last speaker and the last slide between you and your break, so I will keep this short. The main thing I wanted you to know about the status: this past weekend we started the OpenID Foundation-wide review for the proposed last implementer's draft of OpenID Federation. We thought about going to final, but we did enough substantial things.
We thought, well, we'll have an implementer's draft, we'll get implementation feedback, and we'll try to go to final sometime this year. We've started developing the specifications for certification of Federation implementations, and I'm working with the certification team, including Joseph, who you just heard from, on getting that going. There's a bunch of libraries and open-source support, and there are some use cases we're looking at for FAPI and for wallet ecosystems that will result in some additional definitions. Excuse me.
And, you know, there's the discussion: X.509 has certificate chains, Federation has trust chains. Go see John Bradley's talk this afternoon on multilateral federation, the solution to the problem that the identity wallets don't yet understand: they have to understand the potential relationships between them. And with that, unless any of you want to probe Giuseppe or Roland or me, we will release you to your break.
Any questions from the audience for Giuseppe, Mike, or Roland?
I have... oh, sorry, is there one back here? Alright, I have one actually. Maybe you can elaborate, Giuseppe, on where the European eIDAS expert group is on their evaluation of Federation. I know it's been a journey as you've been deploying it, and some of the other countries have been deliberating whether they'd like to adopt the federation model as well. What's the temperature at the moment?
Speaker 12 01:33:29 Oh, thank you for this question. Okay.
I'll go straight to the point. I got a lot of positive feedback, with a huge concern about the industrial diffusion of Federation. Many of them said, well, wow, it's fantastic.
Oh yeah, it looks really developer-friendly. Wow, it's JWT. Wow, it's the same mechanism as X.509, but with a lot of additional features that we still don't have. But nobody uses it. Okay, I work in the field of digital innovation, so I can do this even if others didn't. So the question is: who drives the industry? I would love to say that we are a community, and a community needs time.
Great. Thank you Abby.
Yeah,
Speaker 14 01:34:41 We have the X.509 Day from the ITU; it's a yearly event. This needs to be presented at the ITU, in that workshop. We have an X.509 appreciation conference every year, and you get all the adoption, the stories, and what's happening with it. This needs to be part of the program, so I'll make sure it is next time.
Thank you, Abby. So it's an invitation from the ITU, where Abby has a role, obviously, to present the work on Federation to the ITU folks who work on X.509. So thank you, Abby. Cool. Other comments? No.
Any other questions for the OpenID Foundation as a whole? We've got a lot of great experts in the room, Nat, Mike, and others. Any other questions for the OpenID Foundation? No.
Alright, well, enjoy your break. Several of us will reconvene here in 30 minutes to talk about the Sustainable and Interoperable Digital Identity hub, which I teed up earlier in the chat. So thank you, have a good day, and thanks for being in the session.