KuppingerCole Webinar recording
Good morning. We'll be beginning in just a couple of moments as we allow everyone to filter into their seats, but just to make sure you're in the right webinar: remember, this is "Clearing Up a Cloudy Standard: Simple Cloud Identity Management." I'm Dave Kearns from KuppingerCole, and I'm being joined by Darren Rolls, the chief technology officer at SailPoint, and Patrick Harding, the chief technology officer of Ping Identity, two old friends who have been involved from the beginning with the development of the SCIM protocol.

KuppingerCole is one of Europe's foremost analyst firms involved with enterprise IT research, advisory, decision support, and networking for IT professionals. We do our work through events like this webinar, through events like the European Identity Conference, through subscription services, through our writings, and through our consulting.

For those of you in the US who may not be as familiar with us, you're invited to go to KuppingerCole.com and take a look at what we offer. Our main event each year is the European Identity and Cloud Conference, which will be held next year, in late April 2012, in Munich. It will be an excellent opportunity to find out what's going on in the world of identity and with the cloud, two topics which are very close to, I assume, all of the people attending today. You can find out more by going again to the website and clicking on the conference link, where you'll see the agenda beginning to form and develop.

You'll find the registration information there, and everything else you need to know about that conference. A few housekeeping details to take care of. I'm sorry if I seem a little disjointed here; my PC seems to be working very, very slowly and not giving me the slides that I want. At any rate, don't worry about muting and unmuting your microphones; we will do that for you centrally. And I'm very sorry that I'm having this trouble with my slides.

Again, we'll have that up for you in just a moment. Instead of talking you through it, I'll tell you what the agenda will be for today's show.

First, I'm going to ask Patrick and Darren to tell us a little bit about themselves, their companies, and the background that qualifies them for this. Then we'll have a presentation from each about simple cloud identity management: Patrick is going to tell us why we have it and how we got to this point, and then Darren will tell us what it's all about, how it works, and so forth.

It's interesting that the actual 1.0 spec was just locked into stone late yesterday and is now going out to a vote of the participants, to make sure everybody signs off on it before it's let out to the public. Right now I'd like to throw this to Patrick so he can tell us a little bit about himself and Ping Identity, and perhaps what their view is here. Patrick, do you want to take it?

Sure.
Thanks, Dave, and thank you, KuppingerCole, for having us on the webinar this morning. My name is Patrick Harding. I'm the CTO of Ping Identity. Ping Identity addresses identity management and federated single sign-on through standards like SAML.

We have been very focused on ensuring that when organizations need to address identity management problems between applications and different environments, we focus on standards everywhere. That is why, over the years, as we've looked at how user provisioning needs to be addressed between enterprises and the cloud applications of cloud vendors, we've always felt that a standard was necessary, and why we got involved in SCIM. Some background...

We may have lost Patrick there. Okay. Sorry.
Again, I'm deep into my screen here, but... well, let me perhaps jump straight in and do the same. Hopefully you can hear me a little better; Patrick was a little weak there.

Yes, go ahead, Darren.

So, Darren Rolls here. I'm the chief technology officer, as Dave said, at SailPoint. SailPoint, for those of you that don't know us, is the leader in identity and access governance, and we provide a full suite of identity controls and governance capabilities. Our interest in standardization in this space has been long-standing.

I myself have been involved in identity standards right the way back to the original SPML, through SAML, and here now at SCIM.

So my goal today: Patrick's going to give you a bit of an overview of how we got here, as Dave said, and then I'll try to give you what is really a 10,000-foot view of what's included in this spec and how we think it helps.

Well, thank you very much.
Well, let's see. So, Dave, this is Patrick. I'm happy to (let's see if I can actually get my slides up) go over a little bit of the history here, if you like, while you are...

Oh, okay. Again, I'm having trouble with my slides here, so I'm going to just skip that part. One thing I will remind you of, which I forgot to say beforehand: for questions, you'll notice there's a panel, which should be in the bottom right-hand corner of your screen, where you can enter questions. We will take them now, and then we'll try to get to them at the end of the presentations; we'll feed them in as best we can. I do see one comment already, saying that they can't really hear you, Patrick, so you're really going to have to speak up.

I know you're normally a soft-spoken individual, so you'll just have to be as loud as you possibly can, and we'll see what we can do. And we're going to be calling on Patrick right now, because we'd like Patrick to tell us why SCIM, and how we got to where we are: the history of simple cloud identity management. Patrick.

Thanks, Dave. So I'm going to speak a little louder now.
I apologize if people couldn't hear me before. Again, Ping has been involved in this from the beginning. SCIM actually goes back to the summer of 2010, at the initial Cloud Identity Summit that Ping hosted in Colorado. That's where we had Google, Salesforce, and Microsoft all hearing loud and clear from the enterprises that were there that they needed to standardize on a provisioning API.

All of them had independently implemented what I'll call proprietary APIs that all did very, very similar things, and it obviously made no sense that things that looked so similar shouldn't be standardized into one. That's why, in the September-October timeframe of 2010, we formed something called the Cloud Directory group, a Google Group of interested parties including Google and Salesforce, SailPoint as well at that time, and a company called UnboundID, which has been very, very heavily involved.

We started to talk about how we would try to standardize those APIs, with a premise that we wanted to keep it as simple and easy as possible, preferably keeping in place a lot of the API principles that those cloud vendors had started to lean towards: focusing on REST API principles whenever possible, and standardizing on a user schema that could be leveraged by all of the cloud vendors, as opposed to each defining one independently.

We then took that for discussion at IIW 11 in November 2010, the first time we really talked about this stuff publicly with anybody. That's where we put up a look at all of the different user schemas the cloud vendors were using and showed where the overlap was, which got a lot of interest, and a lot of skepticism as well, I must admit, about whether we could actually do something like that.

So it was a very interesting debate at IIW, the Internet Identity Workshop. We left with the realization that this was definitely something people were interested in; even with the skepticism, it was definitely something people felt they could use. There were questions at that time around why we couldn't look at leveraging SPML in that space.
I deferred to the Googles and the Salesforces to respond to that: they had basically felt that SPML was more than they needed for a provisioning API, which is why they hadn't leveraged it for the proprietary APIs they'd implemented themselves. So we stepped forward into the beginning of 2011, and that's where we really started to work on the initial drafts of the specifications; you'll see Darren discuss some of those in a minute.

It was basically how we wanted to develop the REST API, what the core user schema would look like, and then building in a SAML binding as well, so that just-in-time provisioning could be leveraged using SCIM schema elements.

It was around that time that we decided to call this Simple Cloud Identity Management, as opposed to cloud directory, cloud person, cloud account management; a bunch of other candidates came up at the time, but we focused in on Simple Cloud Identity Management. We took the first drafts of those specifications in May this year to IIW 12, where we had a lot of discussions around those first drafts and really gave people a chance to provide some initial feedback. That's where we started to get a lot of interest.

A lot of people from there joined the mailing list, joined the weekly call, and became active participants in SCIM. Skip forward a little bit to July this year, at the second Cloud Identity Summit: that's where we started to show some initial demonstrations of SCIM working between Ping's software and UnboundID's software, where we showed Active Directory being provisioned into the UnboundID directory using SCIM messages, which was obviously very exciting. Then skip forward a little further to November this year, at IIW 13.

There we showed multiple vendors (Ping, Nexus, Salesforce, Cisco, SailPoint, UnboundID; apologies if I missed anybody on that list, get to me later and beat me up) doing a lot of interoperability testing, with products actually working with one another, exchanging SCIM messages.

Very, very exciting, given that the first drafts had only seriously appeared about eight months before that. And now, in December this year, nine months after the first drafts of the specification came out, we're about to finalize SCIM version one. As Dave mentioned, it's being voted on this week, and it will be something that people can start to implement against and actually roll into production. So we're very, very excited about that.
And obviously I'm looking forward to a lot of questions and answers on this stuff later in the webinar. Thank you, Dave.

Okay. So, as Patrick touched on briefly there, and so did Dave, one of the primary goals of SCIM has really been to drive simplicity; that is what the S stands for, right? S for simple. It's really about simplicity in the specification, and in the target resource and vendor support that we hope to get through that. One of the huge potential value propositions of SCIM will hopefully come from that widespread adoption.

If we can get the SaaS vendors themselves to support this en masse, it's going to be good for everybody. Obviously we've already taken some big steps there, with Salesforce, Google, Cisco, and others absolutely on board.

In fact, they've been driving from the start with their support. If we really can move from a place where each SaaS vendor has its own remote API management model to this more widespread, industry-standard-based adoption of SCIM, we're really going to get a win for everybody. We're going to see more innovation, simplicity, and better tools at the technical level, and we're also going to see a lot more control, visibility, and increased compliance at the business level. So, hopefully the screen is building through; you should now see the second part of the slide.
So, driving into a bit of an overview of the SCIM specification itself, first a look from 10,000 feet at an overview use case. SCIM really is nothing more than a simplified interface specification targeted at the SaaS account management use cases. To be clear, there's nothing in here that actually ties it to SaaS vendors per se; it's more about being able to manage any kind of account we see out there. So, in the abstract, the SCIM client itself is going to make a RESTful call.

Let's see, you should now have a build that didn't come through there... you should now see that. So the SCIM client is going to make a RESTful web services call to a SCIM server to basically create, modify, retrieve, or discover a basic user account. This is done in line with REST best practices. I'm not going to attempt to define those in their entirety for you, but it's basically about the client using a basic HTTP method call to get the job done: it sends a GET request to read an account, a POST request to create one, a PUT or PATCH request to replace or modify one, and, as you might guess, a DELETE request to actually remove an account.

In line with the REST model, these operations are performed against what we call resources, which is a rather technical way of saying that you point the request at things on the other side of a URL. There's a lot of thinking and detail behind the concept of a resource, but for now that really tells us all we need to know at 10,000 feet: the simplified SCIM client is basically making a create-read-update-delete HTTP call to a SCIM server.

The server perhaps pulls the account data from a traditional account management repository, like a directory, and sends it back over the wire in an HTTP response message. So, all in all, pretty simple. The client request, shown here as a basic GET for an account on a /Users resource (I'll cover that in a little more detail in a second), goes to example.com. In response, the SCIM server is, in this case, going to send back an HTTP 200 response, and that's going to include the data about the account in play; in this case it's in a JSON format.
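The slide itself isn't reproducible here, but a minimal sketch of the kind of exchange being described might look like the following; the host, user ID, token, and attribute values are illustrative placeholders, not taken from the actual slide:

```http
GET /v1/Users/2819c223 HTTP/1.1
Host: example.com
Accept: application/json
Authorization: Bearer h480djs93hd8

HTTP/1.1 200 OK
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "id": "2819c223",
  "userName": "bjensen",
  "name": { "givenName": "Barbara", "familyName": "Jensen" },
  "phoneNumbers": [ { "value": "555-555-8377", "type": "work" } ],
  "emails": [ { "value": "bjensen@example.com", "type": "work" } ]
}
```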
So we can see here, as you look through it, the schema information and some metadata; you can see a name, some phone numbers, emails, that kind of stuff. So, basically, account information. Pretty simple.

If you look at the SCIM specification that enables this simplified flow, it's really broken down into three specification documents. The main meat of the affair is in the REST API document. This covers the application protocol itself and defines which REST-based operations to use to carry out a particular action.

I'll give you a view of that here in a second. It defines what things are modeled behind those REST resources, so these are your endpoints: your user, your group, et cetera.

Okay, we'll look at those. And it defines the HTTP response and error codes that go backwards and forwards over the flows we just looked at. The schema document provides the platform-neutral schema and extension model that defines what the data actually looks like.

So this is where, as Patrick said, we have a concrete representation of an account, a group, and all the stuff you need in order to understand how to interact with a server. One extended point there: a large part of SCIM's value proposition really does come from having its own defined, fully materialized schema.

Those of you like myself that were involved with, or worked with, SPML will remember that SPML really punted on that schema activity; by having extension points and allowing people to define their own schemas, a lot of complexity was introduced into the specification, and that really hurt interoperability and ongoing adoption.

So, in contrast, SCIM jumped right in with its own basic user schema, derived from PoCo (Portable Contacts), some basic enterprise extensions, a concrete group schema, a service provider configuration data schema, and an extensible resource schema for modeling metadata. On the right-hand side here there's also a SAML binding document that covers how to map and model SCIM objects within the SAML IdP world. There is a little contention as to how complete or functionally full that SAML binding is, and we'll talk about that a little more in a moment.
Going one slight level deeper: we're not going to do a full reading of the spec, but, like I say, it really covers the basic HTTP methods that drive the account management operations. The spec covers a pretty flexible authentication model, so that the client and server can authenticate with each other in a number of different ways. Nothing is mandated, but there is very strongly recommended support for OAuth bearer tokens.

There's a lot of support for that in the community, and it's very well aligned with our view of where this goes. It then clearly defines how to use these methods: how to use a GET call to retrieve full or partial resources, how standard HTTP POST calls can create new entries or do bulk modifications, and how to use things like PUT and PATCH to do updates and replacements on the items we want to work with.

The spec defines how to operate the protocol at scale, with guidance and operations for volume data: there's optional support for a filtering, sorting, and paging model for dealing with the kind of large-scale account sources we all deal with on a daily basis, and there's an optional bulk model that is implemented via its own /Bulk resource endpoint; that's really how we do that.
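As a sketch of those scale features, a filtered, paged query might look like the following; the filter expression and paging parameters reflect the 1.0 draft's query model as described here, with placeholder values:

```http
GET /v1/Users?filter=userName%20eq%20%22bjensen%22&startIndex=1&count=10 HTTP/1.1
Host: example.com
Accept: application/json
```

Here filter selects matching users, while startIndex and count page through a large result set.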
Looking just a little bit further inside the spec: the key model entities in SCIM itself are really represented in the four main resource endpoints plus that optional bulk endpoint. These defined resources have, to the best of the ability of everybody who participated, been modeled as true REST endpoints. So if you are, as they say, a RESTafarian, you'll be quite happy to look at the specification and understand what we've done; a pretty good job there, I think. They're really there so that users are users (all the basic accounts) and groups are groups (collections of members). Service provider configurations are there to represent an extensible model for grabbing information about which parts of the SCIM specification a given service provider supports, and the bulk endpoint is there as a special representation to handle the complexities of the bulk and batch operations.
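A sketch of what a bulk submission might look like, assuming the draft's /Bulk endpoint and operation layout as I read them; the paths, IDs, and bulkId are placeholders and the structure should be treated as illustrative:

```http
POST /v1/Bulk HTTP/1.1
Host: example.com
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "Operations": [
    { "method": "POST", "path": "/Users", "bulkId": "qwerty",
      "data": { "schemas": ["urn:scim:schemas:core:1.0"], "userName": "bjensen" } },
    { "method": "DELETE", "path": "/Users/2819c223" }
  ]
}
```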
A very quick note on this point of a flexible ID model in SCIM: the spec supports both a client-side and a server-side namespace model for uniquely identifying the users, groups, et cetera. This is really there to allow the server to maintain a primary unique identifier for an account, without mandating that the client side then use that exclusively as its own representation. So it gives us the flexibility to really simplify the client: the client can keep its own unique ID and map that to the server-side identifier back at the server.
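In the schema this shows up as two attributes: the server-assigned id and the client-supplied externalId. A minimal sketch with hypothetical values:

```json
{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "id": "2819c223",
  "externalId": "hr-001234",
  "userName": "bjensen"
}
```

The server is authoritative for id, while externalId lets the client (an HR system, say) carry its own key for the same account.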
Last thing here on the API spec: the data representation model for the exchange of data has support for both JSON and XML, so we can keep the REST folks and the SOAP/XML folks happy, to some degree. Basically, the requester says, "Hey, get me the account called Darren, and I'd like the information back in XML format," and it's format in, format out. The final thing the API defines is how to overload the basic HTTP response codes.

So, very much as you'd expect, when I make a classic GET request I'm going to get a 200 OK back, with a formatted document returned containing the information, and things like 404 Not Found, the kind of behavior you expect from a basic REST exchange, are clearly defined in the specification as well.

A quick thing on schema: as I said, user and enterprise schemas are provided. The user itself is really a basic user that's derivative of the PoCo specification, and it includes the basic user attributes you'd expect to see: name, address, contact details, groups, password, those kinds of things. The enterprise extension is optional and covers the enterprise stuff: things like employee numbers, departments, and managers. So you can overlay that onto the schema and do enterprise things as well as basic users. The group schema: not a lot to say about that, really; it just represents a classic authorization group, so we can say, "Here's a group called Foo, and here are the members of that group."
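A sketch of what such a group might look like on the wire, with placeholder IDs and names:

```json
{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "id": "acbf3ae7",
  "displayName": "Foo",
  "members": [
    { "value": "2819c223", "display": "Barbara Jensen" }
  ]
}
```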
So we're able to make operations against groups. The service provider schema, as I said, really describes the optional features a SCIM server can support: I can query that schema and it will tell me, for example, whether the server supports the optional bulk and batch operations, and it will also give you directions on how to authenticate with that server.
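A sketch of that discovery call; the endpoint name and attribute layout here follow my reading of the 1.x drafts and should be treated as illustrative rather than authoritative:

```http
GET /v1/ServiceProviderConfigs HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "patch":  { "supported": true },
  "bulk":   { "supported": true, "maxOperations": 1000 },
  "filter": { "supported": true, "maxResults": 200 },
  "authenticationSchemes": [
    { "name": "OAuth Bearer Token",
      "description": "Authentication via OAuth 2.0 bearer token" }
  ]
}
```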
And then, lastly, it is extensible: as you might expect, both the resources and the schema are extensible, and if you're familiar with LDAP auxiliary classes, the schema extension model is a very similar model. So we do expect to see custom extensions developed, and the interoperability model around those should be worked out as we take this to the next step.
Lastly, as I said, there is a SAML binding, and at 10,000 feet its purpose is to tell us how to map SCIM schema to SAML 2.0 messages and assertions. The specification really tells you how to map the SCIM attributes to SAML attributes, how to use a SAML attribute query to pull SCIM user attributes from the IdP, and how to expose SCIM service provider metadata via the standard SAML SP SSO descriptor model. I will throw in at the end here that there are a few issues still being discussed around this.

There's a very strong feeling by many that the complex attributes SCIM provides don't map well into SAML, and there's not a lot of help on how to do that; and that the SCIM user itself, that concrete schema we talked about, really doesn't map well to the existing profiled SAML schema attributes out there.

But it's there; it's a step in the right direction, I think. Patrick and others at the forefront of this, Ping among them, are already using it and have working products and prototypes out there for people to look at. And that's really what this has all been about.
Lastly, before I hand back to Dave, I just wanted to quickly give you a view of why we care. We are an identity and access governance provider, so why do we care about SCIM? More importantly, why do our customers care about SCIM? For us it's kind of an interesting thing, because from an identity and access governance perspective, things just become more inclusive.

If we see the vendor community, the SaaS vendor community, ubiquitously support SCIM, you instantly get the ability to roll those SCIM accounts into things like access review and access certification, so you can look at things like policy violations that cross over your SCIM providers and have all that data brought back in, in a seamless way. Now, we don't charge for connectors in our product, so it's almost immaterial to the client, because we've already been supporting SaaS; it just makes my life a lot easier.

You could argue that sounds rather selfish, but it does help there. I'd also say that what we're really hoping for from this is that we can make things like access request and provisioning more ubiquitous, more widely deployed, so that we're able to provide self-service for our end users: they can come in, see accounts, model and manage them, and have all of that seamlessly passed through an open standard like SCIM on the back end, back out to the SaaS provider. That's pretty much all I had.

That was a pretty high-level run-through, at 10,000 feet and 10,000 miles an hour. But with that, I'll pass back to you, Dave.

We may have lost Dave too.

I have unmuted my mic so that people can hear me. Thank you very much, Darren, for that lovely presentation. We're into question time now.
So, let's see, how's that looking? Can you actually see the slide this time? I do apologize to everyone for the trouble I've been having with PowerPoint; Microsoft seems to have done me in here, perhaps because they weren't involved in the discussions on SCIM. I do invite those in the audience who want to ask questions to type them into the little question box down at the bottom right-hand corner of the screen. We do have some already that we'd like to talk about here, and hopefully both Darren and Patrick have their microphones open so they can just jump right in.

Now, you were talking about SCIM binding to SAML there at the end, Darren. Is that mandatory, or is it optional?

That is optional. As I said, there's been some discussion in the closing stages of the activity around its completeness as well, so I do think we're going to see a little flurry of activity around that part of the specification. I'll be brutally honest with you:
I haven't dealt with that a great deal myself. It was people on Patrick's staff that really helped drive it, and we've had some folks from the wider community contribute. So, Patrick, I don't know if you have a comment on that.

Yeah, let me chime in, Darren. I think what we've recognized is that some of the people in the SAML community hadn't participated in the SCIM working group, so we reached out to them to get their feedback, and it was the right thing to do. Basically, we wanted to make sure this was very inclusive and that everybody was going to be on board with it. They brought up some questions around the SAML binding quite late, which has meant that we are not going to do a V1 of the SAML binding until probably the January timeframe, because we want to make sure we address all of the concerns that have come up before we declare a version one.

Now, that said, I think a lot of the concerns that have been brought up are questionable in my mind. One of them I won't argue with: mapping complex structures into SAML is problematic, and hence we will probably not do that, purely because of the 80/20 rule; 80, even 90 percent of the time, people aren't going to need it.

It also raises some late questions on the SCIM user schema and the fact that it is, in a sense, inventing specific names for certain elements. We based it on Portable Contacts, but in the SAML community there has been some work where they've leveraged, for example in higher education in the US, the eduPerson schema, and we really need to define a way to map the SCIM user schema onto some of those other user schemas, so people understand how they relate and how they can work together.
I don't think that's particularly difficult to do. Here at Ping we've already mapped the Active Directory user schema into SCIM, so that our customers can use Active Directory as the baseline, and we can map that into SCIM messages. In the case of SAML, this is really about supporting what we call just-in-time provisioning: the ability to create accounts dynamically at runtime, during the SSO event at the service provider. People are doing that today, and the fact that we can use SCIM messages means the same message formats can be applied across all these different service providers.

So, before we go on too far, one question that came in from the audience; he calls it a dumb question, but, you know, there are no dumb questions. It's about the geography here. He asks: would the SCIM client be the cloud service and the SCIM server the consumer's ID system, or the other way around?

Maybe I'll grab that one.
I think it could be either. You can see a server operating as a client itself, but fundamentally the client might be, say, me as a management platform, talking to the server at salesforce.com. That's really one way of thinking about it. But part of this from inception has been the fact that when salesforce.com then needs to synchronize with another vendor, it would operate as a client in order to make exchanges with another server.

As I said, it's very much in line with the HTTP client-server model, hence REST, and that keeps it simple and allows everybody to participate on both sides of it if they choose.

Yeah, I would add to that, Darren. This is Patrick again. The SCIM model actually supports both what I'll call a push and a pull mechanism. In the push case, the SCIM client would be, for example, the enterprise, the authoritative source of identity, pushing create, update, and delete messages to the SaaS provider or cloud provider, the SCIM server.
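A sketch of that push case, with placeholder endpoint and values: the enterprise, acting as the SCIM client, creates an account at the provider with a plain POST, and the server answers with the stored resource, including its newly assigned id:

```http
POST /v1/Users HTTP/1.1
Host: example.com
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "externalId": "hr-001234",
  "userName": "bjensen",
  "name": { "givenName": "Barbara", "familyName": "Jensen" }
}

HTTP/1.1 201 Created
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "id": "2819c223",
  "externalId": "hr-001234",
  "userName": "bjensen",
  "name": { "givenName": "Barbara", "familyName": "Jensen" }
}
```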
But there is another model which would support pull, where the cloud provider itself, Salesforce in this example, would act as the SCIM client and call back to the enterprise, which exposes a SCIM server, looking to read attributes out of its identity store that it would then use to create the account.

Back to when we were talking about the SAML binding: if, in a particular setup, we're not using a SAML binding, what about security? I'm assuming that a lot of people wanted the SAML binding there for security purposes. As I understand it, all of the data is being sent in the clear, back and forth, under SCIM. Is that correct?

Yeah.
Well, I think, much like all HTTP, it is in the clear, but it's over a secured channel. So we are now mandating the use of TLS and, through that, an HTTPS connection. So the way to think of this, I think, really, it's just like...

But that was only added last night, the mandating of TLS. Before that, it really doesn't seem to have been thought about very much.

Well, you know, I think at the protocol level we tend to just assume that people know these things, right? That, of course, you don't run valued exchanges over a non-HTTPS channel. So, to your point, David, it was kind of put in there to say "well, of course," because it really doesn't affect the protocol at all. I mean, the actual exchanges aren't any different at that point.

Okay. A question comes in: how does SCIM support on-the-fly provisioning of users of a cloud service? I assume by that they mean some sort of self-service provisioning. Is that involved in any way in this spec?

Well, maybe I'll take the first stab at that and then pass over to Patrick for the sort of back-channel view of it. From the front channel, let's look at the use case in that screenshot I put up. Absolutely.
Of course we expect those that provide self-service tools and capabilities to make use of it on the back side. So allowing somebody to go to a self-service interface and create identities, create accounts for themselves, in a managed, approval-controlled environment is one side of this. The other side of it is what Patrick was alluding to: the back channel with SAML SSO tokens. The view there is that we can have a managed flow where a SAML assertion arrives for single sign-on, kind of saying, "I'm at the door for Salesforce, but I don't have an account on the back side." Now, rather than handling that in a customized way ("okay, pause here for a second, let me create an account, now I'll give you access"), we can do it over SCIM flows. And that means we have one common control model, one common visibility and order over all of it, so we can push it all through the same channel. Patrick, I don't know if you want to add anything to that.
Yeah, sure. So when I hear "on-the-fly provisioning," that tends to mean a couple of things to me. But just recognize that with SCIM we're not really inventing anything new here in terms of how provisioning processes work. SCIM works with all the existing provisioning processes that people are implementing today: bulk uploads of users to a service; one-at-a-time atomic transactions from an enterprise to a service provider, in some form where they're doing independent creates, updates, and deletes; or just-in-time provisioning, where, when a user initiates or clicks on a link to access a service provider, there's enough information in the SAML token to allow the service provider to create an account for that user at runtime, during the SSO event.

So all of those things happily work today with different services in a proprietary way; SCIM standardizes what the messages look like and what the user schema looks like in those scenarios.

One listener asks if there is an implementation of SCIM that we can make use of to incorporate SCIM into our applications, I guess sort of like an API. And my fuzzy memory seems to say that UnboundID published something, didn't they?

Yes, they did.
Now, I think Patrick and I would both want to join here in throwing out a shout to UnboundID for everything they've done on this, because they really did go to bat on this standard, and Trey Drake, the main guy from UnboundID on it, really, through his work, is a big part of why this activity has come to fruition in the time that it has. That said, they have a toolkit, which several folks have used; there are actually two or three out there.

If you search on the simplecloud.info site, there is a link you can chase down to actually see those toolkits. The truth is, the client is designed to be fairly simple and fairly open, and that's where you're seeing most of the open source code. The server side, how you get on and actually implement that, is obviously a little more complex, and something that falls more into the realm of the service providers and software vendors that will be supporting you.

Okay.
A questioner has a nuts-and-bolts question for you, which hopefully one of you will know the answer to. He wants to know if there's a way to differentiate between a delete of a user from a service, and a disable; in other words, one that leaves all of the information in place but just doesn't allow the user to access it.

That's an interesting one. Fortunately, I've actually got one of our principal engineers, Kelly Grizzle, who is the co-author of the API spec, here with me. So perhaps I'll let Kelly take that one.

Sure, sure.
There are kind of two answers to that. One is that the SCIM spec mandates that, after a delete, users will not appear again if you try to get them. It doesn't say exactly how the service provider implements that; for example, Salesforce never deletes users, they kind of put them into a disabled state. So that's one way it's done. In the standard core user schema there's also an "active" flag, so some consumers may choose to just use that: do an update and set "active" to false, to move their users into an inactive state.
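A sketch of that second option, with a placeholder user ID: a partial update that flips the core "active" attribute rather than deleting the resource:

```http
PATCH /v1/Users/2819c223 HTTP/1.1
Host: example.com
Content-Type: application/json

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "active": false
}
```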
Yeah, that's right, Kelly. There was actually a lot of discussion about this, and it really boiled down to the fact that what delete means, delete versus disabled versus inactive, is actually implementation-specific, based on the service provider. It's actually very rare for the cloud providers to really delete a user; virtually all of them just put it into a disabled or inactive state.

My colleague Martin Kuppinger also wrote about SCIM a little while ago. In talking about the spec, as opposed to SPML, he said: wouldn't it be better to join forces and build an SPML version 3 which supports REST? I know we've talked about this a little bit, but maybe we could get some sort of on-the-record answer here. What about a RESTful SPML?

So, Darren, I'll take that one.
I mean, Darren actually has a lot of background in SPML, but I'll just boil it down to this: the key to the success of any standard is less about the standard itself and more about the adoption of that standard in the industry. As I mentioned at the beginning, while theoretically we could produce a very nice, cohesive REST-based SPML specification, for a number of reasons the service providers, the actual applications themselves, didn't want to touch it.

There was a myriad of different reasons if you were to go and ask them all independently, but the fact that they were willing to adopt a standard, and that they did want to be involved in its creation, meant that we had to start from the beginning again. Now we've actually been able to move fairly quickly and have something, nine months later, that they're actually willing to adopt.

And I think with that we'll start to see more and more adoption, as the large SaaS providers influence the smaller SaaS providers to adopt the same thing. We have to see that adoption happening to prove this out, but I'd almost rather see adoption of something actually happening than argue about the merits of one approach versus another and spend a year or two trying to converge them.

Yeah.
I agree completely with what Patrick said there. And, all truth told, I was the editor of the 1.0 SPML spec, and I can tell you from my experiences there that we actually started with very, very similar intent. That was to do something that was more resource-focused. The idea was to say: all you resource owners out there, can't you give us a simplified CRUD interface, a simplified create, read, update, delete interface, on your applications? But through the participation in the process, SPML really turned into a provisioning provider spec. It really did.

I think of it as being what I call a northbound spec: it's really designed to sit on the front side of a provisioning service, an enterprise provisioning service, and take a simple request in from somebody and fan it out into multiple back-end resources. So it covers things like complex relationships: it models the relationships between an identity and multiple accounts, and between an account and its target identifiers. SPML really is a complete object model for provisioning. SCIM doesn't need to be that. What we've found, and it's almost back to our roots really, is that if everybody has the same API, it's not a lot more complex than that.
It makes it very easy to build tools on top of it. And so, with Google and Salesforce and Cisco and others, the big players in the market, supporting this, we can get everybody onto that same interface. All the rest of the stuff, all the complexity we do in provisioning and identity governance, and that Patrick does in just-in-time provisioning and SAML management and the security token service, can all just sit above that, if you like, and not really be entangled with it.

So the short answer, because I could easily spend a minute on it, is, for me, the fact that it would have brought too much baggage. And I think that's really where it stands. Now, the next step for this, and we haven't really talked about that yet, is to take it to a standards development organization. If that standards development organization wanted to somehow merge the two together, that would be its business, under the community's control. But what we've done here, I mean, this is nine months, right?

Start to finish, to have an operating spec. You can build software around this right now; there's vendor support for this right now. That's been done in a timeframe I've never seen; remarkable speed, we might say.

Let's talk about the next steps, then.
Now, as we say, the spec was essentially closed down last night, version 1.0, and now goes out to a vote; presumably it will be approved, because the people who were working on it are the ones voting on it. It is then, as I understand it, going to be submitted to the IETF, as a standard, or as an RFC, I guess, in this case. And I saw tentative plans for an interop demo at the next IETF meeting in Paris early next year. Is that right?

Yep, that's right. And I'll throw a few comments in there and pass back to Patrick. The community that has driven this: obviously there are a lot of people listening on the news group, and a lot of people listening in on the calls, but I think the core people that have put the work in here, and have been at the forefront of this, all feel a drive towards this going to the IETF.

Obviously, besides the IETF, there are a number of, let's say, SDOs, standards development organizations, that this could go to. Right now the community feels, given its alignment with the RESTful nature of the specification and its close partnering with OAuth 2.0, that the IETF is the place to do it. And, as you say, that really just requires the community to take it to the next IETF working group meeting.

And everyone seems keen to go to Paris, actually, to do this. I wouldn't mind Maui, but they don't hold them there. But anyway, it seems like that's the most likely next step.

Yeah.
So, Dave, I'd add that SCIM version one has been developed under the Open Web Foundation IP model. On the simplecloud.info website you'll actually see that the key contributors to the specification have all signed OWF agreements, where they have basically made sure that we're sharing, giving up, the IP when we do this.

So I don't want anybody to walk away thinking that this is an uncontrolled specification. SCIM has IP control around it from the contributors, which is really about as much as you're going to see from the IETF anyway. So taking it to the IETF doesn't really change that. We wanted to finish version one before doing that, and actually see some adoption and some real code working, which is one of the major drivers for IETF success, essentially. So in the IETF we can focus on improving upon the things we find in the field with implementations, essentially working on the improvements there under the IETF umbrella.

Do we have commitments from any provisioning providers or cloud providers to do a quick implementation of SCIM 1.0?

Yeah. Salesforce have already basically implemented it, and I'm sure they'll be taking that into production sometime next year. Google re-engaged with the effort about three months ago.
So my expectation is they will be rolling a SCIM implementation out in 2012, and I think you'll see Cisco WebEx do the same thing next year too. I can't tell you exact timeframes, but I would expect you'll see those three vendors implementing SCIM servers in the 2012 timeframe.

And I can say, from the provisioning side of things, that we support SCIM today. From early on, when the very first concrete spec came out, we had code available in our labs internally for our customers and our partners, and with the sealing of the 1.0 spec... in fact, we went to the interop and everybody used our client, basically our front end to that. So it's available from us now. And I know other provisioning providers are jumping in there as well. And why wouldn't they? Like I said earlier, it's a giveaway for us if everything is easier to connect to, right?
So, I mean, why wouldn't we?

Now, we noticed, Darren, in your presentation, and in some of the discussions I've seen lately on the mailing list for SCIM, that there are still some questions about the enterprise extensions for SCIM. SCIM 1.0 really can't take the place of standard provisioning services within the enterprise; it just doesn't do everything the enterprise needs. I assume there'll be more of that coming in version 2.0, or 1.5, or whatever the next release of the SCIM spec is. Does work start on that right away? And how optimistic are you that it will happen quickly?

The extension model is in place today to allow people to take advantage of widening the object model. If you take our view, an identity and access governance view of things, we want to model concrete relationships between the accounts and the identity, and that isn't something SCIM directly addresses. As we said, it's really a wire protocol between two parties that want to exchange account information. So we'll be building extensions on top of that.

I think, myself, there's one thing we didn't really talk about here, a very interesting thing that SCIM does, and something everyone should take note of: it creates what you can think of as a permalink, a concrete reference, for an identity. So, you know, I can go to something like sailpoint.com/scim/v1/Users/darrenrolls and have a concrete reference for that account, for that identity. I think we're going to see a lot of interesting uses of that to build extensions and capabilities beyond the spec. Like I said, just as you think of blog permalinks, you can think of identity permalinks, and I think we're going to see some extensions around that. But, as I say, the spec allows us to do that independently, outside of the formal standardization process, and then submit it back into the process in 2.0.
Thank you very much. I see by the old clock on the wall that we're just about out of time here. I'd like to thank Patrick Harding from Ping and Darren Rolls from SailPoint for telling us everything we needed to know about Simple Cloud Identity Management, or SCIM. We'll be following this closely as the voting process and the IETF process go along, and as, hopefully, implementations come along, we'll be letting people know about them. A reminder that the entire presentation, hopefully cleaned up so you can see the slides, will be available at the KuppingerCole website shortly.

And I think I can safely say at this point that I'll be back again next month, with some more insiders, to talk about a subject that's near and dear to my heart: privacy by design, something we've talked about frequently when we mention social networking. That'll be coming up late next month; more information will be available soon at the KuppingerCole website, or, if you subscribe to my newsletter there, we'll be telling you about it.

So again, thank you all for coming and attending. Thank you, Patrick and Darren, for giving us your expertise, and we'll say goodbye.