Mike Neuenschwander, Sr. Director, Oracle
April 17, 2012 18:50
Mike Neuenschwander is here; he is a senior director at Oracle, and his topic is scaling identity and access controls to internet proportions. Welcome. Thank you. And you need the mic, because you haven't got yourself wired. Thank you very much. Thank you.
Hey, we got some extra time. We can tell a few jokes or something. This is gonna be great. Yeah.
You know, honestly, who's got a guitar? Anybody? Anyway, it's great to see everybody here. I love being in Europe, and I love this hotel. It's curious, though, especially at a security conference: you have to use the key to get up to the third floor where I'm staying, but you can apparently walk in and get anything you want to eat out of the breakfast area without anybody stopping you, and you can use the spa.
So I don't know who thought that through. One of those German process things going on, right? Free food! It's fantastic. Okay.
Well, by the way, I'm with Oracle now, so I'm just kind of doing the industry thing, but hopefully by the end of this presentation you'll all want to buy lots of Oracle product. It's going to be great, because we're doing some cool things. But I'm actually here to talk very specifically about scaling, and it's convenient that I follow Doc. I think Doc and I have been on parallel tracks for a little while, and I have very similar ideas to his.
I'd be glad to discuss some of it over dinner tonight; for now we'll just move along.
I don't know if you can read this thing. Can you read it in the back? Somebody sent me this the other day. It's got some funny stuff in it about the problems we have in modern society. One of them says, "My GPS made me drive through the ghetto. It really sucks."
Another: "I'm trying to text at a red light, but I keep getting green lights." These are things that are just really annoying, and that third-world nations have not even yet discovered how annoying they are.
But it's interesting: when we talk about scale, I think we're in a similar situation, in that it's a good problem to have. We're very quickly getting to a place where scale is on a much different scale than we even used to think about. For that reason we need to enjoy the problem, but also treat it appropriately.
Oh wait, did I? I think I clicked twice. No. Okay. So these are actually some questions. And by the way, I spent some time in consulting before I came to Oracle.
When I was at Burton Group, these are actual questions that people posed to me over the years. One of them: a pharmaceutical company came to me and said, look, our business has decided not to keep acquiring other companies. What we're going to do instead is enable an ecosystem.
So to develop drugs and other things, we're going to have a bunch of partnerships, and that's going to include small two- and three-person research laboratories in the basement of some university, but also some very large organizations. They said they would like to have 70,000 partners as their goal for the first year of this program.
So they asked: how do we do that? And I said, you don't. It's not going to happen.
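A back-of-envelope calculation makes the point. Every figure below is my own illustrative assumption, not a number from the talk; the conclusion is insensitive to the exact values.

```python
# Rough feasibility check: onboarding 70,000 federation partners in one year.
# All figures are assumed, illustrative numbers.
partners = 70_000
working_days = 250            # roughly one business year
days_per_connection = 5       # optimistic per-partner effort: metadata
                              # exchange, certificate swap, testing, sign-off

per_day = partners / working_days
print(f"Connections to stand up per working day: {per_day:.0f}")

# Even a generous 20-person integration team working fully in parallel:
engineers = 20
annual_capacity = engineers * working_days / days_per_connection
print(f"Annual capacity of that team: {annual_capacity:.0f} connections")
```

Under these assumptions you would need to stand up 280 connections every working day, while a 20-person team could realistically finish about 1,000 in the whole year; the goal misses by almost two orders of magnitude.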
The problem is, if anybody here has tried to set up a SAML connection or something like that, you know you can't do 70,000 of these in a year. You can't even do a hundred. So 70,000 is out of the question. Another question: how can an organization manage 200 million users?
Last year I worked with two organizations with very different business models, but because they were integrating with Facebook and other social networks, and because these were essentially consumer identities, they needed almost 200 million users in their directory service.
I think we can accommodate a number of those things, but it's still a very difficult problem. And what about 10 million entitlements?
Well, we can run scalability tests on something like that, but if you've got a million roles, what are all those things doing? We're talking even today about scale at this size. So the question becomes: how many administrators do you need for something like that? We get this in RFPs; people ask us how many administrators they should plan on.
That's a fantastic question. What I'm trying to say about today's products, the technologies and everything else we have, is that I think they're woefully inadequate. In some ways we are not ready for that kind of scale. We're a couple of ideas short.
For the kinds of things we really need to do, let me borrow a little from Einstein. You all look sort of nerdy out there; actually, I think I see a non-nerd in the back. For those of you familiar with the theory of relativity, the thought is this: what we perceive as an adequate solution at a certain scale breaks down when you push it far enough out to the extremes. Relativistic effects start happening.
And those effects become really quite dramatic. That's what this slide is showing.
So let me show you what we mean by scale in identity management. This slide says: as the number of users goes up into the millions, at very high scales, the cost per user goes way up as well. It's not linear. The quality of service tends to suffer at the same time, and the complexity of the system tends to increase. This is not a great picture. This isn't what we want.
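The shape of that curve can be sketched with a toy model. The exponent and figures below are invented purely for illustration; the only point is that when total cost grows faster than linearly, cost per user rises with scale instead of falling.

```python
# Toy model of superlinear administration cost: total cost grows faster than
# linearly with users (coordination overhead), so cost PER USER rises with
# scale. The exponent 1.3 is purely illustrative.
def cost_per_user(users: int, base: float = 1.0, exponent: float = 1.3) -> float:
    total_cost = base * users ** exponent
    return total_cost / users

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} users -> relative cost per user {cost_per_user(n):8.1f}")
```

Each thousandfold jump in users multiplies the per-user cost several times over, which is exactly the "not very good picture" the slide describes.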
Also think about how we end up funding security. The cost justification for adding security into some new platform, something cool we're doing, isn't there at first, and it certainly isn't there in the beta.
So what ends up happening is you wait for the revenue to reach a certain place before you put security in. What really happens is that we wait for the system to break first before we go back and add security. So we put security in at the worst possible moment: after the thing has already been built, after it's already broken, and after it has so much adoption that it basically requires it now. That's the model we've been in for a while, and of course the risks are going up as well.
So that's not a very good deal, right? Is it possible, then, to create a system that doesn't do that? Is it possible to have a system where every user I add actually contributes to its resilience, where the cost per user in fact goes down as the number of users grows? Can we envision that playing out?
So that we take advantage of the contributions of everybody in the system? Well, we do have some examples of those kinds of systems. They're not directly relevant, let's say, but they tell us that such systems are possible. Think about BitTorrent, SETI@home, the other peer-to-peer and mesh networks I think Doc was alluding to, or even the World Wide Web itself.
Look, there's no perfect system out there, but there are some very interesting characteristics in all of these cases. The point is that to meet the kind of scale we need for identity management, access management, audit, and so on,
we kind of need a new approach. So what would that be?
Well, let's go back to this idea of the administrator. I don't know if you can read that.
I happened to find that cartoon on the web, so I'll let you ponder it for a moment. It's quite funny. It says Clinton in there too.
So I don't know; either it's really old, or they're talking about Secretary Clinton. I don't know.
Anyway, the problem with administrators: you're saying to yourself, all right, I've got this identity and access management product, and it's focused on making my administrators more effective. So how can I make them even more effective?
Well, the thing is, when we start talking about administrators, we're having the wrong conversation. Admins aren't going to fix this problem. You can't have a ratio of admin to user; that doesn't even make sense. You have to get out of that mode. You have to disconnect and say, look, if you're going to control everything, then you have to make everybody an administrator; you have to have everybody engaged in the problem. What I'm saying is this: if adding users requires more admins, it's broken already. It's the wrong question.
So here's a quote from our buddy John Clippinger, who works at, he's still at the Berkman Center, right? You two still work together?
Oh, he said MIT. Okay, quick update. All right.
Well, John wrote a book a while back called A Crowd of One, and he made an observation about how the network, the internet, scales that I think is quite interesting. He says: as networks become more interconnected and complex, they simply cannot be centrally controlled. In the case of the net, it is designed to grow arbitrarily large and diverse because all of the components are not dependent on one another; every new user or new device does not have to have the permission of other devices to be added to the network. Okay. Interesting concept.
This is also a quote, from Edella Schlager, from a research paper on collective cooperation in common-pool resources, which is a fantastic study. Really, it was quite a good read; you should go look it up. She says that appropriators, let's just say users in this case, are active participants in creating the dilemmas that they face and, under certain conditions and given the opportunity, active participants in resolving them. They are not inevitably or hopelessly trapped in untenable situations from which only external agents can extricate them.
I like the language in there. It's very highfalutin, but basically it says that people can help us out with this sort of thing.
So what are we missing, then? What can make us scale? Maybe some of you have seen this picture before; it's been out on the internet. It's a bike with a lock around it.
And it's tied to a post in a way that could easily be defeated: a picture of your security with the big word "FAIL" on it.
Well, I'm going to defend this security for a moment, because I think it's working just fine.
I mean, the bike's still there. And somebody took a picture of it, so in case it gets lost later, we can go look for it.
So what's going on here? Well, just because something can be stolen doesn't mean it will be stolen. Just because somebody could screw up your system doesn't mean they will.
As security people and technologists, we like to think that it's the technology, that lock around the bike, that's actually keeping the bike safe. That's an illusion. Everybody knows that with car locks, nobody wants to pay for a lock that will actually stop somebody from stealing the car. What the lock does is get people to notice.
You rely on the lock to make theft just inconvenient enough that there are probably going to be a few witnesses. That's the idea. So this is actually interesting security, because it says something about the culture in which that kind of security works.
Well, in that case, if betrayal doesn't occur in every relationship, why is that? What keeps people from defecting when they easily could?
I'm here to say it's probably not because they have an ID card with a PKI thing on it. Is it because we have ID cards that we're not all criminals?
I mean, is that what it is? It doesn't even make sense to ask the question. In other words, it's not the identity and access management system that's keeping us all from being murderous and evil. So what is it? What keeps us involved in relationships?
Well, think about what they call Brownian motion. If you see a particle flying around somewhere, arbitrarily, and you don't understand why it's moving like that, it must be that there's some invisible force working on it. You can't see it, but some kind of force really does exist that's acting on it. So if we find that there are in fact lots of scenarios in which people could easily commit some kind of malfeasance and they don't, why is that?
Well, I'm here to say: let's call that trust. Now, I know people use that word a lot, and I've heard it a lot today, but we need to use the term in its proper sense, its regular English sense, not just the cryptographic sense or some other usage.
I like that "trust" is short, but we have to give it a slightly longer definition so we know what we're talking about.
Trust is mutual, durable, collaborative action. That means a relationship in which all participants are cooperatively working for a benefit, even when the roles, risks, and rewards differ. So trust exists, and it actually does help us secure things. Can we control it, then? Can we cultivate it? Can we do something to actually improve trust?
Well, I think this is a good question, because coming back to the issue of scale: what allows us to move to a distributed system is actually trust. In a trustful scenario we don't need a domain-centric model; we can allow things to mesh much better. What I'm trying to show you here is that there are two models out there.
One I call the distrustful model, and one I call the trustful model. The distrustful model is the one you're very familiar with.
It includes things like command-and-control capabilities. It has a high emphasis on security and structural approaches. It has a dependency on a single provider; there's usually a big alpha player that keeps everybody else in check. It has explicit controls, contracts, checkpoints, and vigilance. It's hierarchical and formal, with a lot of regulation and coercion. You're familiar with all that, right? That's basically your job.
But there is this trustful model as well, where things are a little more informal: informal rules and agreements, shaking hands on things, collaborative solutions, shared duties, and an emphasis on transparency among the interacting parties. To put it a slightly different way: if you look on the left, the tools of distrust are things like fences and contracts that basically say you're screwed.
I'm making it look really negative; it's not necessarily that negative, and I'll come back to that in a minute. But identification cards, identity assurance, encryption, rights management, all that stuff we do, is basically on the distrustful side. The tools of trust, on the other hand, are things like reputation, reciprocity, empathy, signaling, collaborative action, recognition, shared experience, social interactions, ceremony, connection. So that's also a style.
And as I was saying, distrust and trust are actually both good things. We need distrust and we need trust; we need both systems. When they work together, they create a resonance that is in fact very important. These systems can improve on each other; they can back each other up, as it were. You have a nice hard outer shell and a nice soft middle, or whatever analogy works for you. But they can also interfere with each other. There are times when a distrustful system can cause all kinds of problems.
What if you came into this room and the chairs were all bolted to the floor, and these bottles of water had meters on them, and everything was basically tied down? That makes a statement about you.
People react to that sort of thing. When people feel like they're being treated like cattle, to Doc's point, they rebel. They'll start busting stuff just because they're pissed about it.
Did I just say "pissed" on stage? Oh, you can do that in Europe? That's right, we're politically incorrect over here. I like that. Okay.
Anyway, we'll talk about that more over a drink somewhere. What was I going to say? Actually, let's go back. Oh, five more minutes? Good. Okay. Awesome, because the big finish is coming up here. All right.
So: can trust be trusted?
Well, what I'm positing is this: I believe that if you actually allow this trust thing to work, you can create relationships that are trustful, and trustful relationships are inherently more secure.
They have a way of securing themselves, and they are in fact better at solving problems than distrustful relationships.
I guess the main point here is that I believe our industry right now is overinvested in distrust. We have done a great job of putting all the fencing and gear in place.
And that's fantastic. But now it's time to start shifting, if only for reasons of scale: we can't get to the scale we want with the distrustful model. But there are lots of other reasons to start introducing trust, too. Anyway, I thought I'd show you this as well. It makes us look a little bit funny, and I had to edit it a little, by the way.
Ah, nevermind. I don't know how he got that.
Oh, wait doc, can I get in the picture? All right.
Oh, can we? Yeah.
Can we? Oh, am I standing in the way? All right. There we go.
Okay, good. Great. Okay.
So it turns out there's a lot of study in this area. In computer science we don't know a damn thing about it, but in many of the social sciences there's actually quite a bit of literature on this stuff.
If you want to do some reading, I've got all kinds of ideas about where to do that, and at the end of this presentation I'll give you a couple of resources. One of my favorites is by Elinor Ostrom. The same year Obama got his Nobel Prize, I think the next day or so, they announced that Elinor Ostrom had won the Nobel Prize in economics. She's not even an economist.
So it's important to go have a look at her work. One of the things she did was study what are called common-pool resources: very large problems, like overfishing in certain areas of the world, involving all kinds of nations. She looked at how people solved problems in very intense, difficult situations.
And she came up with a theory about what it takes to create trustful, in other words cooperative, kinds of structures. This is what she found. There needs to be exclusion: some way to distinguish the appropriators from the non-appropriators, the people who are in the relationship from those who are not. There needs to be rationality: rules agreed upon such that everybody in the relationship can accept them.
There needs to be involvement: members have avenues to participate in modifying the operational rules. There needs to be monitoring: effective monitoring and auditing of the policies. There needs to be enforcement: sanctions can be imposed on violators of the rules, and not necessarily by an auditor; that can be done by the people actually involved in the relationship. In a community, you can shame somebody or give them a bad reputation. There is also arbitration: appropriators have access to low-cost but efficient conflict resolution.
And finally, autonomy: the rights of appropriators to devise their own institutions are not challenged by external governmental authorities. So, is it possible to do this sort of thing on the internet?
I don't know. I hope so. I've got a lot of ideas about how that can happen.
In fact, my proposal is that we start looking at something I call a trust protocol. If we have a theory of trust, how do we encode it? Is there some way to put this together and say, let's have a trust anchor? So I can say to you: look, maybe I don't know you. Maybe you're not even human, for heaven's sake; maybe you're just a program out there. How should I know whether to engage in some sort of cooperative activity with you?
Well, maybe I at least say: for that reason, I insist that we use this trust protocol.
And if we can do that, then I have greater confidence that this thing is actually going to go over well.
So that's the idea of a trust protocol. Now, there's a lot more to be said about this than I can say on stage.
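As a thought experiment only, the idea could be sketched in code. Nothing below corresponds to any real standard or product; the fields simply mirror the cooperative conditions discussed above (shared rules, monitoring, sanctions, arbitration), and the reputation threshold is invented.

```python
# Hypothetical sketch of a "trust anchor" a trust protocol might negotiate.
# Every field and threshold here is an invented illustration of the idea.
from dataclasses import dataclass

@dataclass
class TrustAnchor:
    """Terms both parties accept before cooperating."""
    shared_rules: list        # mutually agreed operating rules
    monitoring: bool          # each side may audit the other's compliance
    sanctions: bool           # peers can sanction rule violators
    arbitration: bool         # low-cost conflict resolution is available

def willing_to_engage(anchor: TrustAnchor, reputation: float) -> bool:
    # Engage only when the cooperative machinery is in place AND the
    # counterparty's reputation clears an (arbitrary) threshold.
    machinery = (bool(anchor.shared_rules) and anchor.monitoring
                 and anchor.sanctions and anchor.arbitration)
    return machinery and reputation >= 0.5

anchor = TrustAnchor(shared_rules=["no data resale"], monitoring=True,
                     sanctions=True, arbitration=True)
print(willing_to_engage(anchor, reputation=0.8))
print(willing_to_engage(anchor, reputation=0.2))
```

The design point is that the protocol checks for the presence of a cooperative relationship, not just a cryptographic credential; an unknown counterparty, human or program, qualifies by accepting the same machinery.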
So again, if you want to get me a drink, or if you want to go read my blog... excuse me one minute, I have a fresh mic here, because apparently this one is dying.
Oh, the battery's dying. And here I've been spitting into the microphone.
Hey, can I get back the twenty seconds that took? All right. Anyway. I've also proposed something called the limited liability persona; some of you may have heard me talk about this before. I think it's a way of solving the asymmetries that naturally occur. Because here's the problem with the consumer thing, Doc: yes, we are consumers, we're human, but we're dealing with institutions that are not human. They could be a computer program.
They could be a government. Basically, they don't have emotions.
And if they were human, we would think of them as sociopaths, basically. So this is the problem: what is the way that we, as human beings, can interact with sociopaths? That's a pretty good question. Fortunately, I've been married, so I have a lot of experience with this sort of thing.
Hey, wait, we're not recording this, right? Okay, scratch that last part. I don't know who's going to get hold of that. So: I've been blogging about this for years. You can go look at hybridvigor.org, and you can also look at the identity blog at burtongroup.com.
That blog, by the way, is still up, even though it's not being contributed to anymore, and there are a lot of ideas out there. Eventually I'll get around to blogging at Oracle; I haven't done that yet. I'm afraid they're going to make me do marketing stuff out there too.
I don't know exactly how to make that work, but those blogs are all still live. I also talked about this at the Cloud Identity Summit in 2010, so you can go look at that. And then Bruce Schneier came out with a new book called Liars and Outliers: Enabling the Trust That Society Needs to Thrive. So I'm starting to see crypto people really get into this whole theme now too, which is pretty cool. I think that was probably about a minute, right?
All right, well, with that: thank you, everybody, for being fun. It's been great. Hopefully we'll get a chance to talk outside of this venue. Thanks a lot. Thank you very much.