So we're going to start our next session. The title of the session is "Software-Defined Everything." In this track, we've been talking a lot about the different kinds of relationships between technology and people, both in people's individual capacities and in their institutional capacities.
And one of the things that we wanted to explore a little more deeply was the management kinds of relationships and how those are being affected by software deployments, and how different kinds of relationships can be established, enhanced, and also undermined, depending on the particular relationship.
So we're gonna use the same format we've used in some of the other panels in this track: we're gonna ask each of the folks on the panel to introduce themselves, and then take a few minutes, five minutes or so, to give some highlights and some major points they want to invite people to entertain. And then we're gonna go into a discussion with the panelists. They'll ask each other questions, and certainly we'll take questions from the audience, and I'll ask questions as well.
So, Mike, do you mind leading off?
Sure. Mike Jones from Microsoft, where I'm an identity standards architect. I've worked on a number of identity-related protocols, including OAuth 2.0, the JSON Web Token, and OpenID Connect, among others, and much of that work continues.
Thinking about a software-defined world, I started brainstorming about where that happens, and I'm actually gonna come at it from an Internet of Things point of view. Not that things are software, but the agglomeration of things that may end up communicating with each other and/or with management systems and control systems certainly becomes a software-defined thing. Now, I have a PhD in computer science, and one of the things that's taught me is that there are very few tricks in the book. We can do caching, we can do hashing, we can do indexing.
And there's one other trick that we have, which came to the table later than some of the others, which is cryptography.
And I think it's the cryptography trick that actually lets us make order out of what otherwise would be chaos. You have devices that suddenly pop up, some of which are hardware devices, some of which are software programs on the network, and you need to decide whether and how to communicate with them and what to share with them.
The only hope you've got is if the thing that you're considering interacting with contains cryptographically signed assertions from parties that you can identify. So per what Andre Durand was saying in his keynote, I think the security perimeter becomes identity. And this is certainly even more the case in a software-defined world where, you know, people can barely be system administrators of the PCs and Macs and phones that we have. When their refrigerators talk to their Fitbits, you know, it's game over unless the software can do it.
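To make Mike's point concrete, here is a minimal, stdlib-only sketch of what "refuse to interact unless the thing carries a signed assertion you can verify" might look like. It uses an HMAC with a shared key rather than the public-key signatures a real JWT/OpenID Connect deployment would use, and all the names and claims are made up for illustration.

```python
import base64
import hashlib
import hmac
import json

def sign_assertion(claims: dict, key: bytes) -> str:
    """Produce a compact token: base64(claims).base64(HMAC-SHA256 tag)."""
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    tag = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    return body.decode() + "." + tag.decode()

def verify_assertion(token: str, key: bytes):
    """Return the claims only if the signature checks out; otherwise None."""
    body_b64, tag_b64 = token.split(".")
    expected = hmac.new(key, body_b64.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64)):
        return None  # unsigned or tampered: refuse to interact
    return json.loads(base64.urlsafe_b64decode(body_b64))

key = b"shared-secret-for-illustration-only"
token = sign_assertion({"device": "thermostat-7", "maker": "ExampleCorp"}, key)
assert verify_assertion(token, key)["maker"] == "ExampleCorp"
assert verify_assertion(token, b"wrong-key") is None  # unidentifiable party
```

The decision to communicate then reduces to: did a party I can identify sign this, and do I trust that party?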
So it sounds like it really treats the management problem as a computational problem, in a sense.
If we can't reduce it to a computational and trust problem, it doesn't get solved.
Thanks, Mike. Hemma, do you have some initial thoughts?
So I'm Hemma, the CTO and senior VP of products at HyTrust. Unlike the giants here, most of you probably haven't heard of HyTrust.
I'll just say that we focus on cloud security automation. When I thought about this panel, what I wanted to cover was this: most of the regulations and standards, and the industry at large, recognize, and everybody agrees, that all of us do not have enough time or people, skilled or not, and often don't even understand where all our assets are (and I'm talking compute assets right now, plus storage and networks) to be able to put the adequate controls in place. So everybody's recommending you take a risk-based approach.
And most of the frameworks that are out there and being adopted start with identification of what your applications and your assets are, right?
So you have to know what you have in order to be able to protect it. When you start with that as the premise, that's where all the regulations, you know, liabilities, everything is associated. That's the model we understand today. But when you start thinking about software-defined, you start thinking about things that just come to life when they are needed, right? So think about availability.
Most of the cloud providers have SLAs in place. You know, all three of them right here on this stage have SLAs, and their focus is to meet those SLAs for their customers. So they will focus on availability as their primary goal, and then they'll look at security and how they achieve that availability. So there's this notion with software-defined that you could create virtual networks, you can create virtual data centers, software-defined data centers, and it all comes to life when you need it.
So you add capacity as it's needed.
Well, how do you do policy management in that environment? How do you take a risk-based approach when the assets, the things you're gonna protect, were not there before? So I think what I'd like to do is promote and provoke thought about what changes are required in security controls, policy infrastructure, and then legislation and industry standards, to be able to help with these agile environments, things that come to life just in time.
And then how do you actually monitor, measure, and make sure that you're in compliance, when all the regulations were defined when things were physical?
Thank you. Andy, did you have some thoughts?
Yes,
Please. Hi, I'm Andy Land from IBM.
I'm a product director for our identity and cloud security products. So I'm gonna give Hemma a little trouble. We had lunch together and I actually told her I was gonna give her trouble.
So she won't be embarrassed. But I said, I think I'm sitting on the most abstract panel at KuppingerCole, because there are a lot of words in this panel. You know, even in my own mind, from an identity perspective, I was trying to bring it home. And I know for you guys in the audience, clearly that's what you're gonna want, 'cause we're using a lot of big words, words that are not big but that have a lot of meaning. Policy. Wow. Policy has a lot of meaning: policy within a technology framework,
Carson works on policy from, you know, data protection laws; there's corporate policy.
Boy, there's a lot, that's a loaded term in itself. We could spend a whole hour on that.
Another loaded term: software-defined network. You know, it's a new thing; not everybody knows all about that yet. A lot of loaded terms there too. So I've been trying to think about how we bring it home for you guys a little bit: how does this affect you as an identity professional, or as folks who are trying to deal with the cloud? 'Cause many of you are trying to deal with your organization putting new assets in the cloud, like you said, and trying to get to the point where probably some of those assets are gonna work autonomously, right?
Maybe where you're not gonna be all that involved in it. So that starts to become, you know, a security problem.
And as you may have heard through the week, at IBM one of the things we do a lot of is tie identity to security.
I've been an identity guy for like the last 15 years, and I always think of myself as the redheaded stepchild of security. I'm not kidding: when I came to IBM, I was working on a different part of the business, and my boss was a hardcore security guy. And I said, well, I know security, I've been doing identity for the last 15 years. And he looked at me and goes, identity's not security.
And that's exactly the way he thought about security as compared to identity. He was a hardcore security guy. But as we've talked about a lot this week, identity and security have a strong linkage. And as we do things like Hemma talked about, as we start to light up new assets, new systems, how are we gonna manage that? How are we gonna understand the identity of that device?
And then how are we gonna apply some kind of realistic policy to, you know, control it, monitor it and make sure it's doing the things we want.
So I go back to, again, what we've talked about a lot this week from an IBM perspective: identity is the new security perimeter. It is the new control; it's one of the few things left that you have control of. If you're able to turn on devices, or they turn on themselves, what have we got left to control? So that's kind of what I'm thinking about as we go through this panel.
Thanks. Ken?
Thanks. So good afternoon, everyone. My name is Ken Owens. I'm the CTO for Cisco's infrastructure services team.
There are sort of two areas that I think about when it comes to this type of software disruption that's happening. The first one: at Cisco, of course, we have to have a reference architecture and a framework, right? And so we have something called Domain Ten, which is actually not something that Cisco invented, but something we've been working on with the larger community. And within it are a lot of the existing controls you're probably familiar with in terms of what you would have in your physical environment.
One of the new things is around trust and attestation of the server and the physical hardware and network hardware. So that's one thing we should probably spend a little bit of time on: making sure you understand that your providers should be able to provide you some level of assurance, some level of trust, within that environment they've deployed. Two other areas within Domain Ten do not get talked about a lot,
And I think they're really critical for this conversation.
And the first one is that for something to be software-defined, you have to have this automation and orchestration layer. And that automation and orchestration layer, in my opinion (and this is just my opinion, not Cisco's), is the new kind of threat and vulnerability area you have in your environment.
Because most things that get automated have to have a common user ID and either no password or a fixed password that never gets checked, never gets validated by any system, and sits outside of your audit and your governance committee looking at what you're doing to provision those systems. And so just because you can automatically provision a system doesn't mean you should trust that you have the right system you're talking to, or that your admins don't have access to those systems. Because if you have a common username and no password, it's wide open for anyone who knows that username.
And so that's one thing to keep in mind. The second thing to keep in mind is the API interfaces that have been defined to get access to all of these systems. Those APIs, while they may be behind a corporate firewall of some kind, are still APIs nonetheless. And if you have an API endpoint, that's another vulnerability area, and you should ensure that you're locking down and securing those API interfaces as best you can. The second area within Domain Ten is what we call organizational, or kind of governance and corporate, processes.
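Ken's first warning, the shared never-rotated automation credential, can be sketched as its opposite: issue each provisioned system its own short-lived credential instead of one common username. This is an illustrative sketch, not any vendor's API; the function names and TTL are made up.

```python
import secrets
import time

def provision_system(name: str, ttl_seconds: int = 3600) -> dict:
    """Issue a unique, expiring credential per provisioned system,
    instead of one common username with a fixed (or empty) password."""
    return {
        "system": name,
        "token": secrets.token_urlsafe(32),       # unique per provision
        "expires_at": time.time() + ttl_seconds,  # forces rotation
    }

def credential_valid(cred: dict, now: float) -> bool:
    """An expired credential must be re-issued, which creates an audit event."""
    return now < cred["expires_at"]

a = provision_system("web-01")
b = provision_system("web-02")
assert a["token"] != b["token"]  # no secret shared across systems
assert credential_valid(a, time.time())
assert not credential_valid(a, time.time() + 7200)
```

Because every credential is unique and expiring, compromise of one system does not open all of them, and rotation is forced into the governance loop rather than left out of it.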
And so as you look at how you do your legal controls and your policies today within your enterprise or your organization, those do not necessarily map one-to-one to a cloud solution. And so you're gonna need to look at organizations like you guys, and standards bodies like the Cloud Security Alliance, that help you with some of these mappings and with identifying that.
But it's important to know, one, that there is this sort of gap between what you would traditionally consider policy and what your cloud provider may consider that same policy control, and you wanna try to validate that. The second area that I think about when it comes to software-defined disruption is sort of the DevOps, or shadow IT, that happens a lot today.
And so as I think about developers and what developers are trying to do, another area here that I think is really critical is looking at the policies and procedures that your developers have for how they create and push their software out.
And so as a developer, I think about things in terms of develop, deploy, and then run, right? And the develop piece probably doesn't change much with software-defined. But as you move to deployment, that deployment area goes from inside a corporate environment to inside that corporate environment plus any of these different types of cloud solutions or providers that you're connecting to.
And so as you think about deploying that software, you now need to take into account where you can deploy it. And deployment is probably not that critical:
you could do things around your SDLC flow, with secure SDLCs, to validate your code and check your code. Where I think it gets really interesting is in the running. So once you have that code out there in the environment, how do you then provide security controls around who can access the data that your software is creating, who can get access to the information that's being maintained? And when you publish an update to your software, do you update it, right? How often do you update that software? That's a question, right?
And you need to have access controls as part of that whole model. Identity, going back to what you brought up: how do you provide identity to that software? Because it's no longer a problem for just the internal enterprise identities; now you have an identity associated with that application code, and users connecting to that application to get data out of it. And so how do you manage those identities at different layers? How do they relate? Do they relate? I think there are quite a few questions there that we could spend the session on.
So,
You know, it's interesting, that last point: the attack surface and the usable surface are coterminous, right? I mean, it sounds like, from all the comments, what we're seeing is an exponential increase in system complexity. And it sounds like humans are increasingly getting left in the dust, both in personal contexts and also the humans who run institutions. Right? Because I don't get extra competence when I go into work; I'm just the same person, I'm just supposed to do something for somebody else. Right.
But obviously companies have more resources, governments have more resources. So one of the questions I'd ask the panel is: what can we do when it's software that might have to manage software? What is the role of people?
We think of the personal human as the privacy issue, and we think of the company or government human as the security issue. Right? But what will be the role of people in either or both of those situations as the systems become sufficiently complex that really we're just given a speedometer, given very few metrics to understand what's going on? Is it only at the front end? Is it dynamic? How can humans and institutions be involved in that process? Do you have thoughts on that, folks? Anybody?
Yeah.
So I wanna say that by moving toward software-defined, there are a number of benefits, right? Things that were difficult when you think about classic information security: patching, vulnerability scanning, being able to stay current with the technologies. Now you go template-driven, right? Master images that can be deployed. You can go stateless, right? A lot of these services can start becoming stateless, so you can always revert back to the original master image, and that's where you patch, right?
So the ability to scale is a lot better when you start going software-defined, and, you know, the programmatic approach means things you would have had to do with a human involved become something where you're just programmatically making those changes.
So, you know, as long as you have controls around that scripting or that programming, and you're thorough. I mean, we've had incidents where at least one financial institution had an error in a script. They were essentially adding capacity at peak times, and then they would power off those systems when they were off-peak.
And because of a typo in the script, they powered off 30,000 virtual machines in a blink, right? Because it's not a human pressing power off, right? It's just a script.
It runs at massive scale, instantaneously. And to recover from the outage took a lot longer, right? Because first they had to figure out where the error occurred.
You know, you wouldn't think of the script as your first place to go from an error perspective. You're thinking you had hardware failures, or it was the cloud provider's issue.
You know, something else happened; you were maybe under attack, right? A DoS attack or a DDoS attack or something else. So I think there are benefits, but you now need more controls around these programmatic elements, things you couldn't do before that now you can. And because you can, you need to bring them into the fold, as Ken talked about: make sure they're part of your secure software development life cycle. It's not just the application you're building; all the DevOps and IT operational things that are being scripted or programmed now need to be captured in it.
So really, from a policy perspective, it sounds like there's still a role for centralization, essentially. Yes. That's the thing that's so ironic, in a sense, right?
Because we have this massive distribution, not just of the devices and the management and everything else, but now our gut reaction, and perhaps our only reaction, is to try to understand it through centralizing either the controls, the metrics, something to give us a handle on it. Any other folks have thoughts?
Let me respond to the centralization comment. There is a kind of centralization which I think applies here, but it's centralization just as a way to find decentralized resources.
That's interesting.
It used to be that when you bought a device and plugged it into a Windows PC, you had to have a floppy disk that came with it with a device driver, and you hoped that it matched your version of the system, and you would install the driver, and then maybe your device would work. We made that go away, thankfully. We made it go away with plug and play, where all the devices declare just a little identifier for themselves, and there's a central database of all those identifiers. And it wouldn't work if there wasn't.
But what happens is you plug it in, the system sees, oh, this is a USB device with this identifier, and again, it goes to one of a few central places. Microsoft has one for its operating systems, Apple has one for its, Google has one for its, et cetera. That tells you, oh, this is an HP printer.
Go to this location, run by HP, to get a driver for Windows 7 for this device. So it started centralized, but it was really just the index to this widely decentralized, authoritative set of information, to enable software and hardware components to interoperate.
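The plug-and-play lookup just described can be sketched as a registry: a central index that only points at the decentralized, vendor-run locations holding the real information. The IDs and URLs below are made up, not real USB vendor/product identifiers.

```python
# Central index: device identifier -> who is authoritative for the driver.
# Entries are illustrative only.
DEVICE_INDEX = {
    ("0x03f0", "0x2b17"): {"maker": "HP",
                           "driver_url": "https://example-hp.test/drivers/win7"},
    ("0x04e8", "0x6860"): {"maker": "Samsung",
                           "driver_url": "https://example-samsung.test/drivers"},
}

def resolve_driver(vendor_id: str, product_id: str):
    """Centralized lookup that merely points to the decentralized,
    vendor-run location holding the actual driver."""
    entry = DEVICE_INDEX.get((vendor_id, product_id))
    return entry["driver_url"] if entry else None

assert resolve_driver("0x03f0", "0x2b17") == "https://example-hp.test/drivers/win7"
assert resolve_driver("0xdead", "0xbeef") is None  # unknown device: no silent guess
```

The index stores nothing authoritative itself; each vendor keeps control of its own drivers, which is the centralized-index-over-decentralized-authority model.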
And I think that's a model which could work in this world where you're building systems out of the aggregate of many things that can communicate with each other, to the extent that we manage to have well-defined vocabularies of kinds of things that can be done, ways to declare, you know, this piece of software or hardware can do these things, and it's made by this producer. All of a sudden then, you know, my shop and Hemma's shop and Cisco's shop could all, for our own purposes, for our own software and devices, say, well, I can do this,
and I'm willing to do this with these other kinds of things from these other trusted producers. And to the extent that you trust the different vendors to make reasonable policy default decisions for you, the human doesn't have to do it. And defaults are probably more likely to work than decisions humans would try to make with, at best, incomplete understanding of the choices they're making. At least the defaults are reliable. Yes.
But you hit on it. I mean, there are still humans, because even with trusted defaults, there were humans at the vendor who made those decisions. 'Cause whether software is controlling software or a network or whatever, somebody has to put the rules in place, right? And whether that's the vendor, I think you're kind of advocating that maybe the vendor's default settings might be a little more reliable than, you know, the humans at an organization.
Certainly better than asking the consumer to configure the device.
Yeah, definitely. And we all talked about it at lunch. A simple example of a silly software-defined network that I would say exists: your phone and a wifi network, right? You can set your phone up to find wifi networks and kind of do it for you, right? And you could set some basic rules about it. Like in your example, would it be better? The default setting from Samsung or Apple is probably gonna be better for most consumers, right? 'Cause they're probably not gonna be able to figure it out. And then there's a set of power users.
And I know we talked about your cell phone usage (he's got multiple cell phones and told me how great his new one is). You're probably gonna want a little more configurability, right? You're gonna wanna look at it and say, hey, this is the kind of control I need to have, but I feel like I'm a sophisticated enough user to do that.
So I still think, you know, going back to your original question, there's a big role for humans,
whether they're on the vendor side setting up the right defaults, or at that power-user level that says, for my organization, I need to set some of the defaults, because, more importantly, I'm gonna be held accountable for that. Because on the back side of this there are gonna be, to your point, measurement metrics: are we performing the way we said we would? And somebody's gotta measure that. The software may do that, but people are gonna end up looking at it.
And then, you know, you talk a lot about policy; there's gonna be conformance back to standards that may be set for you by governments, policy organizations, whatever. So human interaction is gonna happen all along that line. What's kind of interesting here, though, is that you might be able to get to the point where humans don't have to do some of the things where, typically, like you said, I do think mistakes get made.
They pick the wrong configuration choice and things go haywire, and then they're sitting there for the next two days figuring out why they have a major outage right there.
There's an even simpler example of this. It used to be very common: you'd go into people's family rooms and there would be VCRs with the time blinking,
waiting for the human being to set it. And you know, this was at least half of all the VCRs that were deployed, and it's 'cause it wasn't critical to set the time. Most people didn't bother.
And eventually the people making VCRs, and in fact the people making mobile telephones, figured out that, oh, we could find a way to set the time for the human being, so that when you turn the thing on, or it connects, it self-configures into a usable state. And that's the model I think we have to try to emulate, even for more complex systems that get built out of amalgams of things and devices that you have some control over. Right.
But if it doesn't auto-configure to a useful state, we've failed to serve our customers.
I think it's a great example, and it fits right along with what I'm saying. Like, auto-configure it. But let's say you auto-configure it for US time, and somebody wants military time. I still want the ability to change it to what I want it to be, right? But I agree with you.
I think coming out of the box, you know, whatever the current most logical standard is, set it to that default, but then give the opportunity to change it. 'Cause, to use the example, somebody says, hey, that's not working for my business. Otherwise, you know, the inflexibility gets us all in trouble, because there's always some use case we don't think about, and then we get whacked with that.
So I like the defaults, for sure. Right.
But I think there's a concern. So time is a perfect example, right? You need external means to actually trust that the settings and the information being automated and given to you automatically are reliable. And I think the trust anchor for software-defined is gonna become even more critical. Ken touched on it a little earlier: hardware roots of trust that are baked into physical systems are gonna be critical in order to enable the software-defined that layers on top, right?
As more complexity happens at the software layer, you need to be able to have these anchors back to the physical, to know that the underlying layer can be trusted and relied upon.
Right. And that brings me back to: I've only got a few tools in my bag, and that tool is cryptography. If we don't use it, everything's just in the open.
There are two interesting things here that kind of speak to human nature, right?
And I laugh, because we don't worry about time for most things, like our watch and like the VCR. But if I'm doing a precision operation in the operating room and somebody messes with my time, I could kill someone, right?
I mean, it's a big deal to some people; to others it's not. And I think what that tells me, one thing that's interesting here, is that you sort of have to have this trust-but-verify model, right? I'm gonna trust this to a certain extent, but I wanna somehow be able to validate, or attest, whatever the right word is in policy form, right? I would basically validate that this policy is correct, and that I can still trust that this policy is correct. Because if I don't, most of the time bad things won't happen, but bad things could happen in some cases. Right.
In fact, a friend of mine from the IETF context told me that it's mostly Navy people driving the Network Time Protocol stuff. Yeah. Why? Because bad things can happen if they're using network time and an adversary manages to get it wrong for you.
Right. Yeah.
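The trust-but-verify point about time can be sketched as cross-checking several sources and rejecting outliers before accepting a reading. The readings below are canned numbers, not a real NTP exchange; the function and tolerance are illustrative.

```python
from statistics import median

def agreed_time(readings: list, tolerance: float = 1.0) -> float:
    """Accept only readings within `tolerance` seconds of the median,
    so a single tampered source can't drag the clock."""
    mid = median(readings)
    trusted = [r for r in readings if abs(r - mid) <= tolerance]
    if len(trusted) < 2:
        raise RuntimeError("too few agreeing time sources; refusing to set clock")
    return sum(trusted) / len(trusted)

# Three honest sources and one adversarial one, offset by an hour.
readings = [1000.0, 1000.2, 999.9, 4600.0]
t = agreed_time(readings)
assert 999.0 < t < 1001.0  # the outlier is ignored
```

For the 20% who really care (the operating room, the Navy), the tolerance gets tightened and the sources get authenticated; for everyone else the defaults are good enough.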
You know, it's interesting. It sounds like we're talking a little bit algorithmically, like what you see in advertising: people like you wanted X, so I'm setting my time. People like you, meaning in your time zone, wanted this time instead of that time. So it feels like that interface may be a suggestion kind of interface.
Is that something where, deploying it at scale, you'd be looking at variables? It's interesting, right? Because are the variables associated with people who bought this watch, that's one set of variables, or people who bought this VCR? Or is it from the consumer side, a set of preferences which are given ascendance? Or is it a mix of the two?
How might a system understand the groupings, the communities of interest, I guess, is one way of looking at it, in order to then do the statistical analysis to figure out what to suggest?
It's the same way that you're gonna figure out red flags, and it's the same way, as in law, you determine negligence, right? You have duties of care that are defined by a certain role or a certain place in a community.
As that moves to a software-defined system, it sounds like there'll be more accuracy; there'll certainly be deeper information.
First, is it doable, you know, at scale? And then second, what might that look like in terms of those interfaces? Does everything, like, I'm pulling a tissue outta the box,
you know, say "people like you preferred two tissues"? 'Cause I got a big nose.
I mean, how deep does it go?
No, it's a very good point. And I think, as I mentioned earlier, the programmatic interfaces, the APIs, become really critical; we have to define those and have policies around those interfaces. That's one piece of it. To kind of answer your broader question,
I always use this rule of thumb, and I probably shouldn't, but I always use the 80/20 rule. You know, 80% of what people want is gonna be good enough just using NTP, or just using your GPS signal, to figure out what the time is, right? There will be that 20%, though, that wants more precision around what they're doing. And I don't think you need to design your system for that 20%. You can probably describe and define interfaces in the system for that 80%.
But like any other policy, you have to have an exception process to figure out how you meet the needs of that 20%, because that 20% is critical. It's important.
You know, just like most 80/20 rules, the 20% make all the money and the 80% don't make a lot of money for you, right? And so you've gotta address that 20%, 'cause they're the ones driving the business and bringing in the revenue. The 80%, though, is most of the users and most of the use cases.
And so to me it's a balance: do what you have to do to meet the majority of your use cases, and then figure out how you address the corner cases. And to go back to what Hemma mentioned, tying it into those roots of trust, and what you're talking about, cryptography, right? Tying it into the rules and the tools that we have today is really how you make sure that 20% is validated and making sense to that user community.
Yeah. So I would build on that to say I think software-defined and all the technologies have to be highly customizable.
So that's the first thing, right? Software needs to become highly customizable so that you can have things like advanced options. There should be sane defaults, but you should enable the advanced options. The second is, in order to do things at scale, you have to be able to do that classification. And you know, we talk about data protection, and there have been a lot of panels and discussions on data protection. But from a workflow perspective, you start with your classification: is this data of importance to my organization, or to me?
And if it is, then I wanna have it protected according to that classification. So when you do configurations and policies from an access control perspective, right, you can't do it at an element level, at an asset level; you have to do it based on categories of concern, or communities of interest, whatever those groupings are. You need to identify them. And that's the other piece, which I think most vendors and technologists have not usually thought about: the human aspects of how their technologies get utilized and adopted.
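The point about writing policy against categories rather than individual assets can be sketched as a classification-keyed policy table. The labels, roles, and controls here are illustrative only.

```python
# Policy is attached to classifications, not to individual assets,
# so assets that "come to life" just in time inherit controls immediately.
POLICY = {
    "public":       {"encrypt_at_rest": False, "allowed_roles": {"anyone"}},
    "internal":     {"encrypt_at_rest": True,  "allowed_roles": {"employee"}},
    "confidential": {"encrypt_at_rest": True,  "allowed_roles": {"security"}},
}

def controls_for(asset: dict) -> dict:
    """A newly created asset only needs a classification label to be governed."""
    return POLICY[asset["classification"]]

# A VM provisioned a moment ago is governed without any per-asset policy work.
new_vm = {"name": "etl-worker-42", "classification": "confidential"}
c = controls_for(new_vm)
assert c["encrypt_at_rest"] is True
assert "security" in c["allowed_roles"]
```

Because the policy lives on the category, a risk-based approach still works even when the protected things did not exist a minute ago.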
And, you know, the focus is usually on the 20% that's gonna drive the revenue, not necessarily the 80% that have to deal with all these technologies in the middle. And so that's the other area that I think, from a vendor's perspective, we need to focus on.
I disagree with that. Successful vendors
over time
pay attention to making it work for the vast majority of their customers.
But over time.
The vast majority of the time. Yeah.
I mean, Apple has a reputation for making things that consumers can typically use. And you know, I think a lot of us try to do that as well, but that's a route to business success as well as customer satisfaction, which is closely related.
Right. But I think there's a challenge with time to market. All the vendors are trying to get to market first, right? And competitive differentiation is measured against what is good enough, right? So there's this constant balance of focus required, where they're trying to get their innovation to market as fast as they can. Sure.
Which may not necessarily mean they made best efforts to implement the right levels of security controls or other types of improvements that they could have, since they don't even know if they're gonna be successful, right?
To the extent that you can make it better by a software upgrade, yes, we're in a good world.
Yes.
Yes. The software piece, going back to your very first point, is the critical part. Those APIs mean that if you do make a mistake, right,
That's why every software manufacturer has a patch day where they patch things. And if it was a major vulnerability, they patch it that same day, right? And so it's very critical to have that ability to adapt quickly with software changes.
Because if you have to ship a chip to someone, it's gonna be much harder for them to take a chip and stick it in their phone than it is for them to just connect to the network, get the update, reboot the device, and now they're compliant.
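The connect-update-reboot cycle just described might be sketched like this, with stub callbacks standing in for the real network and OS pieces. None of these function names come from any actual product; they are assumptions for illustration:

```python
# Hedged sketch of why over-the-air updates scale where hardware swaps
# don't: the device compares its version against what the update server
# advertises, applies the patch, and reboots. Callback names are hypothetical.

def needs_update(installed: tuple, latest: tuple) -> bool:
    """Compare semantic-style version tuples, e.g. (1, 0, 3) < (1, 1, 0)."""
    return installed < latest

def ota_cycle(installed, fetch_latest, apply_patch, reboot):
    """One over-the-air update cycle: check, patch, reboot, compliant."""
    latest = fetch_latest()
    if needs_update(installed, latest):
        apply_patch(latest)
        reboot()
        return latest  # the device now runs the advertised version
    return installed   # already up to date; nothing shipped, nothing swapped

# Simulated run with stubs standing in for the real network and OS:
events = []
result = ota_cycle(
    (1, 0, 3),
    fetch_latest=lambda: (1, 1, 0),
    apply_patch=lambda v: events.append(("patched", v)),
    reboot=lambda: events.append(("rebooted",)),
)
print(result)  # (1, 1, 0)
```

The contrast with hardware is the whole point: the same cycle runs identically on one device or ten million, with no chips shipped.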
One of the challenges with broad deployments in the internet of things is whether there's gonna be sufficient onboard capacity in the hardware to accommodate software upgrades.
So if I have an RFID chip which costs ten cents or one penny, and there are 10 billion of them out there, you know, one of the challenges, right, is that at the endpoint, the thing that you're upgrading has to be able to accommodate it. My assumption is, and tell me if this is true, that with this exponential increase in capacities, it's gonna be an interesting challenge to create endpoints that can be upgraded continuously as the price point comes down.
Yeah, I was gonna say, I would challenge the upgrade. I love software and I love the upgrade, but I could tell you, probably many of our enterprise customers would say a software upgrade has not always cured my ills; for all of us on this stage, it has probably made my life worse at times. And even going back to the Apple example, many of their software upgrades at times have not done exactly what they said they would do, and they're probably one of the best at it. So I think it's a good point.
I think if we could get that right, it is probably the easier way to success, rather than us with hardware having to change out boxes, agreed. And as we move more to this cloud world where it's software as a service and we have more of a continuous delivery model, maybe we have more of a chance here. Like you said, you're not so much upgrading software; I'm not sending you something as much as I'm kind of manipulating it for you and taking care of that.
That seems to be more the promise to me. But with the software upgrade, I just worry, cuz we've all lived through it, both on the enterprise side and the consumer side, where a software upgrade went awry and it did not solve my problem, it actually made my problem worse. Yeah.
Yeah, indeed. But particularly in a world where the vulnerabilities may be in software, if you don't have an ability to patch vulnerabilities once they're out there, your device is a sitting duck.
Yeah, it's compromisable. I mean, in that case, from a security vulnerability perspective, I agree with you. I don't know that we have a better method.
I mean, we have to get those closed as quickly as we in the vendor community, and also the security community at large, understand what's going wrong. Like all the ones we've seen recently: we had Heartbleed last year, and immediately people had to take action on that, right? If they didn't, we were all sitting ducks.
Right. And I was sloppy in my response to Hema in terms of talking about software upgrades. I think there are absolutely two different kinds.
There's automatic patching happening behind the scenes to close vulnerabilities and make things work a little better, and there's an explicit "I'm gonna get the new version" kind of choice, where you're getting a more advanced product. And those are not the same thing. Right.
But even in the software defined world, you will have to deal with both, right? Because you're gonna have a core set of software that's a controller, if you will. And obviously, if it has vulnerabilities, you're gonna have to close those security gaps.
But over time, that software's instructions, rules, maybe capabilities that you need for it to perform in your cloud, your network, wherever you're using the software, are gonna have to be updated and upgraded also. So vendors are gonna have to deal with both sides, I guess is what I'm saying. The one is probably a little easier, cuz we're more accustomed to being able to patch something quickly for security. But can we also take care of the customer from the perspective of, now we've gotta give them this new software controller, if you will, and it's gotta go through an upgrade?
Right. But I think with software defined, with these virtual logical systems, because the cost of creating new ones is so low, you have the ability to actually create a brand new one and patch the old one, right? The ability to have redundancy is there. Yeah. So it's not like a physical system where you have to get a whole new physical system in order to bring down the other one. Right. Right.
Like with software defined, your ability to create new ones is nearly costless and, from a time perspective, very fast. However, because you can do it, you have to also recognize that the perpetrators can do the same. They can bring these systems down, as well as create, similarly, whole new networks that you did not know about beforehand. So, just awareness.
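The create-new-and-retire-old pattern the panel describes is often called blue-green deployment. A minimal sketch, assuming a purely illustrative Instance/Fleet model (not any vendor's API):

```python
# Sketch of the "create a new one, retire the old" pattern for
# software-defined systems. Because instances are virtual, the standby
# copy is nearly free. Class and field names here are illustrative only.

from dataclasses import dataclass

@dataclass
class Instance:
    version: str
    live: bool = False

class Fleet:
    def __init__(self, initial_version: str):
        self.active = Instance(initial_version, live=True)

    def roll_forward(self, new_version: str) -> Instance:
        """Spin up a patched replacement, cut traffic over, retire the old.
        The retired copy is kept around, giving cheap rollback/redundancy."""
        replacement = Instance(new_version)
        old = self.active
        replacement.live = True   # cut over to the patched instance
        old.live = False          # old instance retired, not destroyed
        self.active = replacement
        return old

fleet = Fleet("1.0")
retired = fleet.roll_forward("1.1-patched")
print(fleet.active.version, retired.live)  # 1.1-patched False
```

The same cheapness cuts both ways, as the speaker notes: an attacker can stand up unexpected virtual systems just as quickly, which is what makes forensics on ephemeral infrastructure hard.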
But I think the other thing is that most of what's known today in how we do protection, legislation, and just industry standards does not cover this notion that things that weren't there before can just appear and disappear. How do you do forensics?
You know, I mean, just that whole notion of having software defined where things disappear is gonna be a problem for people. Yeah.
I was gonna talk about that, you know, going back to your question with RFIDs and the internet of everything, internet of things, right?
The intention of the software architecture is so that those devices can stay pretty dumb and don't require a lot of updates. But then the communication channel to and from those devices, and how you get data and process it as close to that device as you can, is what all of this discussion has been about: all those security concerns, all your controls that need to be in place there. Right. But to the point Hema just made, I think it's really key, and it answers a question that I think is a very key point about software today: we're making changes every day, right?
There's no more release cycle that takes months to get software out there, right?
I literally have new updated software in my cloud every day.
And it's just gonna get worse as you get more and more devices out there; there's always gonna be something being updated. And we know from experience, and we talked about this, updates sometimes break things, right? But I think the important thing for this discussion is, as you get more software defined and as you're doing these updates in real time, how do the lawyers, the legal representatives, the policy makers address, to what Hema was saying, software that wasn't there before?
We hadn't even thought about it six months ago, three months ago, two months ago, right? Somebody said, hey, I have an idea, I'd like to do this, and the next day it's out there in production. There's no way you could have written policy in that period of time, right? Or even thought about what policy you need. And it's a whole, I think, interesting environment that we're moving into now, and it's just gonna get worse as you talk about the internet of things, because there are billions and billions of devices; you know, mobile was nothing compared to what they think IoT's gonna be.
And so there are gonna be a lot of challenges, I think, for policy and for legal, to address things that don't exist yet in the next six months.
Well, I think in an ideal world, we would have some policy, a meta policy. And if lawmakers would do a good job, probably they would take on what I'd call meta policy making, saying, okay, that's the framework, that's the boundary for technology.
Let's say the lower-level, not-meta policy can then be set within those borders. That would give everyone designing software the freedom to do so within good legal borders. And this meta policy making is what I'm really missing, because throughout the United States, Europe, wherever, policy makers ignore the need for more detailed technical ideas within laws. You could threaten them and they still wouldn't tell you whether a technical standard is good or not.
Or where those meta policy borders are; they just won't tell you. They're leaving everything up to the market, which is in lots of ways very nice for the market, everyone knows that, and in the end for the customer, of course. But I think in the end it's a question of time, as you mentioned: you're out there on the market after one day, but you can't be sure that the next day policy makers, lawmakers in that sense, won't come up and tell you, well, this was against the law. And you're like, you didn't tell me. I didn't understand the meta policy because it wasn't concrete enough.
I couldn't get it; you didn't inform me.
Well, now I do. So it's very restrictive after the fact: you throw something on the market and you might learn something you couldn't have known before, but that you must understand now you haven't done right. And I think this is actually what I ask from lawmakers in that meta policy making sense: to really define borders wide enough not to threaten business, but clear enough to give some guidance. And we are missing inspired solutions in that.
We need software-defined government policy.
Well, yeah, that's it. You put very nicely what I was saying.
But I mean, it's a good point. I can kind of feel for the governments too, though. At the speed you're talking about, how can they keep up? In a way, they have to see the evolution a little bit, cause if they try to write the rules before the evolution, unless they write 'em, like you said, very loosely... I mean, it has to be very broad based; they don't have a context yet.
And so, you're in this more than I am, but I would say if I was a government person, I would have a strong problem with what's going on, trying to write any rules, frankly. But then you get the wild west. So that's kind of the world we live in. How do we as technology vendors keep moving fast, and, to your point, like you said, try to keep the government out almost as long as we can, until we see some of the after effects? Right. Which is sad, cuz the after effects occur. Right. Right. So
I think, sorry.
So I think it's one where, in the past, the industries were very siloed, and they could remain siloed because of the pace of innovation across the board. But now, given the pace, I think it's our responsibility, right, if you're a technologist, to actually understand and have those conversations with the lawyers and the public policy people, right? To say, I'm building this out or have this idea, what do you think, and what else do you need to do? Right. So software defined has come up, right?
We're getting sort of broad adoption, but we don't really understand the full implications. And this kind of panel is the whole purpose.
It is, you know, to have those mutual discussions, right? Because I think if we continue, right, the world's becoming flat. We look at it and we say, okay, it's flat.
Right? Because we essentially travel the globe every single day, right, without physically moving. Right. Well, the same is happening on the technology side, where everything we do has implications globally.
And I think, at least for us as the technologists, it's our responsibility to make sure that we educate the folks that don't know. But at the same time, I expect the other folks to participate in the technology, right? So, especially if I look at the US, a lot of the folks that are in positions to make decisions that would apply to the whole country don't even have mobile devices, right? They don't know what a mobile app is. They don't necessarily care to use all these new technologies, but they need to understand.
And I think they have to participate.
Let's see. Are there any questions from the folks in the audience?
So I wanted to spend the last couple of minutes, before we have some kind of closing statements, just talking about an alternative channel of reprogramming. Let's talk about wetware, the human mind, right?
So in a standard technical standard-setting context, the output of standards is, well, the spec, of course, the technical spec that people can build in conformity to. Then from a legal perspective, it's patent cross licensing, or some kind of patent safe space, maybe a copyright in the spec, maybe some kind of marking convention, depending on the situation.
What might some of the outputs be if we talk now about policy standardization of various sorts? We've talked about it at different levels: legal, product, sectoral, depending on the types of risks that are in, say, a medical device, things like that. What are some ways to move to the next step and start to have some of these policies?
So we're talking about reprogramming. Remember Howard Manila's point about cognitive bias.
You know, that's the wetware reprogramming, in a sense. What he was talking about, right, is eliminating that bias so people are making better decisions about risk. What are some of the policy things that are not out there now that should be among the earliest? As software people, you're very aware of what's not there and what people aren't doing, compared to people who work in policy: lawyers, managers, economists, whatever. What are some of the first things that might help with that kind of reprogramming?
You know, cuz if we can change human behavior, and people don't have an expectation that my RFID chip is gonna be reprogrammed, then they can behave with the right expectation of the safety of that device. Are there some things you could comment on, about what we might see that would help make the system more secure by making the humans behave in a way that's more reliable, both institutionally and personally?
I have an example I always like to give people.
And I was talking with someone in Germany just yesterday, in Berlin, about this. It's around medical, cuz medical is one of these fields that is very highly regulated, a very secure environment. And there are a lot of requirements coming out around the medical field, around electronic records and how data can be shared and where the data can be stored.
And I just looked at the person I was talking to and I said, well, the number one weak link in this entire process you have is the doctor, because the doctors don't lock their laptops. They don't have passwords, and they know everything about the patient. And you can't put a policy around the doctor, and you can try to train him.
Right. You can do all the things to try to map the doctor's mind, but he's a doctor, right? He's not gonna listen to you. He'll sit through your training and then he'll leave and go back to what he always does. That's what doctors do.
Doctors are doctors; they're not anything else, right? You can't train 'em. They're like cats; you can't train 'em, right? They just do what they do. And I'm married to one.
So I love doctors, don't get me wrong. But my point in all of this is that, to some extent, I kind of feel like software defined could maybe take away the human factor piece and help protect humans from making mistakes as much as we can. I still want there to be human verification happening.
I don't think you should completely trust the systems.
I don't want some kind of, you know, futuristic movie of the robots taking over the world or something crazy. So I think you still need human involvement at some level, and maybe it's more the governing bodies, and legal, and some of the policy people, who kind of validate that things are in check and that systems are being operated correctly.
And that no one has overstepped their bounds, because they could, right? And so that's kind of how I look at it: take as much away from the human mistakes as you can by software defining it and making the decisions. And the example I gave with the doctor is, you know, I don't think the doctor needs to have a password for the device, right? I think they need to have something physical: they have to have a badge, they have to have something they put into it.
Maybe it needs to be a biometric scan of some kind.
But once they connect and put something that is physically theirs on that device, they should be treated as a trusted person, cuz they're a doctor, right? That device is gonna stay in that operating room or in that room there. As soon as he walks away, whether it's proximity or whether he takes his card out, it shuts off, right? There's no access to it. Take away the ability for anyone to break through that system, because you put the trust in something else other than that person.
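The badge-plus-proximity model the speaker describes might look something like this as a sketch. The range threshold, badge identifier, and class name are all assumptions for illustration, not any real hospital system:

```python
# Sketch of proximity-based trust: possession of a physical token grants
# a session, and the session dies the moment the token leaves range.
# The 2-meter threshold and badge format are hypothetical assumptions.

class ProximityLock:
    RANGE_METERS = 2.0  # assumed badge-to-terminal range

    def __init__(self, enrolled_badge: str):
        self.enrolled_badge = enrolled_badge
        self.unlocked = False

    def badge_seen(self, badge_id: str, distance_m: float) -> None:
        """Unlock only for the enrolled badge, and only while it's in range."""
        if badge_id == self.enrolled_badge and distance_m <= self.RANGE_METERS:
            self.unlocked = True
        else:
            self.unlocked = False  # walking away (or a stranger) locks it

terminal = ProximityLock(enrolled_badge="dr-smith-badge")
terminal.badge_seen("dr-smith-badge", 0.5)
print(terminal.unlocked)  # True
terminal.badge_seen("dr-smith-badge", 5.0)  # doctor walks away
print(terminal.unlocked)  # False
```

The design choice matches the panel's point: trust lives in something the person carries rather than something they must remember, so the doctor never has to cooperate with a password policy for the lock to work.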
And that nicely anticipates the next session, the metrics session, that we're gonna talk about. So we're right near the end. Any brief final thoughts from everybody? Maybe a couple of thoughts, just as kind of summary thoughts.
Open the pod bay doors, HAL.
Excuse me. Hema?
Yeah.
So I would say that all these technologies come together, right? So when you think about sort of what Ken just talked about, right, the identity aspects, whether they're physical or logical: from birth, everything has an identity related to it, and context is important. So it's making sure that, whether it's human or device identities, the personas or the profiles associated with them, in this case a doctor, you're able to go from a biometric scan to identify the person, to say, okay, they're a doctor, they have access to all this data.
But also things like, okay, as soon as that doctor moves away, this device should be locked, right? So I think the interconnectedness of all these technologies needs to be thought through and leveraged, because it is all available.
Thank you
For me, I think it's "long live humans."
No, I'm kidding. I think there are a lot of benefits to the software defined era, but it's gonna come right back to what you said earlier: trust, but verify. And governance is gonna be key. How are we gonna figure that out at multiple levels? At our level as vendors, implementing, hopefully, the right way.
Cause I think we generally do try to do it the right way. And where you live all day, Carson, at a governmental level, trying to figure that out so that we don't inhibit our ability to innovate and succeed, and don't inhibit all the folks in the audience from innovating and succeeding, but still put in place some level of protection. So, you know, I think a good word for all of this is always balance, right?
As we start to see the benefits of the software defined era, we've also gotta make sure that we don't create too many of the downside effects, so that they start to overwhelm the benefits, cuz we've seen many innovations where they come out with a hurrah and then, wow, what did we do?
And so I think, you know, for all of us, the obligation to all of you out there is to try to figure that out as much as we can up front, and then bear with us as we all take this journey together, cuz there will be mistakes along the way.
I think my last thought kind of goes back to the first question you asked. I always look at history to see what we can learn going forward. When automation started creeping into the factories and machine workers started losing their jobs, they were given a choice of, you know, learning new skills or finding a new job. And I think we're gonna be in that same situation with software defined.
And the idea is not to replace your engineering and technical team; it's to help them grow and mature. Instead of a sys admin, become a cloud admin. Instead of a network admin, become a cloud and network, you know, software defined user, right? Whatever that term is. So I think a lot of this is around how you help train your people for that next step of software defined. And, to your point, this meta policy should be baked into that, so you're learning broader skills, not just narrowly focused skills.
It seems like we may have to ask engineers to start taking sociology and poetry classes, and English majors to start taking computer science, just in anticipation. Well, thank you very much, panel. Can we join in a warm thank you for our panel?