Well, thanks to the folks who have been attending the different sessions in the track. What we wanted to do in this session in particular was to summarize some of our observations from having been moderators of the track, and feed that back to you. And it's really in the spirit of what we were saying in one of the earlier sessions: this conference in general is not just informational, it's actually part of what governance looks like.
Now, in my keynote, I talked about a piece on self-regulation by Ronit and Porter, who observed that legislative processes, whether they're self-regulatory or regulatory, involve five stages: agenda setting, problem identification, decisions based on that problem identification, implementation of those decisions, and then review.

So, a feedback loop. If you think about what happens at this conference, certainly agenda setting is discussed, and then problem identification is also initiated.
When you go back to each of your organizations, governmental or corporate, they engage in decisions informed by those first two stages. But the difference is: had you not been at the conference doing the agenda setting and problem identification together, we wouldn't have the advantage of each other's views.
And so one of the things we've been observing across a number of conferences is that, with the agenda setting, you start seeing the same faces. Essentially it becomes another management level for each of your businesses, because you're getting some broader perspectives. In the session we just did on metrics, it came out that that expansion of view, that expansion of perspective, is really important to decision making now, because we're talking about distributed systems and everyone is wondering what distributed governance looks like.

And the irony is that they're wondering about it together, actually doing distributed governance without recognizing it. So we wanted to use this session as initial feedback: what we've seen in terms of agenda setting and problem identification, to further clarify those things. We're also looking for feedback: if there are issues with the way we're characterizing things, we'd love to have that feedback. And I think there are some feedback mics right here in the front row.
So we'd love to get that feedback. Our intention in this one: if you attended our earlier sessions, you'll see in the lower right of the slides the little pictures we had in the earlier set of slides. Essentially, we had suggested some themes, and now we're going to go back and give some observations about those themes.
At the end of each slide, I think it may be a good idea to ask whether people have other observations about those particular issues. And the clicker doesn't work, so we can't start... we have to delay even further. They can click from the back; actually, it looks like they're going to help us there.
So, while they're doing that... oh, there we go. This was one of the original slides; no new information here, unless you didn't attend the earlier sessions. It essentially breaks out the different colors, again using that four-color map theory from our map theme, and it's organized thematically. We didn't have a session breakout that's identical to these, because the sessions really overlapped a lot, but each of the colors represents a morning and afternoon set of sessions where there were some common themes, and we're going to go through each of those themes in the following slides. So they won't exactly track each of the sessions we had, but the sessions did largely track these themes.
So let's talk about the initial part: mapping risk.
Well, actually it was Hans Perin, CISO from GE, adding to his keynote. His keynote was, I think, very well received, focusing on some preliminary thoughts, but when he was here he was really trying to open up the issue of mapping risk, and he put it in a perspective that has to do with where this function is supposed to be positioned within an organization. What he could refer to, from his experience at GE as CISO for Europe, is that first it was simply security, then it was information security, and then it became digital risk; I think that's something a lot of organizations have been going through, and he illustrated it very nicely. He was reaching out to the Internet of Things a lot, and explaining very well how his job, or the job of a CISO in general, is affected by this moving risk.

So he was very much about the moving target. We discussed it a bit longer afterwards, and he was really sure he would never reach the point of finding the right name for the position, because it gets broader and broader. Just calling it a digital risk officer would never be enough, because the risk is even broader than that: it's not only a corporate issue, it's a governmental issue beyond the corporation. That was one of his thoughts.

And he will never be able, in a large organization, to do the full job, as there are many risk officers, for example for the products, or for the networks, and so on, which he is not. So he thought the communication part was the most important. In his map, communication was actually number one, one of the dimensions; I think it was his key dimension.
And, well, we put up here the benefits: when you go into mapping risk, you have multiple dimensions, and communication is one of them.
Yeah. And it's interesting with a large organization like that. A couple of observations. One is that when you go to a conference and hear the same thing from a bunch of people, you think, oh, I'm hearing the same thing again, and that feels like a disappointment. It should feel like excitement, because when you have that, it's an opportunity for interoperability, essentially.

One of the questions we asked in the last panel was: why isn't there an Airbnb or an Uber in the commercial sector? And the answer is: because of the competitive profile of most commercial entities. So an organization as large and complex as GE, with so many different things going on, has the opportunity to do that sharing internally.
And even that's difficult. Any of you who work with divisional organizations know it; I see some heads nodding up and down and sideways in the audience. It's very difficult to get organizations, even internally, to operate in a way that recognizes the multiple dimensions they're all dealing with.

Right? And the operation of an organization like that is a great example of moving from physical value to a variety of intangible value and relationship value over the history of the company.
And many companies, large industrial companies, have that. It's particularly important in the operational technology area: any kind of industrial or infrastructure facility, and any Internet of Things context. Because the value proposition isn't migrating entirely away from physicality; we're still physical beings, at some point. But those institutions have to carry those multiple value vectors, and the question is how they get even the internal parts to work together.

It's as if my liver were negotiating with my heart over whether it wants to filter the blood. That would not work too well.
Right. One of the things I do in the US is work with the federal and state governments on disaster issues. And the nice thing about disasters is that they bring people together. There's an exigency, an externality, such that the divisions do work together; the divisions are no longer divided, essentially.
So I think his presentation started to vet out that complexity. And we mention here also the black swans, Howard's presentation, which I thought was really terrific. We're going to talk about that on the next slide, where we're talking about preconceptions. One of the things that struck me, and I wrote "blind leading the blind" here, is that there are really two governing bodies we are all subject to. One is our own minds.

And that is the idea of cognitive bias that Howard was talking about.
But there's also institutional cognitive bias, essentially. It's in a formalized fashion: it both carries up the collective biases of the people who work there and has its own independent bias. And in an organization like GE, and other large organizations, many of them represented here, that ability to navigate that kind of internal complexity goes to another theme we had in the track, which was: don't exacerbate.

Don't make your risk worse by introducing second-order complexity into the first-order complexity. It's the FEMA example with Hurricane Katrina: the hurricane you can't stop. You can shake your fist at the sky, but you're not going to stop hurricanes. You can't stop them directly, but you can respond better to them; and with a bad response, you create more problems.

And again, that divisional problem is something that used to be contained within an organization. As outsourcing has grown and expanded, and now with the Internet of Things and other information technology outsourcing, that notion of second-order risk is getting even more complex and more challenging, because you can't necessarily deploy your response to the risks that are first-order risks.
Right. And I think it was in this session that Tom Langford was dealing with flushing away his own preconceptions of risk.

Yeah, that was great. That was really good.
I think because he really connected well to the complexity issue, and then he added another aspect to it that, in my eyes at least, fit very well, because he opened my eyes: nothing is as it seems, it's pure coincidence a lot of the time, and it's got nothing to do with risk, even though human beings are afraid sometimes and are really fast to see a risk. It was his monkey theory: wherever there might be a risk, even by pure coincidence, they will take it as a risk. Yeah. But there might not even be a risk.

You remember the cheese example, where he was explaining pure coincidence: the more cheese people eat in the US in general, the more likely they are, as a statistic, to die by falling out of bed. Even though that's obviously not happening only to people who eat cheese. So he was really cautioning against overestimating the link between certain happenings. And I think that's important: stay calm and really measure risk, which was a later session, but I think it added to it. Complexity is there, but not everything that's there is part of the picture and adds to the complexity.

That was a major finding there, I think. Yeah.
Absolutely. And it goes so much to that question of correlation versus causation. Right.

You see a strong enough correlation... I read years ago that the number of pirates correlated with climate change.

That kind of thing. And one of the challenges you have in a complex situation is that it's very difficult to understand the difference between correlation and causation in complex settings.
Anybody who reads what's going on in biology now recognizes that so many of the things we thought were causative turned out to be just correlative, and the other way around. And with big data, one of the things that seems really interesting is that we're being given all these correlations; we talk about pushed content, and this is pushed correlations, in a sense.

And we're being invited to try to discern whether there are causative relationships, and it feels like it's even changing the nature of causative relationships, or our definition of causation, in a way; not just loosening what we think about causation because it's so complex. But that recognition... it's the same as asking a physicist about dark matter: there's so much that's not known.
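To make the cheese example concrete, here is a minimal sketch in Python of how two series that merely trend together can show a near-perfect correlation with no causal link, and how differencing the series exposes the shared trend. Every number in it is an illustrative assumption, not data from the talk.

```python
# A minimal sketch (with made-up numbers) of the "cheese" point:
# two trending series correlate strongly with no causal link.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)

# Both series simply drift upward over time (synthetic data).
cheese_kg_per_capita = 13.0 + 0.3 * (years - 2000) + rng.normal(0, 0.1, len(years))
bed_deaths = 300 + 25 * (years - 2000) + rng.normal(0, 10, len(years))

r = np.corrcoef(cheese_kg_per_capita, bed_deaths)[0, 1]
print(f"Pearson r = {r:.2f}")  # near 1.0, yet cheese does not cause the deaths

# One standard check: correlate the year-over-year *changes* instead of levels.
dr = np.corrcoef(np.diff(cheese_kg_per_capita), np.diff(bed_deaths))[0, 1]
print(f"r of first differences = {dr:.2f}")  # typically far weaker: the shared trend did the work
```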
There are certainly... I liked it when they had the universe up in one of those earlier presentations: there's certainly more that we don't know than we do know. And the thing that's interesting about running an organization is that you can't go into a boardroom and say, oh, there's more we don't know than we do know. They want to know.

They want to have some security.

That's exactly right. So, in the area of security, especially with these deployments now, the Internet of Things, and just the scale of the deployment.
I mean, people pointed out in the conference a couple of times: these are things. Even though we don't think of them that way, we think of thermostats and RFID tags, but these are things, and we already have this massive deployment. One of the challenges, I think, is that exponential increases tend to blow systems apart along any vector that's relevant in the system. And we have these exponential increases in so many elements because of the Internet of Things deployment, the attack surface as people call it; the information coming in is almost a denial-of-service attack on our attention.

There are so many things going on now that are exponential increases. I used to tell people, I'll take a good correlation any day.
It's an interesting notion, and Tom's presentation and others really showed that constant desire to know. Not just for risk, but out of intrinsic curiosity as people: just to know what's going on.

And also because of the value of knowing; entrepreneurial value.
Well, sometimes it's not even about knowing, but about being assured; I think that was a big part. You want to make sure everything's fine, even if you don't know the details. What really struck me was his explanation. It was such a clear-cut thing, and everyone knew it, but his words were so clear, and it finally opened my eyes, when he said: okay, you would never put a red light on a traffic-light rating in any audit, because a red light is failure, and you don't want to give people a feeling of insecurity; and you also don't want to show people that you may have been part of the failure, or even the cause.

And you would never put a green light, because then your budget wouldn't continue to be what it was before; and also, if something happened and you had put a green light before, that doesn't make a good impression on people either. Why was there a green light, and then we had this incident? You don't want that question. So you continuously go amber, just to reassure people.

So the emotional part of the preconception of risk was a second point that I think was explained well and really struck me.
Absolutely. A couple of other observations on that. We were just talking about it, and it came up in a couple of other settings: the idea of failure. It's really interesting, because failure immediately causes people to contract inward, unless you're in Silicon Valley, where the idea is fail early, fail often. And failure is such a system-dependent, or scale-dependent, question.

It's like Edison said: finding 99 ways not to make a light bulb was not failure. From his perspective, the failures caused the success to happen. And if you look at all the failures that happened before we figured out how to serve clear water to ourselves, or make a tablecloth, or a bottle, these are things that are lost in history; but really, a culture of innovation requires some degree of experimentation, and there is going to be failure.

And it would be interesting if there were a way to have a narrative where the failure was embraced more. And that gets to an issue we were talking about in one of the earlier sessions: insurance.
We talk about risk mitigation, and everyone wants to mitigate risk at the front end. Everyone wants to say, I don't want to experience the harm. But the problem is, we have complex systems that are going to have non-linear behavior. Robert was saying earlier today: the unknown unknowns. You can't plan for the unknown unknowns; even if you have unlimited resources, they're intrinsically unknown. So this is one of the things we started vetting there and didn't have a chance to develop too far.

Even if you can't prevent unknown unknowns, the idea of insurance lets you collectively decide: look, we're going to get hurt by things we don't know about.

Let's share the burden. And in a way, that gets toward mitigating failure as a bad thing, because whatever group you have in that insured pool is going to recognize that failures are going to come, or externalities are going to come. So again, that's one of those practical things that companies can do.
Not that you should just go out and buy insurance, but understand that sustainability is not just about preventing attack. It's also about tolerance of attack, avoidance of attack, absorption of attack. So there are a lot of different ways to address risk without using only the strategies and tactics of limitation. And that's security: one of the themes that I think came through the different sessions was that security used to be more about prevention, about a firewall and an edge.

And as that edge softens, the permeability of the membrane is going to admit more good and bad things. For the organism of the organization to continue, that tolerance has to be there as well.
Yeah, I think that's one of the major points there. Then we had Professor Grimm discussing risks of privacy, and I think his key idea really continued the story we are trying to tell with our track. He first explained very traditional methods of privacy, very legal approaches: the data minimization principle, which leads to the duty to grant least access rights, and to keep only as much information on people in a system as the system needs to run, without crossing a line beyond which you couldn't use the system anymore. Things like that. Then he talked about privacy-enhancing technologies, explaining that in certain parts of big data applications, and even in any network today, you couldn't guarantee privacy according to those legal measures without using technology again.

So law is only part of the system, and technology is needed in order to realize law in the end. What he added at the end, I think, was very interesting: there are different layers. You have this legal idea, which you are trying to apply to a technical system, and in order to implement those legal ideas, you use another technical implementation on that technical system. In the end, if you break the technique, you will at the same time be in a position to break the law, which makes it more or less a double breach. Yeah.
So in the end, it's privacy-enhancing technologies. And guess what; this, in my opinion, was the most important part: those privacy-enhancing technologies collect information themselves. So in the end, they are a threat to privacy again.

At least they hold meta-information on people: who uses the technology, and so on. So his point, I think, was that even though your intention might be good, using technology in order to achieve certain legal goals or compliance goals may in the end be not only a winning situation but a win-and-lose situation. You win some, you lose some. You win some more privacy; you gain some overview of uncommon behavior, and with it maybe some ability to spot likely misbehavior or misuse of information; but then again, you're scanning through people's behavior.

Yes. And that is actually part of what you're trying to protect. Yes. So it's not a clear-cut case. And privacy is not a scientific idea; it's man-made, and it was made before we had all these opportunities. I think he did a nice job of showing us, again, how theoretical the privacy approach is.
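As a toy illustration of that win-and-lose point, here is a small hypothetical sketch: a pseudonymization service, a classic privacy-enhancing technology, that never stores the raw identifier, but whose own audit log still accumulates meta-information about who looked at whose data and when. All names and details below are invented for illustration.

```python
# A toy sketch of the point above (hypothetical names throughout): a
# privacy-enhancing service that pseudonymizes records, but whose own
# audit log accumulates meta-information about people and their users.
import hashlib
from datetime import datetime, timezone

PEPPER = b"demo-secret"  # in a real system this would be a managed secret

audit_log = []  # the PET's own data collection: the "win and lose" part

def pseudonymize(user_id: str, requested_by: str) -> str:
    """Replace an identifier with a stable pseudonym, and log the access."""
    pseudonym = hashlib.sha256(PEPPER + user_id.encode()).hexdigest()[:16]
    # The log never stores the raw ID, but it still records who looked at
    # whose data, and when: exactly the metadata the talk warns about.
    audit_log.append({
        "pseudonym": pseudonym,
        "requested_by": requested_by,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return pseudonym

pseudonymize("alice@example.com", requested_by="analytics-job-7")
pseudonymize("alice@example.com", requested_by="fraud-review")
print(audit_log)  # two entries profiling interest in the same person
```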
And that made me think about how far the system really is up to date.

Or whether we should maybe start to protect identities rather than privacy. Because privacy is pretty one-dimensional: you have that privacy thing, you want to be left alone, maybe, or you have various understandings of privacy. But identities, you can have many of, and I think that's a very modern interpretation. On Facebook or Second Life or wherever, you might have an identity there, or even more than one, while in the end you're always going to be one human being. And of course, privacy tries to see that and protect you as one holistic person.

But I think we have different roles, and especially within IT, when we talk about profiles and roles and access rights, there's no way around maybe shifting this legal perspective of one single privacy for one single person.

I think you are so many, and so many of you need to be protected.
And the level of protection needed really differs by the situation you are in: the more private or intimate the situation, the more protected you want to be; and the more public the situation, the less protected you want to be, because you wouldn't even expect privacy in public. And then there is a personal feeling of risk for people being intruded on in their privacy; that is what privacy laws try to protect you from. But then again, I think it's more your identity, and less your privacy.

It's a certain kind of identity when you're sitting at home watching TV, and it's another identity when you're on the computer doing certain things. So I think that added nicely. Even though, honestly, in the beginning I thought, okay, this is going to be a very legal interpretation, he really made the turn to a very modern understanding, and it moved a bit away from the risk-from-a-network-or-corporate perspective, more toward the citizen's perspective of being protected. And I agree with that attitude, because we're all human beings, and all citizens in the end.

That's part of the picture. Yeah.
Well, that's so interesting, that notion of humanness. One of the things you read about in science fiction, and experience in life, is the question of the paradoxes of being human. And privacy itself is a paradox: privacy and security.

One person's insight is another person's intrusion. And everyone wants to externalize their costs to someone else. So I want total insight and no intrusion; but you want total insight and no intrusion; well, it meets in the middle. Right?
And when you were saying that, it occurred to me: a gradient of privacy, an umbra and penumbra of privacy, and an umbra and penumbra of intrusion. You should expect a certain amount of intrusion living in a social structure, unless you're living alone on a deserted island. And our identities: there are various theories of identity, like Erving Goffman's, that would suggest identity is a social construct.

You don't have an identity if you live alone on a deserted island; you go feral.
And one of the things that's interesting, working a lot with engineers and technical people: as lawyers, we wouldn't have a job if it weren't for risk. There would be no such thing as law.

I mean, the reason you have laws is that people don't behave entirely predictably; law is supposed to encourage some predictability. So we love risk. Bring it on.
But the risks are really intrinsic to that humanness. And in part of the security discussion there's an underlying desire for perfection, and perfection is not broadly available in the world. We all want to shift the cost of the differential between our expectation and the reality, which is the perfection measure; we want to shift that cost to someone else, all the time.

Part of what's going on, I think, in the investment profile of security is the question of whether there's a way to create a narrative that allows the board, or whoever has the budget in the division, to internalize costs. And it goes toward those notions of the commons. If you think of risk as a commons, you don't want to be the fool who over-invests, who over-restricts their own grazing activity so that there can be a commons, while everyone else doesn't follow suit.

You want to be the person that everyone free-rides on, to use the commons notion. And in a company it's that same issue: you don't want to be the person who made the investment that turned out to be foolish, where there wasn't a return on investment. That dynamic is really at the core of a lot of what's going on, in terms of people's attention, and not just money: resources, educational resources, preparation resources. It's happening at every level now.
And that's part of the reason it feels like the governance needs to be more collective governance: no one entity can carry the entire burden of the unknown unknowns. And the creation of a common narrative embraces the levels, or eliminates them in a way. Because if you remember that Paul Baran diagram of the distributed system with all the nodes: how do we get one node in that system to make an investment in security for the entire grid?

It's much easier in a centralized system, because all edges connect to one node, the CEO or the person with the budget in the case of a company; in the political context it's a slightly different vector, but it's that same hierarchical notion. We have gone to an entirely distributed information system without yet changing governance structures, and by governance I mean law, investment, and all the decisions that are being made.

So we have this challenge of how to invite the nodes collectively to make the investment of attention and resources into securing all of them. The primary way to do it, or one way to do it generically, is to really identify those things that can't be done unilaterally by a node. That's how people engage in collective action, through a company or a government or a culture: they say, I just don't have the leverage and risk-reduction ability by myself.
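That node-investment problem can be sketched as a small public-goods game. In the toy model below, every number is an illustrative assumption: each node's unilateral best move is to free-ride, even though everyone investing leaves every node better off than no one investing, which is exactly why the investment can't be recruited one node at a time.

```python
# A toy public-goods model of the node-investment problem described above
# (all numbers are illustrative assumptions, not from the talk).
COST = 10.0        # what one node pays to invest in security
BENEFIT = 3.0      # risk reduction each node enjoys per investing node
N_NODES = 8

def payoff(i_invest: bool, others_investing: int) -> float:
    """One node's net payoff given how many *other* nodes invest."""
    total_investing = others_investing + (1 if i_invest else 0)
    return BENEFIT * total_investing - (COST if i_invest else 0.0)

# Unilateral view: whatever the others do, not investing pays more ...
for k in range(N_NODES):
    assert payoff(False, k) > payoff(True, k)

# ... yet everyone investing beats no one investing, for every node.
print("all invest :", payoff(True, N_NODES - 1))   # 3*8 - 10 = 14
print("none invest:", payoff(False, 0))            # 0
```

The structure holds whenever the shared benefit per node is smaller than the individual cost, but the cost is smaller than the benefit summed across all nodes; a common exigency, like the disasters mentioned next, is one thing that changes those payoffs.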
And that's why I say that those disasters can be very helpful: they're a common exigency. Should we go to the next one?
Yeah, please.
So, the people issue. We had an early session on this, and I won't go through all the specifics, since we've just been talking about the human issues, but there's also the recruitment notion. Customers used to be something you simply sought to have; it was kind of a one-way relationship. Even when people talked about customer relationship management, what they really wanted, historically, was: how can I keep this person on the line so they keep buying things from me?

Now, with security, and especially with the Internet of Things, it really is a different notion, because many of the products themselves need recruitment and continuing behavior: not just purchasing behavior, but other behaviors, for things like security. And as the attack surface has become the functional surface of services and products, there really isn't a separation between security and the intrinsic value of the product.
And because these systems are, I think we have a slide on this later, the tools-and-rules notion: they're sociotechnical systems. In my keynote, I think it was, I had a slide of a person driving a car. The person and the car have to work together; both have to be functional for the car not to hit somebody. If the brakes fail, you can have an accident; if the person's drunk, you can have an accident. So the notion is: how do we create reliability in people?
One of the notions is that in law, obviously, we've been trying to recruit people into reliability for thousands of years. Hammurabi's Code recruited people with a narrative of: your hand will be cut off if you steal something.

So that's the penalty type of idea. Obviously, companies don't want to go around suggesting they're going to cut people's hands off, so the incentives loom larger.
And so one of the things that's very interesting is that you get this dynamic of recruitment both for making the products more reliable and for competitive differentiation among your peer companies: trying to find a way to create that workforce, or customer force, out of your customer base.

So it goes beyond customer relationship management in its traditional fashion, but I think there are some really interesting opportunities for the companies that are in a position to engage their consumers, and to treat consumers really more like employees. I know Ian from Gartner said in his presentation, don't confuse your customers and employees, and that's exactly right. But don't assume they won't have some mixed roles; I think that's an important message there. Yep.
Should we go on?

To the sessions? Yeah, let's go on.

It was cloud risk assessment.
Actually, I found it very interesting, because we had three seemingly very different approaches to how you could carry out a cloud risk assessment. All of them, as we expected, discussed legal issues, technical issues, organizational issues; and among the legal issues, very specific sub-issues, such as contractual issues or other general legal issues.
In the end, it was pretty similar, even though they had completely different educations, backgrounds, views, experiences, and so on. In the end, I think we all agreed on it being very subjective, because the first question you have to pose is: what's your risk appetite, mentally going back to the preconception of risk. I think that really is the most important point, because if you're open-hearted toward risk, and you accept a lot of risk, then you're going to go fairly easily through a cloud risk assessment, because you're saying: well, the cloud gives me a very high level of IT security that I couldn't even provide myself; that's why I go into the cloud; everything will be delivered to me; that's really nice; I don't feel like I'm going into a risky situation.

And then maybe you do a cloud risk assessment without taking that perspective, and instead say: okay, you have a very high level of IT security, that's very nice, but that's not the conception of risk. Your risk is a legal risk, not a technical risk.

Your risk might be an image risk: you're losing all the power of your brand, nothing is going to work afterwards, and that's a question of life or death for your company. So it really depends on various metrics, such as the brand, the size of the company, and, most decisively, the appetite for risk. Yeah. And that merges all of that together.
It depends: who am I? Where am I heading? What kind of information am I possessing? What kind of ownership do I have over that information? Is it my information, or a third party's information?

What is the need for cloud; why would I need cloud? Do I just want to save money and other resources, or am I looking for a higher level of security in a technical sense? Both will probably work, but in between there are so many issues that may or may not count for a certain institution or company. Absolutely. So I think, in the end, that was really the point: it really depends on your appetite.

If you have no appetite for risk at all, and that's the lawyer in me again, and I guess you agree, then you probably should not go to the cloud, and the risk assessment wouldn't be successful.
So it's a relative question.

It's a relational question: how do you relate to risk, and how much will you be able to take? And where are the parts where you want to make sure there's no risk?

You can take mosaic parts where you want to make sure there's no risk, like the technical part, and then the legal part, and so on. If you want to put effort into all of those, you're going to be fine; but then you're probably circling around your cloud risk only, and everything you wanted to gain from the cloud you've lost, because you're so busy finding that solution. That's right.
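One way to picture how the same cloud offer passes or fails depending on appetite is a simple weighted scoring sketch; the dimensions, scores, weights, and threshold below are all invented for illustration, not a method presented in the session.

```python
# A minimal sketch of the "it depends on your appetite" point: the same
# cloud offer scores differently under different risk-appetite weightings.
# Dimensions, scores, and weights are illustrative assumptions.

offer = {"technical": 2, "legal": 7, "reputational": 6}  # 0 = no risk, 10 = severe

# Each profile weights the dimensions by how much that risk matters to you.
profiles = {
    "startup, risk-tolerant": {"technical": 0.6, "legal": 0.2, "reputational": 0.2},
    "bank, risk-averse":      {"technical": 0.2, "legal": 0.5, "reputational": 0.3},
}

for name, weights in profiles.items():
    score = sum(offer[d] * w for d, w in weights.items())
    verdict = "acceptable" if score < 4.0 else "needs individual terms"
    print(f"{name}: weighted risk {score:.1f} -> {verdict}")
    # startup: 3.8 -> acceptable; bank: 5.7 -> needs individual terms
```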
So again, here it's balancing, as you were mentioning before, with the preconception of risk.

You know, it's interesting too; it relates to our later sessions on the contracting solutions. Because if you think about it, when you have humans, you have their exercise of discretion, and if it's inconsistent with your business plan in some way, that's not good. So the thing that's interesting about the cloud situation, where we've talked about the benefits and the detriments: you lose some control because you put things into the cloud, but you gain some control.
We talked about reliability, but in a way, the contract is the filter. You have people working in both your cloud service provider and your own company, and in a way you're trading off the discretion; the contract is a filter.

And then you have someone to blame. Whereas with your own employees, you can blame them, but if they're judgment-proof, you don't get any return. It's like the traders on Wall Street: they lose a hundred million for a company, and the company fires them.

But so what? They don't get the hundred million back. So when you have a contract, at least it's a filter, which offers a more objective way of addressing that risk, in a sense. Yeah. Let's go to the next one.
So, we talked about scaling issues. It was a really interesting discussion.

The idea of scale is, first of all, a caution about using the same metrics at different scales, and relying on the same kind of institutional constructs at different scales, and an understanding that different strategies are really necessary for those different scales.

So, as it says there, the careful application: when you go back to your respective organizations, ask whether there are situations where we're using a solution just because we have it, and redeploying it in places where we perhaps shouldn't. And that also goes, again, to one person's own thinking, an employee or someone who's running an organization: are they thinking about the solutions in a way that misapplies them in a different context where they wouldn't be appropriate?
And those levels of analysis relate; it's like vertical integration and horizontal integration of decision making. And those levels are very difficult. It's like a supply chain kind of problem: you can't see up and down too many levels.

It's, again, that internalizing versus externalizing of costs. The cost of attention for trying to understand all the different levels simultaneously is really high. And it really puts a premium on communication and education, so that you can promote discussion between the levels of the organization, and it doesn't become a burden on just one level to try to figure out all the levels; there's a more natural flow of discussion. I knew a Boeing engineer one time.

He was one of the "golden 100" at Boeing; apparently those engineers get to decide what they want to work on, and he wanted to work on systems of systems.

I said, that's what I work on, in a different way, not engineering. And I asked, what do you do first? And he said: find a way for the systems to communicate with one another directly. It's the same thing with the levels idea: if you can promote structurally, as a default, that levels can talk to each other, then it doesn't need as much maintenance and monitoring. Let's go to the next one.
Then there was the EU privacy regulation, and there were actually a couple of people discussing various issues of the privacy legislation. There were quite a few lawyers around again. Quan was explaining seals and certifications.

And actually, to be honest, when I consult companies, people say: I don't want a seal; maybe I want a technical seal, like an ISO certification, but why should I have a privacy seal? People are very skeptical, I think, about privacy seals and certificates so far. Why invest in that, unless maybe you are highly reliant on trust? But then, if you're a bank, people trust you anyhow, because it's a bank. That's how people think; it's not very sophisticated, but that's what I see today.

However, the provisions that we see in the GDPR will bring up the idea of seals. And unfortunately, it's not decided yet.
We'll probably learn during the course of this year what kind of seal this will be. Will it be something very roughly preset by the government, or something where private entities will be able to put forward an idea and a set of criteria for what's worth a seal or a certification? We don't know yet, but for sure we will have it in real life.

And even the codes of conduct, where whole branches commit to certain levels to live up to, will go in the same direction. So we will make things more transparent and care about privacy, not only about security. That's the idea of the whole thing.
So that was an important part. Then we had, well...

Let me just make a point on that one for a second.
You know, it's interesting, because I'm working with the US NSTIC and IDESG initiative, and initially they're starting with a self-certification regime with a listing service. So there won't be a certification mark; they call it a trust mark in the program initially. But you see how there's a convergence of those solutions, right? There are different jurisdictions, and they may have different requirements, but ultimately what they're looking for is people, or institutions, making promises, and those promises being captured in a seal.

And as Robert was saying earlier, with the notion of signal-to-noise ratio, a certification mark is a perfect example of that. If I have several cans of tuna on the shelf, and one says dolphin-safe tuna with a certification mark on it, I'm going to be interested: out of all the noise of my purchasing transaction, it gives me some information that might be relevant to my purchasing decision. And it's the same kind of idea in these marking programs; it's not about dolphins in that case, but about the humans we're close to.
Dolphins.

No... well, sometimes, on a good day. But that notion: it's interesting how there's a convergence from a process perspective, even if the requirements differ. And one of the things I'm really enthusiastic about, especially given that this conference has international representation, is trying to find opportunities, at these early stages, even if the cultures are producing different initial requirements for the marks, to identify opportunities for overlapping those requirements.
Because as you know from technical standard setting, even if you have 1%, 5%, or 10% technical interoperability, it's better than zero. Similarly with policy interoperability: even if you have some interoperability in the requirements, that will be beneficial.

Those of you who have been in the area for a while know the fair information practice principles; they're represented, either softly in policy or sometimes in legislation, throughout the world. And that's an example of what policy interoperability looks like, very similar to technical interoperability, particularly in sociotechnical systems like the ones we're talking about here.
So the idea that technical interoperability can solve all the problems is literally like rowing with one oar in the water.

When you have sociotechnical systems, to recruit that reliability, you start to look at the policy, and then at the government-led, or in the US privately led but government-supported, opportunities for policy interoperability. That's a fantastic opportunity for us to find ways to lower costs, lower risk, and increase leverage on an international basis.

For those who think these systems are going to stay national or regional or state-level: there's so much border crossing going on that it's absolutely critical that, at the front end, we make the policy defaults as close to each other as possible. And it's not just a matter of commerce; it's a matter of state interest as well, because it raises the expectation of more similar and predictable behavior, and lowers risk.
Yeah, lowers risk. One more thing: we were not only listening to the explanation of certifications and seals.

We also heard Hanen talking about the privacy impact assessment, and that's something I'm really missing in practice, because it's so evident that an organization needs it. There are so many stakeholders within the company at the moment it's about to decide what the next IT system is that it might want to implement.
I think it's simply unprofessional behavior that I keep meeting in companies: it's always IT in the driver's seat, implementing an IT system without really having the opportunity to communicate enough, at least with whoever will use that system later on. When I ask an IT guy, so, what information is in there?

He keeps telling me, in consulting situations: I have no idea, I'm not the user. Right.

I'm just the IT guy. And if I ask the same thing of the operating department, they say: yeah, I know parts of it, but if you want to know the details, it's the IT guys who know what's in there; they can have a look. And I say: no, that's not true, they wouldn't know. So it's a question of communication, and a privacy impact assessment would really support the communication part. Yes.
I think it would make things more secure. Certainly it's a raise of cost, but it's also a raise of attention: attention for whatever you're doing as a company. Yeah.

So it's a self-check. Yeah. And this will help as well.

Yeah. And in the United States, the agencies of the federal government are required to do privacy impact assessments now. So it's not an alien concept in the United States; it just hasn't been deployed in a commercial context.
Yeah. Right. So you're telling me Europe is trying to copy American privacy law, in a way?

We should all be copying everyone a little bit. Right.

Okay.
So we go to the cloud contract, the private law. Yeah.
So, just along those lines, recognizing that there are differences between countries: one of the things we're regularly called upon to do, as lawyers practicing in international contexts, is build bridges of contracts. To use the example from earlier: there's the Tokyo, the London, and the New York stock exchange, and exchanges in other countries, in Germany as well. They each conform to local law, but each is tied to the others with contracts, forwards, swaps. Now, that is good and bad.

2008 was a domino effect of cross-defaults of contracts, essentially; that's what happened, from a legal perspective. Right.

So the tying together of relationships has, again, advantages in terms of leverage and risk reduction, but it raises other risks at other levels as well.

Right?
Well, I had one major takeaway from that session. I think it was stunning how clearly everyone agreed on the fact that if you're not willing to pay a little more when you go into the cloud, you will have just an average cloud, which might be fine for whatever your purpose is, but, legally speaking, it's very likely that you're going to be non-compliant. What I want to add to that is that most people I talk to don't realize that they're on standard contracts. They believe: I'm giving data away, but I have this high data security ongoing, it's technical security, and they feel secure after all.

So what they're doing is mixing up technical security with legal security, just because they know the technique is quite okay, and there's access control and all of that going on in the cloud, plus they have a contract signed.
And just because there's a contract, they believe things are okay. Again, it's very emotional, because most of the time people won't even check whether the contracts are okay or not. That is purely taking risk: not checking a contract is as good as not having a contract at all. And what you do is give away your data, and there is someone else able to administer it, to work with that information, to pass it on to third parties, and all of that. Certainly that's not your expectation, but it is possible.

And a lot of the time your service provider will be required to do so, for example by governments, possibly ones other than those in your home country. So I think, in just taking an average cloud offer without it being highly individualized, people are taking the largest risk, legally speaking, because they are so happy to have a nice solution, and so happy not to spend that much money, that they think everything will be fine, because it's a professional environment.

So it would be nice if there were no incident in the future, no hacking and so on, and that might even be the case, because the cloud in lots of cases is pretty secure. Right. But if there were a hack, you wouldn't be secured. The second layer is missing, and people don't care about the second layer because they're so happy to have the first.
Yeah. Let's go on; we're almost done. So the reason we don't have anything filled in here is that these sessions occurred right before this one, and we haven't had an opportunity to fill them in. But very briefly, because we're at the end of our time: the discussion here was really interesting. It was kind of mislabeled; it was called "software-defined everything" in the schedule.

And it was about standards, but it was really about the idea of a software management level: what that means, what the implications are, and the human-machine interface in terms of self-management. One of the things that was quite interesting there is the notion that the system has effects that are going to be outside the realm of human ability to perceive. So the question is, again, metrics: what kind of window on operations can be given in software-defined systems, so that people can still make decisions in those systems, at the speed at which they move?
Let's go to the last one. And again, we didn't fill this one out because it happened right before this session.

But again, that idea of metrics: we talked both about access management metrics and then about general metrics. A very interesting discussion, because so much of this has to do with what you measure, and the old expression: what gets measured gets done. But measurements can sometimes overwhelm decision making, so you have to be very careful about what measurements mean and how they're applied. It was a very interesting discussion.
And again, in the interest of time, I think we'll wind up. Any kind of final thoughts on it?

Well, I just know it's very difficult to find a map for risk, really, because it's so subjective in the end; it really depends on your own situation. And I think what we've done is more or less found something like a broad checklist of issues that you should check and things you should be aware of.

But in the end, I was overwhelmed by the thought of how important communication within the corporate setting is. People do not talk enough with each other about where they want to go and how they want to achieve their goals. I think that was part of almost every session.
Yeah. And for me it was, again, that governance point: essentially, don't look for governance to be happening in the future.

This conference itself is part of that governance: the agenda setting, the problem identification. Really try to embrace that as part of your information flow when you go back to your individual companies or organizations, and understand that there's a recursive element to it, that we can develop together at future conferences; it really is participation. And thank you all for your participation in the conference and in this session. So thank you.

Thank you very much.