So we used a mapping analogy for this track; in part, you'll notice the word "map" pops up throughout. The idea was that we were thinking about how to convey the complexity here. Everyone's been talking about all the complexity, different kinds of issues, very orthogonal kinds of issues, and what we realized is that mapping in cartography has some similarities. So we wanted to play with that a little, just to make that complexity more accessible. Here are just a few examples of different maps.
You have average-temperature maps, political maps, continent-wide versus country-wide maps, topographic maps. They're all projections of the same geographic space, but they convey different kinds of information. Now we have a situation where geography is less important. Part of the idea in this track is that we're moving from what was a physical world.
We've made that move to the information world, and in that information world we've talked about security in some different ways. Now "interaction security" is a theme that's been bubbling up at the conference already.
It's the idea that it's really the edges, not the nodes, that are being protected, in a way. And what does that mean?
This overlay of the Facebook relationship map over the geography really makes that transition visible. And we see it in law, too: Carson is an expert in intellectual property.
Well, intellectual property is a collective imagination that we come up with in order to render something intangible moveable, tradeable, and usable in commerce. That same transition is happening with data. The challenge with data in particular, and information more broadly, is that we don't yet have the equivalent of those intellectual property constructs.
They don't apply. So we don't have a shared understanding of what's going on. So in this track, for anybody out there who has ever heard of the four color theorem in map coloring, it's a topology question.
You can render any map, any set of countries, with at most four colors. Anyway, that's a nerdy tease, with the four colors being used here in our mapping function. This lays out our next two days in the track, just to give you an idea of what's going on. So today, I guess I can look down here: we've already started talking, and Hans gave a wonderful introduction to the idea of these new risks and what the landscape really looks like now.
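As a small aside on that four-color idea, here is a minimal sketch of what "proper coloring with at most four colors" means. The region names and adjacencies are invented for illustration; four mutually adjacent regions are the case that genuinely needs all four colors.

```python
# Illustration of the four-color theorem: any planar map can be properly
# colored (no two neighboring regions share a color) with at most four
# colors. The regions and adjacencies below are invented for the example.
adjacency = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C"},
}

def is_proper_four_coloring(adj, coloring):
    """True if no two adjacent regions share a color and at most 4 colors are used."""
    if len(set(coloring.values())) > 4:
        return False
    return all(coloring[region] != coloring[neighbor]
               for region, neighbors in adj.items()
               for neighbor in neighbors)

coloring = {"A": 1, "B": 2, "C": 3, "D": 4}
print(is_proper_four_coloring(adjacency, coloring))  # True
```

Checking a coloring is easy; the hard part, and the content of the theorem, is that four colors always suffice for any planar map.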
And the fundamental challenge we're all dealing with here is that risk and value go hand in hand: there is no risk until something is valued.
You know, think about geography. There were areas called wastelands, and they were called wastelands not because they had been laid waste by humans, but because there was no use for the land. As wastelands became useful, because technology needed different elements from the land or something like that, there was value there, there was risk associated with it, and property rights were laid over those wastelands as well. So we continually have risk following value, wherever value occurs. Emerging risk concepts are what we'll be talking about in "Preconceptions of Risk", the next session. Do you want to mention anything about that one, the preconceptions?
Right. Could you click?
Well, that's what you were just talking about. Yeah. "Preconceptions of Risk" is the upcoming session, and we'll have two talks.
We have two experts here looking at preconceptions of risk from different perspectives. First, we're very happy to have Tom Langford, the global CSO of Sapient, and he will talk about preconceptions of risk; I think it's more or less about flushing away your preconceptions of risk, if you will. Then we will have the perspective of Professor Grimm from the University of Koblenz-Landau, and he will discuss more or less the same issue, but from a different perspective: it's more about understanding privacy and the risks to privacy.
So it's a slightly different perspective, but I think the two will add up very nicely to help us understand our preconceptions of risk today, and how we may need to adapt for the future, from both technical-practical and organizational-legal perspectives.
Oh, sorry. One more thing about the preconceptions: when you hear people encountering challenges, think about how many of those preconceptions really come from existing institutions. One can think of institutions as, in a way, formalizations of our preconceptions. Let me go back to this for a second. This afternoon, I'll go through the slide and then we'll move on. This afternoon we'll be looking at managing issues, managing digital risk. We'll talk about people.
We'll talk about cloud outsourcing risks, the new risks of insourcing, and also issues of scale. Again, these are different collections of issues, but they all have to do with the idea of managing risk. We'll go through the detail in a moment, but tomorrow morning, as it says, we'll talk about designing and developing systems, and that's primarily where the focus is.
There are other tracks that talk more about the technology. So many of the issues we deal with are people issues: when I was in private practice, I dealt with 48 different data breaches, and I never saw one that was a technical issue; it was always a people issue. Somebody left their laptop on a bus, or somebody's boyfriend had a drug problem, so they stole the credit card numbers. So the people issues are very important. And the way people are rendered reliable is through duties. Those duties can be norms, ethics, and laws; the formal duties, laws, come in two flavors, public law and private law. We'll talk about those sessions in a moment.
And then lastly, tomorrow afternoon, we'll talk generally about managing digital risk and the other elements of that, including technology standards, and the idea Hans was talking about in terms of reliability: what standards do is render technologies, or our expectations of technologies and systems, reliable.
We'll talk a little about policy standardization and what that might look like in public and private law as part of that discussion. We'll talk about metrics, and then we'll put it all together at the end. So let's go into a little more detail.
In the people-and-risk-mitigation session, what we're primarily going to talk about, both from a management perspective but also, importantly, as we started to touch on in the questions after Hans's session, is recruitment of populations. What about recruiting your employee population when they bring their own devices? What about recruiting the larger population for the Internet of Things? What does that look like? If security is a people issue, how do you get people together to do something like the incredible task of trying to control risk in these contexts?
Session four is on cloud risk assessment. Being a lawyer, having a legal background, I think assessments are a very good tool, if you will, not only for understanding your own risk, but for preparing: having an assessment already means going into action, seen that way. I think we'll see three approaches that are comparable in the end, comparable because they take the same perspective on understanding the risk of cloud, but in detail certainly not comparable: three different ways you could approach your own risk assessment.
We have colleagues from KPMG, John Hermans and a colleague; we have the Fraunhofer Institute, represented by Maria Hoffman; and we will have Mike Small of KuppingerCole. All of them are experts in understanding the risk of cloud. There's a technological perspective, a legal perspective, and certainly an organizational perspective; liability, I think, is an issue that counts for all three of them. We will listen to a 10-minute presentation from each before we have a panel with all of them.
And I'm very excited to understand the details of their approaches even better, because we all know you can do a hell of a lot, or you can do a little less, and maybe in the end you're even smarter doing less, because you still want to see the wood even after you've seen that many trees, right? That's going to start at 3:30 and takes an hour. Then we'll have another session at 5:30, and that's going to be your lead.
Yeah.
So you may have seen this slide if you saw my slides yesterday: it's from the Powers of Ten film, where they pan out from a person having a picnic all the way out into the galaxy, the idea of scale and order-of-magnitude differences. We'll be talking about the effect of macro-level risks on enterprises and on individuals; it's implicit in all of the discussion we've been having. These scaling differences are part of the challenge: how do we make systems that we deploy at one level fit for purpose at other levels? How does it work when a person wants to interact with technology that's very complicated and can be deployed with a lot of resources? How do we create interfaces and allow people to be, again, recruited into the security community, in a sense, across those risk levels?
We'll talk about that; we have people from a variety of organizations who are actively involved in that kind of effort. And then, public law.
Then tomorrow morning, starting at 11 o'clock, we'll have another perspective on law, with more on the EU privacy legislation. Those of you who have listened to Quan and myself introducing some of the major thoughts of the EU regulation, and thought maybe that's something you should wonder about in a bit more detail, make sure to be there tomorrow, when Quan again takes up the thoughts from her keynote. Also, we have Andrew Luman from Dell Software, who is giving a presentation on the role of privacy by design, which is one of the core elements. So we'll have three perspectives after all, with Han Davis from Data Witchcraft (she's a lawyer as well) introducing privacy impact assessments. That's something that will be an issue for all organizations introducing software, whether just as an internal tool or selling it on the market. Privacy impact assessment is something that has been going on in the UK for a long time.
So it's a truly British idea, another approach via assessment, and she will explain that. Quan, by the way, will introduce more of the idea of seals. So we'll have the seal issue, the privacy impact assessment, and privacy by design: the three core elements of the GDPR that are going to be introduced there, starting tomorrow at 11.
And one of the things that we want to highlight: we talk about technical interoperability, and you folks are aware of that kind of technical standard; start to think about policy interoperability. One example is privacy impact assessments: in the US, government agencies are required to do privacy impact assessments. So there is a policy interoperability opportunity there.
When you start to have these requirements, these duties, put on different populations in different jurisdictions, that's a potential bridge for some of the issues we're talking about. Then we'll talk about private law solutions.
Yes, private-law solutions to risk will be more on the contractual side of cloud computing. We'll have various people participating in a panel; we'll have the industry there with many participants. And, looking at the time,
yeah, I don't want to introduce everyone, just give you an overview. That's going to take place at 3:30 tomorrow.
And that contractual discussion, if you think about policy interoperability again: I was a tax lawyer for many years, and we did venue shopping, basically; you put transactions into an advantageous jurisdiction where you get better tax treatment. Similarly here, private law allows us to bridge jurisdictions. It's much harder to change legislation than it is to write a contract.
So there are ways to create new duties that are consistent with local law but then create bridges, and I think that's going to be a very interesting discussion. We'll then turn to technology standards. Briefly, the idea is: how do technology standards interact with the management of risk? A set of standards makes certain things more reliable, but it can also make things less pliable, less changeable, and that can lead to other kinds of risks. We'll have a very nice discussion there.
It's called Software Defined Everything, and it's a policy management and standards kind of discussion. Then we'll talk about metrics. One of the questions I was asking Hans was how we might do metrics for assessing causality when an accident happens, things like that. Metrics are very familiar to engineers, and familiar to lawyers as well,
although we don't deal with them as neatly as engineers sometimes do. That's going to be an interesting discussion: trying to understand what kinds of metrics might be relevant here, either to show us correlations or causes of harm, so that we understand what to measure, because what gets measured gets done. So we want to understand that as well. And then our last session, that's going to be the two of us, right?
Yeah.
So our hope is that we then bring all of that together, just like an atlas brings together a bunch of maps. We don't know; we haven't even prepared that session yet. How could we? We're waiting to get that input and to understand what we're looking at, so that together we can construct something that starts to have some coherence for future discussions.