Alrighty. Well, thanks for your patience. We just wanted to give people a chance to filter in a little bit. So we're going to talk a little bit about scale here. If you went to our earlier presentations, you've heard mention of the idea and the concept of different scales yielding different kinds of risks.

You know, we had a cloud discussion before the break in here where people were talking about the differences in cost-benefit analysis if you're a smaller or a larger company, for instance, going to the cloud; you'd have very different issues there. Yesterday there was a keynote on preconceptions of risk, excuse me, earlier today, on preconceptions of risk, and that same notion that many of the institutions that help us to define risk, and the lens through which we look at risk, were built for different scales of function.

Now, with internet connectivity, the scales of many operations have increased dramatically. So today what we're going to do is have a panel discussion, and we're going to stop promptly 55 minutes from now, at 6:30, because they're going to do the keynotes and then an award ceremony in here. So what I want to ask the panelists to do is to spend five minutes each giving their initial thoughts, and then we'll take questions, unless there are any burning or clarifying questions.

We're going to wait until all the panelists have presented their initial thoughts, for about 20 minutes total, and then we'll have about half an hour for discussion and questions.

So I'll ask each panelist to come up and introduce themselves as part of their presentation, and then we'll take it from there. So thanks. Thanks.
Thank you.
Hi, my name's Ben Goldstein. I'm from the Australian government.
I don't speak for them, though; these opinions are my own. It's important that I say that. I'll start by talking about risk.
My interest in this: I'm particularly interested in cultures of risk. And I think the reason, bring it up closer? Yep, no worries. And I think culture and risk is an issue that does span the different levels of scale.

Wait, I should introduce myself. Where I actually work in the Australian government is an area called Vanguard, and we handle identity as a service, essentially, for businesses and for the Australian government. So I've got that out of the way. And I think working in the public service, in government, gives you an interesting perspective on risk.
I think what I would like to see, and what I work towards in my organization, is a change in culture with regards to risk, to manage risk better.

And just generally, sorry, forgive me, I'm very jet-lagged, but I'll find my feet shortly. I'll just start by saying, here are some things that you can take away. One is that in your organization, you will find that there are people who are more inclined to take risk and people who are less inclined to take risk.

And I'll tell you something: the people who are more inclined to take risk are often the people who don't think that they're going to be around for very long anyway. Because this is the public service, I work with people who've been there for life or who expect to be there for life. They can't afford failure, so they're very risk averse. So I think we need to make a cultural change that encourages taking risks and sees the benefit of taking risks, and doesn't see risk as something that's all on the downside.

I think one way to do that, and you can do this in your own organizations, is this: risks should always have risk owners. Risks shouldn't be avoided; they should be engaged with. And I'll give you just a quick example as I sum up this intro. When I told some people at work that I was flying over here from Australia, they said, oh, you're flying over Ukraine. It's still fresh in everyone's memory

that that plane was shot down over Ukraine. And I said to them:

if I get shot down flying over here, these are my last words on the subject: I am aware of the risk, and I find the risk to be acceptable. That's the kind of thing I think we need to hear more people saying. Thank you.
And also for the slides, can we put up the first set of slides please?
Hello. This is super intimidating, sorry. So my name's Karen Higa-Smith. I am a program manager within the US Department of Homeland Security, specifically under the Science and Technology Directorate, so it's basically the research arm for the department. Has anybody here heard of DARPA? So a lot of people are familiar with the defense research agency.

Well, basically, HSARPA is the Homeland Security Advanced Research Projects Agency, and I am under the Cyber Security Division within HSARPA. We fund the development of technologies and tools that meet Homeland Security requirements, and our Homeland Security customers are not just the DHS components that you hear about, like TSA, FEMA, and Secret Service; our state and locals are also some of our huge customers, as well as the private sector.

So we even had a project with the financial services sector, and we have an ongoing project with the oil and gas companies, all related to ensuring that they're protecting their infrastructure. Oh, thank you. And for anyone interested, you can see a very high-level overview of the Department of Homeland Security Science and Technology Directorate on YouTube; you can just do a search on YouTube for DHS Science and Technology.

So I wanted to start off by talking about some scary statistics: the Pentagon receives over 10 million cyber attack attempts. And then I also got a question from my fellow panelist, my fellow public servant: how did you get out here? Why is DHS coming out to a European conference?

Well, my division director says cybersecurity is a global sport. We're all in this together; we can't just protect our own nation. That's not going to be enough.

The cyber attackers are attacking from everywhere to our country, and vice versa. And I wanted to talk about the operations side within DHS; it's a 24/7 operation called the NCCIC. Oh, and by the way, thank you very much for speaking English, or learning to speak English in childhood, so we can all communicate. But within the US federal government, we also have acronyms. I'm sure this is not new or different from your governments, but we have a lot of acronyms, so I apologize, but I will spell out all our acronyms.

So the NCCIC is the National Cybersecurity and Communications Integration Center, and the President has asked them to lead the cyber threat sharing environment for sharing cyber threats across not just the government but also the private sector. And the NCCIC gets over two terabytes of cyber threat indicators from the banks themselves, the financial services sector.
So I wanted to talk about some of the information sharing environments and how we're supporting securing data and information across various information sharing environments.

So I'm going to talk about a standards-based user attribute exchange environment; then a trust framework that we've funded to be developed for information sharing across the private sector, protecting intellectual property and personally identifiable information; as well as a data tagging technology, tagging PII and making sure that it's either anonymized, encrypted, or deleted; and then a policy reasoning engine.

So first I wanted to talk about the Enhanced Shared Situational Awareness working group. It's with these agencies within the US, where they're sharing cyber threat information, specifically malware data, and they want to make sure that information is shared securely, based on user attributes, whether it's a Top Secret clearance or any other attribute specific to a malware attack. And they want to leverage something we've developed called the Backend Attribute Exchange. It's a SAML-based attribute exchange environment, and it's a federated approach.

So it consists of both an API security gateway and a metadata service, a lookup table, across these agencies. One environment where they want to use this is, again, the NCCIC, where they have these specific technologies called STIX and TAXII. Those are also acronyms, but basically STIX is the standardized syntax for cyber threat information, and TAXII is the way you transport that information. And the NCCIC, and this particular program manager for STIX and TAXII, has sent it over to OASIS for wider adoption of the standard.
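To make that a bit more concrete, here is a minimal, purely illustrative Python sketch of the pattern being described: a threat indicator expressed in a STIX-like structured syntax, and an attribute lookup standing in for the Backend Attribute Exchange that decides who may receive it. The field names, markings, user attributes, and sharing rule are hypothetical stand-ins, not the actual STIX, TAXII, or BAE schemas.

```python
# Illustrative sketch only: field names, markings, and attributes are
# hypothetical, not the real STIX/TAXII or Backend Attribute Exchange schemas.

# A threat indicator in a STIX-like structured syntax. The point of a shared
# syntax is that every participant parses the same fields the same way.
indicator = {
    "type": "indicator",
    "id": "indicator--0001",
    "pattern": "[file:hashes.'SHA-256' = 'abc123']",
    "labels": ["malicious-activity"],
    "marking": "TLP:AMBER",  # how widely the indicator may be shared
}

# Stand-in for the backend attribute lookup: user attributes keyed by a
# federated identity, roughly what a SAML attribute query would return.
USER_ATTRIBUTES = {
    "analyst@fusion-center.example": {"clearance": "SECRET", "sector": "state-local"},
    "engineer@isp.example": {"clearance": "NONE", "sector": "private"},
}

def may_receive(user_id: str, marking: str) -> bool:
    """Attribute-based sharing decision for a single indicator marking."""
    attrs = USER_ATTRIBUTES.get(user_id, {})
    if marking == "TLP:WHITE":
        return True
    if marking == "TLP:AMBER":
        # Hypothetical rule: AMBER goes to cleared analysts or sector partners.
        return attrs.get("clearance") == "SECRET" or attrs.get("sector") == "private"
    return False

# The transport step (TAXII's role) would then push the indicator to each
# recipient that passes the attribute check.
for user in USER_ATTRIBUTES:
    if may_receive(user, indicator["marking"]):
        print(f"send {indicator['id']} to {user}")
```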
So we're also using this attribute exchange model, the SAML-based attribute exchange model, for state and locals. And within S&T we also have a first responder group, and they're trying to build a connected responder, an all-connected responder, where they're protected and connected. I always call it RoboCop, but they said it's more like Iron Man. And they want to make sure that information is shared securely.

And then the Borders and Maritime division within S&T is also building an information sharing environment for better situational awareness, but a lot of the data is sensitive, so they want to make sure that it's secured based on attribute-based access control.

So one of the information sharing environments is building the trust framework that I talked about earlier. It's basically a multi-fleet response, with large trucks being sent to an incident scene.

And I'm sure it's a situation in all countries, but we're sending these trucks with critical resources for an emergency, and there are a lot of private sector entities that are supporting this emergency situation, and they want to be able to share data.

So they've built a portal, and they said, well, how do we gain trust among these sectors, these critical infrastructure components? And the way they're gaining trust is, first, there's this trust framework that says you have to come in with a high-assurance credential, number one; and number two, it's going to be protected based on the fact that you have a certain attribute.

And a lot of this work also involves a lawyer who has built a trust framework that puts liability with the people who are hiring folks: if those people leave the company or get fired, they have to inform the trust federation in time, or else they get fined around $3,000.
So that's part of the trust framework.
So we've all heard, and I'm going to zip through this, but we've all heard of a lot of the privacy breaches involving HIPAA and children's privacy laws, and a lot of fines, millions of dollars being lost for violating privacy laws. And then we have the ACLU going after law enforcement entities across the nation, saying you're violating civil rights and civil liberties and these privacy laws, because you're profiling this person as a criminal.

So what we've developed, for the state and locals sharing information and then doing data analytics, is tagging the PII and making sure we insert privacy into the workflow. So when they do get audited, or the ACLU looks at them and says you're violating privacy, they have the ability to show that they're not, based on this particular workflow and that auditing capability.

So in this particular instance, we're working with the Northern California Regional Intelligence Center, where the private sector is sharing situational, I'm sorry, suspicious activity reports with the fusion center. They're conducting data analytics and then saying, this is probably a criminal. But we're tagging the data and showing that we're either anonymizing it or tagging it so that you can only access that data based on the fact that you have a warrant.
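A minimal sketch, in Python, of that kind of tagging workflow, under assumed names: each record carries per-field handling tags (anonymize, or warrant-only), the access check applies the tag rather than trusting the analyst, and every access is logged for audit. The tag names, rules, and the warrant attribute are illustrative assumptions, not the actual DHS data-tagging technology.

```python
import hashlib

# Illustrative sketch only: the tag names, handling rules, and the "warrant"
# attribute are assumptions, not the actual DHS data-tagging technology.

def tag_record(record, tags):
    """Attach per-field handling tags so the privacy policy travels with the data."""
    record["_tags"] = tags  # e.g. {"name": "anonymize", "address": "warrant-only"}
    return record

def view_record(record, user):
    """Apply each field's tag at access time and log the access for audit."""
    out = {}
    for field, value in record.items():
        if field == "_tags":
            continue
        rule = record["_tags"].get(field)
        if rule == "anonymize":
            # Replace the value with a one-way hash so analytics can still link records.
            out[field] = hashlib.sha256(value.encode()).hexdigest()[:12]
        elif rule == "warrant-only" and not user.get("has_warrant"):
            out[field] = "[REDACTED]"
        else:
            out[field] = value
    print(f"audit: {user['id']} viewed record with tags {record['_tags']}")
    return out

report = tag_record(
    {"name": "J. Doe", "address": "12 Elm St", "activity": "suspicious purchase"},
    {"name": "anonymize", "address": "warrant-only"},
)
print(view_record(report, {"id": "analyst-7", "has_warrant": False}))
```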
So another project, and I talked about this before, is the policy reasoning engine, where we've digitized the laws and put them into machine-readable code. And this is working through the MIT Computer Science and Artificial Intelligence Laboratory.

They're the ones who are building this policy reasoning engine, but it consists of not just computer scientists but also lawyers working together to build it. And it's also, based on the law, connecting to the various resources in order to make a decision about whether they're allowed to share a certain piece of data. So what they've done for this proof of concept is taken the Privacy Act and put it into digital code, and a Massachusetts law into digital code, and then a Maryland law into digital code.

So if a Massachusetts analyst needs to share criminal activity data with Maryland, it goes through this reasoning engine, and it tells the analyst, no, you cannot share, because you may be violating this Massachusetts law. It just gives them an informed decision before they start sharing the information. Then I have a lot of resources; hopefully we can share these slides with you all, but there have been articles posted, and we also have various YouTube videos.
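As a toy illustration of that reasoning step, here is a short Python sketch: a couple of invented rules stand in for the digitized statutes, and the engine checks a proposed transfer against every applicable rule and tells the analyst whether sharing is permitted and, if not, which rule would be violated. The rules and field names are made up for illustration; they are not actual encodings of the Privacy Act or of Massachusetts or Maryland law.

```python
# Toy illustration only: these rules are invented stand-ins, not actual
# encodings of the Privacy Act or of Massachusetts or Maryland law.

RULES = [
    {
        "law": "Massachusetts example rule",
        "applies": lambda req: req["origin"] == "MA" and req["data_type"] == "criminal-intel",
        "permits": lambda req: req["purpose"] == "active-investigation",
        "explain": "criminal intelligence may leave the state only for an active investigation",
    },
    {
        "law": "Privacy Act example rule",
        "applies": lambda req: req["contains_pii"],
        "permits": lambda req: req["recipient_is_agency"] and req["purpose"] != "unspecified",
        "explain": "PII may be disclosed only to an agency for a stated purpose",
    },
]

def evaluate(request):
    """Return (allowed, reasons): deny if any applicable rule is not satisfied."""
    reasons = [
        f"{rule['law']}: {rule['explain']}"
        for rule in RULES
        if rule["applies"](request) and not rule["permits"](request)
    ]
    return (not reasons, reasons)

# A hypothetical Massachusetts-to-Maryland sharing request.
allowed, reasons = evaluate({
    "origin": "MA",
    "destination": "MD",
    "data_type": "criminal-intel",
    "contains_pii": True,
    "recipient_is_agency": True,
    "purpose": "unspecified",
})
print("share" if allowed else "do not share")
for reason in reasons:
    print(" -", reason)
```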
So I just wanted to leave you with this: whether it's an emergency event, a law enforcement activity, or a cyber incident, we have to work together. Because if we're not working together, it's going to break.
Thanks, Karen. That's great. So let's put up the next set of slides, please, or the next slide. And you know, your last reference, Karen, the cartoon you put up, reminds me of a cartoon I used to have next to my desk when I was practicing law. It was two gentlemen in suits sitting on a park bench.

And one says to the other: yes, of course a nuclear war would be terrible, but as a lawyer, I'd have more work than I could possibly do. So, sometimes it's different perspectives on risk. Howard, please.
All right, good afternoon. I'm Howard. You're supposed to say, hi, Howard. I'll give you 60 seconds of introduction, in reverse history. First, I am managing principal of Alternative Resiliency Services, a consultancy evangelizing organizational resiliency and doing work in crisis management, continuity of operations, business continuity, et cetera: assessments, plans, and exercises.

Before that, I was principal resiliency strategist, big title, for Expedia, and I started the global business continuity program there and ran it for eight years. Prior to that, I was a consultant doing disaster recovery work on the East Coast. Prior to that, I was at Accenture and KPMG. Prior to that, I was on Wall Street, at arguably the finest investment bank on the planet. And before that, the earth cooled and there were dinosaurs. So that brings us to today. And we're talking about thinking globally and acting locally. I think you have to think local as well.
I just want to make a couple of points.

Number one, it's all local. I mean, we're talking about cyber crime here, and data loss and data spills. Tsunamis happen to people in places; fires and power outages happen to people in places. Even the global incidents, such as loss or disclosure of all your enterprise data, are happening to people in places. So when you're running a global program, you have to think locally. And yesterday in my keynote, I was talking about not focusing on cause so much as focusing on impact. The impact is always local, because things always happen to people in places.

The second point I want to make is, if you accept that premise, then it's a matter of taking a local perspective when you're applying your global programs to your global enterprise. So you have to think locally. It's not just think global, act global; it's think local. When you're in China and you want Chinese food, you just go and say you want food; it's local to them.
So when you're taking your global programs, the global perspective is good for the global frameworks, for the investment, for figuring out how it's going to work. But locally, you have to let them apply it.

You have to let them embrace what you're doing, and you can't do it to them. When I was at Expedia, one of the big complaints from the London office, which is our second largest location, was, oh, the Americans are coming to do it to us again. And we really had to learn to take an approach of letting them embrace what we were doing, see the value in what you're bringing as the global enterprise, and apply it in their own way.

So at Expedia, we had a global framework: you're all going to have plans and they're all going to follow a format, and you're going to exercise and follow the format, and you'll capture issues and follow the format. But how they applied it was very different.

And it's not just the Brits spelling the words differently, and with all respect, I think sometimes incorrectly; it's also the flavor and the culture of how they apply it.
So as the global enterprise, we had relationships with DHS and with some of the global entities. But locally, it was very important that the local recovery teams had their own private relationships: with the London Met Police, borough-wide planning, et cetera. In France, for Expedia.fr, they had to have their own relationships with the gendarmerie and whoever else they needed. And they applied it differently.
It was good enough that the Milan office applied that global framework in their own way, because it suited them. For Paris, they applied the global framework in their own way, and it suited them. And so that's kind of how we organized things. I've put some thoughts to think about up there; I'm not going to read the slide to you, but that's kind of the way I bifurcate where to put your global focus in running a global program, and how to implement it so that it's accepted by the local locus of impact, if you will.
Thanks Howard.
Here, Robin, could you offer some comments? Thank you.
Well, good afternoon, everyone. I'm Robin Wilton, for those of you who don't know me, and I see a number of familiar faces in the audience, so don't be shy later on. I work for the Internet Society as technical director for outreach on privacy and identity. And I'd like to briefly look at three things which I think shape the landscape in terms of risk as it applies to the privacy perspective, which is a kind of long-winded way of saying: you all know I'm a privacy bigot, and here's my five minutes. So, on risk.

Actually, it took me back to my IBM days, before I left the for-profit sector. And I remember being told, as a very junior SE, by one of my wise old colleagues, that a project which involves no significant risk is unlikely to deliver any significant benefit. So I sort of agree with what Ben said: if you have an organization or a culture that is totally risk averse, it's very difficult to get it to move forward.

So what about the landscape that lies ahead of us? I'd like to look at three things which I think shape that landscape. The first one is the Internet of Things.

The second one is privacy across borders. And the third is the scale of data collection and data usage. And they're all kind of overlapping. But when we think about the Internet of Things, we can already see that one of its major effects will be a massive upturn in the volume of data about us that is generated.
Now, we've all heard governments say, don't worry about the fact that we Hoover up lots of data about you. It's only metadata. And I think we all know that metadata is actually extremely revealing.
And to me, one of the interesting prospects about the Internet of Things is that it's basically a massively distributed engine for the generation of metadata.

And I think we need to be aware of that and find ways of coming to terms with it. Those of you who've attended any of the sessions about the Internet of Things this week will have seen the recurring theme that it involves more data, more devices, and a huge growth in the number of relationships between devices and people.

And there's been a strong thread of identity relationship management also running through the last couple of days. For me, those raise issues of, first, agency and ethics. By agency, I mean: what is it in that system that represents the user's preferences and wishes? At the moment, we tend to offload that to an application, to an app, to a phone, to a device, to a car. But as those things become increasingly autonomous, what happens to our agency as individuals?
And I think we'll find out for the first time when an autonomous car runs someone over; then we'll find out where the agency buck stops, as it were. So I think there are real ethical issues that arise out of that. If we forfeit agency, then we are handing over that agency to devices, applications, and things which aren't really ethical agents. So I think we need to deal with that as the landscape changes.
The second thing that I mentioned was privacy across borders.
And here I come back to the point that Howard made, I think very well, which is that if you're talking about thinking globally and acting locally, everything is local. Privacy is a social norm, and so as societies differ from one another, so do their concepts of what privacy is and how they want to achieve it.

I mean, there is someone who said that privacy as a social norm is dead, and I think you probably remember who that was. It was Mark Zuckerberg. And the day that I take advice about what is a social norm from a billionaire white male geek high school dropout whose big idea was a system to rate the women on campus... I don't think I'd take it for granted that privacy as a social norm is dead, at least not from Mark. And the third thing I wanted to talk about was the scale of data collection and data usage.
And I draw a very careful distinction between data collection and data usage, because, similarly, we've been told by lots of governments recently that the fact that they collect lots of data about us is not a privacy violation, not until they look at it.

Well, I don't buy that either, I'm afraid. If you collect the data, you accumulate risk, because we don't know what's going to be done with that data, and neither do the people who are holding it. And with huge respect to Karen, who I'm sure is applying, as she said, stringent security measures to the data that she holds on behalf of US law enforcement and intelligence agencies: they do let data out. We found that out about two years ago, when they let quite a lot of data walk out.

And so that data, as long as it accumulates, accumulates risk. And it's not for nothing that advocates of privacy by design say that the first PbD principle is data minimization: if you don't have the data in the first place, it doesn't generate risk for you.

So, just to round off with that: when we're talking about the scale of data collection and data usage, we need to look at it in two contexts. The first is commercial data mining. We live now in what I think is the age of data monetization; that powers most of what happens on the internet.
It funds a lot of what happens on the internet, but I don't think we've yet come to terms with the impact of the fact that we are all generators of data that is monetized.
And as well as the commercial context, I think we need to look at it very carefully in the government context. Almost all data protection law has exemptions written into it, particularly for law enforcement and intelligence activities. But in each case I've seen, those exemptions are qualified by requirements for necessity and proportionality. Necessity means that you shouldn't be using personal data for those purposes if those purposes could be achieved by other means. And proportionality means that you shouldn't be collecting all the data that you can in the hope that some of it will be useful.

Well, I don't think we've seen that principle put into practice recently. And so I think there's a challenge for people to manage privacy and privacy-related risk by paying a little more attention to those criteria. That's it for me for the time being.
Thank you, Robin. So we first wanted to ask if there are any questions from the audience. We're a little short on microphones, in anticipation of the event that's going to be in here after us, but if anyone has questions, I'll just repeat them so everyone can hear them. Any questions that folks have from the initial presentations? Okay.

Well, if anything comes up, please just raise your hand or make a noise so that I can find you out there. So I have some questions, some thoughts that came to mind; I'll sit over here so I can face you guys a little more directly. One question: the title of the program is understanding and dealing with risk. Let's talk about understanding risk a little bit. Ben, you were advocating for people in the government to accept risk a little bit more, the inevitability of risk.

Are there some education programs, or some ways in which folks might be prepared to engage with risk a little more directly? Is there anything that your government is doing, or that you think might be done, to do that? And the other panelists as well: I wonder whether education is part of the solution to risk.
Yeah. So, everyone can hear me? Good. So I think, unfortunately, it's a really hard problem, and I'd love to hear from the rest of the panel about how we educate people about risk.

I think, going back to thinking globally and acting locally, you can take that same approach in your organization. You can say, well, we as an organization need to handle risk well.

But you can't just have one person who does that in the organization. Every person is a risk, if you want to look at it like that, and everyone needs to manage it. And that's why I say cultural change. I think we need to help each other to make that change. We need to have those conversations. We need to not look harshly on people who knowingly accept risks while being friendly with the people who ignore risks, because we never have that awkward conversation.
Other folks have some thoughts on the educational piece of it first? Yeah, Robin.
Well, I think one of the best ways to accustom people to taking risk is not to punish failures too harshly. Again, thinking back to my days as a green SE, I made several mistakes, and actually, looking back, those were far more instructive and constructive for me than checking all the boxes and not making any mistakes.

So I think: acknowledge them, let people learn from them, and create a culture in the organization where it's okay for people to screw up occasionally.

I once met a vice president who said, well, I'm 35 years old and I've never seen a disaster, why are we doing this? And he was British. And I said, does the number 7/7 mean anything to you? And oddly enough, it actually didn't. I think as far as educating people, the whole idea of how you roll out a program, whether it's cyber security, whether it's risk management, whether it's business continuity:

the whole idea of rolling out a program has to have an education and awareness component to it, as well as just the technical rollout. Signage, town halls, building it into the organization's DNA and embedding it to the point where everybody knows the Christmas party is in December, and everybody knows that the summer holidays are in July, and everyone knows that the bonuses are due in January.
And everybody knows that the annual risk assessment, or the annual cyber education, or the annual disaster exercise, is in September, because everybody knows. That's the mark of success.
I think as far as educating and looking at risk, there are two sides: there's a giving and a taking, which is the essence of any relationship. So when I was rolling out programs and I would mention this risk or that risk in, say, Paris, for example, I had to be willing to listen to them say: no, that's the kind of thing you see in the papers all the time, but really, we're local, and from our perspective it's not really what you think. On the other hand, bringing the external perspective and the independent assessment in is also important, because that's the way that risks they might not be aware of are surfaced.
An example is when Expedia moved their headquarters from Covent Garden out to Islington. They thought they were getting away from the high-profile terrorist target, because they're not in the theater district; they're moving out to the suburbs. And they were very surprised when I came in and said, you know, you're sharing the building with a cancer research organization.

Did you realize that they're a tenant, and they do animal research, and they're one of the most hated organizations in London? Did you notice those people out there? And they said, no, we didn't. Did you also notice this, and this, and this? And I won't get into details, some of it's more confidential, but there's a giving and a taking when it comes to understanding risk.
So one of the nice things about working at S&T is we get to do pilots, and we did a phishing attack campaign internally. We have about 500 employees.
And within the first hour, and don't quote me on this, but we do have a statistics table, which is kind of scary: in the first hour, there were about a hundred people who actually clicked on it. You know, to change your password on your PIV card, click on this; and they actually clicked on it, entered their password and PIN number, and sent it to us. And guess who most of them were? All the executives were guilty of it. And it's not just across all the divisions; what's scary is that some were even within the cybersecurity division.
So that was one way of us educating the users, just within S&T, to beware of phishing attacks. And he's working on some tools we might develop to alert someone, to say, don't click on this because it might be a phishing attack. But I thought, oh, you found a solution for me, did you get rid of all my users? That's not going to happen. So it's definitely about education.
You know, it's interesting. You talk about the education. You can have a big campaign where you go away for a weekend.
Or you can have it just in time, you know, in the situation; all that information flow is so important. One of the things I mentioned in my keynote yesterday is that at the Center for Information Assurance and Cybersecurity at the University of Washington, we have an education program. And one of our graduates, as I mentioned, became head of China CERT; a CERT, for those who don't know, does infrastructure protection in different countries, and this is China's equivalent of our CERT.

And the reason I raise that is I wanted to talk about education as a possible bridge to standards, because essentially he got an education that was similar to the education of a number of US government people who were in the same program, and a number of other people who were in private industry got the same program.
And so one of the things we talked about earlier was policy interoperability.
You know, we have technical interoperability; that got us into this fix, in a sense, by giving us the great value of interconnection. One of the things we're developing and talking about is curriculum. So you take education, and if you make a curriculum package and give it to people in a number of countries and a number of companies, then you start to get standards.

And so I wanted to move a little bit to standard setting, and to that idea: if people are educated with the same awareness, the same kind of curriculum, and the same exposure, what can that do for interoperability of people's behaviors in certain situations? And Robin, since you work with an organization that's very involved in standards, I wonder if you have some comments on the role of standards, not just in the education context, but the role of standards generally in addressing risk, and at different scales.
Yeah.
And it may sound a little bit strange to talk about standards when I've just said that privacy and privacy-related risk essentially arise out of a social construct, but actually, you have to find ways of codifying that social construct while recognizing that in different countries there are different expectations about privacy. So we've heard a fair bit this week about, for example, the EU attempts to harmonize data protection and privacy regulation, an effort which I can best describe as ongoing.

But it's interesting: as a global organization, we need to look wider than that and say, well, what happens with, for example, transatlantic data flows? Do existing Safe Harbor provisions actually provide effective protection, or do they even provide what it says, proverbially, on the tin? And personally, I find that comparison often a bit depressing, because I don't think that users are getting the expected protection when their data travels across borders under the Safe Harbor agreement. It doesn't help that Safe Harbor means something different in Europe than it does in the States.

And I'm quite happy to take that offline if anyone wants to find out more. But also, if we look east, there's some very good work being done in APEC, in the APEC consortium, to develop what they refer to as the Cross-Border Privacy Rules, the CBPR. And what's interesting there is that, like Europe, the APEC region covers a very broad range of different cultural norms, and yet they have found quite good, quite effective ways of establishing cross-border privacy norms for the exchange of data.

And I think that's salutary, because they watched what was happening in Europe and in the United States, and they took the opportunity to leapfrog that and start a little bit afresh.
Any other folks have comments on the role of standards in risk at different levels?
Absolutely. You used one of the coolest words I've heard all day, and it wasn't aluminium, because, with all respect, you guys are wrong on aluminium. I'm willing to bend on colour. But no, in all seriousness, the word you used that was so cool was harmonizing.
When you get a choir together and they're singing in harmony, there are sopranos, altos, tenors, and basses. They're not singing to a standard. And so I think that's a great way to look at it. I personally find that standards can be somewhat prescriptive and somewhat of a crutch.
So, I mean, in my world, we've had NFPA 1600, and then BS 25999, and ISO 22301, and whatnot. And, you know, next year it'll be golly knows how many letters and numbers they're going to throw together.

But I think the problem with standards is that they can be prescriptive and not ensure quality. And the other problem I have with standards is that they can be a crutch. You can have an organization say, well, we're just going to tick all the boxes.

And we are, you know, BS 25999 compliant, and it really doesn't mean squat. I would rather have a program that does the right thing with the right trained people who can react and respond than have a program that ticks every box in ISO 22301. That's my opinion.
You know, it's interesting with the standards, just on that notion of the harmonization, all of the singers are able to come together because of standard music notation. Right.
Okay. Fair enough.
So one of the things to think about is technical standard setting. When I was a practicing attorney, I worked in about 30 different standards organizations for Microsoft, when they joined technical standard-setting organizations. And in each case, they were standardizing a subset of the value proposition, essentially.

And that's one of the things on the policy standards side that I think is really important for us: not to boil the entire policy ocean. Because otherwise you'd get a single tone, right? You standardize everything and you wouldn't have any harmony at all. But that's the notion in technical standards: they don't try to standardize the entire product.

You know, how many standards are involved in your cell phone? There are many that harmonize in your cell phone, and it delivers the product. So one of the things on the policy side, I think, is for us to understand that there can be a subsection of what you're trying to achieve that can be standardized. But people often think that it needs to be everything, and that is destructive.
Right,
Right. The bedrock, not the landscape of the park. Yep.
Did you have a comment?
I was going to say, I think we have a very good opportunity here, especially in cybersecurity. I mentioned STIX and TAXII before, and now that they've moved over to OASIS, which is an international standards body, I'm relating this back to last year, when you had a workshop talking about how the FAA has international standards on flights and airplanes.

But there are no international standards, or work being done, on when information gets transmitted across the ocean, and this could be a great opportunity. I think if we leverage OASIS as a standard for sharing cyber incidents across countries, it could be... One of the things I wanted to come out of this conference was a potential international project between us and another country.
Let's talk about that a little bit, that peer-to-peer notion, right? Because we talk about scale, and we've had different discussions already about local and international. And then the idea in DHS is that the mission is both domestic coherence and then international coherence as well, looking up and down.

What's the role, I guess, in terms of risk reduction, of bottom-up, top-down, and then peer-to-peer? It feels like, when you have multiple levels of risk sourcing, attention to all three vectors seems important. Do you think that's accurate? Ben, do you have a thought on that, or anybody else? You look like you were contemplating that.
Yeah,
I was. I certainly agree with your point about peer to peer, and I do keep coming back to culture, but it is that thing of: peers need to look after each other. And I'll just give a really quick example in my space, because we do identity as a service. One of the things that we've found is that we really know security and we really know identity where I work, but we give these products to people in government who are in IT but don't really know the space. They don't really grok it.

We've seen them fail at security. We give them a really good product; I think the phrase I like to use is, best product does not mean best practice. You can have a really good product, but then you give it to someone and they completely just don't get it, and they don't get the security, and you've really let them down. I think everybody needs to work together.

You know, we need to help each other out. Sorry, I got a little sidetracked there; it started with peer-to-peer risk.
But yeah,
Any other thoughts on that notion? So one other notion I wanted to explore, before I ask each of you for summary comments, is a different scale issue, which is the people-to-institution scale issue, and the bring-your-own-device notion. I realized about two years ago that there are five formal policy or legal systems in the world that cover a billion people or more: two are countries, China and India, and three are companies, Google's terms of service, Facebook's terms of service,
and arguably Microsoft's end user agreement, I guess. But there are other companies that size, and the notion, though, is that you really have different standard policies coming out of different places. And when you have a bring-your-own-device setting, it was very interesting during some of the Internet of Things sessions,
I was writing some notes that you have, in a sense, a personal device, which is a very small scale, but it's being deployed by very large companies.

So portions of it are deployed by very large entities: if you're on Facebook, you're a single person, but you're in this huge unit. And institutions are concerned about bring-your-own-device settings, like if someone's on Facebook discussing business instead of being on the email server. So you have this kind of mixing of scales that's happening now. And I wonder if any of you have any experience with, or any comments on, that notion of the individual.

Now, maybe it's a blending of their institutional and personal personas, in a sense, because of the use of the devices, and that leads to a whole other level of risk. Any thoughts on that ramble that I just shared?
Sure. Yeah.
Well, yeah, several things pop out of that. One is this whole business of persona management.
And again, those of you who've heard me rant about privacy before will know that persona management is one of my big things, and that the ability to manage personas, in other words, to disclose subsets of information about yourself that are appropriate to a given context, and to keep those contexts separate, is a very important part of the way that we, as social animals, manage our own privacy.

And so there's a risk there, in the sense that, more and more, the systems that we use and the devices that we carry with us erode those boundaries between different contexts, without us necessarily even being aware of it. So that example that you gave, of bring your own device in a work context, is a fairly explicit one: you know that it's your device, because your company didn't give it to you, you bought it, but then you take it into a work context. So at least in that sense, it's explicit.
But I think there are other contexts where you do things, thinking that you are disclosing something in one context, and actually the data is being used in another one.
So I'm going to quit at that and go to just the final statements, because I want to make sure we leave time for the folks to get set up for the next thing. So, any last thoughts? Ben, and then let's go along the line.
Sure, I'll try and make it quick. I've got three. One is on that last point. This device here, this is BYOD. This will let me log into my network back in Australia.

And I can load up documents from Cabinet that are protected documents. Like, if you stole this and you knew my password, you'd be in. So I think it's risky, but we accept the risk, and we know the risk.

So, you know, risks probably need to be clarified. Don't just talk about 'risky'; talk about what the actual risks are that you're trying to mitigate. One more thing I'll say here, something you can take home: I would say when it comes to managing risks, penalties don't work. And you can try this at home. Sometimes someone says, oh, we can't do that, it's a really bad idea because of this reason, and they're right. You can say this to them and see how they respond.

You go, well, a 5% chance of catastrophic failure is a 95% chance of living happily ever after, and I like those odds. I've tried this a few times and it's a lot of fun. That's a really good way to start a discussion about quantifying and accepting risk. So try that at home. Thank you.
You have to get that math right, though. If you say that to your wife, you might not get the right numbers. Go ahead, please.
Oh, can I join your government? Because in the US, you'll see every government employee carrying two phones, their personal phone and their work phone, because I have yet to see a government agency allow BYOD. Although there are some folks who say BYOD is a good idea, as far as policy goes, I haven't seen an agency that does it.

And I'm not speaking for DHS, but I am one of those people: I wish I could have BYOD, because I'm tired of carrying two phones, and there needs to be a way to handle the risk. What the policy folks keep telling me is, well, sometimes we have to wipe your phone, because there's a vulnerability and we can't do anything about it except wipe your phone, and we might be wiping out all your photos.

Well, I'm willing to take that risk, but they didn't give me that choice. So it's a balance. I think if we ask people whether they're willing to take that risk, and they are, then okay, bring your own device. But there is still the question of segmentation and separation, and that's definitely not a technical problem; it's definitely a political, policy problem.
Howard.
Yeah, I won't repeat or reiterate my remarks from before; I think those stand on their own. I do want to close by responding to your thought, and three things came to mind. One was the image of Hillary Clinton running her emails out of a server in her basement, but I refuse to talk politics until there's a fermented beverage in front of me, so I'm happy to have that conversation later. And number two, there are different personas.
My wife is also my CFO, and it is a running battle to have her use the business email for business communications and her personal email for personal communications, because she's on her personal email all the time. And I'm thinking, honey, I love you, but it has to be @alternativeresiliency.com, please. And the third item: you said something that was actually one of the most interesting things I've heard today.

And that's the five jurisdictions. And it comes back to what I said before.

Everything is local, because you have those five jurisdictions, but only two of them have the power of enforcement. So if you break Chinese law, the Chinese have the ability to punish you, very effectively I might say. If you break the laws of the state of Google, they don't; the Google army is not going to come and invade you, and the Google circuit court of appeals is not going to rule against you. They have to rely on the local enforcement and adjudication process. So even there, it's still local, because everything's local.
And a very quick one for me.
So we talked about zero tolerance for risk. What happens if you have zero tolerance? In the commercial environment, we've seen how, if you won't accept any risk, you're unlikely to make any sensible or appreciable change. So in one sense, zero risk tolerance can lead to no benefit. But I think there's a more pernicious effect of it, and that's more what we see in the policy arena.

Think of how many things have been justified because politicians cannot openly say, I accept non-zero risk: non-zero risk of terrorist attacks, of bad fallout from this, that, or the other. And therefore this policy is justified. And I think in that context, if we're not careful, zero risk tolerance can lead to 100% intrusion.
Well, thank you panelists. And please join me in thanking the panelists for a great discussion.
Thank you.