Thanks so much. So we're going to have a romp through hidden dimensions. Today we're going to talk about revealing hidden dimensions of security beyond data security, and 13 emerging information risk trends. My name again is Scott David. I'm at the University of Washington, and I'm also now a Fellow Analyst at KuppingerCole. It sounds like I'm getting a little rubbing on the microphone, so I apologize. So where do I aim this?
Can you advance the slide? There we go. So, the summary of the presentation here: you can't protect what you don't know.
That's actually a sign I have in my workshop, which is kind of anxiety-producing, right? "Caution: no warning signs." So here we're going to review the origins and limitations of current cybersecurity measurements. We're going to identify the 13 emerging information risk trends, and then suggest 13 patterns of BOLTS-based risk mitigation. BOLTS is business, operating, legal, technical, social; just a nice way to remember to think about all those different vectors.
So we all know that budget pressures and resource allocation decisions in any kind of organization demand measurement, and data security measurements naturally emerged from technical system performance metrics. But technology deployed in the real world encounters non-technical risks from business, operating, legal, technical and social interactions. Those are new and unmeasured forms of cybersecurity threat and vulnerability from the environment that surrounds the tech once it leaves the lab or the production floor.
The lack of measurements limits investment in mitigating those ambiguous risks, and the problem is that no metrics means no ROI justification, no ROI justification means no budget allocation, and no budget means no risk mitigation. So our systems are still tuned to rely on existing metrics for security evaluation. So why are we still experiencing security and privacy problems online? Well, unfortunately, the focus on data and technology is necessary but not sufficient to assure interaction reliability for information security.
The sole focus on these data and tech metrics is like staring at the finger that's pointing at the moon. So how do we measure risk beyond technical systems, beyond what we've been measuring before? The cartoon, if you can't read it, says: "And by tomorrow I'll need a list of specific unknown risks that we'll encounter in this project."
We hear that all the time. "This isn't a technical problem." But that's not enough; there's a huge world of interactions outside of the technical world today.
So what are the non-technical problems that affect information, and how can we deal with them? Well, today we're going to cover 13 categories of unknown unknown information system risks, and we're going to convert them into known unknowns, which is at least an incremental advance. And we're going to do this by naming the non-technical problem and suggesting patterns of solution. This cartoon says, "When I said I wanted a risk management system, this is not what I meant." Misunderstanding the risk means that you'll misunderstand the mitigations.
So this effort will hopefully guide your organization away from making this kind of budget decision. So happily, it turns out that what's unknown to technical folks like computer scientists and engineers is known and measurable to people in a variety of other domains.
Each domain has its own set of rigorous metrics of expected, reliable performance, its own vocabulary, and its own mitigation approaches. In our lab at the Applied Physics Laboratory, we use the term BOLTS, as I mentioned before: business, operating, legal, technical and social,
to invite consideration of those different categories of challenges. Risk is already encoded in those domains. Future cybersecurity approaches are going to learn to decode those risks and those metrics, and that will encourage a linking of the different silos that we have now. That's how you convert unknown unknowns into known unknowns. Now, this slide is just intended to grab your attention and raise your blood pressure.
It's actually an old antique sign I have in my library. I don't know what the "automatic controls" it referenced were, but that's the problem of the day.
Well, let's get specific. The red text names 13 sources of problems beyond technical problems.
These red risk vectors actually cause current cybersecurity problems, but they're not yet addressed in security strategies. As we go through the slides, consider whether your security approach and strategy, and those of the companies and organizations you rely on, actually deal with these kinds of risks: things like narcissism, ambiguity, paradigm collapse, ignorance, hallucination. We're going to deal with all of that today and clear the air. Now, don't try to read this slide.
This slide is just a compendium of the balance of the slides, just the headings, really condensed down. And what it starts to demonstrate, and we use this a lot in our lab, is that the source of the problems tends to be the source of the solutions. So it's really important to embrace the entirety of what you're experiencing in your problem set.
Well let's get to it.
No more delay; we're in it. So we have 13 problems. Here's problem one: security bad habits. Secrecy is dead. Every time you swipe your phone, you're information seeking, and the combined pressure of our collective information seeking crushed secrecy. We all killed secrecy. And if we rely solely on secrecy for achieving security and privacy, then security and privacy are also dead.
Well, it turns out the death of secrecy is not the death of security or privacy. Like traffic rules, we need to recruit the self-interest of system users into self-binding, into patterns of behavior that enable them to de-risk in ways that no individual can do alone. That's the motivation. In the case of information networks, that amplified risk recruits a thousand eyes into a neighborhood watch for distributed online security, just like traffic rules enable people to de-risk in ways they can't do alone.
If you're not part of the solution, you're part of the problem in these cases.
And there's also a bonus: post-secrecy strategies are going to help future-proof information networks against quantum computing's erosion of cryptographic controls, which will further challenge secrecy as the basis of security. Problem two: traditional system perimeters are gone.
This is an illustration from Paul Baran's paper for the RAND Corporation in 1964, which showed how distributed communication networks resist nuclear attack. The diagram at the left is a centralized system: if you take out the middle node in a nuclear attack, the entire system goes down. The right-hand diagram is how the internet is structured; at the time Baran wrote, there wasn't an internet per se. But if you take that same node out in the right-hand diagram, the system can work around it.
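To make the diagram concrete, here's a minimal sketch in Python, assuming the networkx library, of how a distributed topology survives the loss of the very node that shatters a centralized one:

```python
import networkx as nx

# Centralized: a star -- every message routes through hub node 0.
star = nx.star_graph(8)                      # node 0 is the hub, 1..8 are leaves
star.remove_node(0)                          # "attack" the central node
print(nx.number_connected_components(star))  # 8 -- the network shatters

# Distributed: a ring lattice where every node keeps redundant links.
mesh = nx.circulant_graph(9, [1, 2])         # each node linked to 4 neighbors
mesh.remove_node(0)                          # remove the same node
print(nx.is_connected(mesh))                 # True -- traffic routes around it
```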
Well, it turns out that the same patterns that allow the resistance to attack also resist control. So when organizations use the internet, as all organizations do, the information flows that feed the hierarchy of the organization are disrupted, and the institution is rendered blind. Well, what does security look like in a post-perimeter, post-secrecy world? We need to activate and empower the interactions with organizations outside the perimeter into solutions.
So you need to wrap your organization in decentralized blankets of protection, through explicit recognition and application of shared BOLTS (business, operating, legal, technical and social) interests, standards, language and ontologies that reduce the risk for all parties in ways that no one can do alone. That's a supply chain issue. The structured self-interest of the parties in those systems renders the systems resilient, sustainable and scalable.
Current research, for those of you who have heard of active inference, borrowed from the cognitive sciences, is starting to emerge here.
It's actually helping us discover the processes for assembling some of those risk mitigation blankets. Well, problem three is complexity itself. Complexity is a sovereign; we use the definition of a sovereign as an entity that doesn't ask for permission or forgiveness, and complexity does not ask us for permission or forgiveness.
Newtonian, linear organizations and their security systems depend on Gaussian, or normal, distributions of past phenomena for predictions in programming their technical tools and their policy rules: you look to the past to predict the future. Well, the problem is that complex systems display non-linear behaviors that are unpredictable, which creates risky anomalies that defy our expectations. So what do we do?
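To illustrate why that matters, here's a minimal sketch comparing the tail estimates of a Gaussian model with a heavy-tailed (Pareto) model; the exponent and scales are illustrative assumptions, not calibrated to any real system:

```python
import math

def gaussian_tail(k):
    """P(X > k standard deviations) under a normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) under a Pareto (power-law) distribution."""
    return (x_min / x) ** alpha

for scale in (3, 5, 10):
    print(f"{scale}x event: Gaussian {gaussian_tail(scale):.1e}, "
          f"heavy-tailed {pareto_tail(scale):.1e}")

# The Gaussian model calls a 10x event essentially impossible (~7.6e-24);
# the heavy-tailed model says it happens about 1% of the time. A security
# posture tuned to the Gaussian past is blind to exactly these anomalies.
```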
Well, this is a big deal. We need to evolve from static forms of cybersecurity strategy into dynamic forms of information system reliability. And to do this, we can start to view the anomalies produced by complexity as a source of fuel for information engines.
We can manage complexity by feeding the information differentials and anomalies into information engines. Well that sounds really esoteric, like what's an information engine? But we already do this in a ton of other domains.
Contracts, markets, exchanges: those are our information engines; that's where we de-risk information differentials. The information differentials already drive those markets and exchange behaviors, just like temperature differentials power heat engines; the math is the same. The picture shows the floor of the New York Stock Exchange, where stock price differentials are essentially combusted within the system and perform work.
So we're in the process of configuring similar mechanisms right now to help manage emerging information risks, which are information differentials, in this rapidly growing online environment. It's different metrics, but the same mechanism of containing the differentials and causing them to perform a new form of work. Information engines help us consume anomalies as high-value insight,
and anomalies are always a strong signal of the edge of your system; they can also encode risk signals from other domains, as we mentioned before.
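Here's a minimal sketch of that idea: anomalies, as information differentials against a baseline, are routed onward as fuel rather than discarded. The threshold and statistics are illustrative assumptions:

```python
from statistics import mean, stdev

def information_engine(readings, baseline, threshold=3.0):
    """Yield anomalies as (value, z-score) fuel for downstream work."""
    mu, sigma = mean(baseline), stdev(baseline)
    for x in readings:
        z = (x - mu) / sigma        # the information differential
        if abs(z) > threshold:      # edge-of-system signal, not noise
            yield x, z              # route to insight, pricing, alerting

baseline = [10, 11, 9, 10, 12, 10, 11, 9]
for value, z in information_engine([10, 11, 47, 9], baseline):
    print(f"anomaly {value} ({z:+.1f} sigma): feed the engine, don't drop it")
```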
So the anomalies are no longer something that's just noise. Well, a unique problem with managing complexity and future information engines is that the systems are sociotechnical. This means they involve people as active components that produce and distribute and consume and act on information. People are a unique and independent source of risk in these systems.
As this picture illustrates, the phone could be working perfectly and the car could be working perfectly, but in the sociotechnical system of human, phone and car, there's risk, because the person is going to be inattentive and hit a pedestrian. So the sociotechnical system has a different risk profile that emerges. Well, how do we enclose people in information engines consistent with security, privacy and human rights requirements?
What do we do?
Well, we need to manage people not just as system users but as actual components of the system. You know, technology is made reliable by conformity to technical specifications, but people are not made reliable by conformity to specs; we're made reliable by conformity to rules and policies, driven by incentives. We need to design future cyber-secure systems with integrated tools and rules for socio-technical information engines.
In identity management, as Martin was alluding to before, those integrated sets are called trust frameworks, and we'll have similar things, trust frameworks for these information engines, integrating the technical and social aspects of the system.
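As a rough sketch of what such a trust-framework record might bind together, here's a minimal data structure; every field name here is an illustrative assumption, not an existing standard:

```python
from dataclasses import dataclass

@dataclass
class TrustFramework:
    name: str
    technical_specs: list[str]   # machine conformance: protocols, crypto
    policy_rules: list[str]      # human conformance: duties, prohibitions
    incentives: dict[str, str]   # what recruits self-binding behavior
    sanctions: dict[str, str]    # consequences that keep the rules credible

framework = TrustFramework(
    name="example-identity-federation",    # hypothetical
    technical_specs=["TLS 1.3 transport", "signed assertions"],
    policy_rules=["verify subject consent", "report breaches within 72 hours"],
    incentives={"members": "lower shared liability, network access"},
    sanctions={"non-conformance": "suspension from the federation"},
)
```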
Well, there's a problem, and this is problem five: including people as system components has a potential for abuse, exploitation and human rights violations. Considering people as system components made reliable by structured rules leads to the other problem of lack of representation of people within these systems.
Like a labor movement kind of thing: how are we going to represent these new emerging interests?
Well, social networks and the API economy are optimized for commercial interests, not for the fiduciary representation of individual humans. People are not at the table when these systems are being configured, developed and deployed, and if you're not at the table, you're on the menu. So there's no broadly deployed standard structure that represents the interests of people in these system roles. What we anticipate as the solution is to organize structures of representation for the role of people as system components.
There's a need for an institutional layer of individual rights management. We anticipate an emerging layer of services, both fiduciary and non-fiduciary, to represent the interests of people, very much like the broker-dealer and investment-advisor relationships in the United States.
Broker-dealers are not fiduciaries; investment advisors are. So a broker-dealer can hang up the phone and then make a trade against you, but an investment advisor can't. Yet they both give similar advice.
These structures are akin to cooperatives of people as data producers, though not necessarily selling cooperatives; again, it's a representation of interests, like the Grange system for farmers. The shared interests and the shared risk mitigation strategies of humans will be amplified by those organizations. Well, the representation of people is great, but how can we be sure that future information governance is fair for everyone?
Well, the suggestion of granges and other forms of data rights cooperatives raises the challenges of data and information system democratization. How can we fairly govern systems with massive participation? Because the risks change, and people and communities become more abstracted, as the scale changes.
At larger scales, governance depends on abstractions of standards and population-based measurements. And the problem is that bell-curve-based resource allocation can marginalize the smaller populations at the edges of the bell curve.
Well, we think future organizations are going to arise around shared information risk commons. We noted earlier that existing institutions were disintermediated when they were rendered blind by the internet. Well, a solution to the fair representation challenge is to re-intermediate the governance and representation of communities along the lines of those shared interests, including the shared desire to mitigate risks. You're seeing that already: like trade associations, parties get together when they have shared risk, and each of those can be considered a risk commons.
Usually a commons is thought of as an asset commons, like fish, harvests or trees. But you can also think of the risk of not having fish, grain or trees; you can characterize the commons as a risk commons.
They're effectively self-policing, too, because every party's behavior in those systems affects the group, so they tend to foster group interest together in a common structure. Well, those solutions can address some of the future human variables. But what about the lingering effects of yesterday's security paradigms?
How can we update cybersecurity from where we are now? What's our theory of change? What's going to guide us through? Well, this brings us to a paradox: information is dual-use. Many other technologies have dual uses: chemicals, guns, nuclear. In the picture, nitrogen-based fertilizer can be used to grow trees, but it can also be used to make bombs; that was the federal building in Oklahoma City that was blown up a number of years ago in the United States. Data is likewise clearly dual-use: it can be used for good and bad purposes.
And our systems of security and privacy that are based on constraining data flows, like the GDPR, or the fair information practice principles as implemented for agencies in the United States, also constrain the good uses of data. So it's a paradox of our own creation, by the way we define these things. We need to move from a sole focus on data security toward a more integrated, harms-based approach that allows appropriate use of data without undue constraint. Consider: we don't make hammers soft so they can't be used to hit people in the head.
We make hammers hard so they're functional, and then we have a rule that says don't hit people in the head with hammers. It's the same thing with data: if we over-constrain it, we're going to impede function. So a number of privacy and security advocates are actually advocating for moving away from a purely constraint-based approach toward a harms-based approach.
Daniel Solove and others are starting to talk about that. The neighborhood watch solutions that we described earlier enhance our ability to detect harms through networks and communities of interest.
Well, relying on a neighborhood watch to detect security and privacy harms raises the problem of neighbor misperceptions. If you have a neighborhood watch and people are misperceiving, that's not going to do any good; if people are hallucinating, you have a terrible neighborhood watch. Well, human pattern projection is called pareidolia. It's kind of a neat little word, and it's really functional in terms of survival: in the past, if you over-modeled a lion hiding in the grass, then you lived to survive and reproduce, and that was selected for. It's good to over-model a predator.
The problem is that humans who did that survived, but we also project patterns onto things. Some of you may be familiar with the Old Man of the Mountain:
it was a stone formation in New Hampshire that fell down a few years ago. Or we see the picture of Jesus on a tortilla, things like that, where we project these patterns onto the environment. Well, hallucination is really in the news with AI, so let's stay with this for a minute. Consider moth eye spots for a second.
The eye spots are decoded by predators as the eyes of a large animal. Many of you look at this and you see it looks like eyes, right? Well the predator leaves the moth alone.
It sees them and says, oh, this must be a big animal; I don't want to get attacked by it. The predator decodes the spots as eyes. The predator is hallucinating: they're not eyes.
Did the moth encode that message in its wings?
No, the moth has no idea that it has eye spots or that they're functional for its survival. It just happens to survive if it has the eye spots. The signal is decoded by the predator, but where was it encoded? It was encoded by evolution and mutation, effectively hijacking the perceptual apparatus of the predator. And why am I talking about moth eye spots? Because LLM outputs are moth eye spots. LLMs are computational intelligence; I'll invite you to start using that term instead of "artificial". I've started using "computational intelligence". The LLMs have no idea that their output is functional.
We humans decode the outputs as having meaning. Humans project hallucinated meaning when they decode LLM outputs. This is absolutely remarkable: the mimicry of authorship is so convincing that they can pass medical exams and law exams.
It's a very convincing mimicry that we're consuming. Humans complain about AI hallucination when LLMs make up facts, but it's really we who are hallucinating when we treat that output as authorship. And those of you who know copyright law know that there are challenges to the ability to copyright AI output,
because it's not a work of authorship, which is required for copyright. So what are the security implications of human pareidolia projected onto AI-fueled misinformation?
Well, what can we do about human hallucination? We can do what humans have always done: cultivate shared meaning through socially situated systems of distributed cognition. That's what we do. We'll manage hallucination and anomalous meaning, and enhance situational awareness, by putting our heads together in systems that can be called synthetically intelligent.
In fact, those of you with some legal background know that contracts are deemed to have happened when there's a meeting of the minds.
It's the same kind of concept, and it goes back a long way. There are systems that enable us to synthesize our different intelligences together. We'll do this through systems of shared language, paradigms and norms; it's like standard setting, but for meaning.
Well, reliance on human thinking patterns is fine for aligning humans with other humans in synthetic intelligence, to mitigate human hallucinations and other subjectivities. But human intelligence is not the only form we need to worry about. There are many other forms of intelligent systems that are going to challenge our security. Security and privacy systems have been designed to deal with attacks and accidents caused by humans and human organizations.
But myriad intelligence systems are now emerging that are organized and operated very differently from historic human intelligence systems. So how can we secure against attacks and accidents from intelligence systems that don't follow familiar human patterns?
Again, synthetic intelligence offers us a helpful pattern of risk mitigation. To deal with risks from non-human intelligence systems, we need to get maximum leverage from human intelligence; we really need to synthesize human intelligences in the loop, rather than just saying we're going to place humans in the loop.
And that's essentially what we mean: it's not just a human sitting there, it's the system of human thinking, and we need that as a counterforce to the challenges of computational intelligence, which some people still call AI, but I'm going to call computational intelligence; there's nothing artificial about it. It's post-Turing thinking. That prior reference to accidents and attacks raises another security issue: we need to distinguish attacks, accidents and acts of nature if we're going to mitigate risk. The harms can be similar. These are three different pictures.
One's an attack, one's an accident, one's an act of nature.
Tell me which one is which. The harms can be very similar in physical systems, and that's true in information systems also. If we don't know the nature of the event, we can't really mitigate it. Think about it this way: you mitigate an attack by increasing your defenses; you mitigate an accident by training your employees. They have different vectors and different persistence; very different things. So we need to know what's going on.
Well, our current security paradigms tend to focus on intentional attacks and less on accidents. I love that cartoon, isn't it great? That's a confusion of an act of God with an accident. This is a big blind spot in cybersecurity. Most human suffering is caused by accidents, not attacks; that's my assertion, and from being a lawyer all those years, it feels like that's the case.
So resources, again, are most effectively applied if you know which of the categories of security threat is happening to you. So what's the reason for the delay; why aren't we doing that?
If we're spending all our time on attacks, why don't we start thinking about accidents?
Well, the problem is negligence: the culpability for accidents is evaluated based on negligence, and negligence is measured by the degree of conformity or non-conformity to duties of care. We have lots of historical experience evaluating and creating duties of care for physical systems, but not so much for information systems. We don't yet know what good looks like there, and that's still emerging.
So that lag, the lack of duties of care, undermines our ability to deal with accidental harms. We don't have them for BOLTS, and that reduces our ability to do the right thing, which means we both can't reduce the accidental harms and can't manifest a neighborhood watch, because a neighborhood watch means you're looking for the right kind of behavior against attacks.
So we can't address attacks or accidents if we don't have the duties of care.
Well, in the last couple of slides here, the mention of human duties of care raises the issue of how duties and rights are allocated online. Are online systems optimized to share reliable, human-centric duties and rights? It turns out they're not. I'm a former corporate attorney; I can tell you they're not.
In fact, we convene in private commercial spaces. Most of the online environment is owned by commercial enterprises. They're optimized for dwell time and economic agendas, things like that.
Well, the degree of private ownership of the internet is not that different than other private infrastructure like electricity or transportation. Most of it's privately owned. I'm talking from the United States perspective, but I think it applies in many other countries. The problem is that when you talk about information systems, it's existentially more profound to have it be owned by a commercial entity than electricity for instance, because it affects our communications themselves.
So how can a human agenda for synthetic intelligence be effectively and appropriately pursued in commercial space? Well, one pattern of solutions is to view the internet as a set of nested infrastructures. Like other shared infrastructures, online information infrastructures can be analyzed as natural monopolies. A natural monopoly is a situation where overall efficiency is reduced if you have multiple separate systems competing for customers: you don't put ten water systems in a single town, you have one water system, because it's inefficient to have ten.
So what do you do when you have one provider in a natural monopoly? You don't have competition to curb abuses like price gouging, so what you do is have regulation: rate regulation, tariff setting, et cetera.
Now again, inter-jurisdictionally you'll do that through contract, but it's the same thing: uniform duties of care. And we believe future legislation should consider competition-law safe harbors for organizations that are offering fiduciary-type services for people.
Another big challenge is language. Information is intangible and non-rival, which makes it even more difficult to secure than physical assets; you can't put a fence around it. So what's the object of our security and privacy efforts? Is it data? Is it information? Are they the same thing?
Well, the ambiguity of the terms "data" and "information" is reflected in statutes and policies and just in the way we talk; we use them interchangeably very often. That's inherited from a much simpler pre-internet period, when the distinction was much less important. We believed at the time that achieving data security essentially manifested as information security and privacy. That's 1970s thinking, which was the basis for the GDPR and for the fair information practice principles in the United States and a number of other jurisdictions.
But unfortunately, that conflation of the two terms creates gaping security holes, and it eliminates consideration of the solution space that's offered by managing the vectors of integrity beyond data security.
Well, a huge new solution space for cybersecurity has opened up with a simple recognition: data plus meaning and context creates information. Mere data, without meaning and context, can't properly inform a party receiving the data communication about what to do in its future interactions; so it's not information.
As a result, we need to organize and operate networked structures that reliably identify and convey meaning and context. We need meaning-and-context plumbing for the internet. What the heck is meaning-and-context plumbing? It's not really that mysterious. It's what humans do with shared language, stories, laws, norms, folkways, beliefs and material culture. It's shared meaning and context that makes us human, not data. Internet security has historically focused on data security as if it were information security, and that's something we can correct.
We can address that: the security of our information systems can be enhanced if we encode and decode communications with awareness of context and meaning.
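Here's a minimal sketch of that plumbing: an envelope that carries explicit context and meaning alongside the payload, and a decoder that refuses to treat bare data as information. The envelope fields and vocabulary URL are illustrative assumptions, not an existing standard:

```python
def make_envelope(data, context, meaning):
    """Data + meaning + context = information (the talk's formulation)."""
    return {
        "data": data,        # the raw bits
        "context": context,  # who/when/where the data was produced
        "meaning": meaning,  # shared vocabulary for decoding it
    }

def decode(envelope):
    """Refuse to treat bare data as information."""
    if not (envelope.get("context") and envelope.get("meaning")):
        raise ValueError("data without meaning and context is not information")
    return envelope["data"]

envelope = make_envelope(
    data={"reading": 42},
    context={"source": "sensor-17", "time": "2024-05-01T12:00Z"},  # hypothetical
    meaning={"reading": "water temperature, degrees Celsius",
             "vocabulary": "https://example.org/terms/v1"},        # hypothetical
)
print(decode(envelope))
```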
Last problem: the cybersecurity risks from the dynamics of institutionalized, centralized information networks themselves. In a sphere, the surface area increases with the square of the radius while the volume increases with the cube of the radius, so the surface area increases more slowly than the volume.
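The worked math behind that claim:

```latex
S = 4\pi r^{2}, \qquad V = \tfrac{4}{3}\pi r^{3},
\qquad \frac{S}{V} = \frac{3}{r} \to 0 \quad \text{as } r \to \infty
```

So the surface available per unit of volume shrinks as 3/r: the bigger the sphere, the smaller the fraction of it that touches the outside world.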
In a bureaucracy, likewise, the functional interaction surface increases more slowly than the internal bureaucratic interaction volume. The math of spheres captures the well-known phenomenon of bureaucratic overgrowth. As a result, centralized, perimeter-based command-and-control approaches to security are overwhelmed and rendered uneconomic by the tsunami of interaction volumes. This is a manifestation of the universal exponential increase in interaction volumes that I referenced in my keynote at EIC 2023 in Berlin, which is available on video.
I think we can't stop the rising tide, and that's why centralized systems can't address distributed security. Well, once again, the source of the problems is a source of the solutions: we need to ride the wave of increasing interaction volumes by instrumenting the interactions themselves. This recognizes that the threat surface and the interaction surface are one and the same. Accordingly, existing BOLTS metrics (business, operating, legal, technical and social) represent existing sensors that are waiting to be rewired into future information engines for cross-silo arbitrage and risk mitigation.
It's all out there, waiting to be wired in. The self-interest in de-risking more efficiently with others recruits self-binding to behaviors that reduce risk, and sharing information, like in the agora in the left picture, and in future distributed information fusion centers, amplifies situational awareness, system predictability, interoperability and risk reduction in ways that no party can achieve alone.
Again, that's the basic incentive. It creates a virtual mesh network of sensors that collects and fuses data from across BOLTS domains and enables the correlation and sharing of risk across those domains.
Distributed security activates the nodes with incentives across BOLTS, so the proliferation of interactions feeds more informative sensory awareness rather than just inert and scary anomalies.
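Here's a minimal sketch of that rewiring: per-domain BOLTS risk scores treated as sensor readings and fused into one shared picture. The domains, weights and fusion rule are illustrative assumptions:

```python
def fuse_bolts(signals, weights=None):
    """Combine per-domain risk scores (0..1) into one shared awareness score."""
    domains = ["business", "operating", "legal", "technical", "social"]
    weights = weights or {d: 1.0 for d in domains}
    total = sum(weights[d] * signals.get(d, 0.0) for d in domains)
    return total / sum(weights.values())

# Each node contributes what its own silo already measures.
node_signals = {
    "business": 0.2,   # e.g. a counterparty credit anomaly
    "legal": 0.7,      # e.g. a new duty-of-care exposure
    "technical": 0.4,  # e.g. an intrusion-detection score
}
print(f"fused cross-domain risk: {fuse_bolts(node_signals):.2f}")
```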
Well, in summary: you can't defend assets that haven't been properly identified or characterized, and you can't de-risk threats that are unknown. Our security paradigms arose from systems that are undergoing rapid and constant change, and it's better to be a leader than a follower in this domain. I hope the presentation has provided some food for thought, and I'd love to continue the conversation with those of you who are interested. Thank you very much for your time today.