Hey, awesome. So we have heard a lot today and earlier during the conference about incident response and getting ready for it, the whole idea of preparing in advance. And one of the ways you can train and prepare for such a disaster before it happens is exercising, training. This whole idea of tabletop exercises was probably mentioned multiple times. So this is what our guests are going to be talking about today. And let me introduce them properly. So we have Navroop Mitter, who is the CEO of ArmorText, a leading provider of secure out-of-band communication solutions.
His company serves hundreds of customers across government and other high-profile, security-conscious industries. He also has a long history of working with organizations such as Accenture's security practice and other high-profile security organizations, he currently serves on multiple advisory boards, and he has experience advising presidential campaigns, congressional staff, and so on. He is joined by Matthew Welling, who is a partner in Crowell & Moring's Privacy and Cybersecurity Group.
And he leads the firm's incident response and preparedness work, so we have a true expert who has not just had a long career in software development and hands-on incident response, but also comes from an academic background: he teaches at the Johns Hopkins Information Security Institute. So we have experts on stage. And I guess before we actually kick off the panel, they want me to draw your attention to the guide for cybersecurity tabletop exercises, which you can download after the presentation if you are interested in more details.
So, Matt, my first question would be to you. Can you dive a little bit deeper into the very notion of tabletop exercises? What are they, and why do companies actually need them?
Yeah, thanks. I'd be happy to. So just to start at the beginning, a tabletop exercise is a simulation of a cybersecurity incident and the company's response to it. So whoever your participants are, they all act in their roles and they walk through how they would respond to an incident. It gets its name because historically these often occurred literally around a table with everybody there. These days they're sometimes virtual, but it's still the same idea.
And this is all going on with a facilitator or moderator who steps through different simulated factual scenarios, and everybody responds to them as if each one were a new update in your investigation. Getting to the why, there's a quote attributed to the famous samurai Miyamoto Musashi: you can only fight the way you practice. And it's the same idea here: if your organization has never actually practiced responding to an incident, how can you fairly think that you're prepared?
And in going through these tabletop exercises, you develop a kind of muscle memory for how to respond. There's also some additional value, and as a lawyer I'm always keeping this in mind: it's a demonstration that you are actually practicing these things. There are some other values there we'll talk about in a second. But just from a practical standpoint, it also builds a lot of trust in your organization, because not only do you know how to do your job, you're seeing other people actually doing theirs and enacting their roles as well.
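A minimal sketch, in Python, of how the facilitator's schedule of simulated updates (injects) that Matt describes could be represented so that modules can be swapped, reordered, or tailored per audience. All field names and the sample injects below are hypothetical; the guide itself is a document, not software.

```python
from dataclasses import dataclass, field

@dataclass
class Inject:
    minute: int                  # when the facilitator introduces this update
    update: str                  # the new simulated fact
    audience: list[str]          # which roles are expected to respond
    questions: list[str] = field(default_factory=list)

@dataclass
class Scenario:
    title: str
    injects: list[Inject]

    def agenda(self) -> list[str]:
        # Ordered run sheet the facilitator can step through
        return [f"T+{i.minute:>3} min  {i.update} -> {', '.join(i.audience)}"
                for i in sorted(self.injects, key=lambda i: i.minute)]

demo = Scenario(
    title="Ransomware with a communications outage",
    injects=[
        Inject(0, "Helpdesk reports users locked out of email", ["IT", "Security"]),
        Inject(30, "Ransom note found; collaboration tools are down",
               ["Security", "Legal", "Comms", "CFO"],
               ["What channel do you coordinate on now?",
                "Who decides whether and when to notify regulators?"]),
    ],
)
print("\n".join(demo.agenda()))
```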
So there's a point I want to add here, right? There are some additional ways of looking at the value, one of which was really well summed up by IBM in their latest Cost of a Data Breach study. It showed where organizations were spending their money after a data breach. So they've just been through this challenge, and they're now having to decide where to put their limited budget next. And for more than 50% of them, the highest spend was actually on incident response planning and testing.
So if that's what they reinvest in the minute they've had a breach, it's telling you that they all felt underprepared and that they want to practice how they plan to fight all over again. Right. Great. So that guide, the incident response guide you discussed earlier and asked me to mention specifically: can you maybe dive a little bit deeper? What was the original inspiration for creating it?
Yeah, so I think by now it's no secret, right? At this conference, everyone's been mentioning NIS 2, DORA, the changing and evolving SEC requirements. The reality is that globally, what we're seeing is a movement towards emphasizing the role of executives, boards, and senior leadership in oversight of cybersecurity, and in particular, incident response. Right.
And so when we were first looking at the field of what was available for executives, one of the things we quickly realized is that, in keeping with that historical tendency to relegate incident response exercises to the technologists, this idea that it's something the IT folks go and do in the basement, by and large the materials that were available were also principally focused on IT's and security's role during an incident.
They were missing questions for the CFO, for legal, for finance, for HR, for all of the other non-IT and non-security functions that might be impacted. On top of that, we noticed that a lot of the material that was out there was really stale. The scenarios and modules hadn't kept up with the times, right? They were based on attacks that we might have seen 10 years ago, but they weren't covering what was actually happening today. There was also a lack of dynamism, right? The questions themselves were highly formulaic.
You might see a question that said, hey, do you have out-of-band comms? Yes, check the box. But there was nothing that dove in deeper and challenged that assumption to say, well, wait, is your out-of-band comms really the right out-of-band comms for an incident?
And then to top it off, given that we're now used by 700-plus organizations for security operations and incident response, we decided that if we were going to give back to our collective defense somewhere, it made sense to invest in an open-source set of capabilities or tools that everyone could leverage.
And so for this particular project, we proposed to Crowell & Moring a collaborative effort to co-publish a guide where we would bring our technology expertise to the table, they'd bring the legal side, and we would then release it under a Creative Commons license so everyone could pull it apart and use it as tear sheets, use the pieces like Legos and rebuild the scenarios that they need to run, the ones that are right for their organization. Yeah, and if I could just jump in on that point, I just want to highlight something that Navroop just said about the Creative Commons license on this.
We're literally giving it away, and not just for you to take it home and read the pamphlet; we want you to tear it out and use it. And just to highlight, when was the last time you saw lawyers give anything away? This really, in our view, is our contribution back to the security community. We're participants in it too. And even if you're not engaging us to help you with your tabletop exercise, we still want you to have them and to have them be high quality and useful, because that's a net improvement for everybody in the ecosystem. I got lawyers to be altruistic. Right.
So you talked about checkboxes just a minute ago, and it makes me think: for many organizations, compliance is still that kind of process, ticking checkboxes. But more and more often, and at this conference especially, we hear that this old-school approach is no longer valid. Compliance and regulatory frameworks have become more complicated. They require a much more sophisticated approach.
So Matt, can you talk a little bit more from the legal perspective? Yeah. So tabletop exercises aren't just a good idea. They're a really useful tool for managing your legal risk. And really, all things considered, it's an exercise you get a lot of bang for your buck out of. What they are very useful for is as evidence that you are walking the walk on a security program. Right. It's not just that you have policies and procedures. Those things are all very important and really strong on their own.
But if you have an incident, or if you have a review, or for whatever reason you have law enforcement, regulators, or litigants to whom you need to demonstrate that you're actually doing these things, a tabletop exercise, even if you've conducted it under privilege, which is a very big deal, especially in the U.S. and increasingly elsewhere, you can still say that it occurred, right, without getting into all the details about who decided what. It stands alone as evidence that you're doing the thing that you said you were.
In addition to that, it really is helpful as you're establishing kind of a culture of security. It holds people accountable to be prepared for their role.
It's kind of a forcing function to get your different stakeholders into a room to work through these things. And as you mentioned, right, in the past this might have been a check-the-box activity where it was very linear and everybody says the expected thing, the questions were very formulaic, and you've just said that you've done it.
Instead, what we want to encourage is to use these as a real learning opportunity and to create an environment where people are not, you know, afraid for their jobs or anything like that, to really establish trust and to really work through what are increasingly difficult issues. You know, for example, all the things that Emily and the panel before us just mentioned, these are new use cases, and you want to be prepared for those.
Right, right, absolutely. I mean, AI is not just the buzzword du jour, as some French people would probably say. It's really a huge challenge, and we just learned that it already has some very practical implications, not just in compliance but across the board, across every aspect of getting prepared for incident response.
So, Navroop, can you elaborate on that aspect? Yes, so when it comes to how we adapt our incident response strategies, how we think about incident response in the wake of AI, I think there are two things to keep in mind.
One, for a number of areas, not much actually changes, right? AI is being used as a force multiplier. It is speeding up our ability to launch certain types of social engineering attacks. I can better monitor the CEO's feed to know when they're traveling. I can use the context that I'm picking up from their Facebook posts to better craft a message internally. I can use AI to write better phishing emails. I can also use AI to better evaluate what I've already stolen from you, so I can see where I might gain greater leverage in my ransomware negotiations with you.
All of those are scenarios that we are likely, or at least should be, already preparing for in our tabletop exercises. Why?
Because, well, you should already know that they were going to look for the crown jewels. You should already know that they were going to try to send in some sort of phishing email. You should already know that they were going to try to impersonate an executive and do some sort of BEC type of compromise and then try to get you to wire them money or something else. These are already scenarios we were preparing for, and as Matt always loves to say, you know, new facts, old patterns, right? That's really what that comes down to.
But then there's an aspect that actually does have to change with respect to incident response, and this goes back to some of the things Emily was talking about in her presentation. When you have the ability to mimic, in near real time, an executive and potentially their participation in incident response, how do you tell the real McCoy apart from the other real McCoy, right? Because they both look like the real McCoy now. There isn't an obvious fake anymore. We actually tried this exercise not too long ago at a briefing in the U.S.
with a cybersecurity agency, and people actually could not tell the two apart, the real from the fake. Some of these fakes are getting that sophisticated. And so when you start to see that occur in real time, we're going to have to think about new procedures: how do we authenticate or validate who this person is? How do we validate that this is the person from whom I should be taking orders, versus someone who might potentially be an impersonation of some sort?
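One possible pattern for the authentication problem raised here is a challenge-response over a secret shared out-of-band well before any incident. The Python sketch below is a hypothetical illustration only; it is not ArmorText's product, nor any specific agency's procedure, and all names and parameters are invented.

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> str:
    """One-time challenge the responder reads aloud on the call."""
    return secrets.token_hex(8)

def expected_response(pre_shared_secret: bytes, challenge: str) -> str:
    """Both sides compute this independently from the shared secret."""
    digest = hmac.new(pre_shared_secret, challenge.encode(), hashlib.sha256)
    # Truncate so a human can read it back over voice or chat.
    return digest.hexdigest()[:8]

def verify(pre_shared_secret: bytes, challenge: str, spoken_response: str) -> bool:
    """Constant-time comparison of what the purported executive read back."""
    return hmac.compare_digest(
        expected_response(pre_shared_secret, challenge), spoken_response.lower()
    )

# The secret would be distributed during onboarding, not mid-incident.
secret = b"distributed-out-of-band-before-the-incident"
challenge = issue_challenge()
print(verify(secret, challenge, expected_response(secret, challenge)))  # True
```

In practice the challenge and response would travel over a second, independent channel, which is exactly the out-of-band capability discussed later in the panel.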
So there are things that are going to stay absolutely the same, and there are things that we actually have to start to think about that are going to completely upend our existing procedures and policies during incident response. And for those who think this is all far-fetched and futuristic, I can tell you we recently simulated, in various tabletop exercises, almost everything that Emily mentioned.
So, I mean, hopefully you take notes during ours as well, but I definitely hope you took notes during her talk. Right, right. So as you mentioned, you have hundreds of customers on the technology side, and probably even more legal use cases. They're from different industries, from different countries, facing different challenges. So I wonder, can you talk a little bit about how this guide, which in the end is one document, can be modularized, adapted, or rearranged to address that broad multitude of use cases?
Yeah, so hopefully the way that you take this guide and incorporate it, or use it to inspire your own tabletop exercises, is really about getting organizations to have the opportunity to work through these challenges together. If I'm looking at themes across the exercises, in my role as legal counsel, often planning and facilitating these, I think the one aha moment is almost always around connecting the dots, right?
As you have these different roles, and not just, you know, the tech folks, but communications, HR, your leadership, people from your business, working through it, they start seeing how the dots get connected. Because there are very often things that, viewed in isolation, look very small, right? We've had four user accounts compromised. That probably happens all the time if you're a large organization. But then simulating out the different things that may have occurred using those four accounts can turn into a big snowball effect, right?
If one of them is being used to impersonate a CEO and another is being used to farm information for, you know, fake announcements, all the different things that you can do with these new tools, the aha moment is seeing how something that was very small in isolation played into something much larger as it played out. And having that visibility across your different functions, and all of the experience and insight they bring to the table, matters. Because your technical folks shouldn't be expected to have deep communications experience. That's not their role.
Similarly, your HR folks are not technical experts and wouldn't be expected to be. So having that environment where you have the opportunity to work through things in a non-punishing setting, without the stress of a real incident, really gives an opportunity for a lot of, you know, thought leadership within your company, a lot of chances to discover gaps. And all that comes together because, at some point in the exercise, something that seemed very small turned out to be something very big once all those pieces came together. Right.
So, Navroop, I guess, just like Matt, you probably had a lot of learning moments during those exercises. But can you maybe share something you were not expecting to learn at all, something surprising? Yeah. One of the interesting things when we first set out to help write this guide was that we were encountering a lot of counsel, actually, who were coming back and saying they had received some unexpected responses, often only by chance, during their last tabletop exercises or simulations, around how it is they would communicate during an incident. Right.
There's this general assumption, and it's not hard to imagine why: an assumption that redundancy in our communications capabilities, the fact that we have email, Teams, Slack, Zoom, and 14 other applications, meant that we wouldn't have to worry about how we would communicate when an actual incident occurred.
Well, back in 2017, you were up against two types of adversaries. One that was just passively listening in and not making its presence known. And the other that was principally looking to bring your communications down. Right. NotPetya was an example of the latter. They brought it all down. Mondelez suddenly had to turn to WhatsApp and phone trees to re-engage the business and get operations moving again. But today's adversaries are actively mining your collaboration applications for credentials.
They are actively observing your playbook from your previous responses. And they're actively listening in on your incident response communications. Right. Whether it's on chat, or they're joining your calls and now taunting your executives or your incident responders. And in so doing, they're creating more chaos. And what you have now is a bad assumption about your ability to communicate on these kinds of capabilities, because you won't actually be able to use them for incident response under those scenarios. Right.
And so that false assumption was one of the things that worried counsel, because then they asked, well, what would you turn to instead? And the answer they got back from the IT folks was, oh, well, we would go to WhatsApp, Signal, Telegram, iMessage, some litany of consumer-oriented applications.
Well, we actually had a real-life example of a ransomware attack discussed earlier in this room. And they ended up with nothing, because everything was either corrupted, unavailable, or simply never planned for. And if I remember correctly, they used a test instance of Microsoft Teams, because they happened to have it rolled out previously for some developers. And then they spent days rolling it out to the entire workforce. Probably not the greatest example to learn from, but at least a real-life scenario showing that, yes, this absolutely happens, not just during tabletop exercises.
Right. You need something like that in place. And the challenge is, if you do happen to turn to those consumer applications, and this is where counsel was suddenly saying, wait a minute, we now have a problem: there's no centralized user management. There's no centralized policy enforcement. There's no centralized governance, and there are no audit trails. You can't go back later and prove who said what to whom, when it was said, or how it was consumed. And that becomes quite problematic under those scenarios. Right.
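To make the audit-trail gap concrete, here is a minimal, hypothetical sketch in Python of an append-only, hash-chained log that records who said what to whom and when, the sort of record consumer apps typically cannot produce. It is illustrative only and does not describe any vendor's implementation.

```python
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, sender: str, recipients: list[str], message_id: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "ts": time.time(),         # when it was said
            "sender": sender,          # who said it
            "recipients": recipients,  # to whom
            "message_id": message_id,  # reference to what was said
            "prev": prev_hash,         # link to the previous entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Detects tampering with or deletion of any earlier entry."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("ciso@example.com", ["counsel@example.com"], "msg-001")
log.record("counsel@example.com", ["ciso@example.com"], "msg-002")
print(log.verify_chain())  # True
```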
And so that was one of those interesting lessons learned: what are you going to do in those moments? How do you put your incident response plans into motion and continue to operate under those conditions, when you can't use the in-band technologies that you thought would be there for you, because they've been compromised, and when your consumer applications really aren't technically up to snuff? What do you do? Right.
And that's where vendors like us come in, because we produce a technology that is designed around secure communications and collaboration under duress, right, for incident response, for your security operations, for your threat intelligence sharing. And one of the last things I'll end on here is the regulatory drivers. When we started writing this guide, not all of these were quite there.
Now they definitely are. NIS 2, Article 21, Paragraph 2(j), literally starts to define what an out-of-band collaboration capability has to be for the entities covered under NIS 2. Then in a separate area of NIS 2, the CSIRTs are also being told they have to be prepared with, effectively, a federated secure information sharing capability that they can operate internally within their member state for their companies, but also operate amongst each other to federate with the other member states in the EU.
And so there's a lot more attention on this today than there was even a year ago. Okay, great. So do we have any questions from the live audience at the moment? We do. So a big problem with game-based scenario training like this is that you're preparing for yesterday's attack, because that's the input that goes into the scenarios. What are you seeing, and what are you guys doing, especially in light of the generative scripting and things we just heard about in the last talk?
How can we start to prepare for tomorrow's attack faster than our adversaries are writing tomorrow's attack? So I'll take the first crack at that one, yeah. I can speak from our experience. We're often working with technical vendors, ArmorText is one of them, and other experts in the field to try to understand what's coming. And also because, you know, my day job is handling incidents. Navroop's going to joke that I've been doing a ransomware negotiation all day. This is my day job. So we're taking those lessons and seeing the iterative nature of these attacks, the new twists.
You probably saw that BlackCat, one of the ransomware operators, filed their own complaint with the U.S. Securities and Exchange Commission today about one of their victims. So that's a new wrinkle, if anybody had that on their bingo card for the day. Seeing these new twists and turns, but incorporating these new facts into the old patterns, because if we're looking big picture at an attack, a lot of it doesn't change all that much. It's the twists, the turns, the tools, and, you know, the targets.
But at the core, they're after something valuable, right? Either access somewhere else or something they can monetize that you have. And it's just understanding how to put these things together and how to push and pull with an organization. Because I'm not a technologist by training originally, and nobody wants my hands on a keyboard anymore, you know, setting up their shop. So what we're doing is focusing on organizational preparedness, right? If you learned to code once upon a time, you learned the theory of coding, but you probably didn't focus all that much on the syntax.
It's kind of the same idea here. You're learning how to respond, how to work through these issues, and building that capability in addition to all your technical response pieces. So I'll take the follow-up to that, though, which is, to your point, the modules themselves are going to go stale if they're out there for too long. And so from our perspective, this is something that we want to, frankly, be publishing a lot more often. This is not a one and done.
We actually plan to stay ahead of the curve on this by thinking like the attackers and dreaming up what we would do if we were the bad guys, right? So I have a list of predictions on where some of these will go, and unfortunately, two of them came true this morning, so I can't reuse them, right? To your point. But a whole bunch of the others have not yet happened. They do, however, set up novel scenarios that you're going to have to adapt to. It's still new facts, old patterns, but they force you to think about the problem in new ways.
And so we plan to make this an iterative thing. The goal is not just to iterate and release more regularly, but also to make adaptations of this kind of work for specific, nuanced use cases, right? Like the processes an MSSP has to think through when it does joint exercises or joint incident response with its customers are very different from those of a single company acting alone, right? Thinking about the impacts on supply chain and secure information sharing, or threat intel groups, is also very different.
There are adjudication problems that come up from a legal perspective that none of them have thought through yet. And so there's a lot more work to be done, and we plan to continue to give back. So I want to ask a question in the random acts of budgeting category.
This goes to the question of encouraging companies to do this, because with budget, it's always going to be a push, right? Especially in the U.S.
So, you know, the Dodd-Frank financial reform legislation made the CEO liable, and it didn't make the CEO smarter, it just made the CEO yell louder at the finance people so that they make sure things happen. And cybersecurity insurance is not so well developed right now; companies have trouble getting it, and what does it even cover? Has anybody talked about D&O, directors and officers insurance, as a vector to try to do what Dodd-Frank did, but not do it statutorily, do it through the D&O category?
So you're basically saying, look, if you want insurance for this, we'll give you cheaper rates if you go through this exercise, so that the C-suite is encouraged to push it. Or do it through banking: if I want to lend to you and you're doing this, I'll give you a better rate because I know the security is better. Just speaking broadly, we're already seeing that trend among counterparties, in banking, and we see it in the M&A context. So buyers are definitely looking to see if you've done this.
Within insurance, for better premiums, better coverage, better whatever, to access that there is some requirement to demonstrate that you're prepared. And again, this is why we've been so focused on tabletop exercises, and why I'm very focused on them in my own practice: some of these are at different levels of complexity and tailoring and all that, but relative to other spend, it's a relatively limited spend with a lot of value.
And that also, in many cases, helps focus the rest of your spend on the things that are helpful, or things that were a gap, or reinforcing things that were working, or winding down things that weren't. It's really an exercise that can very much be the glue of an organization's culture of cybersecurity. And we're seeing that rewarded by all of these other counterparties. Do I think it'll start being required? I don't know. Do I think it's a good idea?
I mean, we clearly do, because we're lawyers giving it away, and it's really a priority for us, and we think it should be for organizations out there. You know, with my former turban on, right, from my consulting days in security advisory, we would keep trying to drive home this point that if you run these exercises correctly, you can actually leverage them to better prioritize your spend, right? No one's got infinite budget, but we can start to produce that matrix that helps us understand which solutions are actually going to help us across the bulk of these scenarios, right?
Probability and impact as well. You start to put those things together, and that prioritization is gold. But I also think there's something else happening right now. The personal liabilities that are coming down the pike for executives, right, the very real financial liabilities, the possibility of being blocked from serving on another executive board here in Europe for some period of time after a breach, are going to drive a lot of change and a lot more respect for that process.
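A minimal sketch of the probability-and-impact prioritization described above: score each candidate control by the scenarios it materially helps with, weighted by how likely and how damaging each scenario is. The scenario names, controls, and numbers below are invented purely for illustration.

```python
scenarios = {
    # name: (probability 0-1, impact 1-5)
    "ransomware plus comms takedown": (0.30, 5),
    "deepfaked executive on IR call": (0.15, 4),
    "BEC wire-fraud attempt": (0.40, 3),
}

controls = {
    # control: scenarios it materially helps with
    "out-of-band communications": {"ransomware plus comms takedown",
                                   "deepfaked executive on IR call"},
    "payment verification workflow": {"BEC wire-fraud attempt"},
    "identity verification procedure": {"deepfaked executive on IR call",
                                        "BEC wire-fraud attempt"},
}

def score(control: str) -> float:
    """Sum of probability * impact across the scenarios this control addresses."""
    return sum(p * i for name, (p, i) in scenarios.items()
               if name in controls[control])

# Highest-scoring controls are candidates for the next round of spend.
for control in sorted(controls, key=score, reverse=True):
    print(f"{score(control):.2f}  {control}")
```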
And that's actually part of why this guide itself is so heavily focused on the executives and not just the technologists. If you look at the bulk of the question material in here, it's focused on the executive tiers and audiences and the non-security, non-IT functions of the business. And one of the things these exercises can do is help organizations get out of that mindset of, you know, I have to CYA because this is my responsibility, and instead change the focus.
This is an organizational responsibility and an organizational role, not just your CISO's, not just your IT manager's, not just legal's, whoever. It's encouraging all these functions to work together, which again gets to the organizational response, not the response of executive A, B, or C. Okay, but this will be the last one. Yes.
Yeah, it's definitely an emerging trend. The focus on supply chain risk, I think, isn't news to this room. It's huge and growing. It's certainly a place we're planning for as we iterate through this, and hopefully this partnership will continue.
But yeah, it's definitely a focus. It's definitely growing, and we're seeing a lot more of those interactions across the supply chain. The government contracting community in the U.S., and the way a lot of that liability is being flowed down the supply chain, has driven a lot of that, and we're seeing it picked up elsewhere as well. Matt's laughing because I have a master matrix of all the different guides we want to write like this, with different nuances, and one of them is absolutely supply chain, because it builds on something that we already do, right?
We work quite heavily with threat intelligence sharing groups. A lot of them are interdependent parties, typically within the same sector, working on collective defense, but the lessons learned there are directly applicable to the intelligence sharing that needs to occur within your own individual supply chain. And so taking those lessons learned and feeding them in here, and then raising some of those same questions around liabilities and adjudicating the different types of requests that come up, can absolutely be reapplied, so it's definitely an area we want to focus on.
Okay, awesome. Well, thank you very much, Navroop and Matt. Thank you.