Hello, and welcome wherever you are to our webinar today. Today's webinar is supported by Beyond Identity, and as you can see on the screen, we're going to be talking about security in DevOps and, more specifically, securing the software supply chain. My name is Paul Fisher, I'm Lead Analyst, and I'll be joined on the webinar by Jason Casey, who's the CTO of Beyond Identity. Just before we get into the actual content, a few housekeeping notes. You as the audience are muted, and you don't need to mute or unmute yourself.
We'll do a couple of polls during my presentation, and we'll look at the results during the Q&A session. That is also your opportunity to ask questions around this topic, particularly of Jason. And finally, the webinar is being fully recorded and will be available on our website soon after this live session, and the slides will also be available for download to registered attendees. So that's where we're going; now the agenda. I'll be talking a little bit about the culture that we find ourselves in, in IT, at the moment.
After that, we'll be talking about some software supply chain attacks and risks, and then Jason will be talking about how to protect against this relatively new threat vector.
So, culture. Before we get into that, let's have our first poll, which you should see on the screen now. Do you think that your DevOps or coding teams are out of control, or perhaps not fully in control, which might be a better way of putting it? The options are: yes, we have no idea what they're doing; yes, but we have put in place policies to manage DevOps;
no DevOps, no risk to the business; we don't have a DevOps team; or, what is DevOps? So if you could start answering that, we'll take about 30 seconds for the votes to come in. Good to see a good number of you here today.
While that's open, just to repeat, the options are: yes, we have no idea what they're doing;
yes, but we have put in place policies to manage DevOps; no DevOps, no risk to the business; we don't have a DevOps team; or, what is DevOps?
Okay, let's move on now to the actual presentation. This is a quote from a book about coders, which I actually have with me here on my desk. I want to show a little bit of a prop here. I don't know if you can see this, but it's by a guy called Clive Thompson, a very excellent book that delves into coding and coders, the culture, and the type of people they are. In it a coder is quoted as saying, "You're at it for hours," meaning coding, "you're not eating, you're not sleeping. It's like you can't stop thinking about it."
So for some of these people, it seems like it's not just a job, it's an obsession, and an obsession that they like to get right.
Sorry, there is actually an attribution: Ruchi Sanghvi, one of Facebook's founding coders, said that back in 2006. So these coders are kind of a special breed in our organizations. They increasingly tend not to work the normal nine-to-five hours that mere mortals do. They are target driven, often paid on a pro rata basis, and used to working in multiple coding environments.
So quite a lot of them are polyglots; they're multilingual in terms of the code languages that they can use and understand. And increasingly they work at home or remotely, which, along with everyone else since Covid, has obviously become the norm as more people do their jobs from home. They are big fans of Git and GitHub and tools like that, which is something Jason will be talking a lot about, along with some of the vulnerabilities in them.
But they also love to collaborate. They love feedback.
They're probably some of the most collaborative individuals in modern organizations, but that can also sometimes lead to risk, because their collaboration means they openly share things, because they know that sharing gets the job done. They're problem solvers and they like to innovate, but traditionally they're not really ones for security. It's not that they don't think security is important; they just feel it sometimes gets in the way of their work.
They would rather that security is taken care of by some kind of automation or by other software platforms, rather than having to think about it themselves. Now, this is a controversial stance: some people say that everybody in an organization should be responsible for security, while others say exactly the opposite, which is what I've just described.
I tend to favor the latter view, in that the more we can automate, and the more we can secure through dedicated, bespoke platforms, the better it is, and the more people can get on with the jobs they are paid to do, such as coding. And the last point there is that coders are an energy that now fuels pretty much everything else that happens in the IT environment, and wider than the IT environment, in the business environment.
We're talking about DevOps a lot, but DevOps originally, as many of you may know, was the coming together of two cultures: a developer culture and an operations culture. Previously the two had worked in silos and didn't necessarily get on or trust each other. The idea was that if the two could understand each other and work together, it would improve security and it would improve productivity.
That concept is now, I think, a little dated. The idea that everything was improved by DevOps, or by DevOps culture, is not necessarily the case, particularly in the complex cloud environments that we have now.
We have environments where clouds are managing other clouds, and much of this is being automated, et cetera. It's very difficult now for any centralized body to keep a full handle on this highly dynamic and highly important part of the business.
So DevOps has now become almost shorthand for developers, for rapid development, for coders putting software out, putting new applications out, and updating software on an almost continuous basis, rather than a culture. The culture, as it is, is pretty much confined to what developers and coders like, as I described just now. So where are we going from that?
Well, just one example, and this is a few months old: Lego, for instance, is said to be tripling its number of software engineers as it shifts its business towards the online world and probably things like the metaverse. So we've established that code is hugely important, that code is now highly dependent on cloud infrastructures for its life cycle, and that it's highly dependent on the coders and developers who are right at the heart of all this. So what are the software supply chain attacks and risks that exist right now? There are a number of risks in the software mix.
Open source code is one; then there is the usage of third-party libraries, bugs in commercial software deployed, and of course in-house software deployed, which is what we've been talking about, as well as software that the business itself creates and sells to customers. There is also danger in software that is being used unaudited, or as shadow applications, purchased not by central IT or authorized by it, but by lines of business themselves.
And finally, we are seeing that poor identity and access management hygiene, or at least poor attention to identity and access management, means that the wrong people, or even malicious outsiders, have access to what the devs are doing: to containers, to cloud, et cetera. So we need to improve access management as well as securing the code itself. Whenever people talk about the software supply chain, and I apologize if you've heard this before, the SolarWinds attack really did bring home why this is important.
The SolarWinds software was hit by what has been called dependency confusion, where malicious public libraries were used for 18 projects instead of the trusted private libraries. So when the software was sent out or updated, it was basically corrupted, and it meant that the software SolarWinds was sending out to its own customers was damaged. And so it carried on.
The same thing happened in a similar attack, where the software supply chain was attacked and manipulated, malicious code was injected into the software, and that was also sent out to third parties. This shows that supply chain attacks don't affect just your organization.
If you send software out to customers, or if your business is software, there's a great danger that you'll then pass the compromise on. And it's not inconceivable that this is also in the imagination of hackers: that they would then be able to infect the customers of that software, and so on. So this is a serious business, it's a serious threat, and we need to take it seriously.
Right now, I think our current security policies, tools, and practices are not really up to speed with the threat of supply chain attacks.
Jason can give you more detail about this, and about how dedicated new types of cloud-native software can help. Traditional vulnerability tools cannot detect supply chain attacks, which exploit trusted software artifacts rather than what we know as traditional vulnerabilities.
Crucially, and this is crucial to this presentation, established continuous integration and continuous delivery (CI/CD) and DevOps pipelines rely on implicit permissions to enable rapid deployment, implementing security controls at the end of the process. That means there is a high level of trust in this whole process, when any code that is delivered by the developers actually needs to be proven to be authentic, and proven to have been created by people who are trusted and authorized within the company. Without that, these types of attacks are surely going to increase.
So we need new protective models to address the unique characteristics of such supply chain attacks.
Before I get into the last section of my part of the webinar, here is a poll that is relevant to this, because it is the usage and proliferation of cloud that have made software supply chain attacks more likely and more serious. So we're asking you: how many different cloud service providers do you use? And we're talking about public clouds here.
Do you use only one? Do you use only AWS, Azure, and GCP? Do you use more than three, not including AWS, Azure, and GCP, or more than three including AWS, Azure, and GCP? So you might use the big three plus perhaps Oracle, or IBM, or some other smaller cloud provider such as OVH, et cetera. Or perhaps you have no real idea how many cloud services are in use at the moment. So let's give you some time to vote.
So, how many different cloud service providers do you use?
Just one; only AWS, Azure, and GCP; more than three, not including those three; more than three, including AWS, Azure, and Google; or no idea. I think we've let that run long enough, so we'll look at the results of those polls, as I said, in the Q&A section at the end of the webinar. Here's another interesting quote, from Bram Cohen, the creator of BitTorrent.
He said: I tell people, always take pride in your code. You should always be refactoring it; if you find a bug, you hunt it down and kill it. That, I think, is a rather idealized way of seeing things. In an ideal world, that would happen.
In practice, it probably doesn't happen so often, in as much as code does go through with bugs, et cetera. Or, as we've been discussing, it may have bugs added to it deliberately.
So we need to work more closely with developers, get inside their minds, and understand what drives them and how they see their role in the organization. Your application security and IT security teams need to work with development and coding teams, not against them, not just sending out security instructions which don't take account of the way they work, the environment they work in, or the culture they work in.
You need to understand how they work, understand their techniques and how they use source code, et cetera. And don't blame the developers for supply chain attacks.
That is a truism of, I think, any part of security: in the cybersecurity community we sometimes tend to scapegoat the companies that suffer attacks, or the individuals who were supposedly in charge of that department, et cetera. Let's not forget that it is always the criminals who are to blame for malicious attacks on business. But anything we can do to reduce the risk has got to be good.
And I think that these four working points go some way towards that. Something else that is increasingly apparent in organizations, particularly in the United States, and particularly if you work in areas such as defense or government, is that customers are starting to ask for a software bill of materials (SBOM), which shows exactly what software is being used in the software supply chain across the organization.
If you can't verify that, if you can't produce a bill of materials, then you will probably find that you are not able to work for certain customers.
But it can also help speed up the response time of the security or remediation team in getting rid of vulnerabilities or bugs when they're discovered. An SBOM is something else on the compliance scale that at the moment is voluntary and not legislated, but it's probably only a matter of time before it is put into some kind of law that companies must be able to account for the software and the code that they use in the end-to-end software supply chain.
And that would include things like the software updates that get sent out, which are the very part that is vulnerable to attack. So you really must think about the end-to-end software supply chain of your organization, every part of it.
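To give a purely illustrative sense of what producing an SBOM can look like in practice, here is a minimal command-line sketch. The webinar doesn't name a tool, so the choice of the open-source syft generator, and the exact flags, are assumptions and vary by version:

  # Generate a CycloneDX-format SBOM for the current source tree (assumes syft is installed)
  syft dir:. -o cyclonedx-json > sbom.cdx.json

  # The same for a container image you ship to customers (image name is a placeholder)
  syft registry.example.com/example-app:1.2.3 -o spdx-json > sbom-image.spdx.json

  # Quick look at which third-party components ended up in the build (assumes jq is installed)
  jq '.components[] | {name, version}' sbom.cdx.json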
So before I hand over to Jason, just some quick things to think about. Software supply chain attacks are a growing problem; there's no doubt about that. Attackers feel they have struck a rich and unprotected seam, and for them it seems like a new frontier.
At the moment this kind of attack is relatively hard to detect, and probably less expensive for attackers than other methods, so we're going to see an increase in this type of attack. You should think about an SBOM, as I said; it may soon be written into new compliance regulations. And investigate more secure endpoints and remote workflows for developers and others.
The remote endpoint is no longer a temporary solution; it's often a permanent one. A remote desktop or remote laptop is still part of your organization and should be treated as such, in many ways, including things like sandboxing, siloing, and isolating anything that could easily be attacked from outside. Finally, though, ensure that all of this doesn't impact or limit developers' productivity and creativity, because they are the ones driving our businesses in terms of new products, new services, new solutions, and new insights.
So work with them, don't limit them. I'll hand over now to Jason, and as I said, we'll look at the poll results after that.
Thank you, Paul. So my name's Jason Casey, I'm the CTO here at Beyond Identity, and we have a product that we call Secure DevOps that's very much focused on the software delivery life cycle, really helping you secure the software development and delivery life cycle. But to start with, we have to have a couple of concepts in mind.
For those of you who are developers, I apologize, I know you already know all of this. For those of you who aren't, there are a couple of key things you really need to understand to dig into provenance and custody of code, and where these attacks open up. First and foremost, any and all modern development involves some form of repository. When you write code, you're almost always collaborating with other people, and even when you're not, you still want to commit your code to a repository.
So you can have revisions, you can go back in history, you can create branches and experiment with new ideas that may not work out, then kill the branch and go back to the original. And the repository technology that's really become synonymous with this today is based around a tool called Git. I'm sure you've heard of GitHub and GitLab and Bitbucket; Azure has a product that's been around for a bit called DevOps, and AWS is the one with a newer product. These are essentially SaaS-hosted Git services.
So what is Git? How do we conceptualize it? The best way of thinking about Git is that it's a database, a database that I commit my software into, and every time I make a commit, the database is keeping track of the history, and I can create branches of history, I can merge history, et cetera.
But the key idea is all of my code is in this repository.
Now, something that's not always obvious to people, and this is important for understanding the rest of the talk, is that Git is a distributed database, not a centralized database. So while these cloud instances are hosting Git for you, it's probably not proper to think of all of your code as living in the cloud, with you just pushing to that cloud database. In reality, the way Git works is that every developer has a complete replica of the database on the local machine they're actually developing on.
And when they're making commits, when they're changing code, they're making changes to that local database, that local repository. Then, less often and more infrequently, they'll synchronize that repository: they'll push it to the cloud instance. So this is a key idea.
Number one, all of your software is in a repository. For the most part, these repositories are based on a protocol and technology called Git, and you can think of it as a database, but a distributed database.
Every developer has a complete replica of the entire database, which they try to keep up to date on their local machine.
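As a minimal illustration of that local-replica model, using standard Git commands (the repository URL and branch name are placeholders):

  # Cloning replicates the entire database, full history included, onto the developer's machine
  git clone git@github.com:example-org/example-app.git
  cd example-app

  # Commits only change the local copy of the database; nothing has left the laptop yet
  git checkout -b experiment
  git commit -am "Try out a new idea"

  # Synchronization is a separate, less frequent step that pushes local history to the hosted instance
  git push origin experiment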
Okay, Paul covered this a little bit, so we'll go through it quickly. But remember, when I write code, that code is originally source, and that source is run through a build process. You've probably heard of CI/CD, or build tool chains, or whatnot. Where we've seen adversaries interact with the ecosystem, and I'll give you a more formal framework in a minute, is by injecting their code, or variations on the developer's code, prior to the build process.
Now, what comes out of the build machine typically gets linked against third-party dependencies.
That typically goes through something called distribution, where it's signed, and it's signed by a code-signing certificate of the software vendor, and then eventually it's distributed to its customers. Where we see adversaries starting to exploit the system is kind of like the old movie where the prisoners escape the prison by riding out in the laundry basket, right? The prison doesn't do its own laundry; the prison has a laundry service, and the laundry service is a viable pathway in and out of that barrier, the security perimeter.
So essentially the prisoners use that established mechanism as a way of getting a free ride out of the prison. Well, in this case we're not escaping a prison; we're actually getting inside a target or victim's infrastructure.
And one of the easiest ways of getting inside a victim's infrastructure is essentially to ride in on the clean laundry, to ride inside the software that the victim has already vetted and decided to deploy in their system. So what does that actually mean?
Put a different way: when we think about security, we can no longer just think about securing our environment. We also have to think about how our environments are intrinsically interconnected, right? When you're deploying software from a third-party software vendor, or when you're connecting to a SaaS service, you're effectively connecting your environment to theirs. So the software they are delivering to you, or the software you are delivering to them, now has to come under consideration in your actual security architecture, your security thinking, your security planning.
Put another way, security has to exist in the CI/CD pipeline, and it really needs to exist as early in the pipeline as possible to make sure the cost of bad things is as low as possible to you and to your customers.
So this is just a picture showing the standard software development life cycle.
Really, it's showing that, from an attacker perspective, the attackers are shifting left, i.e., they're looking for ways of riding your software into an organization. And, we'll talk about this a little bit more later, they don't necessarily have to implant code in your software to corrupt your customers. They could do something as simple as downgrading a dependency that your software depends upon, where they know there's a vulnerability in that downgraded dependency.
And as soon as the target customer deploys this new version of the software, the attackers have an exploit that they know works, right? So attacks can be nuanced, and there are always many paths to the destination, but the key issue in all of this is positive control over your software development life cycle.
So how do we actually analyze our software development life cycle? This is a diagram pulled together by an open-source effort called SLSA.
SLSA, Supply-chain Levels for Software Artifacts, is focused on analyzing and providing frameworks for managing software development life cycle threats. Here you see a very simple, almost cartoonish version of a software development life cycle: I have source code, my source code is committed by developers, my source code is then built, it's linked to third-party dependencies, it's packaged up, and it's delivered to customers.
Now, the first version of supply chain threats that we saw really focused on packaging. Basically, before packages were signed, it was pretty easy to forge the name of a well-known piece of software, phish a target, and get them to install it. Then package signing became a thing, and adversaries looked for ways of stealing the signing keys.
Then we started protecting signing keys a bit better, and adversaries looked for ways of compromising the third-party dependencies that were going to get built in.
And with SolarWinds and SUNSPOT, we actually see adversaries focused on step (c), the build itself: they were inserting their code, swapping out code for what was actually submitted by legitimate developers, right at the build phase. So there are solutions that can help you secure packaging and delivery, right? Code-signing keys, and storing those appropriately.
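For the packaging and delivery end of that, the long-established pattern is a detached signature checked against the vendor's published code-signing key. A generic sketch with placeholder file names, not specific to any vendor mentioned here:

  # Import the software vendor's published code-signing key (placeholder file name)
  gpg --import vendor-release-key.asc

  # Verify the downloaded artifact against its detached signature before deploying it
  gpg --verify example-app-1.2.3.tar.gz.asc example-app-1.2.3.tar.gz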
There are solutions you can buy now for analyzing your third-party dependencies and making sure you're only building against well-known components, and even scanning them, if they're open source, from a known-vulnerability perspective. What we're now going to focus on is really source threats. A source threat is all about: what if my repositories are actually attacked? What if the adversary wants to change some of the dependencies that I depend upon in my source code, or wants to inject their own code? How do I handle this problem?
How do I have provenance? How do I have a chain of custody? How do I only build things that came from my developers, on machines that have the security controls I expect?
So the code provenance challenge really requires us to understand Git a little bit. Remember, earlier I was saying Git is a database of software, but it's a distributed database; it exists on the machine of everyone who's actively developing. There are two phases of operating with Git. There's the synchronization phase, right?
Making my database synchronized with a remote database. And then there's the commit phase, when I'm committing new code to the Git repo. Developers typically use SSH keys for synchronization, to authenticate and authorize that they're allowed to synchronize their database with the master database, and vice versa.
However, developers generally do not do anything about code commits. So code commits are largely unauthenticated and unverified. If any of you are sitting at a terminal right now and want to try it out, feel free to open up a terminal, go into a repo, and just type "git log" and hit enter.
What you'll see is a list of the most recent commits, attributed to the authors the system thinks made them. You'll probably see a couple of variations on one name, and in a couple of places you'll probably see commits that don't have a name associated with them at all.
When you go to synchronize those commits and you use a proper key, the synchronization is going to work. But that doesn't mean anything, and doesn't say anything, about the origin of the code; it just speaks to the synchronization of the database. The origin of the code is largely left unaddressed. So from an adversary's perspective, it's fairly easy to submit rogue commits, to commit code as an impersonation of actual developers. And up until recently, the only way to actually authenticate and integrity-protect a Git commit in a cryptographic way was with a PGP key.
Now, Git recently allowed SSH keys to do this as well, but it's a fairly new feature. So again, the real challenge we're talking about here is: how do you actually prove the source commit came from the developer you expect, and on a machine with the security controls you expect?
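To see both halves of that for yourself, here is a quick sketch using plain Git; the name and email are deliberately fake to show that authorship is self-asserted, and nothing here is specific to any vendor's tooling:

  # Author and committer identity are plain, self-asserted strings: anyone can claim to be anyone
  git -c user.name="Some Other Developer" -c user.email="someone@example.com" \
      commit --allow-empty -m "Looks perfectly legitimate"

  # List recent commits with their claimed author and their signature status
  # (%G? prints N for no signature, G for a good verified signature, B for a bad one)
  git log --pretty="format:%h %G? %an %s" -5

  # Git can verify the signature on an individual commit, if one exists
  git verify-commit HEAD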
So, the solution we pulled together. We believe in a couple of things. To start with, we believe that humans shouldn't manage keys. Humans should have software authenticators that manage keys on their behalf.
This leads to fewer mistakes, like private keys ending up in a public repo or accidentally getting left in public files, that sort of thing. Number two, we believe these software authenticators should only manage keys inside secure enclaves. A secure enclave is a type of processor that guarantees a key can't actually leave that secure processor. So rather than checking the key out to do something with it, you send a little piece of data to the processor saying, sign this data with that key.
This shrinks the surface area for attacking that key to something incredibly small.
And there are other controls you can introduce as well. We do this in what we call our platform authenticator. The next thing we do is manage and generate GPG keys on behalf of that developer. These GPG keys are cryptographically related to that developer's identity, and we insert ourselves in the Git configuration as a commit signer.
This means every time a developer goes to make a code commit, our signing agent uses that GPG key to sign the commit. So now, in my local repo, in my local directory, I have a series of commits that are cryptographically signed, signed over and sealed relative to my corporate identity. And if we wanted to, we could even add a device policy to that as well.
So not only is it proving the chain of custody to the developer, but it could also include information about the device.
Did the device have CrowdStrike running on it? Was it running the CrowdStrike policy that we expected? Was the OS patched at the appropriate level?
So all of these things can simply be tied into the software development life cycle without actually changing the workflow of the software developer, which, as we talked about earlier, is very important, right? Developers are a finicky bunch. They have a workflow that works for them, and they don't necessarily want to change it. How do we make security more automatic, as opposed to making them take unnatural steps relative to the workflows they've already established?
And we think introducing platform authenticators and signing agents is one of several things we can do to really help secure the front end of the SDLC process.
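To make the mechanics concrete, here is a generic sketch of how commit signing is wired into Git configuration. This is standard Git, not Beyond Identity's actual product configuration; the key ID and agent path are placeholders:

  # Tell Git which key identifies this developer and sign every commit by default
  git config --global user.signingkey 3AA5C34371567BD2        # placeholder GPG key ID
  git config --global commit.gpgsign true

  # A signing agent can hook in here: Git calls this program instead of plain gpg
  git config --global gpg.program /usr/local/bin/example-signing-agent   # placeholder path

  # Newer Git versions can instead sign with an SSH key held by an authenticator
  git config --global gpg.format ssh
  git config --global user.signingkey ~/.ssh/id_ed25519.pub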
So this is just a quick look at what this even looks like from a log perspective. Quite simply, when you have a signed commit, it's not just signing the hash of the commit; it's also attaching some metadata that helps you answer those questions of who the developer was and what the state of the machine they were actually developing from was.
Some of the things we're adding soon: today we support GPG keys, and we'll be adding SSH keys, which will automate the database synchronization function so the developer doesn't have to manage, move, copy, or worry about permissions on their SSH keys. So again, reducing the surface area.
And we're also now starting to provide some packaged scripts so that it's easy for you to plug our scripts into your DevOps environment. So when you're going through CI/CD and beginning preparation to build the code, you can get very simple flags, notifications, and alerts when integrity is violated, or when authorship steps outside of the policy you are actually expecting on the packages you happen to be building.
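As a rough sketch of what such a CI gate can look like using plain Git, here is a generic illustration rather than Beyond Identity's packaged scripts; the commit range is a placeholder that your CI system would supply:

  #!/usr/bin/env sh
  # Fail the pipeline if any commit in the range being built is unsigned or unverifiable.
  RANGE="${RANGE:-origin/main..HEAD}"   # placeholder, normally supplied by the CI job

  bad=0
  for status in $(git log --pretty="format:%G?" "$RANGE"); do
    case "$status" in
      G|U) ;;            # G = good signature, U = good signature with unknown trust
      *)   bad=1 ;;      # N = unsigned, B = bad signature, E = cannot be checked, etc.
    esac
  done

  if [ "$bad" -ne 0 ]; then
    echo "Unsigned or unverifiable commits found in $RANGE; refusing to build." >&2
    exit 1
  fi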
So, just in conclusion, a couple of things you get out of this.
Originally we talked about the threat model across the SDLC: I have source threats, build threats, third-party dependency threats, and packaging threats. We're focused on source threats. We help you answer the question of who developed this code and under what circumstances, which today you often can't answer. And that gives you a chain of custody, so when you build software, you know exactly where it came from and under what circumstances, much like in a manufacturing industry, if you will. And that's the end of my bit.
Yes, thanks again, Jason, for that. Let's have a quick look at the poll results. The first question we asked was: do you think your DevOps or coding environments are out of control?
Well, one hundred percent of people said yes, but we have put in place policies to manage them. So at least that's something; thanks for that. And for the second poll, how many different cloud services: 50% use only AWS, Azure, and GCP; 25% use more than three, including AWS, Azure, and GCP; and 25% have no idea.
So, Jason, just quickly on those results: to be honest, we've asked that question before, and it's fairly typical that a number have no idea about their cloud usage. But perhaps more pertinent to this is that people recognize that they don't fully understand what their DevOps teams are doing, and they're trying to put some policies in place. I guess in your view, that probably isn't quite enough?
If you don't know, you don't have positive control over your SDLC, right?
Yeah, absolutely. So, what would you say are the common vulnerabilities in the SDLC?
So the simplest, the first step everyone has to take, is signing keys. Do you actually sign your distribution, and do you actually protect your signing keys properly? The next step is actually analyzing your third-party dependencies. One of the hallmarks of modern software development is the growth in third-party dependencies that we all use to actually get things done. It's also the Achilles' heel of modern software development, in terms of where all the vulnerabilities lie.
So having an SBOM, as you talked about before, and actually having analysis of the SBOM, that's step number two. Step number three, and this is where we focus, is knowing what you're building and knowing who it's coming from. How do you actually know that the code is coming from developers X, Y, and Z?
How do you know that it's coming from a machine that has the security controls you actually expect? The only way to do that is commit signing, and you need to do a little bit more than just commit signing. But that's where we're going, right?
So, packaging is the end of the process, right? I'm wrapping the present, and I need to know what goes into the present, and I kind of need to know who it comes from when we're dealing with, you know, critical presents. The analogy breaks down, but hopefully that makes sense.
Absolutely. We've obviously talked about commit signing a lot through this. Commit signing has been supported since 2011, and repository hosting services support verification. And yet, as we've seen, only a few developers sign their commits, and even fewer organizations require signed commits.
Maybe it's because the distributed nature of Git is not clear to everyone, and the requirement to sign in before pushing does not ensure that commits were made by that author. So where is commit signing going?
So...
Sorry, carry on.
Yes.
So I was just going to say, you've probably had diminishing returns for a while on commit signing, for two different reasons. Number one, for the most part, people are managing their own keys today, right? And when people manage their own keys, well, people are humans, humans make errors; error rates can be low, but they're still there. So when humans manage keys, they leave keys around, and leftover keys are kind of gold for doing bad things.
So number one, I guess, is that if I have to manage SSH keys and I have to manage GPG keys, it's a hassle, and it increases my surface area.
The benefit isn't necessarily there. With mechanized key management, I think it now comes into the realm of feasibility, especially when you're not asking the developer to change their workflow. The second part of it, and this is the education bit we're at right now, is that Git is a sophisticated tool that most people don't really understand, especially around the difference between a commit and a push, or a pull, or a synchronization operation, and the authentication and authorization model in play.
So most people have the misconception that the SSH keys they're using to push to a Git repo are authenticating their commits, and that's just not true.
So what kind of companies are adopting commit signing, then? Would you say it's typically financial services, or something like that?
Yeah,
Where we see the initial interest is in technology-driven companies and defense-related companies. And there's some early interest in, not public safety, but critical infrastructure.
Yeah, I mean, I would imagine software businesses or tech businesses are the ones most at risk of passing on problems to customers directly, so I imagine that they would be looking at this.
Contractor-heavy businesses, we see this a lot in as well: the larger the proportion of contractors in a workforce, the more interesting this becomes to management.
Yeah, okay. So one thing that people might say is that this would slow down the development life cycle, et cetera.
How have your customers actually experienced this in real life?
Well, it doesn't actually change the developer workflow, so the developer is unaffected.
From a CI/CD perspective, there is some setup work, so there is a small cost the group has to pay. But what they get out of it is rich, detailed audit and controls that now let them spend less time when they're interacting with audit, when they're interacting with the security team, and when they're trying to answer questions about the provenance of software and positive control of the SDLC itself.
Some people might say this is all great, except what happens if a bogus developer commits code which then seems like it's authenticated but actually is not? Are your customers using some kind of privileged access management or an IdP to authenticate the user in the first place?
So, the authentication in the first place is actually based on the workforce IdP of the organization, right? Typically you have a central IdP; that IdP delegates to our system. We then create this hierarchy of credentials,
not unlike a certificate hierarchy, if you will, that cryptographically relates an identity, a user, to each of the machines they operate from. So it's very, very difficult for a bogus authentication, if you will, to occur in the first place. You can have naked commits, right? Commits without signatures. And that's more likely what folks run into. Most organizations don't have strict no-build policies today; what they're doing is generating reports when they have naked commits.
The security team then focuses there to understand: why is that happening? Where is it coming from? What are the controls on the machine these developers are actually working on?
So it's more of a prioritization tool in this early phase of the product.
Okay. So finally then: even when recommended practices are followed, developers have access to the system, and their tools can be used to remove or modify infrastructure and applications, so losing these credentials poses a high risk in many organizations. What measures should they take to protect against unauthorized access to credentials, secrets, et cetera?
So, the cool thing about a system like this, right, and earlier on we said it's based on signatures and keys are kept inside enclaves, is that you can't remove the key from the enclave, right?
So loss in this scenario is device loss; it's not really credential loss. And the typical pattern there, as I said, is that identities typically have a constellation or hierarchy of keys and devices. So device loss is recovered from by the user just binding a new device and endorsing that new device using one of their known devices, or having one of their peers do that. That's actually a much less painful process than the old world of revoking, renewing, and calling a bunch of people.
Okay, thanks so much. We haven't got any more questions right now. On the screen you should see some related research, and I should mention that there is an Executive View which we've recently published in association with Beyond Identity, which you can find on the website along with some other related papers. Unless we get any more questions coming in, I think that's it. Is there any final recommendation or summing up you would like to give the audience, Jason?
Signing your commits is an easy thing to start with, using a mechanized process.
It's really easy to try the product out and see. It gives you an unparalleled level of visibility in the SDLC process, into what's going on in your build environment and your development environment, and also into the security controls on the devices that developers are using. So I'd just say, try it.
The cost really falls only on the security controls group, not on the developers.
Great. Okay. Well, it's a fascinating area, and one that I think will only increase in significance as time goes on.
Jason, thanks very much for being with us today, and thank you also to Beyond Identity for supporting this webinar. Thank you all for listening. As I said right at the start, if any of your colleagues wish to listen to the webinar, it will be available as a recording very soon, and the slides will also be available. So thanks, all, and have a good day or evening, depending on where you are. Thank you.