Please introduce yourselves briefly and give us a single statement: why are you here, and what do you want to communicate to us today?
Well, from my side: as an organization, at Service Layers, we use DevOps extensively, and that is the culture. DevOps, along with agile, is the only culture we have. A lot of the messages today really hit home, and I'm nodding my head a lot, because we do these things. So from my perspective, if I could say one thing: for us, security, like quality, is everywhere. It's part of everything you do, at all layers, in a pure DevOps environment.
All right.
Yeah. So, Kevin Bocek, I'm from Venafi.
You know, I started over 20 years ago as an application developer. So developers, and thinking about the application-centric approach, is where I grew up; not networks, not infrastructure.
I think, for me, it's: how do we help DevOps teams go really, really fast, but safely? So thinking, in security, more like Formula 1 engineers than tank drivers.
Okay. So my first question would be: in the last couple of sessions, we heard a lot of recommendations and best practices, and quite a few frank statements that it's all pretty damn difficult. What are your own views on that? What can companies do, or where can companies start, integrating their DevOps processes with security?
So what's the easiest way to get into this whole DevOps scheme?
Well, I think the best thing is making it easy and fast for developers. I mean, the best engineer is one that can use copy and paste; I used to be one of them. So any time you can help a developer go faster and easier, you've got a win.
And that's probably, again, where security teams can provide shared services that make it easier for developers, whether the risk is that they're gonna make mistakes, bring inconsistencies, or, God forbid, compliance issues or security issues. Just make it easier; that's where you're gonna get adoption first.
Okay.
Yeah.
I completely agree. And we've also seen that giving developers responsibility works really well. We've seen this in the unit test frameworks: developers are responsible for creating the code, but also for writing the tests for their code. We've then seen that developers are responsible for writing their service, but also for operating it. So they're responsible for downtime, and they need to resolve it themselves, because they created the code that caused the downtime.
We can very easily extend this now and say: you're also responsible not just for downtime and a misbehaving service from an application-functional perspective, but also from a security perspective. So to the point of it being difficult: well, it's difficult if you don't embrace it, and it's difficult if security is opposing developers. When security works with developers and they're all part of the same team with the same goal, then it's not so hard. It's just the way you do things.
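The ownership pattern described above, where whoever writes the code also ships its tests in the same change, can be sketched in a few lines. This is purely an illustration; the function and test names below are made up:

```python
import unittest

# The developer who writes normalize_email also writes its tests,
# including the failure case, in the same commit.

def normalize_email(raw: str) -> str:
    """Lowercase and trim an email address before it reaches storage."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError("not an email address")
    return email

class NormalizeEmailTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_email("  Alice@Example.COM "), "alice@example.com")

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            normalize_email("not-an-address")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

The same pattern then extends naturally to security: the test suite grows input-validation and abuse cases alongside the functional ones.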
And, you know, I'd add to that.
I mean, we looked at dev pipelines just a little while ago. These were not built overnight; they didn't just magically appear. So we shouldn't think that bringing security services to them is going to magically happen overnight with a nice architecture. It's about taking pointed risks, providing something that's faster and easier and that's gonna reduce risk as well, and building from there. So it's really pragmatic.
And again, we're working with engineers, they're not thinking about security. They're just thinking about speed.
Well, that's funny, because the engineers that I have all came from IAM, so the first thing they think about is security, and the second thing they think about is functionality. So the pure application developer and the application developer who came up through the security side of the world have different perspectives.
So basically your point is that developers have to be involved in this whole security game, if you will, in the process. The question is: how would you do that?
I mean, if you push it and say, "yeah, now you have an additional responsibility," they obviously won't be happy at all, and they'll probably try to sabotage it as much as possible. Or do you see security as a kind of...
Well, what do developers want to do? They want to write code that works. And if having security flaws means it doesn't work, it means they don't get what they want.
So I agreed with the statement made in the last presentation about incentivizing security. Find a way to incentivize the developers beyond just "you wrote some code and shipped it." Shipping it and having it not run is as much a failure as shipping it and having it fall apart, or be full of security holes, in my mind.
So I think the best piece of candy you can give any developer is an API, because that puts them in control and makes things easy for them.
I think in security infrastructure we think about appliances and processes and systems, and the candy for developers is APIs. So give a developer an API that's gonna help them secure something.
Hopefully, maybe that's also gonna be common and make it go faster. That's a great way to get started.
Yeah, well, we've seen some other suggestions from an authorization point of view. Concepts like the service mesh take that part of the work away from the developers, which makes it easier for them. If the mesh, or an enhanced mesh, would also provide them with an API for making the decisions, then security is at least not completely in their hands, but how to use the security remains within the developer's control.
Okay.
So, actually, I was a software developer myself for many years, and many years ago, way before all this agile and DevOps stuff even appeared. But I still remember that in the traditional development sense, there are well-established criteria to measure whether your code is good or bad. If it doesn't compile, it's obviously bad; if it crashes, that's not good either. Are there any similar criteria, any kinds of measurements, which you would want to apply in the same way for security?
Are there any tools which would convert the fact that your application was hacked somewhere in the cloud in Australia three months ago into a tangible number, or an error message, if you will, for a developer? First, to understand what it's all about, and second, to give them some useful hints on how to fix it?
So I think gamification actually is a really big deal.
I've seen one dev team put security scores on the lunchroom refrigerator, and there are different areas where you can have gamification. One easy example, from a place I know, is things like machine identities or TLS: those are really easy to score. Are you doing them properly? Other things, like code vulnerability scores and scanning, can also be gamified, one team versus another. And I've seen that work: you put two engineering teams against each other, and one's gonna try to do better.
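As a toy illustration of the scoring idea, not any particular product's scheme, one could weight open scan findings by severity and rank teams on a leaderboard. All of the names, weights, and numbers below are invented:

```python
# Severity weights for open findings; lower total score is better.
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def security_score(findings: dict) -> int:
    """Sum open findings weighted by severity."""
    return sum(WEIGHTS[sev] * count for sev, count in findings.items())

# Hypothetical per-team results from the latest scan run.
teams = {
    "payments": {"critical": 0, "high": 2, "medium": 5, "low": 9},
    "checkout": {"critical": 1, "high": 0, "medium": 3, "low": 4},
}

# Rank teams for the leaderboard on the lunchroom refrigerator.
leaderboard = sorted(teams, key=lambda t: security_score(teams[t]))
print(leaderboard)  # → ['checkout', 'payments']
```

The weights are the contentious part in practice; the point is only that a scan result becomes a single comparable number per team.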
Well, to take that a step further: why don't you have one team work against the other team to find vulnerabilities? That would be fun. I mean, to me that would make a development environment really exciting: somebody tries to break your code, you have to make it hold up, and it makes your code more resilient.
You obviously need to have two teams where you had one before.
Well, what is the cost of security? We saw the ratio of developers to security guys as a hundred to one, so we should have some developers to spare to do the testing, right?
Okay. And by the way, just a reminder to our audience, if you have a question, raise your hand any time. And in the meantime, we'll continue talking, right?
So, a slightly different question. We have learned today and yesterday, as at just about every other security event, that new types of threats appear every day. What would be a couple of examples of new emerging threats, specifically against applications, and even more specifically against developers, which developers have to be aware of?
Well, within our environment, we work a lot with containers. We use a SaaS provider for our Git repositories, and we use a SaaS provider for our pipeline to build code, test code, build containers, and deploy. So there are a lot of vulnerabilities along the way, starting with: how do you ensure that the code that went into the repo, the code driving everything else, because everything is code, was valid in the first place? Is it signed by a trusted developer?
And did you accept it after a review? Is that process complete? Then, on to building containers: do they or do they not contain vulnerable programs, which you find through static or dynamic analysis of the containers themselves? Are the containers signed? Are the deployment manifests that you're using all verified and signed? Because at every single step, there's an opportunity for injection of malicious code, malicious programs, malicious containers.
And finally, on the deployment side, when you're deploying into a cloud environment, there's the opportunity for malicious containers in Kubernetes clusters, or malicious instances at cloud providers. So there's an entire host of opportunities to inject bad code. And in the wild, there have been exploits of malicious containers being uploaded to public repositories, then downloaded and run on unsecured Kubernetes installations.
So the new technology doesn't protect us from all of those things, and it does make everything potentially public, not just inside the enterprise. I think that's the threat profile we're looking at: all of these things are now external, everything is on the internet. What are you exposing? How are you exposing it? But more importantly, how do you measure what's out there, and then remediate to close the holes? That's where I think the big risks are, in the containerized world anyway.
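To make the sign-and-verify chain described above concrete, here is a minimal, dependency-free sketch. Real pipelines use asymmetric signatures and dedicated signing tooling; the HMAC key, artifact names, and manifest format below are stand-ins for illustration only:

```python
import hashlib
import hmac
import json

def artifact_digest(content: bytes) -> str:
    """Content-address an artifact (e.g. a container layer) by its SHA-256."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

def verify_manifest(manifest_bytes: bytes, signature: str, signing_key: bytes) -> bool:
    """Check the manifest's MAC before trusting anything it lists.
    Real pipelines use asymmetric signatures; an HMAC stands in here
    so the sketch stays dependency-free."""
    expected = hmac.new(signing_key, manifest_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def verify_artifact(content: bytes, manifest: dict, name: str) -> bool:
    """An artifact is trusted only if its digest matches the signed manifest."""
    return manifest.get(name) == artifact_digest(content)

# Build and "sign" a manifest at CI time...
key = b"ci-signing-key"  # hypothetical key material, never hard-coded in practice
layer = b"FROM scratch\nCOPY app /app\n"
manifest = {"app-layer": artifact_digest(layer)}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
signature = hmac.new(key, manifest_bytes, hashlib.sha256).hexdigest()

# ...and verify both at deploy time: the manifest first, then each artifact.
assert verify_manifest(manifest_bytes, signature, key)
assert verify_artifact(layer, manifest, "app-layer")
assert not verify_artifact(b"tampered", manifest, "app-layer")
```

The shape is the important part: every step that the speaker lists (code into the repo, built containers, deployment manifests) gets a digest, and every digest is covered by a signature checked before use.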
Yeah, I think we're getting to this new age, which was alluded to earlier, where the identity of machines, and a machine could be a container, a whole Kubernetes cluster, or your cloud service, now becomes really, really important.
We focused a lot on the identity of people, but not a lot on the identities of machines. And identities of machines come up in questions like: who is signing the containers, or which machine actually produced the code that signed the container, through to: how are we authenticating one node or one microservice to another? Which then brings along other challenges too, which the previous presenter referred to, which is great.
We've now got these machine identities, and we're creating lots of encrypted tunnels between machines, or actually across virtual networks, but the network controls were never meant to inspect inside encrypted traffic. And we saw one of the biggest breaches last year, of one organization, affecting over a hundred million individuals, that was enabled in part because security controls couldn't look inside encrypted traffic; huge failures were there. So dealing with this rise of machines, their identities, and what they create, things like encrypted traffic, is a new problem.
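One concrete form of machine identity is mutual TLS, where both ends of a connection present a certificate issued by an authority the other side trusts. Below is a minimal configuration sketch using Python's standard ssl module; the certificate file paths are placeholders and are commented out so the snippet stands alone:

```python
import ssl

# Server side: require a client certificate signed by our internal CA,
# so every calling service has to prove a machine identity before it can talk.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject anonymous clients
# server_ctx.load_cert_chain("service.pem", "service.key")  # this service's identity
# server_ctx.load_verify_locations("internal-ca.pem")       # who may issue identities

# Client side: verify the server the same way, and present our own identity.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verification on by default
# client_ctx.load_cert_chain("caller.pem", "caller.key")
# client_ctx.load_verify_locations("internal-ca.pem")
```

This also illustrates the speaker's point about inspection: once both contexts are in place, everything on the wire is encrypted, which is exactly why middleboxes that expect to read the traffic stop working.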
Okay.
But it's not a problem without solutions. There are a lot of techniques to address these new problems, and there are techniques for trusting code from source to deployment: at the moment of deployment, ensuring that the containers are deployed onto trusted instances, which run on trusted hardware, which is signed only by the appropriate authority, whoever you trust. So there are some companies out there doing really interesting things with full trusted stacks of container-ready infrastructure.
And there are companies out there doing really interesting things, some of which Dragan had on a slide: companies focused on containers, doing static analysis, doing dynamic analysis, doing whitelisting of containers, doing checking and verification of the container content itself and of the signatures on containers in running environments. So I think there is a lot of progress. Of course, new problems mean a new market, and we are seeing some good solutions out there, and some inadequate ones too.
Okay, great. So the next, at least to me, obvious follow-up question would be: who exactly is responsible for all that? We've seen that graph: a hundred developers, ten ops, and one security guy.
Is it the responsibility of that single security guy, or is there any way to push these new roles onto all the other guys?
We think, within our organization, that it's the responsibility of the security specialist to make the developers his or her agents. The developers are then the feet on the ground who actually implement the security that's necessary. So the security expert can say: here's what we need to look out for.
But you guys are the ones who are the experts in your application, and therefore you are the ones who need to figure out how to protect against this. We've found that to be the most effective way: let the application-specific knowledge be owned by the people who know it best. Operations is 10% of what a developer is doing; security is 1% of what they're doing. But within a DevSecOps environment, the developer, one individual, is all of those roles, and the question is what the appropriate ratio is for the time they spend on each of those roles.
So we really find that not requiring everybody to be a security expert, but requiring them to have enough security knowledge to implement the protections, is critical. That's how we think we are being successful with this, and that's how we successfully pass third-party penetration tests and work towards ISO 27001 certification: by embedding security within every person.
Yeah.
So, I mean, if you think about it, the developers work for the business; they're building the business applications. It's not gonna be security who's ultimately fined for some type of GDPR violation, and it's not gonna be security whose door the data protection commission comes knocking on; it's gonna be the business. So that means the application teams who work for the business, the developers, have to be part of that solution, and they have to understand that they're part of both the challenge and the solution.
And I think, though: give the incentive. It's never a stick with developers; you'd only push them away. You have to give them the incentives for doing things the right way. And I'd say, again: faster. Our job in security is to design something that's secure, but we now also have to embrace the idea that we've got to make processes go faster at the same time. That's why I say: think like a Formula 1 engineer, who designs the fastest race car at the extremes of performance but also keeps the driver safe. Let the driver know that it's gonna go really fast, and that we know it's gonna keep them really safe.
Right. And I think it actually leads to a very interesting conundrum.
I mean, on one hand, this single security guy is supposed to be kind of the guy, because he has to lead and teach and control everyone else. And on the other hand, as we just mentioned, from the business point of view he is the least entitled one, right? Because he doesn't produce anything directly useful for the business.
So how, and where, do we find those saints who would work so hard for so little money?
I would really like to take this one. To me, this perspective on security being only a cost and never a benefit for the business is the same way that manufacturing thought about quality 30 years ago. Thirty years ago, quality was an expense, and it prevented us from making things; that was the physical manufacturing world. Now we're talking about the virtual software world, and it's the same question: is quality free?
Well, quality is simply a cost of what you do, and security should be the same. I mentioned it earlier, but I'll reiterate: if security is the way you do things, then you accept the cost, just as, if having a salesperson is the way you sell your product, you accept that cost. You don't look at that salesperson and say, well, they're costing me money.
I don't see the benefit; the product should sell itself. Well, the product doesn't sell itself, and you need the support of those people.
And I think it's just the same with security as part of developing any application, doing any development, putting anything out there: we want things to be secure, we require it, your customers require it, and therefore it's a cost of business.
So I think the biggest thing we're gonna be challenged with, the challenge we have, is talent. And the best place you can look for talent, I think, is among developers already.
So the engineers who are coders are gonna be the right profile for working in the new DevOps world; not the old-school, process-focused security administrators. So we have to go and recruit a whole new kind of security team, and those are people who love code, who love writing code. I've seen it done: there can be great incentives put in place to recruit those individuals, and he or she then speaks the language of developers and ops teams and can actually help with some of the work.
They're not just around to review stuff or check stuff; they can actually write code.
Yeah. Black hats do write the best firewalls.
Okay, great. And I guess we have time left for one final question, which would be: what is your single most important, most critical best practice with regard to application security? What would you give as a sort of last final thought for the audience to keep?
I'll let you have the last word. So I'll go, go for it.
I thought about this question a lot, because narrowing it down to one single thing was quite difficult. But I thought about what we do as an organization, and what we have determined to be the best way for us to incorporate not only security but many other benefits. And the single phrase that we came up with is: if you touch it, you upgrade it.
And what that means is that if you touch the code, then you're responsible at that moment not only for adding the feature or the fix or the change that you actually set out to make, but also for updating the libraries, updating the dependencies, and getting everything up to par with the latest security patches and the latest functional patches. The functional side is not the interesting part there; getting to the latest version for the latest security enhancements is what's important.
And we know this across the whole industry, which is why so many mobile phones and computers automatically update themselves these days: outdated software is a huge risk for vulnerabilities. So that's it: you touch it, you upgrade it. That is the one thing that has been the most effective for us.
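A hedged sketch of how "you touch it, you upgrade it" could be enforced in a pipeline: when a change touches a component, compare its pinned dependencies against known-latest versions and flag stragglers. The package names and version numbers here are invented, and a real check would query the package index rather than a static table:

```python
# Hypothetical snapshot of latest releases; a real pipeline would fetch this.
LATEST = {"requests": (2, 32, 3), "urllib3": (2, 2, 2)}

def parse_pin(line: str):
    """Parse a 'name==1.2.3' requirement pin into (name, version tuple)."""
    name, _, version = line.strip().partition("==")
    return name, tuple(int(p) for p in version.split("."))

def outdated(requirements: str):
    """Return the names of pinned dependencies that lag the latest release."""
    stale = []
    for line in requirements.splitlines():
        if not line.strip():
            continue
        name, version = parse_pin(line)
        if name in LATEST and version < LATEST[name]:
            stale.append(name)
    return stale

print(outdated("requests==2.31.0\nurllib3==2.2.2\n"))  # → ['requests']
```

Wired into CI as a failing check on any change that touches the component, this turns "you upgrade it" from a convention into a gate.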
Awesome.
I'll sound like a broken record, but: think like the Formula 1 engineer. Run fast, run safe. Bring common services that dev teams and ops teams can consume via APIs. You've got candy, you make things go fast, and you'll start getting wins.
It's not an architecture you build overnight, and anyone can run in a DevOps team, anyone. It's fun.
Okay. Great. Awesome. Thanks, team. Thanks, Kevin. And thanks to the audience for sticking it out to the end of this track. That's it, at least from me, for today. We will continue with the keynote somewhat later, but now we have a break. Thanks a lot.
Thanks.