Okay. It's half past twelve, at least in Germany, and I think we can start with our session today. Hello and welcome to the Elastic Security workshop. This is the first part; today we will talk about unified protection for everyone. We will talk about how to improve the security of your systems. You will learn how the latest security capabilities in the Elastic Stack enable interactive exploration, incident management, and automated analysis, as well as unsupervised machine learning to reduce false positives and spot anomalies.
We will also talk about the new protection and detection capabilities of the free Elastic Endpoint, and how to implement them using EQL, the Event Query Language. My name is Christopher Schütze; I'm Director of the Practice Cybersecurity and Lead Analyst at KuppingerCole. And I'm very happy to introduce James Spiteri, Principal Product Marketing Manager at Elastic Security.
James, maybe we'll start with some words about you and what you plan to do today with us.
Sure. Thanks Christopher. Hi everyone. Good afternoon.
If you are in Central Europe, good afternoon. As Christopher said, I'm on the product marketing team at Elastic, specifically focusing on our security products. My background is entirely security operations: I've been a security analyst, a security operations center manager, all of that, for pretty much my entire career. I joined Elastic just under two and a half years ago, but I'm a long-time user of the stack, about eight years more or less, ever since it became a real project. Before that I used every product you can think of, all the more popular ones, but I ended up shifting to Elastic even before we had anything we're about to see today. I actually joined Elastic as a solutions architect, helping our users and our customers deploy Elastic for security.
And I only recently moved into the product marketing role, just because it's a bit more natural to me.
I really enjoy talking about the product and making people excited about it. Plus it allows me to influence our direction a bit as well, where we're headed. So what we're going to do today is essentially a hands-on look at Elastic Security and the Elastic Stack as a whole. We will cover an introduction to the Elastic Stack as well, in case this is your first time ever hearing about Elastic.
I will introduce the stack, starting from the beginning, and that will be the first portion of this workshop. We'll have a break in between, and in the second part we will be looking at our detections repository, why we decided to make it open source, and the philosophy behind it. So that's what we'll be doing today.
Cool. That sounds very interesting. And all the best wishes for your promotion, for sure.
Maybe we start with a little dialogue between you and me. Reading the title of the workshop, "unified protection for everyone", it's really very generic. What exactly do you mean by that, from Elastic's point of view?
Sure.
So, traditionally the security landscape is a bit siloed. You have your SIEM or security analytics products, you have your endpoint technology products, and then you have everything else: vulnerability management, IAM, all that stuff. At Elastic, as of this latest version, 7.9, we combined endpoint protection, so we have an endpoint product, with security analytics, our SIEM. We started unifying those two technologies together, and that's where the unification part comes in. And when we say "for everyone", it's because the vast majority of the product is 100% free. So you do not have to be an organization with a multimillion-dollar budget, and you don't have to go through a very rigorous sales process.
You as an end user, as a small business, or even a bigger business that just doesn't have the time to go through a whole sales cycle, engagement, budgeting, and all that, can just go and start using Elastic for free. That includes the endpoint product and the security analytics. So that's what we mean by the "for everyone" part: it's not for someone with a multimillion-dollar budget, it's literally for anyone who needs it, at the tip of their fingers.
Okay, cool. And what are the core challenges you want to, or plan to, solve with Elastic? What is the idea here?
So the first one is we want to get rid of this concept of data silos. We don't want people to keep data in individual buckets or different places; we want everyone to be able to bring it all together, because to look for today's kind of sophisticated threats, that's what you need. And it's been shown over time how effective it is when you have everything together.
Secondly, we want to make it actually work on today's amounts of data. It's one thing having it all in the same place, but can you actually develop features that work across several petabytes of data? That's also something we're very heavily focused on, and we try to ensure that the product delivers. We'll talk about it a little later, but that was one of the main reasons someone would use Elastic: just how performant it is and how it scales.
And lastly, we want to be completely open.
So we want to get rid of this concept of security through obscurity, of black-boxing. Everything we do, anyone can see. If we publish a detection rule, again, it's open, and you can see exactly the query that's running behind it. There's no mysterious magic going on. Our machine learning is very transparent: we tell you exactly what we do. So we also want to get rid of that element of secrecy and black-boxing. And the last thing I would mention is the commercial side of our product.
If we do enter into a commercial agreement, we wanted to change the traditional pricing models. Traditionally, you would pay per ingest: per gigabyte per day, or per event per second.
And we feel that's not the right way to do it, because you're essentially penalizing someone for putting more data into your platform, and we know that's necessary to be able to detect threats. So we have what we call resource-based pricing, which is essentially pay for what you use.
So no pay-per-endpoint, no pay-per-user, no pay-per-gigabyte or per-event. It's very simple: this amount of resources, this is what you pay. That's also something we challenge, particularly the per-endpoint pricing, because where do you draw the line? What's an endpoint: a laptop, a server, a virtual machine, a Docker container? We wanted to make that a bit easier for everyone. So I would say those are the main things we're trying to solve with Elastic Security.
Yeah, you're absolutely right. Defining what an endpoint is, is also a very complex discussion, because many people have different views on that. One thing I ask myself: I know Elastic primarily as a search company. What happened that you now start offering a security solution as part of your main product line?
Yeah, and it's a very interesting story, because when our CEO and founder Shay Banon released the first version of Elasticsearch as an open source project, all he wanted to do was solve the complicated search problem.
He never envisioned it becoming a security product or an observability platform, but as happens with open source projects, people started to build on top of it and use it for reasons other than what it was actually developed for. It was released initially in 2010, and Elastic was founded as a company in 2012. But even from 2012 (I started using it at the end of 2012, more or less), people started putting data inside Elasticsearch because of how easily it scaled and how fast it was to search.
So even though we didn't have detection rules or any of the stuff we have today, it was a way to take that very noisy security data and have it somewhere where you could search it very fast and have it grow accordingly.
Traditionally it was considered a threat hunting platform, or a SIEM augmentation platform: if you had an existing solution, you could offload some of the data and some of the costs into Elastic. So that's how it started to happen.
And we, as a company, understood that there were thousands and thousands of people using us for that reason, and we thought: why don't we make those people's lives easier? Let's start offering more functionality out of the box. We also started to notice that a lot of the security products being built today are actually built on Elasticsearch, or have the option to use Elasticsearch as a backend. There are so many vendors offering their functionality with Elasticsearch as the backend data store.
So we thought: you know what, let's try this ourselves.
And there were other things we did on the way. We developed our Elastic Common Schema. We joined forces with several security companies, most recently Endgame, just about a year ago, and that's how we ended up with our endpoint product.
Along that journey, we also inherited a lot of security expertise. If you're building a security product, you can't just have anyone coding it; they have to know what the industry needs. They have to know what is required by a security operations person. If you're building detection rules, for example, they need to know what those look like and what the attack looks like.
So we inherited many talented individuals on the team along the way as well, which brings us to where we are today: a really sophisticated security product. That's the journey: it started off as a fast search for expensive, noisy security data, and then we built on top of that, basically.
Very cool story. So: just listen to your customers and how they use your product, and do it. What can you do better than implementing what your customers are already doing with your product? Really interesting. At the beginning you mentioned that you are an open company, an open source company. How can someone in the community contribute, or ask questions about products and solutions? How can they be part of the open source solution here?
Yeah, there are many ways. I guess the first and most obvious, from a code perspective, is that all our code is open on GitHub, right? So anyone can go ahead and submit a pull request, literally for any part of the product. We do own all the commits at the end of the day, but everything is looked at, everything is reviewed. So anyone can go ahead and contribute that way. There's also, of course, our open Slack workspace for the community.
I have a link on my slides, which I'll show at the end, but we have thousands of people on there, either using Elasticsearch or using Elastic Security. If they want to find out a bit more, or they've run into an issue, they can go and ask questions there.
We also have a discussion forum, discuss.elastic.co, where people can just go and ask. And on both the Slack workspace and the discussion forum, we have Elasticsearch engineers answering. So it's not just community members; you have members from Elastic answering you.
So that's from the community perspective. You can come to our meetups as well. We have several meetups; obviously this year most of them were virtual, but we have them in several different time zones, for several different locations. The best way to see the schedule, I think, is our dedicated community website. But there are other ways too, like building something on top of Elasticsearch and providing it as a project. Many people do that as well, and in fact there are many open source security projects.
We have things like Wazuh, RockNSM, Security Onion. These are all community projects built on Elasticsearch, which is fantastic. So I'd say those are the more popular ways to contribute to the community.
Cool, thank you. And the last question, I think, before we go into the more detailed workshop part: how can I start using Elastic Security?
Fantastic question. There are a couple of ways. The easiest is our SaaS solution: you can go and try it for free for 14 days at cloud.elastic.co, no credit card needed, just to try it out. However, if cloud is not your thing, or you're not ready to put any money there yet, you can of course download it and run it wherever you like: cloud, Docker, Kubernetes, or a laptop, anywhere you can think of.
So that's probably the second easiest avenue: just download it and try it. It comes with a ton of content out of the box, a lot of integrations. Everything is given to you; you can just go ahead and start using it.
Cool, thank you. So I would say thank you to the audience, and then, James, feel free to start your workshop. To the attendees: have a lot of fun, and ask questions if you have some. James is happy to answer them, I think.
Perfect. Thanks, Christopher. And yes, if anyone has any questions as we go along, I'll do my best to monitor the chat, or we'll take Q&A at the end as well. So let's dive right in. I have a couple of slides, like I said, and then we have a demo. First I just have to show this slide, which is basically the safe harbor statement: as a public company, I may mention some things which are coming up, so everything is subject to change. I kind of have to show that slide.
So, as we were just discussing with Christopher, Elastic was, and still is, a search company; our primary goal is to solve the search problem. For those of you who don't know: every time you go to Wikipedia and make a search, that's powered by Elasticsearch. If you ever use Uber, that's powered by Elasticsearch. Many of the popular apps on your phone today, even things like Tinder, are powered by Elasticsearch. So you're probably using it every day without knowing. But while we started off as, and still are, fundamentally a search company, we're doing three main things at Elastic. We build three solutions, as we call them, on top of our stack, and they allow you to search, observe, and protect.
So we have the search aspect: whether it's search for your website, search for your application, or search within your enterprise, sort of what the Google Search Appliance used to do, we can cover those use cases. Then there's observability.
So monitoring your application logs, system metrics, uptime, application performance monitoring, that's all under the umbrella of observability. And then we also have protection, so security, whether that's security analytics or endpoint protection; we cover both. Very quickly I'm going to go over the stack and how you can deploy it, for those of you who aren't too familiar with the Elastic Stack. At the core of everything we have Elasticsearch. Elasticsearch is the data store: basically every bit of data you have is going to be stored inside Elasticsearch, and it is essentially a distributed, very highly scalable data storage technology.
It's meant to run as a cluster, so you typically never have fewer than three Elasticsearch nodes in a production environment, unless of course you're developing, and then you scale from there. As your needs grow, as your data grows, as your compute requests grow, you just add another node to the cluster, and the cluster takes care of everything for you.
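For the curious, everything in Elasticsearch is exposed as a simple JSON API over HTTP. A cluster-health check and a trivial search look roughly like this in Kibana Dev Tools console syntax (the index name here is a hypothetical example):

```
GET _cluster/health

GET winlogbeat-*/_search
{
  "query": {
    "match": { "process.name": "certutil.exe" }
  }
}
```

The first call reports whether shards are replicated and allocated; the second runs a full-text match against whatever indices match the pattern.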
So you don't need to worry about whether your data is replicated and so on; the cluster does all of that by default. Now, especially in the context of security, we definitely don't want to just be searching Elasticsearch via API calls. We need a way to visualize and work with all the data inside Elasticsearch, and this is where Kibana comes in. Kibana is your visual window into Elasticsearch and the Elastic Stack.
Any search you're going to do, any detection rule you're going to create, any dashboard you're going to build, any user you're going to manage, any part of the stack you monitor: all of this happens through Kibana. These days, I would say 95% of a user's time in the stack is spent in Kibana, and of course we'll see this hands-on as part of the demo. And then we have the ingestion layer, so basically bringing data into the stack.
So again, if I'm a security user and I have firewall logs, host logs, my cloud provider data, I need a way to get that into Elastic, and we have a couple of solutions there, primarily Beats and Logstash. Beats are very lightweight data shippers, something you would install on a server, your laptop, a Docker container, Kubernetes. If I need to bring in packet data, for example, I turn on our Packetbeat and it sends the data. They're very easy to use and built around the concept of modules, so very modular: you basically turn on the module for the data you want to collect. (I'm just going to ask: if you're not speaking, please put yourselves on mute, to avoid any background noise. That would be really helpful, thank you.) So Beats, again, are data shippers that you install on pretty much anything with an operating system.
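To give a feel for how little work a module takes, a minimal Filebeat configuration might look something like this (the output host is an assumption for illustration):

```yaml
# Hypothetical minimal filebeat.yml: enable one module, ship to Elasticsearch.
filebeat.modules:
  - module: system        # collects and parses syslog/auth logs, no custom parsing needed
output.elasticsearch:
  hosts: ["localhost:9200"]
```

That is essentially the whole "turn on the module" workflow the speaker describes: the module brings its own parsing and dashboards.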
We have a family of about eight Beats and hundreds of modules, and we'll be able to see some of those as well. Now, we'll also be seeing something today where we make the lifetime of a Beat even easier: our new Elastic Agent. It's not on the slide yet because it's still in beta, but I'll be showing you what that looks like today. We're redefining the whole way we bring data in, with something we call integrations and Fleet. And then there's Logstash.
Logstash is the more traditional way of bringing data in, I would say.
Logstash is a server-side component: you set up a Logstash server, you can stream data to it, you can output it to multiple locations, and you can transform it in any way you like; it's a full ETL tool. Logstash is very popular and very heavily used as this sort of data router. If you need to send data to multiple different locations, pull in data from a RESTful API, or make a database call via JDBC or something like that, you can do all of this with Logstash.
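A Logstash pipeline is declared in three stages, input, filter, and output; a sketch of the data-router pattern just described might look like this (ports, hosts, and the tagging rule are invented for illustration):

```
# Hypothetical Logstash pipeline: receive from Beats, enrich, route.
input {
  beats { port => 5044 }
}
filter {
  # e.g. tag events from one module so they can be routed differently later
  if [event][module] == "system" {
    mutate { add_tag => ["os_logs"] }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  # the same events could also be sent to Kafka, S3, another SIEM, etc.
}
```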
Then of course you can send it to Elasticsearch. Both Beats and Logstash can work with message brokers like Kafka and things like that, or they can send data straight to Elasticsearch, whatever your organization needs or whatever your network requirements are.
There's a solution for it. Now, very important: how do I deploy an Elastic Stack?
As I was saying with Christopher, the easiest and most straightforward way is just to use our SaaS service, Elastic Cloud. You can pick any provider you like out of Google Cloud, AWS, or Azure, pick your region, click a button, and you're good to go. You can either pay as you go or subscribe for a year, whatever you want. Now, if you really like the cloud orchestration functionality and the SaaS-like features, but you're an organization which cannot have its data in the cloud, for whatever reason that may be, we do offer other alternatives.
We actually have a product called Elastic Cloud Enterprise, which essentially gives you the same SaaS-like experience you'd have in our cloud, but you can host it anywhere you like: your own data centers, virtual machines, bare metal, wherever you want. And for those of you on Kubernetes, we have our own Kubernetes operator, ECK for short, which is Elastic Cloud on Kubernetes; you just apply the operator and it takes care of everything for you. And if you don't want orchestration at all and want to do all the work manually, install the nodes, and manage it yourself, of course you can do that as well: you can download all the components separately, or run them via Docker or however you want. That is all possible too.
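As a sketch of the Kubernetes operator workflow just mentioned: after installing the operator, you declare a cluster with a short manifest along these lines (the version, name, and sizing here are just examples):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.9.3
  nodeSets:
    - name: default
      count: 3                         # three nodes, matching the production guidance above
      config:
        node.store.allow_mmap: false   # common relaxation for demo environments
```

Applying this with `kubectl apply -f` is all it takes; the operator handles provisioning, certificates, and upgrades.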
This is a quick slide which visualizes the journey I was talking about with Christopher a bit earlier. It started off in 2010 as an open source search project.
As time passed, we joined forces with open source projects like Logstash, Kibana, and Beats, as I was saying before, and then we released our Elastic Common Schema. We released the first version of the SIEM app back in June 2019, we joined forces with Endgame, and we joined forces with a company called Perched, which offered training and consultancy on Elastic, specifically for security use cases.
Where we're at today is a unified application which brings pretty much all those years of learning and all that experience together on Elasticsearch. This is my final slide; then I'm going to jump straight into the demos. But I did want to show it, because we're going to be talking about quite a few of these workflows within the app and what we can do today.
Starting from getting data in: how does an administrator manage their security policies? How do they work with data? How do they create detection rules, exceptions, cases? We're going to be looking at all of these today.
It's much easier for me to show it rather than go through every single thing on the slide, but there's already a lot in the product, even though it's just over a year and a half into its lifetime as Elastic Security; there's already a lot you can do, compared to other tools as well. And we're going to cover quite a bit of it. That being said, I'm going to jump right into the demo. For questions, the best thing would be to put them in the chat, and I will try to get to them towards the end of the session. So let's jump into the demo; let me share my Chrome window.
Okay, it's a little small. For those of you who've never seen it before, this is Kibana, specifically the security solution. I'm on the latest version as of today, which is 7.9.3. We have a very frequent release cycle at Elastic, typically between six and eight weeks, more or less.
So if you're on version 7.6 or 7.7, or a bit older than that, don't feel too bad; sometimes it's a bit hard to keep up with the release cycle. This is 7.9.3 as of today, and I happen to be on the overview page of the security solution.
I'm just going to open up this menu here so you can see how the Kibana user interface is laid out. You have the three solutions we mentioned before: search, observe, and protect. Enterprise Search, again, is that concept of being able to search any product in your organization, whether that's Confluence or Gmail or whatever you want, under one screen; Observability, with logs, metrics, and APM; and then Security, of course.
We designed the menu to make it a bit easier to navigate, and then you also have the build-it-yourself elements of Kibana. Consider it this way: those three are out-of-the-box solutions, where you don't have to build anything, you can just use everything as it is. If you'd like to build your own custom dashboards or visualizations, you can of course still build all of those as well, without any limitation whatsoever.
Obviously, for the vast majority of this we're going to live in the Security page, but I will also be showing some other sections of Kibana that can be used to help us solve security problems. The security overview page, as it's titled, is just an overview of what's going on in our environment. What I would say, though, is that all the data you're seeing, in terms of the actual raw events coming in, is all from out-of-the-box modules. I have not done any custom work in terms of bringing data in; I haven't parsed anything manually.
Everything is what Elastic gives me out of the box, even from the detection standpoint: all the detection rules are what we ship out of the box as well. So I haven't done anything particularly custom on the security side. In the overview I can see what events I'm getting in, and you can see they're split by Beat type. This is not an exhaustive list, there are more of course, but it's a nice overview of what we're receiving. We have alerts, or detections, that happened in the last 24 hours, which we'll look at in more detail.
And then we have things like our recent cases and our timelines. So we have case management, or incident response, call it whatever you want.
And we also have timelines, which are basically security investigations: as you're searching, as you're doing root cause analysis, we call those security timelines. I'm going to dive straight into the detections. As part of this workshop, I detonated some malware earlier today, and I can show you. I have a Windows machine here, and I have this sample. Basically it was a phishing campaign, for those who are familiar.
It was an APT campaign, one that goes by a few different names. There was a phishing email with an Excel sheet attached, and when you opened it up and ran it, a bunch of macros did quite a bit of stuff in the background. So this was actually run on this machine, and you can see this is what it looks like.
This is the actual email that would have been received as part of the campaign. Going into Elastic Security, I have data coming from this machine using our endpoint.
There's only our endpoint on this machine, but of course I have other data coming from other sources as well, and we're going to go through the analyst workflow of how someone can investigate this. I should mention that, apart from the alerts triggered in the detections page over here, I also got some alerts in my Slack, which just disappeared from my screen. There we go: I have a Slack channel here with alerts coming from Elastic Security, and in particular this one over here.
This basically told me that we had an alert from our endpoint. And actually, if I click on it, it takes me straight into that alert, in a copy of this browser. So as an analyst I actually get alerts that link straight into the product, with context, right? I can go right into the alert, in the right time range, and I get all the detail about the alert itself. So instead of starting at the overview page, I can have what you'd call external alerts sent to third-party platforms. So let's see what we have here.
If I'm looking at this as an analyst, I can see that in pretty much the same time range, between about 9:35 and 9:40 AM, a roughly five-minute window, quite a few alerts triggered as a result of that malware, all on the same host: this james-windows-endpoint-demo machine here. And what I like to do, if I put myself in the analyst's shoes, is sort by the risk score here.
So I can sort by risk score, and the highest risk scores are for malware detection. Now, in 7.9 we released our endpoint, which includes malware detection and prevention.
In this case we had it in detection mode, so the malware was able to run; we didn't prevent it. If we wanted to, we could switch to prevention, which means that as soon as the file is detected it would be quarantined and the user wouldn't be allowed to execute it. But in this case it ran, and here we have the file, the file name, basically all the information we need. Actually, if we open up these events, I have all the raw detail: the full file path, the owner, when the file was created, the hashes. All of this information is presented to me.
However, as an analyst, there are several ways I can start investigating this, and we can also see whether any of these other alerts are related: things like encoding or decoding files via certutil, or unusual network connections by rundll32. These are some of the out-of-the-box rules we have. The first thing I'm going to do is start with this first one up here. And the reason we have two alerts for the same file is that we pick it up on creation, but also on execution.
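As a side note, rules like these can be written in EQL, the Event Query Language mentioned at the start of the session. A sketch of what a certutil-style rule query might look like (illustrative only, not the actual shipped rule):

```
process where process.name == "certutil.exe" and
  process.parent.name in ("excel.exe", "winword.exe", "outlook.exe")
```

The query matches process events where the legitimate utility is spawned by an Office application, exactly the suspicious parent/child relationship seen in this demo.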
The way our malware detection works, it's not using signatures or anything like that. It's a system we have called MalwareScore: we have a static machine learning model built from about 250 million malware samples, and as a file is written to disk, we extract certain features from it and compare them against this model. The model generates a score and says, hey, this really looks like malware, or this doesn't look like malware at all. And you'll actually find this on VirusTotal.
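To make the "extract static features, score against a model" idea concrete, here is a toy sketch in Python. This is purely illustrative: Elastic's actual MalwareScore model and feature set are far more sophisticated, and the features and threshold below are invented.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; packed or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def toy_malware_score(data: bytes) -> float:
    """Map a couple of static features onto a 0..1 'suspicion' score."""
    score = shannon_entropy(data) / 8.0      # normalize: max entropy is 8 bits/byte
    if data[:2] == b"MZ":                    # looks like a PE executable header
        score = min(1.0, score + 0.2)
    return round(score, 3)

# A high-entropy blob scores higher than plain text:
print(toy_malware_score(b"hello world, just some plain text"))
print(toy_malware_score(bytes(range(256)) * 4))
```

A real model would use hundreds of structural features (imports, section layout, strings) and a trained classifier rather than a hand-tuned formula, but the pipeline shape is the same: features in, score out.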
So if any of you are fans of VirusTotal: if you upload a sample there, Elastic is one of the sources you'll see, using this same system. And that's why we have alerts both for creation and for execution. So let's go ahead and mark this as in progress: I'm an analyst, I'm going to pick this up and add it to my queue. I'll do that right here, mark it in progress, and it moves into the in-progress view. And the first thing I like to do is use this button.
If you're using data from our endpoint, you have this option to analyze the event in a graphical way, and I'll show you what I mean by that.
Let's go full screen. The event analyzer basically builds a graphical process-tree lineage view of everything that led up to that event. It's quite nice: it shows you everything that happened around it. I can see here where that file was generated. This line here, it's a bit faint on the black screen, I probably should switch it to white, but this is the event we're looking at. And we can see we had one file operation where this malware was created, this file creation event, and we see pretty much the same info as before.
The layout is a bit nicer, because it allows me to scroll and I get all the information, but we can actually see the chain of events: we had Outlook spawning Excel, which spawned certutil. If any of you aren't familiar with certutil, it's a Windows utility which allows you to encode and decode files. And we can actually see the command-line arguments here: it was run to decode this text file, which ended up generating the malicious file. This was actually one of the vectors of the attack.
Basically, one of the things that was in the Excel sheet was a macro, which performed this action. And we can actually see the command that was run here. So as an analyst, just by clicking on that one event in the event analyzer, I get almost the full picture.
Because in reality, there are more things that happened here, and we can see that soon, but at least now I have a really good indication of what's going on. So one thing I could do, if I wanted to sort of spawn off of this view, is use our timeline to dig a bit deeper. So I have this timeline here; timeline allows me to start a new timeline and look at all my data from everywhere. So right now I'm looking at my singular endpoint data. Maybe I want to look wider.
So I want to look everywhere within my environment where I have process.name as outlook.exe. So I wanna do that.
You can see it sort of auto-completes and gives me suggestions as well. And you can see now we loaded up 142 events where the process name was Outlook, essentially. And even if an alert doesn't trigger — so not just for an alert, for any bit of data, any bit of telemetry that we see coming from the endpoint — we can still look at the graphical event analyzer. So let's go ahead and do that. I'm gonna click on one of these here, and now we can see an actually even bigger picture.
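As a rough sketch, that timeline search would look something like this in KQL syntax (with ECS data, the process name lives in the `process.name` field; exact values depend on your own data):

```kql
process.name : "outlook.exe"
```

Typed into the timeline's query bar, this matches any event from any data source whose ECS process name is outlook.exe.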
So because we looked at sort of a wider time window of events, I can see even more stuff that's happening around certutil and that process. So in actual fact, we see what we saw before — the Outlook, Excel, and that file being created — but then we can also see something else spawned out of it here.
So once that file, that malware file, was generated, it actually ran. So you can see here, it ran over here, and it actually created a few more different files. So there was this cdnver.dll, and if you noticed, I actually had a detection alert for this; we detected this as malicious as well.
And there were some registry changes, which we picked up here. And then we actually see that rundll32 process. So if you remember, there were three main alerts which triggered. We had some MalwareScore alerts, we had rundll32 making network connections, and we also had the certutil events as well — the decoding via certutil. But by clicking only one of those, I was able to get to the wider picture without having to go to each and every single one of them. I can actually see as well sort of the network connections that rundll32 made.
So for those of you who aren't aware, you shouldn't really have network connections coming out of rundll32, but I can see down to the protocol level and certain information. So these two events here, these DNS requests to cdnverify.net, these are actual C2 communications. So if I wanted to, I can grab this and look for it even wider throughout my network. So if I wanted to look for network requests for cdnverify.net, to see if any other device did it rather than just this one, I can of course do that as well, using that timeline view.
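Hunting for that C2 domain across the whole environment from the timeline could look roughly like this (a KQL sketch; `dns.question.name` and `destination.domain` are common ECS fields, but check which ones your data sources actually populate):

```kql
dns.question.name : "cdnverify.net" or destination.domain : "cdnverify.net"
```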
So I really just wanted to show sort of how we can very easily pivot from one thing to another. And if you remember, we started off from just that one alert and we were able to build this picture. Before we carry on with our investigation and our follow-up, I do want to show some other features of the timeline.
So I'm just gonna go back to the raw event view here. So before, I looked for Outlook manually — I actually typed it in — but the timeline has a nice feature of being able to drag and drop.
So if I wanted to, let's say I'm interested in this IP address here, I can grab the IP address and actually drag it into the timeline, like this — if I remember how to drag and drop today, there we go. And you can see what this is going to do: essentially it's gonna search across all my data sets. So there are no data silos. We're gonna look at everything, and it's gonna find any of my data sets which have destination IP 52.114.15.32. So whether that's firewall, proxy, or host data like I have here — anything. And this is using the Elastic Common Schema.
So we have a schema which is essentially normalizing all the data for us, which is really nice, because otherwise I would've had to search for, you know, a million different variants of source IP — or destination IP, I should say. And you can of course keep adding to the timeline.
So if I wanted to: destination IP and username is james. And you can see I can logically AND or OR — it makes it very evident what I'm doing within the timeline as well. I can leave notes. So if I wanted to, I can expand this and look at the actual events here. So I'll click on the one making those network connections.
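That combined timeline filter, written out as a KQL sketch (the IP and username here are just the demo values; both fields are ECS names):

```kql
destination.ip : "52.114.15.32" and user.name : "james"
```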
If I wanted to, I can leave a note for one of my colleagues, for example. So I can say, hey, this is suspicious — and we support full Markdown here. So if you want to leave images, screenshots, all of that, you can do that as well. What else can I do from the timeline? So I'm actually gonna open up the event analyzer again, and now that I've noticed that there's something very bad happening on this machine, especially these network requests from rundll32, I want to escalate it. I want to include other members of my team or members of another team.
So I'm gonna open what we call, in Elastic Security, a case. So let's give this a title; we're gonna name it "ER Gold".
Once I give it a name, I can take further actions, and you can see two of the options I have here are to attach it to a new case or to attach it to an existing case in its graphical format, which is really nice. So as it is, we can attach it to a case. So let's go ahead and do that, and I'll name it the same. Oops. We can leave some tags here — so I've put malware and endpoint data. So there are quite a few different tags; we can put anything you want, really.
And you can see, because we linked it from the event analyzer, we can essentially just have that link in there to take us straight to it.
However, because it's Markdown syntax, I can put anything else that I like. So I can again have images in there if I want, whatever I need. Let's go ahead and leave it like that for now and create the case.
And now essentially what this allows me to do is have a very collaborative experience with other team members. So you can see here, we have the initial reporter, which is myself and so far, I'm the only participant, but of course you can have as many participants as you need.
You can see it's currently open and it's been open for 12 seconds. Something else that you can do in cases is integrate them with external existing incident response and ticketing tools — in my example, I have Jira from Atlassian. As of this version, we support Jira, IBM Resilient, and ServiceNow, and if you have multiple ones, you can just pick them from this connector. But what's easy about it is I can just go ahead, push it as a Jira incident, and it does it for me.
So I don't have to manually go and keep two platforms in sync.
I can just go add a comment here and it'll push it for me. It'll also keep track of the state. So if I leave another comment here, for example, I can go ahead, add that comment, and it'll tell me: listen, you're not up to date with Jira. So it tells me it requires an update — update, and that's it. And you can see it links me straight to the Jira issue. So as an analyst, I can click on it and it takes me straight into Jira; I don't have to go look for it, it's right over there. So we looked into one of our detection rules.
We investigated it, we saw there's malware running on this machine, we also did some more root cause analysis, and then we opened it up in a timeline and attached it to a case. Now that we've actually discovered pretty much everything it's doing, let's go back to our detections page just to see what else we can do from in here.
I'll just let this load for a second. And we have — you can see those other alerts for that cdnver.dll that we saw, and because this was a DLL, we can see that it was loaded as well. So the library was loaded, it was created, but we have some other detections.
We have, again, those encoding or decoding files via certutil, the unusual network connections, right? So we can see all of those. There are a few more things I can do. So if I have an endpoint alert like this one, if I go to our context menu, I can add exceptions. So in this case, I'm not going to actually leave the exception, because this is a malicious file. But if I did have a file which was, you know, a false positive — these things do happen — I can add the exception. And the nice thing about this is it fills in the exception details for me.
So yes, you can manually change these values, and you can see there's this really nice workflow, but for this it said: listen, we're gonna look at the signature, we're gonna check that it's not trusted, and we're going to whitelist the file name, the file hash, and the event code, for example. So this was populated for me. I can add this exception, and then I can choose to close this alert, and I can close any other alerts which match this particular exception. So it's very nice that it fills it in for you.
And these exceptions for endpoint in particular will actually live on the endpoint. So we won't evaluate the file in Elastic Security; we'll evaluate it on the endpoint itself, saving you some time. Let's look into adding a rule exception. So I'm just gonna widen this to the last 24 hours,
because I think there were some which triggered which I wanted to leave an exception for. So I have some alerts here for Tor activity to the internet, which are just false positives to me, because I didn't use Tor and I didn't do any of that activity. Let's see if there's anything else. Yeah.
So I want to add this source IP as an exception for when we see this sort of traffic.
I don't want this rule to trigger again if I have this same source IP. So let's go ahead and add a rule exception this time, rather than an endpoint exception. It brings up a very similar workflow, but for rule exceptions we do have to populate it ourselves. So what I'm gonna do is: source IP — oops, source.ip, that's better — then "is", and then I'm going to put in the value I want.
So in my case, that source IP. Okay. So if it matches that, basically I want it to be an exception.
You can see we have the multiple conditions here, so you can add AND, OR, and nested conditions. I do want to highlight, though, that as an operator you also have the option to compare it to a list. So we do have list support here. So if you wanted to upload a list of, you know, host names, IP addresses, users, hashes, whatever you want, to be able to compare against them in these exception lists, you can do that as well. I'm gonna leave a comment here — this is for the workshop — and I'm gonna close this alert and any other alerts that match. And we'll add that exception.
Once we add the exception, you can see the alerts are now closed, because I instructed it to go ahead and close them for me. So under closed, we'll be able to see them.
Now, one thing I'm gonna show is, if I click on the alert name, you can see, first of all, when you click into an alert, how much detail we provide. So there's a full description of why we detect something, false positive examples, the MITRE ATT&CK tactics and techniques, the queries that we ran, the data that we're looking at. So there's a ton of information you get as an analyst; you don't have to sort of do any guesswork. But you can also see the exceptions. So here's the exception I just added: it was created by myself, at this time, and I left the comment.
It can be changed of course.
And you can add another exception straight from here. So it's really nice to be able to just visualize these accordingly. So I just wanted to show you that example of detections, you know, just having that sort of workflow where you can also leave exceptions if you notice any false positives. You can also, if you wanted to, just within this view, have this chart stacked by anything — it doesn't have to be just the rule name. If you prefer looking at it by, for example, the MITRE ATT&CK tactic name, you can do that as well.
So we don't restrict you in this case; you can visualize it however you like.
So that's sort of the detections workflow. We did focus on sort of very specific data here, which is the endpoint data — or, for the Tor detection, the packet data that one of our Beats picked up. I do want to make it clear, though, that you can use any data source. It doesn't have to be just our endpoint data.
So if you had data coming from your own endpoints, or something like CrowdStrike or any of the more popular ones, we have modules for most of them. The only restriction for now is the event analyzer view — this only works on our endpoint data as of today.
However, the plan is in the future to open it up to more data sources. The timeline, though, and everything else works for basically any data that you like.
I want to show a few more features around detections. So on this page are all the detections that I have running. So you can see, as of this version, I have 206 rules given to me by Elastic. I have some which are failing, and for me that's on purpose, because I want to show you that we keep track of these failures as well.
So for someone who's, you know, maintaining this rule set, you have all this information. You can also see sort of the performance of the rules that you're running. And when we say rules, these are queries, right? So you have bad activity and we're searching for it.
And they're running on a schedule. I do want to show you as well what it looks like to create one of these. And by the way, here is where you can upload those lists that we were talking about for use in rule exceptions, right?
So you can upload the list here and use it as part of the exceptions workflow. And you can also import rules. But say you want to create a new one —
really, really simple. You basically just have three choices as of this version. So you can create your own custom query. So I can do something like, again, process.name is rundll32.exe and event.category is network, for example. So if I wanted to look for network connections by rundll32, that's the query I can make. Of course I can keep adding to it; I can do whatever I want. You can see it's gonna look at multiple different data sources by default, so you have full control over that. And then I press continue and I have all the options that you can think of.
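That custom rule query, sketched in KQL as it might be typed into the rule editor (ECS field names):

```kql
process.name : "rundll32.exe" and event.category : "network"
```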
So I have the name, the description, how I want the severity to be assigned —
is it statically or dynamically based on the data — the same with the risk score, the MITRE ATT&CK tactics and techniques, and the investigation guide. We haven't spoken about this yet; I'll show you a really nice investigation guide, but essentially you can have this as part of your detection rule, so an analyst has something to follow — really nice, easy steps when they're going through their investigations. There are also some overrides, for things like the rule name and the timestamp, and then the schedule and the actions.
And when we say the actions — remember, I had started off with that Slack message. This is where those are defined. So if you wanted to send an email, send a Slack message, call a webhook, you know, make an API call, combine actions together — you can do all of that when this rule runs. So that is totally possible. Just to show you what one of those detections looks like with an investigation guide:
So here I have one. This is actually based off of machine learning rather than a query.
And again, it's purposely stopped, so you can see the context I have here. So it's very clear if there's a rule which is not working. As an analyst, if I click on something and it's failing, I know about it immediately and I can follow up. And this one, because it's running on machine learning, will actually tell you which machine learning job is not running. I'm gonna talk a bit about machine learning in just a second, but essentially, rather than looking at a static query, in the background we're modeling behavior over time. But I'll get to that in just a bit.
What I really wanted to show with this one is the investigation guide.
You can see here I have things like GIFs and links to other tools. So as an analyst is following this, there's no misstep in what they have to do; everything is explained to them. So if they've never used Kibana before, or if they've never had the opportunity to work on, say, a DNS tunneling attempt, they can see how to do so. And because we support Markdown in there, you can basically build these really nice, very rich investigation guides, and they're all in the same place.
So as an analyst, again, I don't have to keep swapping between multiple platforms.
So that's what I basically wanted to show with the investigation guide, because I think it's really powerful. It's one of my favorite features, because there's just so much data given to you right in the same place. So I'm gonna talk a bit more about this machine learning stuff, because, you know, there's a big thing in the industry where machine learning is gonna solve everyone's problems. We're well aware that machine learning isn't the answer to everything.
However, we do have it in the stack, right? So we have both supervised and unsupervised learning, and we ship about 30 machine learning jobs, as we call them, for security. And what machine learning in the Elastic Stack does is model data over time. And just to be clear, the only requirement to run machine learning in the Elastic Stack — it is a paid feature —
is that your data sits in Elasticsearch. So you don't need additional hardware; you don't need additional scripts or software.
If your data sits in Elasticsearch, you can turn on the job by ticking a button and you're good to go. You don't have to do anything else. Now, we're very open about what we do, so we don't hide how or what we're detecting.
So let me load up another one where I have a very concrete example, based off of another malware sample, because I think it's good for you to see and understand. So for this particular rule, let's go back 90 days. This rule in particular is really interesting. What it's doing is it's looking for anomalies in DNS. We're looking for the behavior of someone trying to grab data from my network and take it out, right? It's a very common technique — it's not a new technique — but it's typically very challenging to get accurate results without false positives.
So I think I have an example here. Just a good one.
Yes. So what we're doing in this case is we're actually grabbing, for example, every single DNS request that this host makes, and we're modeling the length of the subdomain. Think about that. So if I make a request to www.google.com, Elastic machine learning in the background grabs that www and checks how long it is and how many of them there were. So it's nothing very complex mathematically, but it's doing it for every request, so it's very performant.
And what this allows me to do is pick up the more common techniques around DNS tunneling, which involve a lot of DNS queries to one domain — so like google.com with a very big subdomain.
So that's the sort of activity we're looking for. And if someone wanted sort of the more mathematical detail behind it — because, you know, most security analysts would just care about seeing the results —
if you did want to see the full mathematics behind it, we do show it to you. So if you click on this job here — I actually have it open over here — this is the dedicated machine learning view, what we call the Anomaly Explorer. These results will show up in the detection engine,
so by default you don't have to come in here, but sometimes it's good to go in to see exactly what's happening. So I want to direct your attention to this withyourface.com domain at the top. This is based off of a sample from the Helix Kitten group, APT34, which did real DNS tunneling. So every time we detonated it, we got an event here, and I didn't have to do anything, because it's unsupervised. So it learns that it's unusual behavior, and I'll show you exactly what it picked up.
So here's where we can see the full sort of mathematics behind it.
So it's telling me: hey, listen, on this host — James's HP Windows machine — we just spotted 626 unusually long DNS requests to withyourface.com. The probability of that happening in your environment is 7.8e-26 — so, like, almost impossible — hence the very high score of 84. And I can see exactly why, because this host is making very long, unusual requests to withyourface.com via this DNS server. And you can see it basically tells me it's 18 times higher than usual.
Now, this is another interesting case, because since I run this demo so often, I wanted to make sure that this machine learning job doesn't learn — because, since it's unsupervised, if I ran it, you know, once a day or once a week, eventually it would actually learn that this is normal.
So I did put in an exception here.
So I said: listen, every time you notice withyourface.com, which is inside this rule, please ignore it. Do not learn from it; don't update the model. So you have that influence as well. And for those of you who are interested, it's a really, really simple setup — all UI-based, as you're seeing here, and very easy to get used to. Like, I'm not a data scientist, none of that, and I was able to use machine learning really, really easily.
So that's sort of the unsupervised nature of the machine learning we offer for Elastic Security. I like to call it another sort of defense-in-depth tactic to detect this very unusual behavior.
And that's mainly what I wanted to cover around Elastic Security and the analyst workflow: using detections and creating your own detections. Just in case you missed it, if we go back to detections — let's go to today — to create a detection based off of machine learning, you just click the button.
So you pick the machine learning job that's running, you select when you want to be alerted depending on the score, and that's it.
So really, really, really simple. And of course we include these alerts out of the box, so in reality you wouldn't even have to do that yourself — you just switch it on and it does it. The last thing I'd like to talk about in the security app is: how do I manage my policies?
So, like I said, in 7.9 we released the endpoint, which is using Elastic Agent. Elastic Agent is a brand new concept for us, where basically everything in terms of data sources and data collection is managed centrally through Kibana, including your security policy. So for that machine where I detonated the malware that was part of this demo, this is where I set my policy. So we can see we have malware set in detection-only mode, running on Windows and Mac. These are the events I'm collecting from Windows, and these are the events I'm collecting on Linux.
So it's really, really simple — point and click — and I'm collecting security events in a very rich way. Especially for Windows and macOS, we live in the kernel. So we're collecting these events at the kernel level, which makes it very hard for an attacker to manipulate or stop them, or something like that.
I won't say it's impossible, because you should never say that in the security world, but it makes it much harder than the traditional sort of event logs or typical log collection mechanisms.
So that's mainly what I wanna talk about in the security section.
Now, I did say that I'm gonna talk about some other features in Kibana which are just as relevant for security, and this is mainly gonna be about some other visual aspects that you can use.
So, as I said in the very beginning, these are all out-of-the-box views, right? So the security app — everything was on detections, hosts, network — that's all out of the box. I don't have to build anything myself. What if I did want to build stuff myself, right? Which is a totally valid use case. So you can do all of that in Elastic. Maybe you wanted to grab this data and present it to a less technical audience, maybe you wanted to create a management report, maybe you wanted to do something like that. Of course you can do that.
And I have an example of that here.
So I have something we call Canvas — you can find it in the menu here. It goes a bit beyond the traditional dashboard, because Canvas is more like a work pad where you can do whatever you like. Let me turn on the editing controls here.
So again, very, very simple, totally drag and drop — you don't have to be a designer or developer to build these. It's the same data that I'm using for security, so I don't have to duplicate data or reach out to another system. It's the same data, and it works in real time as well. So in this case, I focused on the MITRE ATT&CK framework, because I'm a big fan, and I wanted to see the events based on the MITRE ATT&CK enrichment. These charts are very rich.
I can link out straight into the security product, so I can build whatever I want, essentially.
I can have multiple pages as well, so I can have this on a screen and have it flick between different pages. So it's very interactive and really easy to set up — it literally takes a few minutes; you add elements, and everything is done straight in here. That's Canvas. I always like mentioning it, but you still, of course, have the traditional dashboard. So if I just go to dashboard here and load up one which I had built for a different session — let's go to the last 90 days, maybe 30, that's it, the last 30 days. So this is again using the same data that I have for my other solutions.
But I put everything on one screen, including my machine learning results and my detections, and I can overlay different data sets. So there's this concept of annotations, where I can grab two data sets and overlay them on each other, and we also have the concept of drilldowns as well. So if I wanted to drill down into another dashboard, I can — I can go to another dashboard and it takes the same value. So I can do all of that stuff as well. You can build these dashboards to your heart's content; you can do whatever you want.
Very, very important to know about.
And there are some other views as well. You can use our graph view — I like this because it's another way of visualizing data, using a different train of thought. This is the same data set, I should say, that's in the detections view. So this is the detections result, and what I wanted to see is the relationships between the detections being triggered, the users, and the hosts that they're living on. So I can see here, for example, that these two Windows hosts are very, very popular —
they have a lot of detections, and primarily for this James host. Who the heck is this James, right? So I can immediately tell who the culprit is on which host. And then I have a few others here, and it's easy for me to spawn off of this as well.
So I can check if there are any more results. And maybe if I wanted to grab James's MacBook Pro and see if there was anything else, any other relationships, I can do that — in this case, they're all present already.
So even after something happens, you can see — from this morning, because I had more detections, it said, oh, there are some new relationships here, and it loaded them up. Really, really interesting stuff.
So I have about 10 minutes before we go for a break before the second talk. There are two things that I want to talk about. One is sort of something I get asked a lot about, and we have a solution for it — it's a bit hidden under the observability solution.
I get asked a lot: James, how do you detect if, for example, you have a certain data source which has stopped sending you data? Again, a really valid security use case. And we actually use our machine learning for that. So if you think about it, machine learning is a really valid way of determining if a host or a data set has stopped sending you data, or is sending you less or more data than usual. And you can see here we have a feature under our Logs app for log rate anomalies, and what this is doing is using machine learning to learn the typical data pattern of a data set.
So you can see here, for example, if I want to look at, let's say, my nginx access logs, you can see any anomalies in the data collection. I can see here, for example, that on October 10th I had two times fewer log messages than expected. So if I expand this, it tells me, for example, that I typically get 700 messages in a 15-minute period and I only got 261. So you can do all of that, and of course you can get alerts for it — the same way we've got alerts for our other machine learning anomalies. So a really, really nice feature.
And I always like pointing it out, because it's very, very valid for security and it's much easier than usual. So instead of me having to set thresholds — hey, you know, this host typically sends me a thousand events on Monday morning, but Saturday evening it's 2,000 —
none of that; it's just gonna learn for me. The last thing we're gonna talk about before we go for our break is something which is really exciting: a brand new query language, the event query language.
Before we stop, just to recap pretty quickly: we did some searches in the timeline, for example for things like process name. If we do something similar to before, I'll look for outlook.exe, and then I can do, for example, destination.ip is 1.2.3.4/24. I can do stuff like that, and this is using the Kibana Query Language, which is very good, but it's a bit limiting for certain security use cases. One of the biggest questions I always get asked is: what about correlation?
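That recap, as a combined KQL sketch (KQL accepts CIDR notation on IP fields; the values here are just the demo ones):

```kql
process.name : "outlook.exe" and destination.ip : "1.2.3.4/24"
```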
And it was a limiting factor for us, but now we've introduced a brand new language called EQL, which is the event query language.
This is something we inherited from joining forces with Endgame. So Endgame had built EQL to look for malicious activity on endpoints, and we've brought that to the Elastic Stack. In the 7.9 release it's an experimental feature via API calls, which is why I'm on this interface here. In an upcoming release, very shortly, this is gonna be natively part of the user interface. So don't worry about the examples I'm gonna show here.
I literally just built these so you can see what I'm talking about, but very soon you're gonna be looking at something like this, where event correlation is just another part of the detection mechanisms, and you can just type what we call an EQL query right over there. But what problems does EQL solve? Really important to talk about. So I'm gonna make some EQL queries on some data sets here, just so you get an idea.
So I can still do the sort of very traditional queries, where I look for, for example: process name is this, and it has certain command line arguments. But the real power of EQL is sequencing. So I can look for a sequence of events by a specific entity. So for example, if I want to look for a sequence of events by the process ID, it's as simple as doing something like this: sequence — look for a process where the process name is regsvr32, then look for this DLL name, and then any network events at all. Really, really easy. So there's a very low learning curve.
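A sketch of the sequence query being described (EQL; the event categories and `process.entity_id`, the usual ECS field for tying events to one process instance, are assumptions — the specific DLL name from the demo isn't known, so the middle stage just matches any DLL load):

```eql
// Hypothetical sketch: three event stages tied to the same process instance
sequence by process.entity_id
  [process where process.name == "regsvr32.exe"]  // the process starts
  [library where dll.name != null]                // it loads some DLL
  [network where true]                            // then makes any network event
```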
That's built with security Analyst and mind. So really simple to follow and, and learn on as well.
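A minimal sketch of the kind of sequence query described above; the process and DLL names here are illustrative (the classic regsvr32/scrobj.dll pattern), not necessarily the exact query from the demo:

```eql
sequence by process.entity_id
  [process where process.name == "regsvr32.exe"]
  [library where dll.name == "scrobj.dll"]
  [network where true]
```

Each bracketed event query is matched in order for the same `process.entity_id`, which is what makes correlation a one-liner instead of several separate searches.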
I'll show a very traditional example, something I get asked about a lot, which is the classic three failed logins followed by a success. EQL allows us to build those sorts of rules. Don't worry about this part.
I just filtered on a specific time range; again, when it's part of the user interface, you won't have to worry about any of this. But if you look at the query here, I'm basically looking for a sequence of events against the same host name, from the same source, over a 15 second time period, where I had three failed logins and a success for the same username. And just to visualize the data first, this is sort of what I'm looking for. I generated these events manually, so I logged onto a machine and failed the login three times, followed by a success.
So I'm looking for this sort of event here.
I want to see and create a rule for this sort of behavior, very typical where you're trying to look for brute-force behavior, and in EQL this is exactly what it looks like. And if I run it, it's going to run across that data set, and this is exactly what I expect to see. So you can see here Jimmy J, which was that user, from the same source on the same host, had three failed logins followed by a success. So you can see here failed, and then if we look again, failed again and again, and then the last one is a success.
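Reconstructed as an EQL sketch (field names follow ECS; the exact query shown in the demo may differ), the three-failures-then-success rule looks roughly like this:

```eql
sequence by host.name, source.ip, user.name with maxspan=15s
  [authentication where event.outcome == "failure"]
  [authentication where event.outcome == "failure"]
  [authentication where event.outcome == "failure"]
  [authentication where event.outcome == "success"]
```

`sequence by` pins all four events to the same host, source, and user, and `maxspan=15s` enforces the 15 second window.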
So we're able to look for this activity now and create a detection rule. Again, in the upcoming release you won't have to worry about any of this sort of JSON or complex-looking logic.
I'll just type the query like I did before, and that's it. So this is going to be a valid query in the upcoming release. And you can see here it also does syntax validation, so if I make a mistake, for example, it's going to tell me if something doesn't make sense.
I think this is a development version, so it might not be fully finished, but if there's something which is going to make the query not work, it's basically going to tell me. Obviously this is a bad example, and there's validation happening here, or there should be; that's why there's the spinning circle. So I just wanted to point that out. And we also give query previews. Yeah, this is the dev version, so it's not fully functional, but we're going to allow you to have previews.
So if you build a rule which is perhaps going to be a bit noisy, we'll actually warn you beforehand, so you don't have to worry about deploying very noisy rules.
And I think that's what I wanted to cover from a demo perspective. Before we go for a break, just two more slides with some information. Hopefully you were able to follow that, but if you liked what you saw and you have any questions, of course we can take a few questions now, for the next couple of minutes. But we do have a fully open demo instance which you can go and try, which is demo.elastic.co.
Like I said, you can try for free for 14 days on elastic cloud. And we also have our community slack. There's thousands of people on this. You can literally come in and ask any question that you like. There are security, dedicated channels. So there's myself on there, a ton of different engineers as well.
So, and customers and users. So please come in and ask questions if you have, or want to dive a bit deeper into elastic security or elastic search in general.
So I very much recommend it. That was our first workshop. We have some time for questions and then we can go for a 15 minute break before coming back at two o'clock for the second session, where we're going to talk about our detections repository, our public detections repo.
So if anyone has any questions that they'd like to ask now, more than happy to take them so you can pop them in the, in the chat box, or if you want to ask them live, you can as well, if not, we can go and we can come back at two o'clock central European time.
Hello everyone. It's two o'clock German time. Good afternoon. For those of you joining us from the previous session. Welcome back for those of you joining us for the first time this afternoon. Hi everyone. My name is James. I am a principal product marketing manager at elastic focusing on security.
This is our second workshop of the day, where we're going to be talking about our detections repository: more specifically, our philosophy, why we did it, how you can contribute, all the tooling we provide, and what it looks like to actually use this repository. As for the previous session, if you have any questions, do feel free to ask in the chat, or we can take a Q&A session towards the end. We'll probably finish a bit early; I'll try and target about forty-five to fifty minutes for my slides and presentation. There's my cat.
But if anyone has questions as we go along, please feel free to, to just interrupt. It's fine. All right. So let's dive right in.
So what is this repo and why, why did we do it?
So, first of all, you know, the age old problem of security, where you have a security vendor, they publish, you know, security rules, detections, maybe now, even machine learning and that sort of thing. And they're not very transparent about what they're deploying. We wanted to change that concept. So we didn't want any ambiguity. So we published all our detections on GitHub. So this is the same repository that our security research team will use to create new detection rules that will end up in the security product.
Not only did we publish the repo and the rules, but we published full contribution guides, developer tools, command line interface, all of that good stuff. And of course anyone can contribute. So because it's open, the whole idea of this is that if anyone wants to contribute to elastic security, in terms of a detection rule, they are fully open to do that.
And it is of course at github.com/elastic/detection-rules. Let's go through a bit of the design of the repository. Let's see exactly what there is inside it.
So this is what it looks like even today; these screenshots are tiny, but it looks exactly the same. There are a few different directories, and then there are all the guides. There is a Python requirements file for the command line tool, but mainly there are these directories: detection_rules is a Python module for interacting with detection rules, and etc contains some supporting files.
kibana contains some libraries for Kibana, because this command line interface can interact with Kibana, which is our visual tool for the Elastic Stack. kql is what we use as the primary detection language. rta is red team automation, which is a fully open adversarial behavior testing framework. So as we create detection rules, we also create what we call an RTA, which allows us to test them. Really, really nice; anyone can use it. It's essentially little Python scripts that simulate the behavior. And then we have the rules themselves, which now have a ton of tests.
So we actually have end-to-end tests for syntax validation, use of the common schema, making sure the rule is syntactically correct. It's all in there.
Okay, just went through those. Perfect. Now, within the rules themselves: at the moment, if you go to the repository, there are over 300 rules. Within the subdirectory for the rules themselves, we split them out by category or data target. So you can see there, for example, things like AWS, Linux, Windows.
So we put the directory there depending on either the provider or the data source. That being said, they are data agnostic. Because of our use of the Elastic Common Schema, which we'll get to in a second, just because we have Windows there, it doesn't mean we're expecting the data to come from a specific ingestion mechanism like NXLog or Winlogbeat or our module. It could come from any data set for Windows which adheres to our common schema.
So if you're collecting things from another endpoint vendor, like CrowdStrike or Carbon Black or SentinelOne or any of those, you'll still be able to use them. They are vendor agnostic.
This is what a rule looks like as it lives in the repository.
For the configuration style, we decided to go for TOML as a markup language, which gives us a really easy way of maintaining the rules, but also of pushing them out into an environment. So you can see there is basically a section for metadata, then there's all the rule description: the fields, the name, what risk score you want to give it, and so on. So there's a lot of information within the rule itself. It's not just the query; there's all the metadata about the rule too.
So as it moves from here to the stack, it uses these elements within the TOML file to basically build the query. For anyone looking to contribute, this is sort of what you'll be doing, and this is what we want to accomplish with this repository.
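As a rough sketch of that layout (the field names here loosely mirror the repository's rule format, but check an actual rule file for the authoritative structure), a rule's TOML looks something like:

```toml
[metadata]
creation_date = "2020/01/01"
maturity = "production"

[rule]
name = "Example: Service Creation via sc.exe"
description = "What the rule detects and why it matters."
risk_score = 47
severity = "medium"
type = "query"
index = ["winlogbeat-*", "logs-*"]
language = "kuery"
query = 'process.name : "sc.exe" and process.args : "create"'
tags = ["Windows"]
```

The `[metadata]` section is maintenance bookkeeping; everything under `[rule]` is what actually gets pushed into the detection engine.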
So again, open source is big for us. As we all know, Elastic is an open source company; we do not believe in security through obscurity. We had some people asking us, are you really going to make all your rules available, so an attacker can go and look at exactly what you're doing?
Of course an attacker can go and look at what we're doing, but whether we made them open source or not, they would still do it. There's nothing stopping an attacker buying our product, or using our product, and looking at them that way. Hiding them doesn't make sense, because people would just get around the hiding elements. So we said, let's open them up and have everyone use them. Plus, they're also scrutinized when they're open like that. If we make a mistake at Elastic, someone can go in there and say, oh, this query is incorrect.
If you run it this way, it's not gonna detect this behavior. So that's a very big advantage of being able to do that. Like any other open source product, it's open up to scrutiny. And for anyone to check, we wanted to share best practices. So specifically for high fidelity rules. So we have a certain methodology when we create rules. So I wanted to be able to share that as well and sort of the way we think we don't create rules just for the sake of creating them. There's a lot of thought that goes into it.
So we wanted to basically share those best practices, and we want to provide a really good experience, apart from developing quickly. Yes, we want to provide a really good experience: if someone has an issue with our detections, you don't need to go through support or anything like that. You can just go to the repository, open an issue, and ask questions there.
So very, very open about it. And we just want, we literally want users to be successful more than anything else using these rules. So we do want them to spot bad behavior. So the fact that they're open allows us to accomplish this.
Now let's look at what it means to actually create these detections. We put this philosophy page in there, which is how we approach creating detections.
When I say we, I mainly mean our security research team. We have a very vast team of security researchers whose full-time job is creating these detections, some of the most talented people I've ever met in my life; I'm very honest about it. And not only are they good, they're very humble. If they create something, they say why, and you can ask them any questions at any time. A real pleasure to work with these people.
And real-world experience is key.
These are basically built based on what we've seen either in our practitioner life or working in other organizations where security was what we did, or malware we saw in the wild, active campaigns, real attack vectors. So these are based off of experience as the main component of being built. And we focus a lot more on behavior. You'll see an example here soon where we create a rule: we don't just look for a specific process, for example; we look for what would have led to that process being spawned.
We'll see an example of that in a second, but behaviors are very important to all of us. And you see this here, I've highlighted it already: data agnostic, so independent of the data source, is very important.
It should work regardless of whatever way you get data in for Windows or whatever way you get data in for AWS; it should be totally independent. We don't want you to have to write specific detection rules for one specific technology or something like that.
And of course, one of the biggest problems and challenges we have: we want to detect all the bad stuff but limit the number of false positives. It's a big problem in the industry; it's called alert fatigue. As an analyst, you sit down and you're presented a screen with 50,000 alerts, and you know more than half of them are going to be false positives. Where do I start? How do I begin with it? So that's also a real challenge for us when creating detection rules, and the approach we take is to try and avoid them as much as we can.
And even more importantly, improving the performance of the platform that they're running on. Since everything at the end of the day is going to run on Elasticsearch, we can of course write a rule in what I like to call a brute-force style of detection, where you basically say *something*, so you're looking for a word between wildcards, but we know that's going to have performance implications.
So whenever we create a rule, or whenever we accept a rule from the community, we make sure that if someone is going to run it in a production environment, it's not going to have a huge cost implication on their Elasticsearch cluster. When it comes to behaviors, this is what we focus on, and it's very similar to the philosophy of MITRE ATT&CK.
For those who aren't familiar, MITRE ATT&CK started out as a framework listing out the specific tactics and techniques that malware or a bad actor typically uses; a really great framework. We have a similar philosophy to that, where we focus a lot on the technique, not just an indicator.
So again: not just "I'm looking for this process", but "I'm looking for what might have led to that process", for example.
And of course we make exceptions where it makes sense. There are certain rules where we know that a certain behavior is normal for a specific process. If we see, for example, that on Windows, Microsoft Office typically creates a few scheduled tasks when it runs, we add those as exceptions, just to make sure the rule doesn't fire for nothing. We're not just going to say, okay, we're going to look at everything that creates a scheduled task, because that's going to be very noisy.
So we do have the concept of exceptions where we know, and they have been validated to make sense.
Here's an example of the difference between having just an indicator versus a behavior. This is very typical of Mimikatz, and Mimikatz is one of the more popular tools on Windows for stealing or harvesting credentials and things like that. So we could just look for the process name and the arguments in the command line, which very commonly include sekurlsa.
However, we could also look for the behavior, and what Mimikatz does is injection into LSASS. So we look for that. Now here, as I said before, we don't usually focus on specific technology, but here we specifically look for Mimikatz because of the way it does process access to the target image, which is a bit unique to how Mimikatz does it, which is why it's so specific there. But typically the rules are a lot more agnostic. You can see the difference here, though.
Yes, we would have caught it with the first one, but having the behavior makes a lot more sense. Having seen things injected into LSASS is typically way more indicative.
Now ECS is key here, right? So ECS in case you're not aware is the elastic common schema. What does that mean? So elastic common schema is essentially a fancy name for a field naming convention. For any of us who have worked in the security world with multiple different vendors and vendor technologies.
We know that for some reason, no one across the industry has ever sat down and said, okay, we're going to have a common field for source IP, which we're going to call source IP. Every vendor names it whatever they like, so there's no control over it, which makes security analytics really, really hard. So what we did at Elastic is develop the Elastic Common Schema, and all of our data sources, all of our modules (we provide hundreds of modules for data ingestion) adopt the common schema.
So if we have a Cisco module, a Palo Alto module, an iptables module, a Checkpoint module, and so on, they're all sending source IPs in different field names, but we bring them all into the same field. So when we are writing a detection, we know that we should be looking for data inside that single field name.
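As a minimal sketch of that normalization idea (this is not Elastic's actual ingest pipeline code, and the vendor field names below are illustrative, not exhaustive), you could picture it like this:

```python
# Map a few hypothetical vendor-specific field names onto the single
# ECS field "source.ip", so one detection query covers every source.
VENDOR_TO_ECS = {
    "src_ip": "source.ip",          # e.g. a firewall log
    "SourceIp": "source.ip",        # e.g. a Windows event
    "client_address": "source.ip",  # e.g. a proxy log
}

def normalize(event):
    """Return a copy of the event with known vendor fields renamed to ECS."""
    return {VENDOR_TO_ECS.get(field, field): value
            for field, value in event.items()}

# Three differently shaped events all become queryable as source.ip:
events = [
    {"src_ip": "10.0.0.5"},
    {"SourceIp": "10.0.0.5"},
    {"client_address": "10.0.0.5", "event_id": 4624},
]
normalized = [normalize(e) for e in events]
```

A detection rule then only ever has to reference `source.ip`, regardless of which module produced the event.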
And it also defines categories as well.
So ECS, apart from the actual field names, puts everything into categories: authentication, identity and access management, cloud, network connectivity. There's categorization for pretty much everything, and it's extensible and open. If we've missed something, which is very possible of course (at Elastic we don't have access to every technology in the world, far from it, because we live almost entirely in the cloud), we allow members of the community to say, listen, I created this ECS field set for, for example, vulnerability scanning data, which has happened.
And we added it to the common schema. So it is totally open. And you can even adapt it internally: if you have your own schema internally, ECS is as open as you want it, so you can adopt the Elastic standards but add in any of your own custom fields as well.
And this is the example we mentioned before. Imagine we didn't have the Elastic Common Schema, and I was collecting data from a few different sources, Apache and some others. These are all referencing the same IP in different fields; with ECS, this is what the query looks like.
So it's less expensive as a query, because we're looking at one field, but it also allows us to have really simple logic, and as an analyst it makes things much easier. Now, how do we limit the false positives? Generic detections are probably the worst thing you can do. Having just a generic rule, like I said, "scheduled task created", is going to cast a very wide net: you find everything, but you have a lot of false positives.
So generic detections are not very good at all.
And you can see here, the term we use is "tip the scales toward the true positive". So look for the specific things that happen when you're creating a detection rule that make a hit look more like a true positive. For example, let's take this Microsoft Office example with scheduled tasks: if you know that when the scheduled task is created there are certain arguments which are used, then look specifically for those arguments. And in the exceptions, if it calls the same process but uses different arguments, that's indicative of it potentially being a true positive.
And this is what we mean over there by testing your own evasion techniques: putting your detection rules to the test, basically. If you create a detection, try and get around it. We do that very actively: we create detection rules and we ask, hey, what could an attacker do to try and get around this? This is actually real fun, and it moves into the whole purple team concept, where you have a red team and a blue team, and the red team is trying to get past the blue team's detections.
So keep testing your logic until you manage to find something which provides a good balance between true positives and false positives.
This is one really good example. So let's go back here. We want to look for this MITRE ATT&CK technique, Create or Modify System Process: Windows Service, and service execution. Looking for just sc.exe, the service control utility, is way too vague; it's not enough for sure.
Looking for sc.exe with the create or config arguments has too many false positives, because there are so many things which create new services, and it's too easy to evade, because in the command line you can basically put anything around that to get around it. The same goes for the second example. And then you can have something which is a bit too tight, too overfitted, so you're not going to find much at all.
For example, if you look just for the command prompt launching services with those particular arguments.
If you look at the last one, there's a good balance between them. We have the process name, the user name is not SYSTEM (because we know SYSTEM does this a lot), create or config, and a look at those arguments. So there are a lot of specifics in there which are usually very indicative of that sort of behavior. That's an example of rigorous detection: you go through a certain process until you find a good balance between false positives and true positives. You can see that example here; there's the actual detection link at the bottom.
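A rough reconstruction of that progression, written KQL-style with ECS field names (the actual rule in the repository differs in detail):

```
# Too vague: matches every service-control invocation
process.name : "sc.exe"

# Still noisy, and trivially evaded by reshaping the command line
process.name : "sc.exe" and process.args : ("create", "config")

# Too overfitted: only catches one exact launch pattern
process.parent.name : "cmd.exe" and process.name : "sc.exe" and process.args : "create"

# Balanced: scope by user and the arguments that actually matter
process.name : "sc.exe" and not user.name : "SYSTEM" and process.args : ("create", "config")
```

The point is the middle ground: each added constraint should remove a class of false positives without handing the attacker an easy evasion.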
And you can go to the whole issue of when we were creating it and see how we iterated through the whole process.
And like I said, those flags are basically very indicative of lateral movement, and then of course privilege escalation. Now, when it comes to performance,
especially if you're not used to working with Elasticsearch, this can be pretty fundamental. There are certain things: when you're using a particular data source, we're looking at specific index patterns, which are generated by the modules. If we have a field like process arguments, for example, we don't need to parse it: we have process.command_line, but we also have process.args, which is already parsed, with all the arguments in an array. So it's much more efficient to look for data in there rather than doing a wildcard on the command line.
Again, exact matches are always better, or trailing wildcards rather than just *something*. And lastly, don't be afraid to ask; that's the whole concept of an open repo.
So if you, if you want to contribute and you're not too sure what the syntax should look like, or, or what is the most efficient way to create it, ask us like, we're very happy to help if you have an idea for detection, but you're not too familiar with elastic search. This is why we are here. And this is exactly why we created the repository. So it's really important.
Here's an example where we have a detection with very unnecessary wildcards, because we have the process arguments parsed into an array, rather than you having to put wildcards in the actual command line itself.
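Concretely, it's the difference between something like the following (the argument value is made up for illustration):

```
# Wildcards over the raw command line: expensive to evaluate
process.command_line : "*create*"

# The same intent against the parsed arguments array: far cheaper
process.args : "create"
```

The parsed `process.args` field lets Elasticsearch do an exact term match instead of scanning every command line for a substring.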
So the one here where it says "use parsed field" is going to be an order of magnitude more efficient. Now, how do you contribute? There's a whole contribution guide here which will show you exactly what you should do. Open up an issue first, just to let us know, hey, I'm working on this detection. You basically fork and clone the repository.
If you're familiar with GitHub, it's the same thing you do with any other project. Use our CLI, and I'll show you what it means to create a rule; this is very important, because when you use the CLI it creates the structure for you. Then we have the local tests, which we're going to look at in a second, and then you submit a pull request. So once you go through all of that and you have the detection ready, you're ready to push it to our repository for final approval.
Now, why do we create an issue first and then a pull request? Because, like I said, we can discuss it early in the process, especially if it's around, hey, what should I use? Like, I have this idea for a detection; I previously built this, for example, on my side, and I'd like to get it into Elastic now, but I'm not sure what it should look like. That's exactly why we open an issue first, and we have a ton of templates.
So if you want to build a specific detection, we have some templates you can use already. And if you're building a rule that needs to be licensed, we have guides for that as well. It also improves our productivity. And this is what it looks like: if you go to the detections repository today, you'll see a lot of open issues, and those are rules that either our team internally is discussing or that members of the community have contributed and want to put in.
There are many open issues that we have. The new issue button is right there; in case you're not too familiar, press the fork button, for which you of course need a GitHub account. Very easy, step by step here. You clone the repository; maybe you're not too familiar with Git, but git clone essentially grabs the repository as it is on GitHub and puts it onto your local machine, or your server, or wherever you're developing. You enter the directory locally; once you clone our repository, it's labeled detection-rules.
And then you add the upstream, which basically means that as you're making changes, Git knows they should go to the upstream repository. We're going to look at this slide: I'm going to show you what it looks like to create this. We have the full CLI available, and also running the tests and submitting a pull request. So let's look at that right now. I'm going to open up my terminal here. You know what, let's do it from scratch so everyone can see. So imagine I want to go back, and I'll remove the repository.
So let's say I wanted to start contributing.
The repository lives at github.com/elastic/detection-rules. This is the live repository; we've seen the screenshots from this page, and you can see there are 106 open issues and 56 pull requests. So there are 56 rules pending right there: this is either what our team is working on and pushing out, or what the community is pushing out as well.
So a lot: 56 currently open, which is a good amount, and then 106 issues, so rules being discussed at the moment. Let's go ahead and clone this repository. I'm just going to go here, and you can see I have a couple of options, but I'll go ahead and use the HTTPS link for simplicity. As long as you have Git installed as a client (I'm on a CentOS machine here, but it doesn't matter;
you can get Git pretty much anywhere), git clone that repository. And you can see now I've cloned that repository, so I can enter the detection-rules directory.
And if we look at it, it's basically the same exact structure that we saw before on GitHub. You can see there's quite a bit going on here, but in reality you're going to be using the rules directory, rta (the red team automation), or the detection_rules functionality.
Also very important is the file requirements.txt. If you're going to be using our command line interface, which is Python based, you do have to install the necessary Python libraries. Very simple if you're used to Python: it's using pip.
So: python -m pip install -r requirements.txt. These are all in the guide, so you won't have to memorize anything; if you're interested, it's all listed out in terms of what you do. My requirements are already installed, so it'll tell me they're already satisfied. So like this, I'm actually ready to start.
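Put together, the setup steps narrated above look roughly like this (the repository URL is the public one mentioned earlier; the exact, current steps are in the contribution guide):

```shell
# Grab the public detection rules repository
git clone https://github.com/elastic/detection-rules.git
cd detection-rules

# Install the Python dependencies for the CLI
python -m pip install -r requirements.txt
```

After this, the `detection_rules` module is usable from inside the checkout.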
So if I, let me see if I can find the create rule command again.
So imagine I want to create a rule, and I give it a name, something like "Defense Evasion via Microsoft Build child process", and I give it my name here. You can see we basically reference the Python module to create the rule, and when I do this, it's going to ask me what type of rule I'm creating. We have several types: you can create a custom query, machine learning, or threshold based alert.
We'll do query in our case. We can leave the rule ID blank as it is there, and it'll generate one for us. And then we can have the actions: if you want, say, a webhook or email, you can put all this action metadata in. Basically, everything that's present in that TOML file, you'll be putting in here. Instead of you having to go in and manually edit the TOML file, it's asking you for every single thing.
And when you're ready, it'll save it in that file.
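The command itself looks something like this (the rule path is made up for illustration; the command name is from the repo's CLI documentation at the time, so check CLI.md for the current syntax):

```shell
# Start the interactive rule builder; it prompts for each TOML field
python -m detection_rules create-rule rules/windows/my_new_rule.toml
```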
So if we look at a finished one, for example, let's look at one without any of my changes, or a slightly different one. This is what it looks like when it's created: it's basically the full TOML with everything in there. As you're going through the process of creating it, it's putting the structure together for you; you don't have to do it yourself, unless you want to, of course.
Now, after I create a rule, best practice is to test it, and we provide the testing framework. By default you don't actually specify a rule file; it will test everything. And when we say tests, this is going to do things like checking the timestamps and checking the ECS schema, so there are a lot of things to test for.
So if I run the tests, it's going to go through the rules and test them. Really nice: essentially, instead of you having to write tests, they're all in there.
And the nice thing about it is this is really handy for detection engineering as code. If you want to automate the creation of a detection rule from the command line, as a detection engineer, so it goes through a CI/CD pipeline and makes its way into Kibana, that's all something you can do.
In fact, that's what most teams do with this library. You'll notice the warning here: it's basically complaining that one of my timestamps is incorrect, which is what I expect. So you can see the detail that goes into it: it says, hey, this time is not good, it won't work properly. There's a lot more you can do.
So if we load up the help menu here, you can see what we did: we showed create-rule.
We also did test. I can also view rules: if I want to view a rule, instead of me looking at the TOML file, you can pull it up there. You can import rules: if you have a rule coming in as a JSON blob, so someone exported their Kibana rule, you can actually look at that inside the command line. You can validate, and you can even search for rules, which is really nice. So imagine: instead of having to go through Kibana to search through the 300 or so rules that exist today, I can do something like this. Let's actually do... oops.
So I'm going to say: I want to look inside the rule repository for all the rules that Elastic provides which use machine learning, to see what I'm going to get when I deploy them. If I run this search, it shows me exactly that: these are the rules that use machine learning as their detection mechanism. So it's really easy to list them out and really easy to automate, if you wanted to check them or create reports on them.
So really, really handy, and it's all in there. You can actually specify the columns if you want, so if you wanted different columns, say the rule ID, file, or name, you can do that as well. And if you notice the language I used to search, this is KQL. So if you're used to the Kibana query language, you don't have to learn anything new.
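Conceptually, that search is just a field:value filter over rule metadata with optional column selection. A toy sketch (the rule entries here are made up for illustration):

```python
# Illustrative rule metadata; real rules carry many more fields.
RULES = [
    {"rule_id": "a1", "name": "Unusual Login Time", "type": "machine_learning"},
    {"rule_id": "b2", "name": "Okta App Deleted", "type": "query"},
    {"rule_id": "c3", "name": "Rare Process by Host", "type": "machine_learning"},
]

def search(rules, query, columns=None):
    """Filter rules by a single 'field:value' term, KQL-style."""
    field, _, value = query.partition(":")
    hits = [r for r in rules if str(r.get(field)) == value]
    if columns:
        # Project each hit down to the requested columns.
        hits = [{c: r.get(c) for c in columns} for r in hits]
    return hits

for row in search(RULES, "type:machine_learning", columns=["rule_id", "name"]):
    print(row)
```

The real CLI supports full KQL rather than a single term, but the shape of the operation, filter then project, is the same.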
So really, really easy; you can also search like that. Another thing you can do, if we look back at the help menu here: we can, for example, upload rules straight into Kibana.
So this is where the automation would come in for your detections as code, that sort of pipeline. If I wanted to upload all these Windows rules, for example, anything that lives in the rules/windows directory; or imagine I created my own directory and I want to upload all my custom rules to my Kibana after I've validated them, tested them, and all that good stuff; I basically give it my Kibana URL and my credentials, and it will go ahead and upload them for me.
So obviously I've filled in my username and password here. In my case they'll fail, because it tells me they already exist, and also because I haven't updated to the latest version of Kibana, since this release isn't actually out yet, so it says those rules aren't supported right now. But if everything was okay, it would go ahead and upload them. This is pretty much expected, so that's good.
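Under the hood, that upload step is driving Kibana's detection engine HTTP API. Here is a hedged sketch of what such a request might look like, built but not sent; the URL, rule body, and API key are placeholders, so check the Kibana API documentation for the exact endpoint and payload your version expects:

```python
import json
from urllib.request import Request

def build_upload_request(kibana_url, rule, api_key):
    """Build (but don't send) a rule-create request for Kibana's detection engine API."""
    return Request(
        url=kibana_url + "/api/detection_engine/rules",
        data=json.dumps(rule).encode(),
        headers={
            "Content-Type": "application/json",
            "kbn-xsrf": "true",  # Kibana requires this header on write requests
            "Authorization": "ApiKey " + api_key,
        },
        method="POST",
    )

req = build_upload_request(
    "https://kibana.example.com:5601",          # hypothetical Kibana URL
    {"rule_id": "my-rule", "name": "My Rule"},  # heavily trimmed rule body
    "base64-encoded-api-key",                   # placeholder credential
)
print(req.get_method(), req.full_url)
```

In a real pipeline you would send this with `urllib.request.urlopen` (or a client library) after the rule has passed validation and tests.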
There are a few more things: for example, if you wanted to load up documents from Elasticsearch to test your detection rules against real data, you can actually do that. So really easy and really handy. That's an example of the CLI, and again, everything is documented: if you go to the repo and open cli.md, you get the full readme on how to do pretty much everything I just showed you, and even more. And to show you, because we didn't actually go into a pull request yet: if I wanted to contribute and look at a pull request, this is basically going to have the files we expect.
So for example, this is David on our team here; this rule detects attempts to delete an Okta application, so this is data coming from Okta.
One of the modules we have is Okta, and you can see he has this TOML file over here which contains the rule. So after he used the CLI to create his rule, this is what ended up going onto GitHub once he filed the pull request.
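To give a sense of what such a TOML rule file roughly contains, here is a heavily trimmed, illustrative sketch; the field names and values approximate the shape of a rule file, not the repo's exact schema, and the rule_id is a placeholder:

```toml
[metadata]
creation_date = "2020/10/05"
maturity = "development"

[rule]
author = ["Example Author"]
description = "Illustrative sketch of a rule file, not the actual Okta rule."
index = ["logs-*"]
language = "kuery"
name = "Attempt to Delete an Okta Application"
risk_score = 21
rule_id = "00000000-0000-0000-0000-000000000000"
severity = "low"
type = "query"
query = 'event.dataset:okta.system and event.action:application.lifecycle.delete'
```

The CLI's create-rule command prompts for these fields interactively, which is why contributors rarely have to hand-write the file.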
So really, really nice, and it does a lot of the hard work for you. I think this is great because it takes away a lot of the, let's say, vendor specifics: instead of going through a particular UI, you can implement all your automation tools, use any APIs you want, and this will just work. Really easy.
All right. Before we move on, I just want to show you what it looks like within the actual stack.
For those of you who aren't aware, let's go back here. This is Kibana, and this is the detections page. Whenever you push those rules to Kibana, they go into the detection rules, right? These are the ones that exist in my cluster: I currently have 206 Elastic rules and 10 custom rules. If there was anything else, it would of course show up here, or if I pushed anything, it would show up under the custom rules. And once everything is parsed from the terminal and all that, this is what the analyst sees.
There's a very nice structure in here; everything, all the parsing, all the metadata, is done for you. This is what the analyst sees at the end of the day.
So you, as a detection engineer, don't have to worry about your contribution being too difficult, or about the analyst never ending up seeing it; they're going to see this really nice interface over here. And in reverse, if I wanted to take rules from in here and load them into the CLI, that's why we have the export and import buttons. So if I wanted to export this rule as a JSON file and then use the CLI on it, that is totally valid, and the same goes for import.
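The export format Kibana uses for rules is newline-delimited JSON, one rule object per line, which is what makes the round trip between the UI and the CLI straightforward. A minimal sketch of that round trip:

```python
import json

def export_ndjson(rules):
    """Serialize rules as newline-delimited JSON (one object per line)."""
    return "\n".join(json.dumps(r) for r in rules)

def import_ndjson(text):
    """Parse newline-delimited JSON back into a list of rule dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

rules = [
    {"rule_id": "a1", "name": "Rule One"},   # trimmed, illustrative rule bodies
    {"rule_id": "b2", "name": "Rule Two"},
]
ndjson = export_ndjson(rules)
assert import_ndjson(ndjson) == rules
print(ndjson)
```

Because each line is an independent JSON document, files exported from one cluster can be concatenated, filtered with ordinary text tools, and imported into another.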
Okay. So just head back.
So just to recap, in terms of the actual repository itself and why we did it: our whole philosophy is written up over here, and I do encourage you to read it, but we just wanted an easier way for people to contribute and have a really good experience using Elastic Security, with help from the experts; both the Elasticsearch experts and of course the security experts as well. There are a lot of reasons why we did it the way we did, and it also gives you a sneak peek into what's coming up.
With each version we of course release new rules. You can actually switch branches: you can see the 7.10 branch, which is the next release to come, and there were new rules added, of course.
And you can go ahead and see what's coming out in 7.10, because all those rules will live over here, which is fantastic.
And again, you can see how they're spread out. So Azure: for those of you on Azure, there are quite a few rules coming in. We have Windows of course, machine learning, macOS, Linux, GCP, AWS, and cross-platform stuff; for example, there's Zoom, which will of course work on macOS and Windows and even Linux. So they're all split out nice and neat.
One more thing I wanted to show you in the repository itself is the red team automation (RTA) tooling. We spoke a bit before about how we should be testing our rules. So one thing we do when we create a rule is typically also create a red team automation file, which allows us to test the detection side of the rule.
These are little Python scripts which simulate the adversarial behavior; when we run them, we should trigger the actual detection inside Elastic Security.
So if we run one of these and it doesn't trigger, then for some reason or another something has failed, and we should probably look into that. It's probably easier to see them here if we go into the rta directory. You can see there's quite a bit, right? So if you wanted to test, for example, Windows event log clearing, you can just run this. If I click on it, it's a very simple Python script, so you can see exactly what we're doing, but it's much easier than you actually going in and clearing those event logs yourself.
Now, these will change things, so don't run them in a production environment, but there's no malware involved; we're just simulating the adversarial behavior. You literally run it on the machine that's your test machine for detections, and as you can see there are a lot of them, depending on the rule type.
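Conceptually, an RTA script is just a small Python program that performs (or harmlessly stands in for) the adversarial action and reports whether it ran. Here is a deliberately benign sketch in that spirit, not one of the real RTA scripts, which run commands such as actual event-log manipulation on a disposable test machine:

```python
import subprocess
import sys

def simulate():
    """Run a harmless stand-in for adversarial behavior and report success.

    A detection rule under test would be written to match on the command's
    execution; here we only check that the command itself ran as expected.
    """
    result = subprocess.run(
        [sys.executable, "-c", "print('simulated-adversary-marker')"],
        capture_output=True,
        text=True,
    )
    return "simulated-adversary-marker" in result.stdout

print("simulation succeeded" if simulate() else "simulation failed")
```

After running a script like this on the test machine, you would check the detections page: if no alert fired, the rule, the data pipeline, or the simulation needs investigating.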
Okay, so that's the repository and the philosophy and all of that. Please do use it; it's really, really important for us that we get contributions.
Not just so we have more rules, but so we learn from everyone. For us it's really important to open up our net as wide as possible, so we get the experiences of as many different users as we can. And there's also our Slack workspace; let me just open it up here.
It's very, very active. This is our Slack community workspace: in the detection-rules channel alone there are 249 members, and across the entire workspace there are 4,300 people using Elastic for security, for whatever reason, whether that's security detection rules or anything else. So you get a lot of help from many members of the community.
Our devs, our research team, a lot of people are actually on there.
Okay, I think we're going to finish a tiny bit early, so I just have a few more slides and then we can wrap up. Here we go. We always like showing this slide because it really shows the breadth of our community ecosystem.
Elastic has been used so heavily by so many different vendors and community projects that it's pretty much everywhere; even if you don't realize it, you're probably using it in some way or another. And this is probably my favorite box down here, because even before we ever had our own security solution or detection engine, people were making the effort to build on top of Elasticsearch, because it was a technology that allowed easy access to anyone.
So whether you were an individual at home or an organization you wanted to protect, these were the projects that made it possible.
So a massive thank you to all these projects. I don't know if any of you have used them before, but these are not your weekend coding projects.
Some of them are almost full-time work, especially ones like Wazuh, TheHive, and Security Onion; these are big projects. But you also see some of our partners, like SOC Prime. SOC Prime were probably the first to build a rule-translating mechanism: they have this tool called Uncoder.IO, which is primarily designed to translate rules from the Sigma framework into detection rules.
So we're very happy with the work they've done, and we thank them for it very actively. There are also some of our other partners in there as well, and then you have organizations that have been building security products for the past 20 years or so.
Some of them have partnered with us to give their users a better experience. So this slide really speaks to how important community and sharing are; we're very big believers in that. And one last reminder, just in case you need the links again.
So if you liked what you saw and want to get started with detections and our repository, Elastic Cloud is probably the easiest way to try all that functionality, like the rule upload. It's free for 14 days, no commitment whatsoever, no credit card needed. Ask us questions on Slack; there's a short URL for our Slack workspace, and ask whatever you need, whatever you want. There is no such thing as a wrong question. And if you just wanted to see what it looks like using detection rules in a security context, which was more like my first session earlier today:
demo.elastic.co is a fully open demo instance with some data in there that you can experiment with as well. Really handy, and it gives you a really good place to try things out. We also have free on-demand training, which we've been offering since the beginning of the coronavirus pandemic; we all know how hard it is right now to actually develop new skills and things like that, so we started offering some of our more popular training courses for free. Please go ahead and make use of that.
If you're interested, there are security-specific courses as well; that link takes you straight to the training. And that's it. A last couple of links, just in case you need them: the link to the detection repo, github.com/elastic/detection-rules, and the Slack workspace with the channel specifically for the detection rules.
So again, don't hesitate to find us on Slack; you can ask us any questions you like. We're running quite early, about 15 minutes ahead of schedule, which is great.
I know there's a lot of information to take in, especially if you've joined both talks, so I appreciate you staying on for so long. We can use this time for anyone to ask any question they may have, either on the detections repository or on the previous session; I'm totally fine with that.
So please, don't hesitate: you can ask either in the chat or live if you wish. And if you're not comfortable asking in front of a public audience, again, there's the Slack channel; that's probably the best way to reach out if you have any questions. So I'll open it up in case anyone has a question they'd like to ask now.
All right, it seems people are very quiet; totally fine, no need to ask questions. Again, if you do, Slack is the best place, or throughout the rest of the sessions today you can join us in the Elastic lounge.
Some of us are answering questions there on the KCLive platform, so feel free to ask throughout the rest of the event. Thank you again for joining this session, the previous session, or both; it's always a pleasure. And thank you very much to the KuppingerCole team for putting this together and inviting us to join. Thank you all, have a great day, and I'll see you all next time.