Great, thank you very much. Can everybody hear me well? Just a thumbs up from the back row if that's good.
Okay, cool. Well, this is super exciting. I know the title of my presentation is a bit sensational, and I want to talk about machine learning basically, and mostly in the offensive sense. So I love where the previous discussion went, with Emotet as an example, and the deepfakes. And I think my key takeaway was that we're currently training mice with AI to solve our skills gap. Maybe the skills gap can be solved by the combination of AI and mice, if it comes to it. I guess that was my key takeaway. But I want to talk about three things mainly in the next 30 minutes or so.
One is to give you an example of how AI is being used already in practice in cybersecurity.
So I think the morning was really good, challenging us in our thinking about the problems that AI brings: how to implement AI, AI limitations, trust in AI, ethics and AI, rather theoretical topics, I'd say. So I wanna bring it down to practice and show you some things around that. That's the first third of my presentation. Then I wanna talk about where we at Darktrace think attacks are going, so using AI for attacking in the short term.
So right now, in the next few years, and then moving forward to the long-term perspective. So where is all of this going? We heard it's gonna be an arms race, most likely. So where is it going, not in one to three years, but in five to 15 years? And of course the next paradigm shift, because I always try to think about where we are going as an industry.
I think the last paradigm shift was when somebody finally weaponized worm malware, like WannaCry and NotPetya.
So everybody knew there were worms out there, like the Morris worm or the ILOVEYOU virus, all these things that could literally move around, but they weren't fully payloaded and weaponized. Then in 2016, 2017, WannaCry came around with a heavy payload, ransomware encryption software, and the ability to move around laterally. And it took the industry and the world by storm, right? It hit companies left, right and center who were struggling doing the cleanup work. So that was the last paradigm shift.
And we think the next paradigm shift is gonna be when the bad guys start actually using AI for attacks. So I don't want to talk about attacking AI systems. We heard a bit about that earlier this morning, how to attack AI systems. I wanna talk about how AI can be used in attacks.
Now, I work for Darktrace. My title is Director of Threat Hunting, which means everything and nothing. I'm basically using our AI solution every day and work with our big strategic clients. I used to be a pen tester, a security consultant, and actually I used to be a member of the German Chaos Computer Club ages ago.
And now I've been with this company for three and a half years, using our solution every day, still heading up a team of 30 threat hunters. So I'm very hands-on and wanna convey some of the benefits that an AI solution can deliver already today. Now, I think we're all aware that AI is actually everywhere. So if we just take a brief look into what AI is doing already today, there are many examples, and I think the whole morning session was really good at illuminating this. So AI is a thing, right?
AI is being used every day, delivering benefits in many solutions, be it voice recognition like for Siri and Alexa, be it other things like autonomous cars and computer vision, be it in healthcare, where some supervised machine learning is used for sepsis detection, for example. So some supervised machine learning is used, not by us at Darktrace, but by the healthcare industry, to detect sepsis cases earlier and actually save many lives by doing so. So it's a thing, it's here left, right and center. Chatbots, everybody knows those.
You go to some random website and in the bottom right corner something pops up saying, hey Max, can I talk to you, offer you some advice or guidance? So that's natural language processing. We saw some other examples earlier in the previous presentation, and Alexei talked about cognitive AI, so a very nice segue into that.
And of course there are things like Amazon, Netflix and Spotify, which suggest, based on your taste, what you should look into: similar series you might want to watch, or might not want to watch.
And there are other solutions and other applications in the cybersecurity area, like Darktrace. Now, this is the only slide, really, where I want to talk about ourselves, because some people might not have heard of us. We've already been mentioned twice today, which is quite cool: in the earlier session about ethics, and then by Alexei. So we've been around for six and a half years, I've been with the company for three and a half years, and we apply machine learning to cybersecurity. Not just that, but we try to be at the bleeding edge of what we do here. We came about from three different parties.
One is cyber experts from government entities saying, that's a problem.
We're trying SIEMs, we're trying to build SOCs, we're trying ArcSight, Splunk, we're trying everything by the book, and it doesn't work, right? We're throwing more people at the problem, but the bad guys are always one step ahead. They always have new polymorphic malware, new business email compromise, really sophisticated spear phishing. And we've been trying this for 30-odd years with our current approach: more firewalls, more network segmentation, more security awareness, more penetration testing.
And look where it's gotten us, right? We're steadily compromised. Even the best companies in the world, with all the budgets, get compromised by a single misconfiguration, like Capital One being hacked by what's basically 'an internet jerk'. That's what Patrick Gray from the very famous podcast Risky Business called the person who compromised them. So the best companies in the world, all the budget, compromised by a single individual. So it doesn't work. So we need to look for something different.
So these people with the understanding of cybersecurity went to Cambridge University machine learning experts and said, look, let's try to do something new. A lot of funding came on top. So those are the three parties, and that's how Darktrace came about. Now, fast forward: when I joined three and a half years ago, we were 150 employees. Now we're 1,000, and we've got over 3,000 customers using Darktrace every day.
Now, why do I talk about this? Because I want to give you a practical example of how AI is being used right now in thousands of companies in defense, so we can then pivot over to offense and look at how it can be used in the future there. And afterwards we're gonna discuss it in the panel here for a bit as well. So what we do is we use unsupervised machine learning: cluster detection, anomaly detection, outlier detection, many different algorithms, to detect what is normal for every single entity.
This could be in networks.
Take my laptop: Darktrace would see it over a SPAN port, mirror port or TAP. It sees the traffic and would say, look, Max's computer normally logs in at these times of the day, sends a lot of data to Reddit and YouTube, maybe around lunchtime, because I surf these websites. I use HTTP requests, HTTPS requests, TLS, and what kind of HTTP requests, maybe POST requests. So all these different features are being looked at. And basically what we do is build this pattern-of-life understanding for every entity: what's happening, what's going on. And this doesn't have to be in the network.
This could be on the email side. So we heard a lot about business email compromise, especially in the last presentation, which was really cool. And what we do there is look for things like CFO fraud or CEO fraud, not by saying, look, this is based on previous spam or phishing.
So we don't use supervised learning and try to train a machine learning classifier based on previous data, because then you have all the problems that Alexei mentioned: you can poison training data, you can skew it, there are gonna be biases. We take the opposite approach.
We use unsupervised learning to say, okay, what is normal for this mailbox, what is normal for this user? Does Max normally send loads of emails to random people on the internet? Do I send emails with specific MIME types and attachments? Do I receive a lot of emails from people I talk to? What's the normal pattern of life, the normal communication pathways? So once there's a really good spear phishing email coming to me, maybe based on a phone call from before, we say things like: okay, look, there was no previous communication pattern.
It comes from a domain that's not darktrace, but maybe darktrace with a subtle typo.
Somebody squatted that domain, and natural language processing tells us it looks similar to what our actual domain is. And the sentiment of the email is very shouty, or maybe it's very subtle, but either way it deviates from the normal communication patterns.
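To make the lookalike-domain idea concrete, here's a minimal sketch of how one could flag typosquatted sender domains with plain edit distance. This is an illustration of the general technique, not Darktrace's actual method, and the domains and threshold are made up:

```python
# Hypothetical sketch: flag sender domains that are *close* to a domain we
# actually correspond with, but not identical. Such near-misses are a classic
# typosquatting signal.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def lookalike(sender_domain: str, known_domains: list[str], max_dist: int = 2) -> bool:
    """A domain within a couple of edits of a known one, but not equal, is suspicious."""
    return any(0 < edit_distance(sender_domain, d) <= max_dist for d in known_domains)

print(lookalike("darktrqce.com", ["darktrace.com"]))  # True: one substitution away
print(lookalike("darktrace.com", ["darktrace.com"]))  # False: exact match is fine
```

A real system would combine a signal like this with the sender history and sentiment features described above, rather than relying on edit distance alone.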
Even if there's no link attached, even if there's no malware, even if it just says please wire these $250,000, because the deviation in behavior is there. And we are firm believers that the core problem of cybersecurity is complexity. We saw in one of the previous presentations this nice slide about what a current enterprise looks like: loads of connections and complex software and all that stuff. And we can't keep throwing humans at the problem.
I mean, we have to do security awareness trainings.
Absolutely agree, but no human understands what the network looks like, what the cloud environment looks like, or what attacks against these things even look like. So we can't keep throwing humans at the problem. We need to start using more clever algorithms, more automation and that kind of stuff. And I loved what one of the first presentations this morning said about the three challenges: one of the challenges was don't give the skill away to the AI. Now, we're gonna have a panel in a few minutes.
So we're gonna have a heated discussion around that, I hope, because I'm actually of the opposite opinion. We have to hand more to AI. Why should I train people in crawling through pcaps, or in becoming experts in understanding what Windows event log 537 means? That doesn't scale, that's super expensive, and these experts don't grow on trees.
So I'd rather have the machines do the heavy lifting and do all of this analysis, and augment the human at the end, so they just have to make the decision on top.
So that's hopefully gonna be a discussion. But this is what we do at Darktrace: take digital data, this might be network data or internet-of-things data, look for the outliers and anomalies, and if we want to, take autonomous action. And I wanna give you a few concrete examples before I go into the major chunk of our presentation, which is using AI to attack.
Some examples: instead of saying, like you would in a SIEM solution, 'more than 500 megabytes to an IP address on the internet within an hour', which is a hard rule with a hard threshold. If you put in that rule, more than 500 megabytes from a server to the internet, you're gonna get hundreds of false positives, but maybe you find some interesting DLP cases there, data loss prevention cases.
Well, I used to work in a SOC, so I used to be one of these level one analysts crawling through that stuff and almost burning out, because it's super repetitive.
What we do instead, because we know normal behavior, is say: an unusual amount of data being sent to a rare destination on the internet at an unusual time of the day. And we already know what a server looks like, because Darktrace auto-classifies devices.
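To show the contrast between a static SIEM rule and a learned per-device baseline, here's a deliberately simplified toy sketch. The z-score baseline stands in for the unsupervised models described in the talk; the numbers and device names are invented:

```python
# Toy contrast: static "more than 500 MB out" rule vs a per-device baseline
# learned from that device's own history of outbound transfers.
from statistics import mean, stdev

STATIC_LIMIT_MB = 500

def static_rule(outbound_mb: float) -> bool:
    """Hard threshold: fires on every big transfer, normal or not."""
    return outbound_mb > STATIC_LIMIT_MB

def adaptive_rule(history_mb: list[float], outbound_mb: float, z: float = 3.0) -> bool:
    """Flag transfers far outside this device's own pattern of life."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    return outbound_mb > mu + z * max(sigma, 1.0)  # floor sigma to avoid zero-variance noise

backup_server = [400, 450, 520, 480, 510]   # routinely ships ~500 MB: that's its normal
laptop        = [5, 8, 3, 6, 4]             # normally sends almost nothing

print(static_rule(520))                      # True: a false positive on the backup server
print(adaptive_rule(backup_server, 520))     # False: within its normal pattern
print(adaptive_rule(laptop, 300))            # True: wildly abnormal for this laptop
```

The point is that the same 520 MB transfer is boring for one device and alarming for another, which is exactly what a static rule cannot express.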
Again, try to elevate the human element. Don't have security experts sit in your system trying to codify the assets, saying: that's our server landscape, those are VoIP telephones. Darktrace and similar solutions can detect this automatically based on the traffic. If it serves DNS, if it serves Kerberos, it's most likely a domain controller, right, or a DNS server. If it's only offering some voice traffic and maybe some SIP traffic, it might be an IP telephone.
So we auto-detect these things and can use that for detections. And that's a huge thing, I think, and I hope you can agree on that.
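The service-based fingerprinting just described can be sketched very roughly like this. Real systems use far richer traffic features; these port-to-role rules are simplified assumptions for the example:

```python
# Rough illustration of inferring a device's role from the services it offers.
# Port numbers: 53 = DNS, 88 = Kerberos, 5060 = SIP.

ROLE_RULES = [
    ({53, 88}, "domain controller"),   # serves both DNS and Kerberos
    ({53},     "DNS server"),
    ({5060},   "IP telephone"),        # speaks SIP
]

def classify(served_ports: set[int]) -> str:
    """Return the first role whose required ports are all served by the device."""
    for required, role in ROLE_RULES:
        if required <= served_ports:   # subset check: device offers everything the rule needs
            return role
    return "unclassified"

print(classify({53, 88, 445}))  # domain controller
print(classify({5060}))         # IP telephone
```

Rule order matters here: the most specific rule (DNS plus Kerberos) is checked before the more general DNS-only rule.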
If you have a solution that's not based on static rules but on fluid thresholds, it just says: an unusual amount of data, based on the normal data transfers done by this device or its peers or the overall network, going to a rare destination. Instead of saying 'this is a blacklisted IP address', because that's static threat intel and doesn't really scale. An example of a business email compromise, and Alexei mentioned that earlier: we found one at a film studio in LA, one of our customers, where we look at the email traffic in Office 365, and one of their suppliers got hacked.
And the hacked supplier sent an email to this film studio with a phishing link. And that was fully in the normal pattern of life because, you know, it's a hacked, compromised, but otherwise normal business email from a third party to this company.
And we still picked it up, because the way of writing the email, the sentiment, was very different. The time of day was different. The compromised mailbox didn't send it to one person but to two people, which had never happened before. And the link attached, the phishing link in there for entering the credentials, was rare for the network.
So we correlate the email data with the network data, so we could actually say: hang on, this email looks really weird and different from many perspectives, using the classifiers and the anomaly detection, and the network traffic behind the phishing link is also really weird, we've never seen that kind of stuff before. So we can catch these things; we can use machines to do that already. And why is this interesting? Because it scales, right? I'm all about scalability and using machines to do stuff.
So we have thousands of customers in every single industry vertical, because we always learn on the job. So when we deploy anywhere, the data stays local. Another challenge that Alexei mentioned earlier is that loads of the AI providers send all your data to some brain in the cloud, some magic happens in the black box, and then you might or might not get some detections or some help. Now, we don't do that. Everything happens locally. We plug in locally to your network, virtually or on premise. All the learning happens on site, right?
There's no prior learning done, no supervised learning, it's all unsupervised. So the learning happens on the job. Every network looks different: what's normal for, say, Facebook is very different for LinkedIn, very different for Coca-Cola. So we always learn on the job, which is super important. And all the data stays local.
Nothing goes to a Darktrace cloud or anything like that. And we don't stop there. What gets more interesting is that we also stop attacks. So we heard the term autonomous response earlier; take the case of ransomware.
And I hate that ransomware example, because it sounds like fear, uncertainty and doubt, but ransomware is everywhere, right? We see it left, right and center, and we see quite a few cases every day.
Darktrace can not just spot these anomalies, but also flip it around and enforce normal. So when we see something like a ransomware attack on the behavioral level: the classic case is Emotet going into TrickBot going into Ryuk ransomware, a very common attack chain of malware. Emotet is just spear phishing coming in, a malware loader; TrickBot is for manual access, to allow the hackers to move around laterally; and after two or three days the Ryuk ransomware is fired and everything is encrypted.
Now, what does that look like in Darktrace, even if it's a zero-day?
So there's no threat intel, and the malware hashes are unknown because, you know, it's a targeted campaign. We see the initial download of an executable from a rare website, because that's often the initial payload from a compromised website: a website nobody ever touches, something comes down, no signature to detect this. Then we see beaconing starting from the infected computer, which means it's not, you know, based on a static IP address that we've blacklisted; the behavior is repeated connections.
So we say: if connections at a certain frequency occur to an unusual destination and it's out of the normal pattern of life, that's likely command-and-control traffic. Then that infected device normally starts downloading more payloads, really unusual again, and scans the internal network, maybe using something like Nmap, maybe scanning for EternalBlue, port 445, SMB and stuff like that.
And then it starts to encrypt things. So now we talk about behavior and not signatures. And these are things Darktrace spots all the time, and similar solutions of course, because it's very out of the norm.
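The beaconing step above rests on a simple statistical idea: implants tend to call home at near-fixed intervals, so the gaps between connections to one destination are suspiciously regular. Here's a minimal sketch of that heuristic; the thresholds are illustrative guesses, not any product's actual logic:

```python
# Minimal beaconing heuristic: very regular inter-connection intervals
# (low coefficient of variation) suggest automated call-home traffic.
from statistics import mean, stdev

def looks_like_beaconing(timestamps: list[float], max_cv: float = 0.1) -> bool:
    """True if the gaps between connections are suspiciously regular."""
    if len(timestamps) < 5:
        return False                                    # too few connections to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return stdev(gaps) / mean(gaps) < max_cv            # coefficient of variation

implant  = [0, 60, 120.5, 180, 239.8, 300]   # phones home roughly every 60 seconds
browsing = [0, 4, 90, 95, 400, 402]          # bursty, human-driven browsing

print(looks_like_beaconing(implant))   # True
print(looks_like_beaconing(browsing))  # False
```

Real malware adds jitter to its check-in interval precisely to defeat naive versions of this check, which is why production systems combine regularity with rarity of the destination and the device's overall pattern of life.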
And once we see that, we can enforce normal. So this is what we mean by autonomous response. Once we see these anomalies occurring, we can have our system just enforce normal behavior. Take my laptop: I'm not an administrator, I normally never do these things. I don't download weird executables, I don't scan the internal network. So if Darktrace sees this, it can make an intelligent decision to enforce what I normally do, instead of saying we quarantine the device or apply a binary block. We let the device do what it normally does: go to Reddit,
use Microsoft Teams, go to some VPN providers, upload some data to YouTube. But the weird things, the scanning of the environment, the encryption of data, the download of further payloads, the lateral movement and the command-and-control traffic, which I never do, are gonna be interrupted for anywhere from five minutes up to two or three hours.
So the security team, again augmenting the human, can catch up. Sorry for dwelling so long on what we do; I just mention it because we talked so much theory earlier this morning. What I really wanna talk about is how we think AI is gonna be used for attacks.
And Alexei said earlier, again, sorry for mentioning you so often, but I loved your presentation, that the hackers are gonna use it, right? I used to be a pen tester and ethical hacker. And it's not just about, you know, jumping to AI when it gets too hard; it's also about whether it delivers a bigger return on investment. So if I can just use some open source methods to increase my victim pool by 500%, I'm definitely gonna do it, because it gets me more money. So the first thing I wanna talk about is the short-term perspective.
We at Darktrace have 50 or 60 people who only do machine learning, in Cambridge, where I'm based. So they're all AI people, right? We throw problems at them and say, look, we need to find a way to detect unusual beaconing better. And they go away, look at loads of algorithms and find the best way to implement this, maybe augmented with some cyber knowledge, and then come up with better detections. But having those people enables us to think about the future and how attackers are gonna use AI. So we do loads of research in that area.
And the first thing that I wanna show you is what we think is gonna happen already in the short term. What you see here is what we call the AI toolbox. And I'm a big fan, right? I'm offensive-security minded; I used to be a penetration tester, as I said.
I always think about kill chains, or about the attack lifecycle. So in orange you see the generalized attack lifecycle steps that every cyber attack goes through and every hacker goes through. And on the other side I put currently existing open source software and/or currently existing open source tools that could be used.
Either adjusted a little bit or just taken out of the box, these could augment every single phase of the kill chain for attackers right now. And there are examples like early reconnaissance: I wanna hack a website, so do some autonomous fuzzing.
Maybe I wanna find a web vulnerability like the OWASP Top 10, but I'm sometimes stopped by CAPTCHAs, right? 'Verify you're human.' So for years and years there have been CAPTCHA breakers using computer vision; technically that's AI, right? To break those CAPTCHAs and move further.
Now, if we look further, the initial intrusion, this is what we just heard about: social engineering and spear phishing. Or it could be fuzzing-based things like finding zero-day exploits.
Now, the DARPA Cyber Grand Challenge in 2016 was doing exactly that: trying to find exploitable and patchable vulnerabilities autonomously, without humans in the loop. And there's another really cool tool called SNAP_R, and it's a Twitter spear phishing tool. So we all know about spear phishing emails, but what I would do these days, if I still did some red teaming, is go to social media and stalk you on social media.
And I'm gonna check out on Twitter what people you follow, what kind of people follow you, what topics you talk about, what kind of tweets you send out, what people you tweet to. And then I'm either gonna create from scratch a profile that's similar to the ones you talk to, or I'm gonna send you a tweet with content that you're definitely gonna look at, because I know: hey Alexei, have you seen this latest offensive AI research? Loved your presentation at the KC summit.
Take a look at this. That's gonna be a Bitly link.
And I'm sure you and me, we'd both click on that, because it looks legitimate. Now, this tool, SNAP_R, from 2016, does exactly that, but autonomously. It wasn't us who created this, but it's a prototype out there. You give it a target, Max Heinemeyer. It goes to Twitter, to its APIs, and understands, using machine learning, what kind of topics I talk about: NLP, natural language processing.
It looks at what people I talk to, what topics I talk about with what kind of people, and then automatically creates a spear phishing tweet with a link that sends me to a website which might contain an exploit kit or just some credential phishing stuff. And then it's gonna attack anybody. And these guys did some testing, and they found out that their tool SNAP_R is better in most cases than a human: a higher click-through rate for the victims.
And of course faster, right? I might take two hours to create this spear phishing tweet,
whereas this tool takes seconds, just crawling everything, creating it on the fly and sending it out. And it goes further; I could go on and on, but I wanna give you one more example. At the bottom, mission accomplished is often data exfiltration, right? It could also be ransoming data, so encrypting it, or maybe manipulating data, but let's talk about data exfil. Most people here are Germans, right?
We might remember, early in January there was this hack against German politicians and German artists, where a German script kiddie, living in his parents' basement basically, hacked hundreds of politicians and released their private emails publicly, and their private Dropbox contents publicly. And he pointed out, I think it was him, various things like: oh, look at this politician, he's having an affair, because the hacker went through the private Dropbox and found some adult images.
Now let's get nasty.
What could this hacker have done with the 30 gigabytes of data? Instead of going through it manually, cherry-picking bits and pieces, reading through it, looking for juicy images, he could have pushed all the image data through Yahoo's not-safe-for-work neural network, which is open source and easily available. This neural network auto-identifies violence, adult material and stuff humans normally don't want to look at.
So instead of manually going through troves of data after the exfiltration, he could have pushed it through this open source algorithm, automatically flagging all the juicy stuff, and then blackmailed his victims. Now, you said before that the previous presentation was very dark, so apologies for keeping the tone. But it's not all lost. So this is the short-term perspective: it's all out there, and we all know there are nation states working on this kind of stuff.
And if we as a private company, in our lab, can come up with these things, I'm sure it's already happening somewhere. Now, this is an example of the CAPTCHA breaker I mentioned before.
So, you know, it sees the CAPTCHAs, automatically fills in what it sees and auto-recognizes them. This has been available on the darknet for at least seven or eight years, for like 150 quid or something, a rental business model. So you can rent these things and pay as you go, even the hacking tools. And something dear to my heart: I was so happy when you mentioned the Emotet example earlier and the dynamite phishing, right? Emotet started scraping Outlook data in October last year, and then this dynamite phishing came around.
Still, it could be done better by the bad guys, right? What they currently do is look at the email chains and then, after they steal these emails, send something at the very end of the email chain, the phishing mail, saying 'please see attached' or 'in response to our recent email chain'.
So it's super generic. What they could do is use an open source tool like this one here from OpenAI. Most people here might have heard of OpenAI; they do loads of research into malicious and positive, benign AI solutions.
And they talked about an AI algorithm which can spin up news or content that looks like it was created by a human. They deemed that too dangerous for general release and just released a watered-down version. And this is available on the internet; you can go there after this talk, please, not now. It's at talktotransformer.com, and you can enter any type of free text, and it completes it and spins up more content based on your short input. So I just put in 'Max Heinemeyer works for Darktrace', and it comes up with this stuff that looks quite convincing at first glance.
I mean, I don't work for a 'manufacturer of cybersecurity', but we do have a cybersecurity product, and I didn't work for a consulting company in Daytona, but I did used to be a consultant, at Hewlett-Packard, and I do, I think, have a high profile. It even put our Twitter handle in there, @darktrace. This was all done on the fly, automatically. Now think: what if the bad guys, the Emotet guys, with their huge trove of phished emails, don't just write static sentences saying 'please see attached', but take these long lists of emails and push them through this kind of API?
Then the last bit, the phishing bit, doesn't say 'please see attached' but 'Hey Max, nice to talk to you again, loved your presentation. What do you think about this advancement in offensive AI? Look at the PDF attached.' Everybody's gonna fall for it.
And I fully agree, we need to push the human side of things, security awareness, but that's not gonna cut it
if this is gonna become reality, and in some cases it might already be. We're gonna lose that game. So I'm a big believer here. I walked across it-sa in Nuremberg this year and tried to get a feeling for where the industry is going, and 95%, if not 99%, are still talking about firewalls, segmentation, vulnerability scanning and antivirus. And that's all stuff we need, right, but that's not where we need to move towards.
So I'm trying to point these things out because I genuinely think that's the next paradigm shift, which is gonna happen at some point. And once the genie is out of the bottle, like EternalBlue, the SMB exploit: once it's out there and available, it's gonna be used by everybody, and then we're gonna have a hard time. So that's, let's say, the short-term perspective.
The next bit is the long-term perspective, where we think this is going. It's a bit speculative, so apologies for that, but I think the general ideas are quite clear.
Now, I'm sure everybody's seen AlphaGo or heard of AlphaGo. Google's DeepMind, one of the biggest AI think tanks and companies out there, created a reinforcement learning system that can beat the best human Go players. The game of Go is an ancient Chinese game; it's quite complex for a board game, but still a board game. So this reinforcement learning system, AlphaGo, played against itself. All that they did was define the goal: gain as much space on the board as possible, remove the other player's stones. That was the goal.
That's the score. Then the AI got inputs: it can move stones in certain ways.
So what Google did, besides creating this system, was define the universe. And after defining the universe and the problem, this is the input, that's the goal, this is plus and minus for the score, they let it play against itself billions and billions of times to see what works and what doesn't work. And when AlphaGo finally came about and was challenging the best human players, there was one game in particular where it made a move at turn 37 or so. And these games normally last something like 150 to 250 turns.
So quite early on, in the first quarter or first third, AlphaGo made a move where all the human judges and the best players thought: hang on a minute, that's a mistake.
Finally the AI did something stupid. And it did lose a bit of ground, but this move, this 'divine move', paid off a hundred turns later.
So it played the long game. It did something the humans didn't anticipate, even the best human players. And this is where we think AI attacks are gonna go. Right now it's all about automation and making things faster.
But once we hit the stage of cognitive machines, the cognitive fourth layer in Alexei's pyramid so to speak, and even further, where it's all about natural language processing and not just automating the human but going beyond what a human can do, because it can crunch more data in quicker ways, we're gonna have a really hard time with our previous approaches. And I have two more examples where AI already goes beyond what humans can do, in attacks or in games, so to speak. I'm a massive video gamer myself. This is the game of StarCraft II; it's a very popular online game
that's played on a ladder.
It's very big in South Korea; it's so popular that it runs on several TV channels. And this is much more complex than the game of Go. The game of Go is static, it's based on turns, and you only have a few inputs. StarCraft is real-time strategy, so it's all in real time, and you don't have just one input: you have to move around your units, you have to build a base, and a game lasts for like 30 to 90 minutes. And there's short-term strategy, mid-term strategy, long-term strategy.
So you might make decisions in the short term that might benefit you in the long term, but it's super complex, right? So again, there's a reinforcement learning system, AlphaStar, where similar mechanisms have been applied to the game of StarCraft.
And again, it started beating the best human players.
It isn't as good as AlphaGo: I think it beats 80% of all human players, but the very best players are still a tough nut to crack.
However, the point is, you know, reinforcement learning, where you just tell the system: these are the inputs, these are the outputs, this is the scoring, isn't just applicable to fairly constrained things like the game of Go. Here the inputs were things like move units, build units, and the goals could be: have as few unspent resources as possible, defeat enemies, defeat enemy bases. So much more complex, right? And what it did is super cool. Here it actually plays against one of the best German players, TheLittleOne, sorry for the geekiness here. But what I wanna point out is that AlphaStar, that's what it's called,
made moves that no human anticipated, but that the human pro players copied afterwards. So this machine learning system came up with moves for the meta game, like building weird units or making weird moves, not just because it's so fast, but in the tactical sense: doing builds and making moves that pay off 20 minutes later, that no human ever tried.
And it worked so well that the best human players adopted these moves. For me, that's completely mind-boggling. But it goes further.
So another example is the game of Dota, again real-time strategy. OpenAI created a system called OpenAI Five, and it's not just one system against one human: this is five autonomous agents against five humans.
And again, it works in the same way. So where does this leave us? Imagine we don't define a game like here, a complex game, but cybersecurity. The inputs could be command-line inputs; the inputs could be PsExec, RDP. The goal is: move to five servers in that network and exfiltrate 500 megabytes to that system. You score points if you're faster and if you're less detected, and you lose points if you get detected and if you take longer. Let it train in a virtualized environment 5 billion times and see what it comes up with.
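The scoring scheme just described could be sketched as a reward function for such a hypothetical agent. This is purely speculative illustration of reward shaping in a simulated cyber range; every constant and parameter name here is invented:

```python
# Speculative sketch: what the episode reward for a hypothetical RL agent in a
# virtualized cyber range might look like, per the scheme described in the talk.
# Reward progress and stealth, penalize time and detection.

def episode_reward(servers_reached: int, mb_exfiltrated: float,
                   minutes_taken: float, times_detected: int) -> float:
    reward = 0.0
    reward += 10.0 * min(servers_reached, 5)    # goal: reach five servers
    reward += 0.1 * min(mb_exfiltrated, 500)    # goal: exfiltrate 500 MB
    reward -= 0.5 * minutes_taken               # faster is better
    reward -= 100.0 * times_detected            # detection is heavily punished
    return reward

# A fast, stealthy run scores far above a slow, noisy one:
print(episode_reward(5, 500, 30, 0))   # 50 + 50 - 15 - 0   = 85.0
print(episode_reward(5, 500, 120, 2))  # 50 + 50 - 60 - 200 = -160.0
```

The point of the sketch is only that, as with Go and StarCraft, the whole objective fits in a few lines once the environment is defined; the hard (and dangerous) part is the billions of simulated episodes that follow.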
That's gonna be definitely scary.
And I personally think that's not happening right now, maybe it is, but that's where stuff is going, right? And I'm almost at the end. The last thing I wanna say is that this is already happening. Not the long-term perspective, but the short-term stuff. And the thing is, it's not a big bang. The phishing stuff we talked about isn't fully autonomous, right? There's no switch you flip so that everything's automated by AI attackers. It's things that attackers are gonna adopt increasingly over time. If I could boost my phishing success rate by 500% by changing one bit of my tooling, I'm gonna do it.
Maybe not even autonomously, letting it phish automatically, but just having it suggest the phishing text I'm gonna use, right? Why not? So that's really everything I wanted to talk about. We'll have a panel in a second where you can heckle me and talk about these things and say that's ridiculous. You can also find me on Twitter, or at the Darktrace stand over there. The only thing left to say for me now is: thank you very much, and I hope there are gonna be questions.