Benjamin Franklin, the famous American statesman (happy 4th of July, by the way!), once claimed: “in this world nothing can be said to be certain, except death and taxes”. Well, nowadays there is another certainty for every organization in the world – eventually, it will be hacked. If yours has not yet been attacked by cybercriminals, it can easily happen tomorrow – or, even worse, it might have already happened and you just haven’t found out yet.

Statistics from around the world clearly show that data breaches, ransomware attacks, and other kinds of cyberthreats are on the rise, and the cost of dealing with their consequences keeps climbing as well, despite companies investing more in cybersecurity solutions. Combined with the ongoing shortage of qualified staff to operate those tools, it is completely unsurprising that everyone has high hopes for Artificial Intelligence (AI) technologies, which promise to reshape the entire field of cybersecurity and finally tip the scale in favor of cyber defenders. According to some predictions, the market for AI-based cybersecurity solutions will grow nearly tenfold by 2030.

With the latest developments in areas like generative networks and large language models, some of the new tricks AI tools can perform look so impressive even to seasoned IT experts that the public perception of AI has quickly shifted from “I’m sorry, Dave, I’m afraid I can’t do that” to “Can ChatGPT help me be nice to my coworkers?”. And while I’m not going to comment on the ethical implications of that, at the very least we need a small reality check on the current and potential capabilities of AI tools in cybersecurity.

First, let’s get one thing out of the way: there is no such thing as “The Artificial Intelligence”. Even leaving aside the philosophical dispute about the nature of human intelligence (hint: we still don’t really know how it works), the very idea that AI is a single thing, technology, or process is profoundly wrong. The academic field of AI research is very complex and diverse, with numerous methods and algorithms suited for different applications. Perhaps the only thing all modern AI technologies have in common is their enormous appetite for hardware resources and for huge training datasets.

So, what are the most relevant applications of AI-related technologies in cybersecurity? Well, the lowest-hanging fruit is pattern recognition and anomaly detection in a stream of security events (such as the telemetry collected from endpoint devices, server logs, or cloud services). Even the most basic unsupervised machine learning (ML) techniques can find outliers and thus identify potentially suspicious behaviors, massively reducing the amount of statistical noise and improving the productivity of security analysts. Clustering methods group events together based on their similarity, recognizing ongoing attack timelines instead of isolated data points.
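To make that a bit more concrete, here is a minimal sketch of the idea using scikit-learn. The telemetry features (logon rate, traffic volume, process creation rate) and all the numbers are invented for illustration – a real pipeline would extract hundreds of signals from actual event streams:

```python
# Minimal sketch: unsupervised anomaly detection over security telemetry.
# The feature set (logons/hour, MB sent/hour, new processes/hour) is
# hypothetical; real EDR/SIEM pipelines extract far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Synthetic telemetry: rows = hosts, columns = the three features above.
normal = rng.normal(loc=[5, 20, 30], scale=[2, 5, 8], size=(500, 3))
suspicious = rng.normal(loc=[40, 300, 120], scale=[5, 30, 15], size=(5, 3))
events = np.vstack([normal, suspicious])

# Flag statistical outliers without any labeled training data.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(events)  # -1 = outlier, 1 = inlier
outliers = events[labels == -1]
print(f"{len(outliers)} anomalous hosts flagged for analyst review")

# Group the flagged events by similarity, so related anomalies surface
# as one ongoing incident rather than a pile of individual alerts.
clusters = DBSCAN(eps=50.0, min_samples=2).fit_predict(outliers)
print("cluster assignments:", clusters)
```

The second step is exactly the point made above: instead of paging an analyst five separate times, the clustered anomalies can be presented as a single suspicious episode.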

More sophisticated methods rely on training ML models with data from past incidents; they do not just identify anomalies but align them with known malicious tactics and techniques (such as those catalogued in MITRE ATT&CK), thus helping to classify incidents and assess their risks much more accurately. The biggest downside of such methods is the difficulty of collecting enough quality data for training – which is why large vendors with massive global telemetry networks usually lead in this field.
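The mechanics of such supervised classification can be sketched in a few lines. The features, the tiny training set, and the choice of a random forest below are all illustrative assumptions – only the ATT&CK technique IDs (T1110 Brute Force, T1059 Command and Scripting Interpreter, T1048 Exfiltration Over Alternative Protocol) are real:

```python
# Sketch: supervised classification of events against known ATT&CK techniques.
# Gathering quality training data from past incidents is the hard part in
# practice; the synthetic dataset here only illustrates the mechanics.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: [failed logons, powershell spawns, outbound conns, registry writes]
X_train = [
    [30, 0, 1, 0],   # brute-force-like pattern
    [25, 1, 0, 1],
    [0, 8, 2, 0],    # script-based execution
    [1, 6, 3, 1],
    [0, 0, 40, 0],   # exfiltration-like traffic
    [1, 1, 55, 2],
]
# Labels are MITRE ATT&CK technique IDs observed in prior incidents.
y_train = ["T1110", "T1110", "T1059", "T1059", "T1048", "T1048"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_event = [[2, 7, 1, 0]]
technique = clf.predict(new_event)[0]
confidence = clf.predict_proba(new_event).max()
print(f"classified as {technique} (confidence {confidence:.0%})")
```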

These AI techniques are perhaps the most popular ones in modern cybersecurity tools – most endpoint protection solutions use them in conjunction with traditional methods to detect malware, identify suspicious user behavior, and even stop the destructive activities of ransomware. Whenever you see a “Powered by ML” label on a product box, this is what most vendors actually mean. Unfortunately, the exact capabilities vary significantly from product to product – some early AI-powered antivirus solutions could easily be tricked by hackers.
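To illustrate why such tricks worked, here is a deliberately naive, entirely synthetic sketch: a detector trained only on whole-file byte statistics can be evaded simply by padding a malicious file with benign-looking content. Real products and real evasions are far more elaborate, but the principle is the same:

```python
# Sketch: evading a naive static ML detector. Everything here is synthetic
# and simplified for illustration; no real malware or product is modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def byte_histogram(data: bytes) -> np.ndarray:
    # Normalized frequency of each of the 256 possible byte values.
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Toy corpus: "malware" is rich in 0xFF/0x90 bytes, "benign" looks like text.
malware = [rng.choice([0xFF, 0x90], size=1000).astype(np.uint8).tobytes() for _ in range(50)]
benign = [rng.integers(0x20, 0x7F, size=1000, dtype=np.uint8).tobytes() for _ in range(50)]

X = np.array([byte_histogram(b) for b in malware + benign])
y = [1] * 50 + [0] * 50  # 1 = malicious, 0 = benign
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Appending benign-looking bytes dilutes the malicious statistics.
sample = malware[0]
padded = sample + benign[0] * 20
print("original verdict:", clf.predict([byte_histogram(sample)])[0])  # 1
print("padded verdict:  ", clf.predict([byte_histogram(padded)])[0])  # 0 - evaded
```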

A more recent and sophisticated class of AI techniques is based on neural networks originally designed to perform tasks like image recognition. These deep learning-based solutions can instead be trained to identify other kinds of complex patterns and features in, for example, binary code or network traffic. They have been applied successfully in modern intrusion detection systems and application security solutions that can identify obfuscated malicious code without the need to execute it dynamically first.
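The rough shape of such a model can be sketched in a few lines of PyTorch. The architecture below – a small 1-D convolution over raw bytes, loosely in the spirit of published designs like MalConv – is purely illustrative, with toy layer sizes and random input standing in for a real, trained detector:

```python
# Sketch: a 1-D convolutional network over raw bytes, the rough shape of
# deep-learning detectors that classify binaries without executing them.
# Layer sizes are illustrative; real models are much larger and trained
# on millions of labeled samples.
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    def __init__(self, max_len=4096):
        super().__init__()
        self.embed = nn.Embedding(257, 8, padding_idx=256)  # 256 byte values + padding token
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=8)
        self.pool = nn.AdaptiveMaxPool1d(1)  # strongest feature anywhere in the file
        self.head = nn.Linear(64, 1)         # benign vs. malicious logit

    def forward(self, byte_ids):                  # byte_ids: (batch, max_len) int64
        x = self.embed(byte_ids).transpose(1, 2)  # -> (batch, channels, length)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)
        return torch.sigmoid(self.head(x))        # probability the sample is malicious

model = ByteCNN()
fake_binary = torch.randint(0, 256, (1, 4096))  # stand-in for a real file's bytes
print(model(fake_binary))                       # untrained, so ~0.5
```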

The area of e-mail security requires a completely different approach. While even the early anti-spam solutions utilized some primitive kind of machine learning to identify unwanted mail, preventing modern social engineering attacks and business e-mail compromise requires a much higher degree of sophistication. This is where natural language processing (NLP) comes to the rescue. NLP has been used, for example, to identify the sentiment of an e-mail or to check whether it is likely to have been written by the person it appears to have come from. The same technologies are also widely used by brand reputation management solutions.
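At its simplest, the text-analysis part of this can be sketched as a classic bag-of-words classifier. The four training e-mails below are invented, and a production BEC defense would also model headers, sender history, and per-author writing style – this only shows the basic mechanics:

```python
# Sketch: a toy text classifier for suspicious e-mails, trained on four
# invented examples. Real NLP-based e-mail defenses go far beyond this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: wire transfer needed today, CEO travelling, keep confidential",
    "Please update your password immediately via the attached link",
    "Agenda for Thursday's project sync attached, see you there",
    "Quarterly report draft ready for your review, comments welcome",
]
labels = ["suspicious", "suspicious", "benign", "benign"]

# TF-IDF turns each e-mail into word/bigram frequencies; the classifier
# learns which phrasings correlate with social engineering attempts.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Confidential: the CEO asked me to arrange an urgent payment"
print(model.predict([incoming])[0])  # likely "suspicious" given this training set
```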

More recently, Large Language Models (LLMs) – the technology behind tools like ChatGPT – have become the focus of public interest. Trained on enormous datasets from various sources, they can generate texts in multiple languages that not only look like they were written by a human, but actually make sense, at least most of the time. Combined with NLP, they enable the creation of chatbots that are potentially indistinguishable from humans, arguably capable of passing the famous Turing test. These intelligent chatbots open up truly unique possibilities for cybersecurity as well, allowing analysts to perform tasks like threat hunting, incident analysis, and even attack prevention just by talking to an AI assistant. It must be stressed, however, that general-purpose LLMs like ChatGPT are not sufficiently trained to perform mission-critical jobs like this. Specialized solutions for industries like healthcare or cybersecurity are being developed, with promising early results.
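Wiring an alert into such an assistant is already trivial from the engineering side – the hard part is trusting the answer. The sketch below uses the OpenAI Python SDK; the model name, prompt, and alert text are placeholders, and, per the caveat above, the output should be treated as a first draft for a human analyst, not a verdict:

```python
# Illustration only: handing an alert to a general-purpose LLM for a
# first-pass summary. As noted in the text, such models are not trained
# for mission-critical security work, so the output is a draft at best.
# Requires the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

alert = (
    "host WS-0042: powershell.exe spawned by winword.exe, "
    "outbound connection to an unfamiliar IP on port 443"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize the alert, suggest "
                    "likely MITRE ATT&CK techniques, and propose next steps."},
        {"role": "user", "content": alert},
    ],
)
print(response.choices[0].message.content)
```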

And yet, however impressive these developments are, ChatGPT is not the holy grail of cybersecurity automation. What many analysts are dreaming about is a solution that is not only smart enough to identify and analyze security incidents, but can also deal with them autonomously, with human involvement reduced to a minimum (ideally, making only the critical decisions, completely spared from the tedious job of reacting to every routine alert). Interestingly, this is perhaps the most controversial area of AI applications, with many people still afraid of letting AIs make decisions on their own. However, most people agree that letting robots take over the most boring stuff like patching, performance tuning, and maintenance is perfectly fine, and some vendors already have working solutions for that.
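That compromise – automate the boring and reversible, escalate the critical – is easy to express as a policy. The sketch below is hypothetical in every detail (the action names, the confidence threshold, the alert structure), but it captures the division of labor most people seem comfortable with:

```python
# Sketch of the human-in-the-loop compromise described above: automation
# handles routine, reversible actions; destructive or business-critical
# decisions wait for a human. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    confidence: float  # detection model confidence, 0.0 - 1.0

# Actions considered safe to automate because they are easy to reverse.
REVERSIBLE_ACTIONS = {"quarantine_file", "block_ip", "force_password_reset"}

def respond(alert: Alert, proposed_action: str) -> str:
    # Routine, reversible actions on high-confidence detections run automatically.
    if proposed_action in REVERSIBLE_ACTIONS and alert.confidence >= 0.9:
        return f"AUTO: {proposed_action} on {alert.host}"
    # Anything destructive or ambiguous is queued for a human decision.
    return f"ESCALATE: {proposed_action} on {alert.host} awaits analyst approval"

print(respond(Alert("WS-0042", 0.97), "block_ip"))      # handled by the machine
print(respond(Alert("DB-PROD-01", 0.97), "wipe_host"))  # reserved for a human
```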

So, what should we expect from the latest AI innovations in the field of cybersecurity? Will ChatGPT soon replace all human security experts? I seriously doubt it. If your job can be completely replaced by an automated tool (like the job of a journalist writing for a tabloid newspaper), you should probably reconsider your career choice anyway – but I do believe that in cybersecurity, the risks and priorities are still somewhat different. Instead, you should be looking for tools that improve your productivity and leave you more time for critical decisions and creative tasks.

If you are interested in learning more about the latest trends in cybersecurity and in meeting real experts, we are planning a conference devoted entirely to this subject. Check out cyberevolution this November – perhaps we will see you then in Frankfurt, Germany!

P.S. No AI tools were used in researching, writing, or editing this blog post!