For the last few years, we have been inundated with messaging about Artificial Intelligence (AI). AI is no longer a term mostly used by academicians, IT professionals, or sci-fi fans. Those in the IT security field have seen AI, ML (Machine Learning), and Generative AI (GenAI) proliferating in marketing, while product developers look for ways to incorporate these technologies into products. Vendors touting some variation of artificial intelligence in their products have garnered more investment. There have been productivity gains. But has “AI/ML” as a marketing term peaked?
A recent study in the Journal of Hospitality Marketing & Management, titled "Adverse impacts of revealing the presence of 'Artificial Intelligence (AI)' technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk," shows that consumers are put off by the use of "AI" in product marketing. Some of the reasons cited include a lack of trust in AI, a lack of transparency about AI usage, and concerns about privacy. Although this study focused on consumer goods and services, do the lessons learned apply to IT, and specifically to cybersecurity?
I recently returned from Black Hat 2024 in Las Vegas. While there was plenty of AI, ML, and GenAI signage in booths on the show floor, the way vendors market these technologies in products seems to be shifting. Security practitioners have long been aware of the presence of, and need for, machine learning in products. An example is the use of ML detection models in Endpoint Protection Detection and Response (EPDR) products to identify new variants of malware. It is infeasible to build an EPDR solution today that does NOT use ML, given the volume of malware variants discovered every day. AI/ML is not new in the market, and it is not new to those of us working in the field. Perhaps this realization among product marketing teams is another reason why the messaging is changing and needs to evolve further.
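To make the EPDR example concrete, here is a minimal, hypothetical sketch of the kind of ML detection model such products rely on: a classifier trained on static file features that scores previously unseen binaries. The features, data, and alerting threshold below are placeholders for illustration, not any vendor's actual implementation.

```python
# Hypothetical sketch: a static-feature malware classifier of the kind EPDR
# products use to flag new variants of known malware families.
# Features, labels, and threshold are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for static features extracted from executables
# (e.g., section entropy, import counts, byte-histogram statistics).
n_samples, n_features = 5000, 32
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # 0 = benign, 1 = malicious (labeled corpus)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = GradientBoostingClassifier().fit(X_train, y_train)

# At detection time, the model scores a never-before-seen binary; variants of a
# known family tend to land near their relatives in feature space even when
# their hashes differ, which is why signature-only approaches fall short.
scores = clf.predict_proba(X_test)[:, 1]
flagged = scores > 0.8  # illustrative alerting threshold
print(f"Flagged {flagged.sum()} of {len(scores)} samples for analyst review")
```

Real products wrap models like this with feature extraction pipelines, family labeling, and continuous retraining, which is precisely the kind of operational detail buyers rarely see in marketing copy.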
2023 was certainly the year of GenAI, with large language models (LLMs) not only capturing the public's attention but also becoming mainstream tools. Vendors large and small rushed to find ways to get GenAI into products. Such efforts are innovative and can improve usability, but not always. Customers of IT security solutions may be skeptical about unqualified claims of how GenAI improves those products.
Continuing with the EPDR example, several vendors now offer natural-language query interfaces powered by GenAI, guided investigation tools for analysts informed by AI, and executive-level reports drafted by GenAI. These have the potential to save time and improve organizational security posture for customers. However, there are concerns about the quality of the output. Can it be trusted? AI outputs have explainability problems. Moreover, since the outputs from AI tools depend on the quality and relevance of the data in their models, how are security vendors getting a sufficient quantity of relevant data, and how do they assess the veracity of the outputs of their LLM functions? How can customers be assured that data governance and security policies are applied to the data from their organizations?
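To picture where those data governance questions bite, here is a hypothetical sketch of a GenAI-assisted natural-language query over EPDR telemetry, with a redaction step applied before any organizational data is placed in a prompt. The call_llm() function is a placeholder for whatever model endpoint a vendor might use, and the event fields and redaction policy are assumptions for illustration only, not any product's actual behavior.

```python
# Hypothetical sketch of a GenAI-assisted query over EPDR telemetry, with a
# redaction step illustrating one place where a customer's data governance
# policy can be enforced before organizational data reaches a language model.
import json

def redact(event: dict) -> dict:
    """Mask fields a policy might classify as sensitive before prompting."""
    masked = dict(event)
    for field in ("user", "hostname"):
        if field in masked:
            masked[field] = "<redacted>"
    return masked

def call_llm(prompt: str) -> str:
    # Placeholder for the vendor's model endpoint; in practice this is where
    # the questions about training data, output quality, and trust arise.
    return f"[model response to {len(prompt)} chars of prompt]"

events = [
    {"user": "jdoe", "hostname": "fin-laptop-07", "process": "powershell.exe",
     "action": "encoded_command", "severity": "high"},
]

question = "Which endpoints ran encoded PowerShell commands in the last 24 hours?"
prompt = (
    "Answer the analyst's question using only these events:\n"
    + json.dumps([redact(e) for e in events], indent=2)
    + f"\n\nQuestion: {question}"
)
print(call_llm(prompt))
```

Whether such controls exist, where the model runs, and whose data trained it are exactly the details vendors should be prepared to explain.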
In discussing LLMs, how they work, and whether LLMs lie or hallucinate, Hicks, Humphries, and Slater state in the journal Ethics and Information Technology that LLMs are "not designed to represent the world at all; instead, they are designed to convey convincing lines of text." In the proceedings of the 2022 Conference on Human Information Interaction and Retrieval, Bender and Shah said about LLMs: "No reasoning is involved […]. Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language."
At this point, IT (and especially IT security) vendors and their product marketing teams would be better served by providing more information about their use of ML and GenAI in their solutions. Assume you have a tech-savvy audience, because you do. What kinds of AI technology are you using? For which functions is it being used? Where are you getting data for model training? How are you doing quality control on the outputs before releasing them to customers? These are the kinds of questions that buyers of security solutions have.
Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.
See some of our other articles and videos on the use of AI in security:
- Cybersecurity Resilience with Generative AI
- Generative AI in Cybersecurity – It's a Matter of Trust
- ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?
- Asking Good Questions About AI Integration in Your Organization
- Asking Good Questions About AI Integration in Your Organization – Part II