From Luddites to AI
Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution.
Throughout history, workers have adapted to new technologies, from the complex machinery of the Industrial Revolution to today's sophisticated AI systems. Initially, industrial workers had to master mechanical operations to support mass production. Later, the digital revolution demanded proficiency with computers for a variety of tasks.
Now, the integration of AI in workplaces emphasizes skills in managing and leveraging intelligent systems to boost productivity and decision-making processes. This ongoing evolution demonstrates the need for continuous learning and adaptability, underscoring the increasing complexity of skills involved in today’s jobs.
The Evolving Role of Cybersecurity Analysts
Building an AI-skilled workforce requires equipping professionals with the tools and knowledge to leverage AI technologies. It also means addressing the persistent human-factor challenges in cybersecurity: implementing the right tools, cultivating a cybersecurity culture, and fostering new skills.
For example, prompt engineering is a relatively new and useful skill: the practice of crafting and refining prompts so that Large Language Models (LLMs) reliably produce the desired output with minimal iteration and computational resources. For security analysts, generative AI offers a remarkable leap forward in the effectiveness of their work.
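To make this concrete, here is a minimal sketch of what prompt engineering can look like in practice: a template that gives the model a role, a task, and a constrained output format before appending the data to analyze. The function name, the example log lines, and the JSON schema are illustrative assumptions, not part of any particular vendor's product; the resulting string would be passed to whichever LLM API the analyst's tooling provides.

```python
def build_triage_prompt(log_lines: list[str]) -> str:
    """Assemble a structured triage prompt for an LLM.

    Prompt-engineering choices illustrated here:
    - assign the model a role (SOC analyst assistant),
    - state the task explicitly,
    - constrain the output to a machine-parseable JSON shape,
    - append the raw data last, clearly delimited.
    """
    context = "\n".join(log_lines)
    return (
        "You are a SOC analyst assistant.\n"                # role
        "Classify the following firewall log entries by severity.\n"  # task
        "Respond ONLY with JSON of the form "
        '{"severity": "low|medium|high", "summary": "<one sentence>"}.\n'  # output format
        "Log entries:\n"
        f"{context}"
    )

# Hypothetical log lines for illustration only.
prompt = build_triage_prompt([
    "2024-05-01 12:00:01 DENY TCP 203.0.113.7 -> 10.0.0.5:22",
    "2024-05-01 12:00:02 DENY TCP 203.0.113.7 -> 10.0.0.5:23",
])
```

Constraining the output format is what makes the response usable downstream: a SOAR playbook can parse the JSON and route the alert automatically instead of a human reading free text.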
The integration of generative AI into Security Orchestration, Automation, and Response (SOAR) platforms has the potential to change the role of Security Operations Centre (SOC) analysts. This technology automates routine tasks, allowing analysts to spend more time on strategic aspects of their roles, such as planning new defensive strategies, identifying emerging threats, and formulating proactive mitigation plans.
Balancing Innovation and Responsibility
However, the potential use of generative AI goes beyond simply automating tasks or interacting with a chatbot. For instance, SOC analysts can now use generative AI to craft detailed playbooks that document the steps taken during an incident response. This documentation process not only automates responses but also builds a knowledge base that can inform future responses.
SOC analysts can also use generative AI to create alerts and to support tasks such as threat detection, incident analysis, event summarization, report generation, decision support, and playbook template suggestions. While the integration of generative AI into SOAR platforms offers substantial benefits, several challenges need to be addressed.
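For event summarization and report generation in particular, a common pattern is to aggregate raw alerts into a compact digest before handing anything to the model, so the LLM reasons over a few summary lines rather than thousands of raw rows. The sketch below shows only that pre-processing step; the alert fields and wording are illustrative assumptions, and the digest would then be embedded in a report-generation prompt.

```python
from collections import Counter

def summarize_alerts(alerts: list[dict]) -> str:
    """Condense raw SOC alerts into a short digest suitable for an LLM prompt.

    Aggregating by detection rule keeps the prompt small (fewer tokens)
    and gives the model counts instead of repetitive raw events.
    """
    by_rule = Counter(a["rule"] for a in alerts)
    lines = [f"{count}x {rule}" for rule, count in by_rule.most_common()]
    return "Alert digest for the last hour:\n" + "\n".join(lines)

# Hypothetical alerts for illustration only.
digest = summarize_alerts([
    {"rule": "SSH brute force", "src": "203.0.113.7"},
    {"rule": "SSH brute force", "src": "203.0.113.7"},
    {"rule": "Port scan", "src": "198.51.100.4"},
])
```

The same digest can feed several of the tasks listed above: it can seed an incident report, support a severity decision, or suggest which playbook template to apply.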
Generative AI requires access to vast amounts of data to learn and make decisions. Ensuring that this data is handled securely and in compliance with privacy regulations is a significant challenge. In addition, there is a risk that AI models may develop biases based on the data they are trained on, which can lead to inaccurate or unfair outcomes.
Therefore, the use of generative AI must be accompanied by thorough quality control on the part of the vendor, to ensure that the information provided is indeed useful and accurate. This balanced approach reflects a careful consideration of both the opportunities and the complexities involved with integrating new technologies into security operations.
While some vendors are highly optimistic about the transformative potential of generative AI in SOAR solutions, others remain cautious, choosing to monitor the industry's development closely. These cautious vendors prioritize understanding how to align with customer expectations and carefully evaluate the practical advantages and potential challenges of implementing generative AI.
Great Expectations
By harnessing the potential of generative AI, however, SOC analysts can broaden their scope within cybersecurity, cultivating new knowledge and developing new skills. While Ludd's reaction was to destroy the machines he feared would replace human craftsmanship, the challenge now is not to resist technological advancement, but to integrate it. This reflects a broader trend in AI development, where the goal is not to replace human endeavor, but to augment it.
As a result, vendors should prioritize transparency in their marketing to demonstrate the practical value of generative AI, rather than relying on hype or jargon. This approach not only educates customers about the capabilities and limitations of generative AI but also helps in setting realistic expectations. For more on this, see my colleague John Tolbert's blog post on Some Direction for AI/ML-ess Marketing.
Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.
See some of our other articles and videos on the use of AI in security:
Cybersecurity Resilience with Generative AI
Generative AI in Cybersecurity – It's a Matter of Trust
ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?
Asking Good Questions About AI Integration in Your Organization