The rapid pace of innovation in generative AI offers immense opportunities while also introducing new security challenges: novel threat vectors, along with open questions around explainability, governance, transparency, and privacy for large language models. As organizations seek to leverage generative AI for innovation, security leaders must take concrete steps to enable rapid experimentation without compromising security.
We will begin by scoping generative AI applications according to their intended use and the potential risks associated with their deployment. We will then discuss key strategies for securing these applications, including threat modeling, guardrails, observability, and evaluating the effectiveness of security measures.
Through case studies and practical examples, we will show how to apply these strategies in real-world scenarios, from ideation to production. Attendees will learn how to identify and mitigate potential risks in their generative AI applications and how to verify that their security measures are working as intended.
This talk is intended for security leaders, developers, and practitioners who work with generative AI applications and want to ensure the security and integrity of their systems. By the end of the talk, attendees will have a deeper understanding of the security challenges associated with generative AI and the practical steps they can take to address them.