Generative AI (GenAI) has had a rapid impact on nearly every sector, offering unprecedented capabilities in content generation, data analysis, and automation. However, alongside these advancements come significant risks and security concerns that organizations must address to harness GenAI's potential responsibly.
Understanding Risk in the Context of GenAI
Risk is defined as the composite measure of an event's probability and the magnitude of its consequences. In the realm of GenAI, risks manifest across multiple dimensions:
- Lifecycle Stage: Risks can emerge during design, development, deployment, operation, and decommissioning phases.
- Scope: Risks may exist at the individual model or system level, at the application or implementation level, or at the ecosystem level. Ecosystem-level examples include impacts on access to opportunity, labor markets, and creative economies, as well as "algorithmic monocultures" caused by repeated use of the same model.
- Source: Risks can originate from design flaws, training data biases, operational vulnerabilities, system inputs and outputs, and human behaviors, including misuse or adversarial attacks.
- Time Scale: Risks may cause immediate harm, such as a data breach, or unfold over a longer horizon, as with disinformation gradually eroding public trust.
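To make the composite measure concrete, here is a minimal scoring sketch in Python: each risk is ranked by likelihood multiplied by impact. The risk names and the 1-5 scales are illustrative assumptions for this sketch, not a prescribed methodology.

```python
# Minimal sketch of the composite risk measure described above:
# risk = likelihood x impact. All names and scales here are
# illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("training-data poisoning", likelihood=2, impact=5),
    Risk("hallucination in reports", likelihood=4, impact=3),
    Risk("model inversion / data extraction", likelihood=2, impact=4),
]

# Rank risks so that mitigation effort follows the composite score.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: {r.score}")
```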
The National Institute of Standards and Technology (NIST) emphasizes the importance of understanding these dimensions in its Artificial Intelligence Risk Management Framework (NIST AI RMF 1.0).
Impact on Organizations
The integration of GenAI into organizational workflows introduces several challenges, including data privacy concerns and new security vulnerabilities. GenAI systems often require vast amounts of data, putting strain on data management infrastructure that is often already overworked and under-resourced. As a result, sensitive data may end up being used to train models or during operation.
When integrating GenAI into their workflows, organizations may be exposed to attacks that they previously did not have to contend with, such as data poisoning, model inversion attacks, and unauthorized data extraction. Other examples of risk include IP loss when interacting with open source models and the introduction of misinformation into organizational decision-making via GenAI hallucinations or model drift. Organizations must therefore understand these risks and implement measures to either mitigate or accept them.
There is so much more to be said on how GenAI impacts organizations and the monumental task of risk management. Watch this session from the 2023 Cyberevolution for insight on redesigning risk management in the era of AI.
Recommendations
Organizations must engage with the GenAI security question before they either enact sweeping bans or invite their employees to experiment and innovate to their heart's content. One of the first tasks is to assess the organization's risk appetite: the level of risk it is willing to accept in light of its goals, regulatory obligations, and exposure.
A logical next step is to implement a risk management framework, such as the NIST AI RMF, to systematically identify, evaluate, and mitigate AI-related risks in line with that predefined risk appetite.
Mitigation measures should then be put in place. These will look different depending on the organization, industry, and application of GenAI, but a few examples include:
- Fine-grained, role-based access controls to prevent unauthorized access to AI systems.
- Specialized access controls for agentic AI that reduce standing privilege and rely on ephemeral access (see the first sketch after this list).
- Continuous monitoring to detect and respond to emerging threats like data poisoning or model inversion attacks (see the drift-check sketch after this list).
- Encryption and anonymization of data used in training and operations, both to prevent IP loss and to prevent unauthorized access to data through interaction with GenAI models (see the pseudonymization sketch after this list).
- Train employees on the risks of hallucination, and establish policies to review critical information provided by GenAI.
- Develop policies and procedures to oversee AI model development, deployment, and monitoring.
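As a sketch of the agentic-AI access pattern above: the agent holds no standing privilege; it is issued a short-lived, task-scoped token that expires on its own. The function names (issue_token, authorize) and the in-memory token store are hypothetical, for illustration only; a real deployment would use your identity provider's short-lived credential mechanism.

```python
# Hypothetical sketch of ephemeral, task-scoped credentials for an AI
# agent: no standing privilege, access expires automatically.

import secrets
import time

TOKENS: dict[str, dict] = {}  # in-memory store, for the sketch only

def issue_token(agent_id: str, scope: set[str], ttl_seconds: int = 300) -> str:
    """Mint a short-lived token limited to the actions in `scope`."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {
        "agent": agent_id,
        "scope": scope,
        "expires": time.time() + ttl_seconds,
    }
    return token

def authorize(token: str, action: str) -> bool:
    """Allow the action only if the token is still live and in scope."""
    grant = TOKENS.get(token)
    if grant is None or time.time() > grant["expires"]:
        TOKENS.pop(token, None)  # expired grants leave no standing access
        return False
    return action in grant["scope"]

t = issue_token("report-agent", {"read:sales_db"}, ttl_seconds=60)
print(authorize(t, "read:sales_db"))   # True while the grant is live
print(authorize(t, "write:sales_db"))  # False: outside the granted scope
```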
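And a minimal monitoring check along the lines described above: flag an incoming data batch whose mean drifts far from the training baseline, an early warning sign of poisoning or drift. The z-score test and threshold are simplifying assumptions; production monitoring would use richer per-feature tests (e.g., PSI or Kolmogorov-Smirnov).

```python
# Illustrative monitoring check: flag when incoming data drifts from the
# training baseline, an early signal of data poisoning or model drift.

import statistics

def drift_alert(baseline: list[float], incoming: list[float],
                threshold: float = 3.0) -> bool:
    """Return True if the incoming batch mean sits far outside baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    batch_mu = statistics.mean(incoming)
    # z-score of the batch mean against the baseline's standard error
    z = abs(batch_mu - mu) / (sigma / len(incoming) ** 0.5)
    return z > threshold

baseline = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98]
suspicious = [2.1, 2.3, 1.9, 2.2, 2.0]  # e.g., a poisoned ingest batch
print(drift_alert(baseline, suspicious))  # True: investigate before retraining
```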
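Finally, a sketch of pseudonymizing sensitive fields before data reaches a GenAI training or inference pipeline. The field names and the keyed-hash approach are assumptions for illustration; in practice this would sit alongside encryption at rest and in transit.

```python
# Hedged sketch of pseudonymizing sensitive fields so the model never
# sees raw PII. Field names and the keyed-hash approach are illustrative.

import hashlib
import hmac

SECRET_SALT = b"rotate-me-regularly"  # assumed to live in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_SALT, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer": "Ada Lovelace", "email": "ada@example.com", "churn_risk": 0.82}
SENSITIVE = {"customer", "email"}

safe_record = {k: pseudonymize(v) if k in SENSITIVE else v
               for k, v in record.items()}
print(safe_record)  # identifiers tokenized; metrics left intact
```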
To explore the complexities of GenAI and effective risk management strategies, be sure to attend the European Identity and Cloud Conference (EIC) 2025. Discussions around GenAI security and identity will feature prominently, including tracks like Upgrading Reality and AI Risk and Opportunity. One session to add to your schedule is Martin Kuppinger's AIdentity: the intersection of AI and identity. Be there to get your questions answered by industry analysts, researchers, and professionals innovating in this space.