Realistic simulations of human voice, video, and text created by “Generative AI” systems have proliferated over the last few years. These simulations increase the risk that Identity Proofing and Verification (IDPV) systems will be unable to distinguish real signals from fake ones. Organizations that rely on IDPV services to prevent fraud or impersonation are seeing fraudulent attempts grow in both number and frequency.
For this reason, we have established a new Discussion Group at Kantara Initiative to research how IDPV systems could be subverted or fooled by “deepfakes”, “Generative AI”, and other AI-related mechanisms. Group members are collecting and curating factual content about how ID Proofing systems work (and where they are vulnerable), how Generative AI works, the state of the art in spoof detection and prevention, and trends toward future protections.
This session will present the group’s work and highlight current vulnerabilities in ID Proofing systems that AI-driven attackers can exploit.