Securing AI Systems: Detecting and Stopping GenAI-Enabled Threat Actors

Generative AI has given defenders an edge, but it's also opened new avenues for enabling cyber threat actors to conduct phishing, social engineering, vulnerability research, and other abusive activities. A cross-team collaboration spent months tracking, defending and learning from threat actors attempting to abuse Google’s AI systems; tactics that can ultimately work across different AI systems. In our talk, we will discuss the types of abusive behavior seen from threat actors, including novel-AI TTPs that haven't been publicly shared before, like jailbreak prompts and prompt injection attacks. We'll then share actionable best practices for how enterprises can be proactive in detecting and stopping abuse and exploitation of their AI systems, based on these learnings. Audience members will walk away with the knowledge of which implementations to prioritize within their environments to stay ahead of the curve and retain their edge.