Attacks on LLMs: How Google Cloud and SAP secure Generative AI

Large Language Models (LLMs) have been transformational, but their increasing complexity and integration into critical systems have opened up a new attack surface for malicious actors. This session delves into the evolving threat landscape of LLM attacks, focusing on how industry leaders like Google Cloud and SAP are proactively securing generative AI technologies. Key Topics: Understanding Vulnerabilities and Attacks unique to LLMs: Prompt injection attacks, data poisoning, model theft, and adversarial examples. Defense Strategies in Google Cloud: We examine a multi-layered approach to securing its LLMs. This includes robust input validation and sanitization techniques, adversarial training to make models more resilient, and differential privacy mechanisms to protect sensitive user data. Preventative and detective policies based on NIST and Model Armor on Google Cloud. SAP’s Security Framework: We’ll highlight Gen AI embedded in SAP’s products (AI tools like Joule) and how those products are delivered securely. Industry Standards: We discuss the evolving OWASP top 10 for LLM , NIST AI RMF, Cloud Security Alliance and MITRE frameworks for securing GenAI.