LLM security – How to move fast and stay secure

Watch On-Demand
Watch O-Demand

Learn about AI LLM security and what you can do to defend your AI systems. Join our experts to learn security lessons from real world deployments of LLM and see a live demo of a ground-breaking attack that can reveal system prompts from AI LLM outputs—without model access. 

In this session, we’ll discuss AI LLM application security, using lessons from real world pen tests, and how attackers are using the latest research, prompting a new type of approach to security. You’ll see how these attacks are discovered, how they work, and why off-the-shelf tooling can’t keep up. You’ll also learn what practical defences you can implement now to reduce your exposure and improve your AI LLM security posture. 

If you're serious about securing generative AI, and not just reacting to the latest headlines, this session is where you start

Why watch

  • An introduction to some of the key vulnerabilities within common AI LLM deployments 

  • Learn directly from Black Hat trainers and pen testers on how they test AI LLM deployments and what they’ve been finding 

  • Hear about the latest prompt extraction techniques - Live Demonstration of a novel prompt-based AI LLM attack - Understand how real-world threat actors exploit generative AI 

Who should see

  • Security leaders and security teams
  • AI/ML engineers
  • AppSec professionals
  • DevOps engineers 
  • CISOs
  • Product security leads 
  • Technical decision-makers building or defending AI LLM-based systems. 

Webinar Outcome

  • Understand security risks with common AI LLM deployments and how they can be exploited. Hear from pen testers on the latest flaw’s they’re finding in real world deployments 

  • See a demo of a new type of prompt attack Understand new approaches to testing these types of applications 

Core content

Hacking LLM Applications: latest research and insights from our AI LLM pen testing projects as organisations race to adopt Large Language Models (LLMs) across applications, attackers are racing faster. In this live webinar we’ll share experiences from pen testing customers LLM deployments and discuss some of the fundamentals of testing AI LLMs.  

We’ll unpack and demo a ground-breaking prompt extraction technique that flips the script on AI LLM security. You’ll see how real attackers use model outputs alone to leak confidential information—despite all the traditional safeguards. Based on cutting-edge research, this session reveals why tools can’t keep up, how these methods are discovered, and what you can do to stay ahead. 

  

You’ll learn: 

  • What we’re seeing in pen testing AI LLM applications 

  • Approaches to security testing of AI LLMs vs traditional pen tests 

  • What the latest research reveals about generative AI weaknesses 

  • How system prompts can be extracted from outputs – live demo of a new attack 

  

Who Should Attend: 

  • Security leaders and security teams 

  • AI/ML engineers, 

  • AppSec professionals, 

  • DevOps engineers, 

  • CISOs, 

  • Product security leads, and 

  • Technical decision-makers building or defending AI LLM-based systems.

Fill in your details to the watch video

By submitting this form, you agree to QA processing your data in accordance with our Privacy Policy and Terms & Conditions. You can unsubscribe at any time by clicking the link in our emails or contacting us directly.

Related Events