Overview

This intensive two-day course explores the security risks and challenges introduced by Large Language Models (LLMs) as they become embedded in modern digital systems. Through AI labs and real-world threat simulations, participants will develop the practical expertise to detect, exploit, and remediate vulnerabilities in AI-powered environments.

The course uses a defence-by-offence methodology, helping learners build secure, reliable, and efficient LLM applications. Content is continuously updated to reflect the latest threat vectors, exploits, and mitigation strategies, making this training essential for AI developers, security engineers, and system architects working at the forefront of LLM deployment.

Read more +

Prerequisites

Participants should have:

  • A basic understanding of AI and LLM concepts
  • Familiarity with basic scripting or programming (e.g., Python)
  • A foundational knowledge of cybersecurity threats and controls

Target audience

This course is ideal for:

  • Security professionals securing LLM or AI-based applications
  • Developers and engineers integrating LLMs into enterprise systems
  • System architects, DevSecOps teams, and product managers
  • Prompt engineers and AI researchers interested in system hardening
Read more +

Delegates will learn how to

By the end of this course, learners will be able to:

  • Understand LLM-specific vulnerabilities such as prompt injection and excessive agency
  • Identify and exploit AI-specific security weaknesses in real-world lab environments
  • Design AI workflows that resist manipulation, data leakage, and unauthorised access
  • Apply best practices for secure prompt engineering
  • Implement robust defences in plugin interfaces and AI agent frameworks
  • Mitigate risks from data poisoning, overreliance, and insecure output handling
  • Build guardrails, monitor LLM activity, and harden AI applications in production environments
Read more +

Outline

Prompt engineering

  • Fundamentals of writing secure, context-aware prompts
  • Few-shot prompting and use of delimiters
  • Prompt clarity and techniques to reduce injection risk

Prompt injection

  • Overview of prompt injection vectors (direct and indirect)
  • Practical exploitation scenarios and impacts
  • Detection, mitigation, and secure design strategies

Lab activities:

  • The Math Professor (direct injection)
  • RAG-based data poisoning via indirect injection

ReACT LLM agent prompt injection

  • Introduction to the Reasoning-Action-Observation (RAO) model
  • Vulnerabilities in frameworks such as LangChain
  • Agent behaviour manipulation and plugin exploitation

Lab activities:

  • The Bank scenario using GPT-based agents

Insecure output handling

  • AI output misuse leading to privilege escalation or code execution
  • Front-end exploitation via summarisation and rendering

Lab activities:

  • Injection via document summarisation
  • Network analysis and arbitrary code execution
  • Internal data leaks through stock bot interactions

Training data poisoning

  • Poisoning training or fine-tuning datasets to alter LLM behaviour
  • Attack simulation and defence strategies

Lab activities:

  • Adversarial poisoning
  • Injection of incorrect factual data

Supply chain vulnerabilities

  • Security gaps in third-party plugin, model, or framework usage
  • Dependency risk, plugin sandboxing, and deployment hygiene

Sensitive information disclosure

  • How LLMs can inadvertently leak personal or proprietary data
  • Overfitting, filtering failures, and context misinterpretation

Lab activities:

  • Incomplete filtering and memory retention
  • Overfitting and hallucinated disclosure
  • Misclassification scenarios

Insecure plugin design

  • Misconfigured plugins leading to execution or access control flaws
  • Securing LangChain plugins and sanitising file operations

Lab activities:

  • Exploiting the LangChain run method
  • File system access manipulation

Excessive agency in LLM systems

  • Over-privileged agents and unintended capability exposure
  • Agent hallucination, plugin misuse, and permission escalation

Lab activities:

  • Medical records manipulation
  • File system agent abuse and command execution

Overreliance in LLMs

  • Cognitive, technical, and organisational risks of AI overdependence
  • Legal liabilities, compliance gaps, and mitigation frameworks

Exams and assessments

This course does not include formal certification. Participants will complete multiple hands-on labs simulating attacker tactics and securing LLM implementations. These labs are designed to assess comprehension, critical thinking, and applied technical skill.

Hands-on learning

This course includes:

  • Over 10 scenario-based labs hosted in a cloud-accessible platform
  • 30-day extended access to all lab environments
  • Realistic LLM threat simulations: injection, escalation, data manipulation
  • Post-course access to instructor guidance for continued learning
Read more +

Why choose QA

Cyber Security learning paths

Want to boost your career in cyber security? Click on the roles below to see QA's learning pathways, specially designed to give you the skills to succeed.

= Required
= Certification
AI Governance
AI Security
Application Security
Cyber Blue Team
Cybersecurity Maturity Model Certification (CMMC)
Cloud Security
Continuity & Resilience
DFIR Digital Forensics & Incident Response
Industrial Controls & OT Security
Information Security Management
NIST Pathway
Offensive Security
Privacy Professional
Reverse Engineer
Secure Coding
Security Auditor
Security Architect
Security Risk
Security Tech Generalist
Vulnerability Assessment & Penetration Testing

AI Security learning paths

Want to boost your career in AI Security? View QA's learning pathway below, specially designed to give you the skills to succeed.

= Required
= Certification
Need to know

Frequently asked questions

How can I create an account on myQA.com?

There are a number of ways to create an account. If you are a self-funder, simply select the "Create account" option on the login page.

If you have been booked onto a course by your company, you will receive a confirmation email. From this email, select "Sign into myQA" and you will be taken to the "Create account" page. Complete all of the details and select "Create account".

If you have the booking number you can also go here and select the "I have a booking number" option. Enter the booking reference and your surname. If the details match, you will be taken to the "Create account" page from where you can enter your details and confirm your account.

Find more answers to frequently asked questions in our FAQs: Bookings & Cancellations page.

How do QA’s virtual classroom courses work?

Our virtual classroom courses allow you to access award-winning classroom training, without leaving your home or office. Our learning professionals are specially trained on how to interact with remote attendees and our remote labs ensure all participants can take part in hands-on exercises wherever they are.

We use the WebEx video conferencing platform by Cisco. Before you book, check that you meet the WebEx system requirements and run a test meeting to ensure the software is compatible with your firewall settings. If it doesn’t work, try adjusting your settings or contact your IT department about permitting the website.

How do QA’s online courses work?

QA online courses, also commonly known as distance learning courses or elearning courses, take the form of interactive software designed for individual learning, but you will also have access to full support from our subject-matter experts for the duration of your course. When you book a QA online learning course you will receive immediate access to it through our e-learning platform and you can start to learn straight away, from any compatible device. Access to the online learning platform is valid for one year from the booking date.

All courses are built around case studies and presented in an engaging format, which includes storytelling elements, video, audio and humour. Every case study is supported by sample documents and a collection of Knowledge Nuggets that provide more in-depth detail on the wider processes.

When will I receive my joining instructions?

Joining instructions for QA courses are sent two weeks prior to the course start date, or immediately if the booking is confirmed within this timeframe. For course bookings made via QA but delivered by a third-party supplier, joining instructions are sent to attendees prior to the training course, but timescales vary depending on each supplier’s terms. Read more FAQs.

When will I receive my certificate?

Certificates of Achievement are issued at the end the course, either as a hard copy or via email. Read more here.

Let's talk

A member of the team will contact you within 4 working hours after submitting the form.

By submitting this form, you agree to QA processing your data in accordance with our Privacy Policy and Terms & Conditions. You can unsubscribe at any time by clicking the link in our emails or contacting us directly.