From £3,116 + VAT was £4,050
To book this course, call us on 0113 382 6276 or get in touch via the form.
Overview
This course provides a comprehensive introduction to AI security and the evolving risks that accompany modern artificial intelligence systems. Participants explore how attackers exploit vulnerabilities in predictive and generative models, including prompt injection, model jailbreaks, denial of service attacks, model theft, and data poisoning. The course examines the full attack surface of AI systems, from training datasets to deployed applications, and equips learners with practical defence strategies using security APIs, structured prompt defences, and robust infrastructure design. Through hands-on exercises and real-world scenarios, participants learn how to build responsible, reliable, and secure AI capabilities that protect organisational assets and maintain trust in AI-augmented systems.
Prerequisites
Participants should have:
- A foundational understanding of AI concepts such as neural networks and model lifecycle stages
- Basic familiarity with cybersecurity principles and common attack types
- Experience working with applications that use AI or LLM functionality (recommended)
- Access to a development environment suitable for practising AI integration (recommended)
Target audience
This course is designed for:
- Technology professionals responsible for deploying, integrating, or securing AI solutions
- Security practitioners seeking a deeper understanding of AI-specific threats
- Developers building applications that use large language models or generative AI
- Organisations aiming to enhance their resilience against AI-driven risks
Delegates will learn how to
By the end of this course, learners will be able to:
- Describe different types of AI systems and explain their security vulnerabilities
- Identify and mitigate attacks such as prompt injection, model jailbreaks, visual prompt manipulation, and denial of service
- Apply defensive methods and security API tooling to strengthen AI systems
- Assess and protect training data sources, model integrity, and supply chain dependencies
- Integrate large language models securely within applications, respecting trust boundaries and common best practices
- Evaluate ethical considerations, responsible AI principles, and techniques to improve reliability and explainability
- Investigate model behaviour, detect potential misuse, and apply structured threat modelling for AI-driven workflows
- Build secure human-AI interaction patterns that minimise hallucinations, misuse, and exposure of sensitive information
Outline
Introduction to AI security
- Defining AI and defining security
- Scope of AI security and the boundaries of this course
- Types of AI systems: neural networks, models, integrated systems
- How AI systems are used across organisational contexts
- What secure AI means: responsible, reliable, explainable, and aligned models
- Human-AI interactions and risks of uncensored or malicious models
- Real-world examples of misuse including deepfakes, voice cloning, and social engineering
- How misinformation spreads through AI-generated content
- Exercises exploring uncensored models and image watermarking
The AI security landscape
- Attack surfaces of AI systems across the model lifecycle
- Components of AI pipelines and why supply-chain security matters
- Models accessed via APIs and APIs accessed by models
- Non-AI attack vectors that remain relevant
- OWASP ML Top 10, OWASP LLM Top 10, and how they apply to modern AI
- Threat modelling approaches for AI-integrated applications
- Sample AI-powered workflows and common security findings
- Exercise: threat modelling an LLM-integrated application using a realistic data flow
Prompt injection
- Overview of prompt injection attacks and their impact
- Direct and indirect prompt injection
- Social engineering through prompts and phishing opportunities
- SudoLang for representing attack logic
- How LLM integration choices influence vulnerabilities
- Exercises translating prompts into SudoLang and retrieving passwords across levels 1 and 2
Model jailbreaks
- How jailbreaks work and common techniques
- Case studies including DAN prompts and AutoDAN
- Tree of Attacks with Pruning (TAP)
- Exercises retrieving restricted information across levels 3, 4, and 5
Prompt extraction
- Extracting system prompts, private data, and boundaries
- Techniques used in challenges and real applications
- Exercises retrieving prompts and boundaries at levels 6 and 7
Defending AI systems
- Intermediate and advanced defence strategies
- Security APIs including ReBuff, Llama Guard, Lakera, and similar tools
- Example exploits seen in public challenges
- Exercise: defeating protections in levels 8 and 9
- Other injection methods including reverse psychology and manipulation techniques
- Categorising attacks and implementing robust protections
- Additional defensive models and structured methods such as the Bergeron method
Visual prompt injection
- How visual prompts manipulate multimodal models
- Trivial examples and advanced adversarial attacks
- Examples affecting self-driving systems and image classifiers
- Exercises using OpenAI vision capabilities and creating adversarial samples
- Protections against visual attacks and dataset considerations
Denial of service
- How DoS attacks manifest in LLMs and chatbots
- Prompt routing challenges and resource exhaustion
- Practical defence strategies and system-level mitigations
- Exercise: designing prompts that halt or degrade model behaviour
Model theft
- Threat landscape for model extraction
- Risks of dataset exploration and query-based stealing
- How fine-tuned models can be cloned
- Exercises using API parameters to replicate model behaviour
- Protections for model confidentiality, from simple rate limits to advanced monitoring
LLM integration
- Understanding the LLM trust boundary
- Classical integration challenges in novel AI workflows
- Treating LLM output as untrusted user input
- Exchange formats and secure function calling
- Risks of custom GPTs, identity flow, and cross-application access
- Exercises on SQL injection, XSS payload generation, invalid parameter passing, and privilege escalation
- Principles of secure coding applied to AI systems including Bishop, Saltzer, and Schroeder
- Designing privilege boundaries for AI components
- Exercise: breaking out of an AI sandbox
Training data manipulation
- Importance of dataset integrity and reliability
- How attackers poison training data
- Using dataset cards and model cards for assurance
- Analysing datasets and reviewing dataset objectives
- Exercises constructing and analysing malicious datasets
Secure supply chain
- Proving model integrity and emerging cryptographic methods
- Hardware-assisted attestation and verification
- Risks across the model building and deployment lifecycle
Human-AI interaction
- Overreliance on LLM output and what can go wrong
- Countering hallucinations and validating information
- Sandboxing and safe API patterns
- Exercise: verifying LLM output in realistic scenarios
Secure AI infrastructure
- Requirements of secure AI infrastructure including monitoring, observability, and traceability
- Confidentiality, integrity, availability, and privacy considerations
- Case studies such as the Samsung data leak
- Tools and frameworks including LangSmith
- Exercise: experimenting with LangSmith for safe evaluation
- BlindLlama and emerging evaluation tools
Exams and assessments
- The independent APMG Certified AI Security Engineer exam is taken post class, using an exam voucher code via the APMG proctor platform.
- If you experience any issues, please contact the APMG technical help desk on 01494 4520450.
- Duration: 60 Minutes
- Questions: 60, multiple choice (4 multiple choice answers only 1 of which is correct)
- Pass Mark: 50%
Hands-on learning
This course provides extensive practical experience through:
- Interactive labs exploring AI-based attacks and defences
- Real-world scenarios simulating risks across predictive, generative, and multimodal systems
- Guided exercises using security APIs, structured defences, and dataset analysis
- Instructor-led walkthroughs that reinforce secure design, coding, and integration behaviours
Why choose QA
- Award-winning training, top NPS scores
- Over 500,000 learners in 2024
- Our training experts are industry leaders
- Read more about QA
Cyber Security learning paths
Want to boost your career in cyber security? Click on the roles below to see QA's learning pathways, specially designed to give you the skills to succeed.
AI Security learning paths
Want to boost your career in AI Security? View QA's learning pathway below, specially designed to give you the skills to succeed.
Frequently asked questions
How can I create an account on myQA.com?
There are a number of ways to create an account. If you are a self-funder, simply select the "Create account" option on the login page.
If you have been booked onto a course by your company, you will receive a confirmation email. From this email, select "Sign into myQA" and you will be taken to the "Create account" page. Complete all of the details and select "Create account".
If you have the booking number you can also go here and select the "I have a booking number" option. Enter the booking reference and your surname. If the details match, you will be taken to the "Create account" page from where you can enter your details and confirm your account.
Find more answers to frequently asked questions in our FAQs: Bookings & Cancellations page.
How do QA’s virtual classroom courses work?
Our virtual classroom courses allow you to access award-winning classroom training, without leaving your home or office. Our learning professionals are specially trained on how to interact with remote attendees and our remote labs ensure all participants can take part in hands-on exercises wherever they are.
We use the WebEx video conferencing platform by Cisco. Before you book, check that you meet the WebEx system requirements and run a test meeting to ensure the software is compatible with your firewall settings. If it doesn’t work, try adjusting your settings or contact your IT department about permitting the website.
How do QA’s online courses work?
QA online courses, also commonly known as distance learning courses or elearning courses, take the form of interactive software designed for individual learning, but you will also have access to full support from our subject-matter experts for the duration of your course.
Once you have purchased the Online course and have completed your registration, you will receive the necessary details to enable you to immediately access it through our e-learning platform and you can start to learn straight away, from any compatible device. Access to the online learning platform is valid for one year from the booking date.
All courses are built around case studies and presented in an engaging format, which includes storytelling elements, video, audio and humour. Every case study is supported by sample documents and a collection of Knowledge Nuggets that provide more in-depth detail on the wider processes.
When will I receive my joining instructions?
Joining instructions for QA courses are sent two weeks prior to the course start date, or immediately if the booking is confirmed within this timeframe. For course bookings made via QA but delivered by a third-party supplier, joining instructions are sent to attendees prior to the training course, but timescales vary depending on each supplier’s terms. Read more FAQs.
When will I receive my certificate?
Certificates of Achievement are issued at the end the course, either as a hard copy or via email. Read more here.
