Overview

The new Artificial Intelligence (AI) Safety Risk gamified simulation gives you and your team the skills needed to navigate the complex landscape of AI risk management.

By the end of the simulation event, you will be equipped to tackle AI risk-related challenges head-on and ensure the responsible and ethical use of artificial intelligence in your organisation.

This gamified experience is designed to prepare individuals to effectively address the risks associated with using AI in a professional context. Participants will gain the knowledge and insights necessary to understand why risk management, ethics, and oversight are crucial to realising AI's benefits from technical, commercial, product, and business perspectives.

Our immersive gamified scenario has been built for a range of target audiences, created by our interdisciplinary team, and informed by the latest research in organisational psychology, best practice in AI governance (including the EU AI Act and the US NIST AI Risk Management Framework), and risk management.


Prerequisites

There are no prerequisites.


Learning Outcomes

By the end of the gamified AI safety risk simulation, participants will be able to:

  • Discuss AI safety risk management principles.
  • Identify potential risks and challenges associated with AI implementations.
  • Formulate appropriate responses to AI related incidents.
  • Consider guard-rails to prevent errors, bias, and breaches in AI systems.
  • Discuss strategies to embrace and accelerate ethical, safe, and responsible AI.

Outline

Throughout this engaging and thought-provoking event, participants will take part, individually and collectively, in multi-layered interactive gamified simulations exploring real-world AI safety risk scenarios that will impact businesses looking to embrace AI technology, while building problem-solving, critical-thinking, communication, and collaboration skills.

The AI safety and risk scenarios include, but are not limited to:

  • A customer service chatbot data leak.
  • Disinformation campaign using generative AI media (deepfake & voice clone scam).
  • Co-Pilot ‘hallucinating’ middleware business application logic.
  • Unfair use of customer data, and challenges related to AI-driven operations centres.

QA is proud to be the UK partner for CyberFish Cyberpsychology Solutions.

Special Notices

Learners will receive the AI Safety Risk Management digital badge after taking part in the simulation exercise.

Cyber Security learning paths

Want to boost your career in cyber security? Click on the roles below to see QA's learning pathways, specially designed to give you the skills to succeed.

AI Security
Application Security
Cloud Security
Cyber Blue Team
DFIR Digital Forensics & Incident Response
Industrial Controls & OT Security
Information Security Management
NIST Pathway
Offensive Security
Privacy Professional
Reverse Engineer
Secure Coding
Security Auditor
Security Architect
Security Risk
Security Tech Generalist
Vulnerability Assessment & Penetration Testing