Let’s make it work for you
Overview
Large language models are transforming how organisations build products, automate workflows, and unlock value from data. We believe the future belongs to organisations that can learn, master, and apply AI capabilities at pace and scale. This workshop introduces modern prompt engineering techniques as the fastest path to building practical LLM-powered applications.
Learners will work with NVIDIA NIM, powered by the open-source Llama 3.1 large language model, alongside the LangChain library to structure and orchestrate LLM workflows. Through hands-on exercises, participants will build generative applications, document analysis pipelines, and chatbot assistants, while establishing the foundations required for more advanced techniques such as retrieval-augmented generation and parameter-efficient fine-tuning.
Prerequisites
Participants should have:
- Familiarity with basic programming fundamentals such as functions, variables, and control flow
- Experience writing simple scripts in a language such as Python
- A general understanding of APIs and working with external libraries
Target audience
This course is designed for:
- Developers and engineers looking to integrate LLMs into products or internal applications
- Technical professionals exploring AI inference and generative AI use cases
- Organisations seeking to build applied capability in AI, Cloud, and Data technologies
Learning objectives
By the end of this workshop, learners will be able to:
- Explain the core principles of large language models and how prompt engineering influences model behaviour
- Apply iterative prompt engineering best practices to improve output quality, reliability, and relevance
- Use NVIDIA NIM to access and deploy LLM capabilities for inference-based applications
- Design and implement structured LLM workflows using LangChain
- Build application code for text generation, large-scale document analysis, and chatbot assistants
- Describe how prompt engineering underpins advanced techniques such as retrieval-augmented generation and parameter-efficient fine-tuning
- Evaluate LLM outputs and implement strategies to mitigate common risks such as hallucinations and prompt injection
Course outline
Introduction to large language models and AI inference
- Overview of large language models and transformer-based architectures
- Understanding tokens, context windows, and inference
- Common enterprise use cases for LLMs
- The role of AI inference in production systems
- Positioning LLMs within AI, Cloud, and Data strategies
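Tokens and context windows can be made concrete with a rough budget check. This is only a sketch: the four-characters-per-token ratio is a common heuristic for English text, not a real tokenizer, and the 128k default matches Llama 3.1's documented context length.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text.
    A real application should count with the model's own tokenizer."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output_tokens: int,
                 context_window: int = 128_000) -> bool:
    """Check that the prompt plus the reserved output budget fits the window."""
    return estimate_tokens(prompt) + max_output_tokens <= context_window

print(estimate_tokens("Summarise the attached report."))
print(fits_context("Short prompt", max_output_tokens=1024))
```

Budget checks like this run before every inference call in production pipelines, because a prompt that overflows the window is silently truncated or rejected.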
Foundations of prompt engineering
- What prompt engineering is and why it matters
- Instruction design and task specification
- Zero-shot, one-shot, and few-shot prompting
- Structuring prompts for clarity, consistency, and control
- Managing tone, format, and output constraints
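Zero-shot, one-shot, and few-shot prompting differ only in how many worked examples precede the task. A minimal sketch of assembling such a prompt (the `Input:`/`Output:` labels are an illustrative convention, not a model-specific template):

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Assemble a prompt as: instruction, worked examples, then the new input.
    With an empty examples list this degenerates to a zero-shot prompt."""
    parts = [instruction, ""]
    for given, expected in examples:
        parts.append(f"Input: {given}")
        parts.append(f"Output: {expected}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life.", "positive"), ("Broke after a week.", "negative")],
    "Surprisingly comfortable.",
)
print(prompt)
```

Ending the prompt with a bare `Output:` cues the model to complete the pattern the examples established.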
Iterative prompt optimisation
- Evaluating LLM outputs against task requirements
- Techniques for refining and debugging prompts
- Chain-of-thought and structured reasoning prompts
- Using system and user messages effectively
- Establishing repeatable prompt patterns for applications
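System and user messages, plus a structured-reasoning instruction, combine into a repeatable pattern. The message schema below follows the widely used chat-completions format; the exact wording of the step-by-step instruction is illustrative:

```python
def build_messages(system_prompt: str, user_request: str,
                   reason_step_by_step: bool = True) -> list[dict]:
    """Build a chat-completions message list. The system message pins behaviour
    and output format; the user message carries the task. Optionally append a
    chain-of-thought style instruction to elicit intermediate reasoning."""
    user_content = user_request
    if reason_step_by_step:
        user_content += ("\n\nThink through the problem step by step "
                         "before giving the final answer.")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_content},
    ]

msgs = build_messages(
    "You are a careful financial analyst. Answer concisely.",
    "Is a 7% month-on-month cost increase unusual for this dataset?",
)
for m in msgs:
    print(m["role"], "->", m["content"][:60])
```

Keeping role and behaviour in the system message, and the task in the user message, is what makes the pattern reusable across requests.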
Working with NVIDIA NIM and Llama 3.1
- Overview of NVIDIA NIM architecture and capabilities
- Accessing and configuring an NVIDIA NIM language-model endpoint
- Interacting with the Llama 3.1 model for inference tasks
- Performance considerations and scaling inference workloads
- Integrating NIM into application backends
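A NIM microservice exposes an OpenAI-compatible HTTP API, so inference from an application backend is a plain JSON POST. This sketch assumes a locally deployed NIM on its default port (8000) serving `meta/llama-3.1-8b-instruct`; adjust the URL, model name, and any authentication to your deployment. Only the payload builder executes here — the `chat` function needs a live endpoint.

```python
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment
MODEL = "meta/llama-3.1-8b-instruct"                   # assumed model name

def build_payload(prompt: str, temperature: float = 0.2,
                  max_tokens: int = 256) -> dict:
    """OpenAI-compatible chat-completions request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """POST the payload to the NIM endpoint and return the first choice's text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_payload("Summarise NVIDIA NIM in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the API surface is OpenAI-compatible, existing client libraries can also be pointed at the NIM base URL instead of hand-rolling requests.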
Building LLM workflows with LangChain
- Introduction to the LangChain framework
- Creating prompt templates and reusable components
- Managing memory and conversational state
- Composing chains for multi-step reasoning tasks
- Orchestrating tools and external data sources
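LangChain's prompt templates and pipe-style chain composition can be understood as template filling plus function composition. The following is a dependency-free sketch of that pattern only — LangChain's `ChatPromptTemplate` and its `|` operator are the production versions, and the model step here is a stand-in, not a real LLM call:

```python
class Step:
    """A composable pipeline step: Step(f) | Step(g) applies f, then g."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other: "Step") -> "Step":
        return Step(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

def prompt_template(template: str) -> Step:
    """Fill {placeholders} from an input dict, mirroring a prompt template."""
    return Step(lambda variables: template.format(**variables))

# Stand-in for the model call; a real chain would invoke the LLM here.
fake_llm = Step(lambda prompt: f"[model answer to: {prompt}]")
strip_output = Step(str.strip)

chain = prompt_template("Translate to French: {text}") | fake_llm | strip_output
print(chain.invoke({"text": "good morning"}))
```

The value of the pattern is that each step stays independently testable while the pipe expression documents the whole workflow in one line.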
Generative applications and document analysis
- Designing text generation workflows for content and automation
- Summarisation and information extraction from long documents
- Chunking strategies and context management
- Building pipelines for large-scale document processing
- Validating and post-processing LLM outputs
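Long documents must be split so each chunk fits the context window. A minimal fixed-size character chunker with overlap is sketched below; production pipelines usually split on sentence or section boundaries instead (for example with LangChain's text splitters), but the overlap idea is the same:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share `overlap`
    characters so a sentence cut at one boundary appears whole in the next."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "x" * 2500
pieces = chunk_text(doc, chunk_size=1000, overlap=100)
print(len(pieces), [len(p) for p in pieces])
```

Each chunk is then summarised or queried independently, and the per-chunk outputs are merged in a final pass.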
Chatbot assistants and conversational systems
- Designing conversational flows and system prompts
- Managing dialogue context and user intent
- Handling edge cases and ambiguous queries
- Integrating chat interfaces with backend services
- Monitoring and improving chatbot performance
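Dialogue context grows with every turn, so chatbots typically keep the system prompt pinned and retain only the most recent exchanges. A sketch of that trimming policy follows — the turn limit here is an arbitrary example, and real systems usually trim by token count rather than message count:

```python
def trim_history(messages: list[dict], max_turns: int = 4) -> list[dict]:
    """Keep the leading system message (if any) plus the last `max_turns`
    user/assistant messages, dropping older context."""
    system = [m for m in messages[:1] if m["role"] == "system"]
    rest = messages[len(system):]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are a helpful support assistant."}]
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=4)
print([m["content"] for m in trimmed])
```

Pinning the system message is what keeps the assistant's persona and constraints stable even after older turns are dropped.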
Foundations for advanced LLM techniques
- Introduction to retrieval-augmented generation
- When to use retrieval versus fine-tuning
- Overview of parameter-efficient fine-tuning concepts
- Security considerations, including prompt injection risks
- Governance, compliance, and responsible AI practices
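Retrieval-augmented generation fetches relevant passages and prepends them to the prompt, so the model answers from supplied evidence rather than from memory. The toy sketch below uses word-overlap scoring purely to show the shape of the pipeline; real systems use vector embeddings and a vector store for retrieval:

```python
import re

def words(text: str) -> set[str]:
    """Lowercased alphanumeric words, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the query."""
    return sorted(passages,
                  key=lambda p: len(words(query) & words(p)),
                  reverse=True)[:k]

def build_rag_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved context and instruct the model to stay within it."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "NIM microservices expose an OpenAI-compatible inference API.",
    "Llama 3.1 is an open-weight large language model.",
    "The cafeteria opens at nine.",
]
print(build_rag_prompt("What API does NIM expose?", docs))
```

Instructing the model to answer "using only the context" is also a first line of defence against hallucination, since unsupported claims can be checked against the retrieved passages.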
Exams and assessments
Participants will complete practical coding exercises, guided labs, and knowledge checks throughout the workshop. A final applied assessment will require learners to design and implement an LLM-based application using NVIDIA NIM and LangChain.
Upon successful completion of the assessment, participants will receive an NVIDIA certificate recognising subject matter competency and supporting professional career growth.
Hands-on learning
This workshop is built around applied, hands-on learning:
- Guided labs using NVIDIA NIM and Llama 3.1
- Practical exercises building real LLM-powered features
- Collaborative problem-solving scenarios based on enterprise use cases
- Instructor feedback on prompt design and application architecture