
Overview

Recent advances in large language models have created unprecedented opportunities for organisations to streamline operations, reduce costs, and improve productivity at scale. We believe organisations of the future will combine human and machine intelligence to learn, master, and apply AI capabilities quickly and effectively.

This course provides a comprehensive, practical introduction to LLM application development using the open-source ecosystem. Learners explore pretrained models from the Hugging Face repository, work directly with the Transformers API, and build task-specific and generative solutions. The course progresses from Transformer fundamentals to multimodal architectures and agentic orchestration using LangChain, equipping participants to design safe, scalable, and enterprise-ready LLM-powered applications.


Prerequisites

Participants should have:

  • Experience with Python programming and working with external libraries
  • A foundational understanding of machine learning and neural networks
  • Familiarity with basic natural language processing concepts
  • Awareness of APIs and model inference workflows

Target audience

This course is designed for:

  • Developers building LLM-powered enterprise applications
  • Data scientists expanding into generative AI and multimodal systems
  • Machine learning engineers orchestrating LLM workflows
  • Technical professionals seeking to integrate AI capabilities into products and services

Learning Objectives

By the end of this course, learners will be able to:

  • Navigate, evaluate, and experiment with models from the Hugging Face model repository
  • Use the Transformers API to load, configure, and deploy pretrained LLMs
  • Apply encoder-based models for tasks such as semantic analysis, embeddings, question answering, and zero-shot classification
  • Work with decoder-style and encoder-decoder architectures for text generation and sequence-to-sequence tasks
  • Integrate multimodal models to combine text, image, and audio inputs within unified workflows
  • Design and guide generative AI solutions that are safe, effective, and scalable
  • Use LangChain to orchestrate LLM pipelines, tools, and agentic workflows
  • Incorporate inference and deployment strategies to support enterprise-scale applications

Course Outline

Course introduction

  • Overview of course objectives, structure, and expected outcomes
  • Introduction to the Hugging Face ecosystem and Transformers library
  • Discussion of enterprise use cases for LLM-powered applications
  • How LLMs enhance customer experience, automate workflows, and generate insights

Transformers and large language models

  • Motivation for Transformer architectures from deep learning first principles
  • Core components of Transformer-style architectures
  • Tokenisation and text preprocessing
  • Embeddings and vector representations
  • Self-attention mechanisms and contextual learning
  • Understanding input-output processing in LLMs
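
The self-attention mechanism covered in this module can be sketched in plain Python. This is a toy scaled dot-product attention over two 3-dimensional token vectors, not a production implementation (real systems use tensor libraries and learned query/key/value projections):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for lists of plain-Python vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Score each key against the query, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Each output is a weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# Two tokens with 3-dimensional embeddings; queries = keys = values here.
X = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
context = attention(X, X, X)
print(context)
```

Each row of the result is a contextualised representation: a mixture of all token vectors, weighted by how strongly each token attends to the others.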

Task-specific pipelines with encoder models

  • Profiling encoder-based models and their strengths
  • Semantic analysis and embedding generation
  • Question answering pipelines
  • Zero-shot and few-shot classification
  • Lightweight models for efficient inference
  • Evaluating model performance and selecting appropriate architectures
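
The semantic-analysis idea behind encoder embeddings can be illustrated with cosine similarity. The vectors below are hand-made stand-ins for real encoder outputs (in practice you would obtain embeddings from a Hugging Face encoder model), but the comparison step is the same:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": semantically similar sentences get nearby vectors.
emb = {
    "How do I reset my password?":   [0.9, 0.1, 0.0],
    "I forgot my login credentials": [0.8, 0.2, 0.1],
    "What is the office address?":   [0.1, 0.0, 0.9],
}

query = emb["How do I reset my password?"]
for text, vec in emb.items():
    print(f"{cosine_similarity(query, vec):.3f}  {text}")
```

The two password-related sentences score close to 1.0 against each other while the unrelated question scores much lower, which is the basis of embedding-based semantic search and zero-shot retrieval.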

Sequence-to-sequence and decoder-based models

  • Introduction to decoder-style, GPT-like architectures
  • Autoregressive text generation
  • Prompt-based task conditioning
  • Encoder-decoder models for machine translation and summarisation
  • Few-shot task completion and controlled generation
  • Managing output quality, format, and reliability
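
Autoregressive generation, the pattern behind GPT-style decoders, can be sketched with a toy bigram "model" and greedy decoding. The lookup table below is invented for illustration; a real LLM replaces it with a Transformer predicting a distribution over its whole vocabulary:

```python
# Toy next-token distributions standing in for a decoder's output.
BIGRAMS = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"down": 0.8, "<eos>": 0.2},
    "down": {"<eos>": 1.0},
}

def generate(prompt, max_tokens=10):
    """Greedy autoregressive decoding: repeatedly append the argmax token."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:
            break
        next_token = max(dist, key=dist.get)  # greedy choice
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat down"
```

Sampling strategies such as temperature, top-k, and nucleus sampling replace the greedy `max` step, which is where much of the control over output quality and diversity comes from.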

Multimodal architectures

  • Integrating text, image, and audio data within LLM workflows
  • Cross-modal learning concepts
  • Using models such as CLIP for linking text and images
  • Visual language models for image question answering
  • Diffusion-style models for text-guided image generation
  • Designing multimodal applications for enterprise scenarios
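
CLIP-style text-image matching reduces to comparing L2-normalised embeddings from two separate encoders. The file names and vectors below are invented toy stand-ins; a real system would use CLIP's image and text encoders to produce the embeddings:

```python
import math

def normalize(v):
    # Project onto the unit sphere, as CLIP does before comparison.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Toy embeddings standing in for image- and text-encoder outputs.
image_embs = {"photo_of_a_dog.jpg": [0.9, 0.1], "photo_of_a_car.jpg": [0.1, 0.9]}
captions   = {"a dog in the park":  [0.8, 0.2], "a car on the road":  [0.2, 0.8]}

def best_caption(image_name):
    """Pick the caption whose embedding is closest to the image's."""
    img = normalize(image_embs[image_name])
    scores = {
        text: sum(i * t for i, t in zip(img, normalize(vec)))
        for text, vec in captions.items()
    }
    return max(scores, key=scores.get)

print(best_caption("photo_of_a_dog.jpg"))  # → "a dog in the park"
```

Because both modalities live in the same vector space, the same comparison supports zero-shot image classification: the class names simply become the candidate captions.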

Scaling text generation and inference

  • Understanding inference challenges in large language models
  • Latency, throughput, and cost considerations
  • Optimised model serving and server deployment strategies
  • Scaling LLM applications to larger repositories and user bases
  • Monitoring and maintaining production LLM systems
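
The latency/throughput trade-off at the heart of LLM serving can be made concrete with simple arithmetic: batching requests amortises fixed per-step overhead, raising throughput at the cost of per-request latency. The numbers below are illustrative, not benchmarks:

```python
def serving_stats(batch_size, overhead_ms=50.0, per_request_ms=20.0):
    """Toy cost model for one server step: fixed overhead plus
    per-request compute. Returns (batch latency in ms, throughput
    in requests/sec). Real profiles depend on hardware and model size.
    """
    latency_ms = overhead_ms + per_request_ms * batch_size
    throughput = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput

for bs in (1, 8, 32):
    lat, tput = serving_stats(bs)
    print(f"batch={bs:>2}  latency={lat:6.1f} ms  throughput={tput:6.1f} req/s")
```

Even this crude model shows why production servers use dynamic batching: throughput climbs steeply with batch size while latency grows only linearly, and the right operating point depends on the application's latency budget.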

Orchestration and agentic workflows

  • Introduction to LangChain for LLM orchestration
  • Building modular, reusable LLM pipelines
  • Tool integration and environment-enabled agents
  • Agentic patterns for decision-making and task decomposition
  • Integrating natural language interfaces with standard applications and data sources
  • Governance, safety, and responsible AI considerations
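
The agentic pattern that LangChain implements, in which a model chooses tools and feeds their results back into its context, can be sketched framework-free. Here a scripted function stands in for a real LLM, and the tool names are invented for illustration:

```python
# Minimal agent loop: the "model" emits either a tool call or a final answer.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup":     lambda key: {"capital of France": "Paris"}.get(key, "unknown"),
}

def scripted_model(question, observations):
    """Stand-in for an LLM: decide the next action from the context so far."""
    if not observations:
        if "2 + 2" in question:
            return ("tool", "calculator", "2 + 2")
        return ("tool", "lookup", "capital of France")
    return ("answer", observations[-1])

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        action = scripted_model(question, observations)
        if action[0] == "answer":
            return action[1]
        _, tool_name, tool_input = action
        # Execute the chosen tool and append the result to the context.
        observations.append(TOOLS[tool_name](tool_input))
    return "gave up"

print(run_agent("What is 2 + 2?"))  # → "4"
```

The loop structure (decide, act, observe, repeat, with a step limit) is the same whether the decision-maker is a scripted function or a production LLM, which is why governance controls such as tool allow-lists and step budgets slot naturally into it.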

Final assessment

  • Design and build an LLM-based application integrating text generation, multimodal capabilities, and orchestration
  • Apply encoder and decoder models appropriately within a single workflow
  • Demonstrate safe and scalable application design principles
  • Present and review solutions with instructor feedback

Exams and assessments

Learners complete practical exercises throughout the course to reinforce key concepts. The final assessment requires participants to build a functional LLM-powered application that integrates generation, multimodal learning, and orchestration techniques.

Assessment emphasises applied capability, architectural understanding, and responsible design of enterprise-ready AI systems.

Hands-on learning

This course is designed around applied experimentation and development:

  • Direct interaction with pretrained models via the Hugging Face repository
  • Implementation of encoder, decoder, and multimodal pipelines
  • Guided exercises using the Transformers API
  • Orchestration of agentic workflows with LangChain
  • Iterative refinement of generative applications for safety and performance


Frequently asked questions

How can I create an account on myQA.com?

There are a number of ways to create an account. If you are a self-funder, simply select the "Create account" option on the login page.

If you have been booked onto a course by your company, you will receive a confirmation email. From this email, select "Sign into myQA" and you will be taken to the "Create account" page. Complete all of the details and select "Create account".

If you have the booking number you can also go here and select the "I have a booking number" option. Enter the booking reference and your surname. If the details match, you will be taken to the "Create account" page from where you can enter your details and confirm your account.

Find more answers to frequently asked questions in our FAQs: Bookings & Cancellations page.

How do QA’s virtual classroom courses work?

Our virtual classroom courses allow you to access award-winning classroom training, without leaving your home or office. Our learning professionals are specially trained on how to interact with remote attendees and our remote labs ensure all participants can take part in hands-on exercises wherever they are.

We use the WebEx video conferencing platform by Cisco. Before you book, check that you meet the WebEx system requirements and run a test meeting to ensure the software is compatible with your firewall settings. If the test fails, adjust your settings or ask your IT department to allow access to the site.

How do QA’s online courses work?

QA online courses, also known as distance learning or elearning courses, take the form of interactive software designed for individual learning. You will also have full support from our subject-matter experts for the duration of your course.

Once you have purchased the online course and completed your registration, you will receive the details needed to access it through our e-learning platform, so you can start learning straight away from any compatible device. Access to the online learning platform is valid for one year from the booking date.

All courses are built around case studies and presented in an engaging format, which includes storytelling elements, video, audio and humour. Every case study is supported by sample documents and a collection of Knowledge Nuggets that provide more in-depth detail on the wider processes.

When will I receive my joining instructions?

Joining instructions for QA courses are sent two weeks prior to the course start date, or immediately if the booking is confirmed within this timeframe. For course bookings made via QA but delivered by a third-party supplier, joining instructions are sent to attendees prior to the training course, but timescales vary depending on each supplier’s terms.

When will I receive my certificate?

Certificates of Achievement are issued at the end of the course, either as a hard copy or via email.
