Gen AI and Large Language Models: Building Blocks of AI

Course Summary

This interactive course offers a comprehensive learning experience for developers, data engineers/analysts, and tech product owners. The course is specifically designed to equip participants with the essential skills and in-depth knowledge required to harness the power of generative AI effectively.

By combining theory with extensive hands-on practice, this course ensures that participants gain a deep understanding of generative AI concepts and the ability to apply them to various domains. Students will learn how to generate realistic and novel outputs, such as images, text, and more, using state-of-the-art algorithms and frameworks.

Purpose
Employ the essential skills and in-depth knowledge required to harness the power of generative AI effectively.
Prerequisites

Participants should have a solid understanding of Python programming, including knowledge of data structures, control flow, functions, and libraries commonly used in data analysis and machine learning, such as NumPy, Pandas, and scikit-learn.

Participants should have a working knowledge of data analysis concepts, exploratory data analysis (EDA), and machine learning algorithms.

Basic knowledge of deep learning concepts is recommended.

Role
Developers | Data Engineers/Analysts | Tech Product Owners
Skill level
Intermediate
Style
Lecture | Hands-on Activities
Duration
3 days
Related technologies
Python | Deep Learning

Productivity objectives
  • Describe Gen AI fundamentals and Deep Learning
  • Discuss Large Language Models
  • Use Transformers and other tools to train LLMs

What you'll learn

This course covers:
  • Overview of Generative AI
    • Introduction to generative AI and its applications
    • Understanding the basics of generative models and their importance
    • Overview of different types of generative models (e.g., GANs, VAEs, autoregressive models)
  • Deep Learning Primer
    • Recap of essential deep learning concepts
    • Review of neural networks and their architectures
    • Explanation of optimization techniques (e.g., gradient descent, backpropagation)
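
To make the optimization recap concrete, here is a minimal sketch of gradient descent on a one-dimensional quadratic loss. The function, learning rate, and step count are illustrative choices, not course materials:

```python
# Minimize f(w) = (w - 3)^2, whose minimum sits at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # analytic derivative of the loss

w = 0.0     # initial parameter value (arbitrary starting point)
lr = 0.1    # learning rate: how far to step against the gradient
for step in range(50):
    w = w - lr * grad(w)  # the gradient descent update rule

print(f"w = {w:.4f}, loss = {loss(w):.6f}")  # w converges toward 3.0
```

Backpropagation is the same idea applied layer by layer: it computes the gradient of the loss with respect to every weight so this update can run over an entire network.
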
  • Building Blocks of Generative Models
    • Understanding probability distributions and sampling techniques
    • Introduction to latent space and representation learning
    • Hands-on exercise: Implementing a simple generative model using Python and TensorFlow/PyTorch
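
As a taste of this module's exercise, the simplest possible generative model just fits a probability distribution to data and samples from it. The one-dimensional Gaussian below (plain NumPy) is a hedged stand-in for the fuller TensorFlow/PyTorch implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": 1-D observations from an unknown source distribution.
data = rng.normal(loc=5.0, scale=2.0, size=1000)

# Fit a Gaussian by maximum likelihood: the empirical mean and std.
mu, sigma = data.mean(), data.std()

# Generate: draw novel samples from the fitted distribution.
new_samples = rng.normal(loc=mu, scale=sigma, size=10)
print(mu, sigma, new_samples)
```

Deep generative models follow the same recipe, but learn far richer distributions through a latent space instead of two scalar parameters.
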
  • Variational Autoencoders (VAEs)
    • The VAE architecture
    • Training VAEs and generating new samples
    • Hands-on exercise: Building a VAE for image generation and reconstruction
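
For orientation before the exercise, here is a minimal VAE sketch in PyTorch. The layer sizes and the flattened 28x28 input (pixel values in [0, 1]) are illustrative assumptions, not the course's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)    # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        return self.decode(self.reparameterize(mu, logvar)), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior:
    # together they form the negative evidence lower bound (ELBO).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Generating new images afterwards is just decoding draws from the prior: `vae.decode(torch.randn(n, 16))`.
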
  • Generative Adversarial Networks (GANs)
    • Exploring the theory behind GANs
    • GAN architecture and training process
    • Generating synthetic data using GANs
    • Hands-on exercise: Training a GAN to generate images and evaluating the results
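
A hedged preview of the GAN training loop: a toy generator and discriminator learn a one-dimensional Gaussian. Network sizes and hyperparameters are placeholder choices, not the exercise's settings:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "real" data ~ N(5, 2)
    fake = G(torch.randn(64, 8))            # generator maps noise to samples

    # Discriminator step: push real toward 1 and fake toward 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward the real mean, 5.0
```
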
  • Introducing LLMs
    • What lies behind ChatGPT?
    • LLMs as Transformers
    • Different types of transformers and tasks
    • Famous Transformers
    • How to use Transformers without training: Prompt Engineering
    • The generative AI project lifecycle vs. the ML lifecycle
    • Hands-on exercise: Perform summarization and grammatical correction with prompt engineering
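
To show what "using Transformers without training" looks like in practice, the sketch below frames both exercise tasks as prompts to a pretrained model through the Hugging Face pipeline API. The choice of google/flan-t5-base is an assumption; any instruction-tuned sequence-to-sequence model behaves similarly:

```python
from transformers import pipeline

# No training involved: the task is specified entirely in the prompt.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

text = "The meeting covered the Q3 roadmap, hiring plans, and the budget review."
print(generator("Summarize: " + text, max_new_tokens=40)[0]["generated_text"])

print(generator("Fix the grammar: She go to school every days.",
                max_new_tokens=40)[0]["generated_text"])
```
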
  • Introducing Attention
    • What did we do before? Remembering Word2Vec and Seq2Seq
    • Seq2Seq limitations
    • Attention à la Bahdanau
    • Dot product and scaled dot product attention
    • Introducing attention in Keras
    • Hands-on exercise: Translation with Seq2Seq and attention (i.e., the approach from the paper that started it all)
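
The formula at the heart of this module is scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch with toy shapes (3 queries, 4 keys/values, dimension 8):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.sum(axis=-1))  # (3, 8); each row of weights sums to 1
```
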
  • Transformers
    • Why Seq2Seq with attention still has drawbacks
    • Multi-Headed Attention
    • The Transformer architecture: Deep analysis of its components
    • Hands-on exercise: Create a Transformer from scratch
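
Before building one from scratch, it helps to see that Keras ships multi-head attention as a ready-made layer. The shapes below are illustrative:

```python
import tensorflow as tf

mha = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)

x = tf.random.normal((2, 10, 64))    # (batch, sequence length, model dim)
out = mha(query=x, value=x, key=x)   # self-attention: Q, K, V from the same sequence
print(out.shape)                     # (2, 10, 64)
```

Multi-head attention runs several scaled dot-product attentions in parallel, each in its own learned subspace, and concatenates the results.
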
  • Hugging Face
    • Introducing Hugging Face
    • Introducing the Hugging Face datasets library
    • How to use a model from Hugging Face
    • How to upload a checkpoint to Hugging Face
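
Loading a checkpoint from the Hub takes only a few lines with the Auto classes. The model name below is a common public checkpoint, chosen purely as an example:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)   # downloads (or reuses) the checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("This course looks great!", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])  # POSITIVE / NEGATIVE

# Uploading works in the other direction (requires a Hub login):
# model.push_to_hub("my-username/my-model")  # hypothetical repository name
```
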
  • Training LLMs: The easy part
    • When to train and when not to train your LLM
    • Computational challenges of training LLMs
    • Full fine-tuning: its costs and the risk of catastrophic forgetting
    • Hands-on exercise: Perform full fine-tuning of Flan-T5 and observe catastrophic forgetting
    • Single-task vs. multi-task fine-tuning
    • Using transfer learning to avoid full fine-tuning
    • Hands-on exercise: Perform transfer learning with DistilBERT on a sentiment analysis task
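
Finally, a sketch of the transfer-learning idea from the last exercise: freeze DistilBERT's pretrained body and train only the classification head, which sidesteps the cost and forgetting risks of full fine-tuning. The tiny dataset, label convention, and hyperparameters are placeholder assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

for p in model.distilbert.parameters():
    p.requires_grad = False  # freeze the pretrained body; only the head trains

opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-4)

texts = ["I loved this movie.", "Worst film I have ever seen."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative (placeholder data)
batch = tokenizer(texts, padding=True, return_tensors="pt")

model.train()
for epoch in range(3):
    out = model(**batch, labels=labels)  # the head computes the loss for us
    out.loss.backward()
    opt.step()
    opt.zero_grad()
    print(f"epoch {epoch}: loss {out.loss.item():.4f}")
```
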

Dive in and learn more

When transforming your workforce, it’s important to have expert advice and tailored solutions. We can help. Tell us your unique needs and we'll explore ways to address them.
