Generative AI with Large Language Models Course Syllabus
Full curriculum breakdown — modules, lessons, estimated time, and outcomes.
Overview: This course provides a comprehensive introduction to generative AI and large language models (LLMs), combining theoretical knowledge with hands-on practice. You'll explore the fundamentals of transformer architectures, the full lifecycle of LLM projects, and key techniques like pre-training, fine-tuning, and reinforcement learning from human feedback (RLHF). Through practical labs, you'll gain experience in dialogue summarization, model optimization, and deploying AI applications. The course spans approximately 28 hours of estimated module time, designed for flexible, self-paced learning, and concludes with a final project that integrates your skills in a real-world scenario.
Module 1: Generative AI Use Cases, Project Lifecycle, and Model Pre-training
Estimated time: 5 hours
- Introduction to generative AI and large language models
- Understanding common use cases and tasks of generative AI
- Overview of the generative AI project lifecycle
- Exploration of model pre-training processes and text generation techniques
- Understanding transformer architecture fundamentals
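The transformer fundamentals covered in this module center on attention. As a rough illustration (not part of the course materials), the core operation, scaled dot-product attention, can be sketched in pure Python for a single query vector:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query.

    Scores each key against the query, normalizes the scores with
    softmax, and returns the weighted sum of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim_v)]

# A query aligned with the first key attends mostly to the first value.
out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

Real transformers batch this over many queries, heads, and layers, but the weighting mechanism is the same.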
Module 2: Fine-tuning and Evaluating Large Language Models
Estimated time: 4 hours
- Techniques for fine-tuning LLMs with instruction datasets
- Parameter-efficient fine-tuning (PEFT) methods
- Addressing catastrophic forgetting during model adaptation
- Evaluation methods for measuring LLM performance
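One intuition behind PEFT methods such as LoRA is parameter counting: instead of updating a full d x d weight matrix, you train two small low-rank factors. This sketch (the hidden size and rank are hypothetical, chosen only for illustration) shows the reduction in trainable parameters:

```python
def full_finetune_params(d_model):
    """Trainable parameters when updating a full d x d weight matrix."""
    return d_model * d_model

def lora_params(d_model, rank):
    """Trainable parameters for a LoRA-style update W + A @ B,
    where A is d x r and B is r x d."""
    return 2 * d_model * rank

d, r = 4096, 8                       # hypothetical hidden size and rank
full = full_finetune_params(d)       # 16,777,216 parameters
lora = lora_params(d, r)             # 65,536 parameters
reduction = full / lora              # 256x fewer trainable parameters
```

Because the original weights stay frozen, PEFT also helps limit the catastrophic forgetting discussed above.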
Module 3: Reinforcement Learning and LLM-powered Applications
Estimated time: 5 hours
- Introduction to reinforcement learning from human feedback (RLHF) for LLMs
- Enhancing reasoning with chain-of-thought prompting
- Handling knowledge cut-offs in LLMs
- Implementing information retrieval strategies for improved responses
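Retrieval strategies address knowledge cut-offs by putting up-to-date text into the prompt at query time. This toy sketch (a crude word-overlap retriever, not the course's implementation) shows the shape of the idea:

```python
def overlap_score(query, document):
    """Count how many query words appear in the document
    (a deliberately crude relevance score)."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

def retrieve_and_prompt(query, documents):
    """Pick the most relevant document and splice it into the prompt,
    so the model can answer about facts past its knowledge cut-off."""
    best = max(documents, key=lambda doc: overlap_score(query, doc))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The 2024 conference was held in Vienna.",
    "Transformers use self-attention layers.",
]
prompt = retrieve_and_prompt("Where was the 2024 conference held?", docs)
```

Production systems replace the overlap score with dense embeddings and a vector index, but the prompt-assembly step looks much the same.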
Module 4: Training and Optimization of LLMs
Estimated time: 4 hours
- Applying empirical scaling laws to model development
- Optimizing model objectives based on dataset size and compute resources
- Understanding inference requirements and trade-offs
- Implementing state-of-the-art training and tuning methods
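Two widely cited rules of thumb for this module's topics are the compute approximation C ≈ 6 · N · D FLOPs (N parameters, D training tokens) and the Chinchilla heuristic of roughly 20 tokens per parameter. This is a back-of-the-envelope sketch of those heuristics, not the course's exact formulas:

```python
def training_flops(n_params, n_tokens):
    """Approximate training compute: C ~ 6 * N * D FLOPs."""
    return 6 * n_params * n_tokens

def chinchilla_optimal_tokens(n_params):
    """Chinchilla rule of thumb: ~20 training tokens per parameter."""
    return 20 * n_params

n_params = 70_000_000_000                     # a hypothetical 70B model
n_tokens = chinchilla_optimal_tokens(n_params)  # ~1.4 trillion tokens
compute = training_flops(n_params, n_tokens)    # total training FLOPs
```

Given a fixed compute budget, the same relations can be inverted to trade model size against dataset size.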
Module 5: Deployment and Real-World Challenges
Estimated time: 4 hours
- Strategies for deploying LLMs in production environments
- Exploring real-world applications of generative AI
- Understanding ethical and practical challenges in AI deployment
- Insights from industry researchers and practitioners
Module 6: Final Project
Estimated time: 6 hours
- Design and implement a generative AI application using learned techniques
- Fine-tune a model for a specific use case such as dialogue summarization
- Apply evaluation and optimization strategies to improve model performance
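Summarization projects like this are often scored with ROUGE. As a feel for the metric (a simplified sketch, ignoring repeated words and n-grams above length 1, not an official implementation), ROUGE-1 recall can be written as:

```python
def rouge1_recall(reference, candidate):
    """Fraction of reference unigrams that appear in the candidate
    summary (simplified ROUGE-1 recall over unique words)."""
    ref = set(reference.lower().split())
    cand = set(candidate.lower().split())
    if not ref:
        return 0.0
    return len(ref & cand) / len(ref)

reference = "the customer asked for a refund"
candidate = "customer requested a refund"
score = rouge1_recall(reference, candidate)   # 3 of 6 reference words match
```

In practice you would use an established scoring library and report ROUGE-1, ROUGE-2, and ROUGE-L together, but the counting idea is the same.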
Prerequisites
- Familiarity with Python programming
- Basic understanding of machine learning concepts
- Some experience with deep learning frameworks is helpful but not required
What You'll Be Able to Do After This Course
- Understand the core principles and architecture of large language models
- Build and fine-tune generative AI models for specific tasks
- Evaluate model performance using industry-standard methods
- Deploy LLMs using best practices in real-world applications
- Solve practical problems using prompt engineering and retrieval-augmented techniques