Learn to Use HPC Systems and Supercomputers Course Syllabus

Full curriculum breakdown — modules, lessons, estimated time, and outcomes.

This course provides a comprehensive introduction to High-Performance Computing (HPC) systems and supercomputers, designed for beginners aiming to gain hands-on experience with real-world HPC workflows. You'll learn to navigate clusters, manage software environments, submit jobs using PBS and Slurm, and develop parallel applications with OpenMP, MPI, and CUDA. With approximately 6.5 hours of focused content, the course combines conceptual understanding with practical demonstrations, guiding you from system access to writing efficient parallel code. While the format is non-interactive, the step-by-step demos ensure clarity and reproducibility, preparing you for roles in computational science and high-performance application development.

Module 1: Supercomputers and HPC Clusters

Estimated time: 0.7 hours

  • Evolution of supercomputing
  • Differences between clusters and supercomputers
  • Benefits of HPC-enabled parallelism
  • Historical supercomputers and cluster architectures

Module 2: Components of an HPC System

Estimated time: 0.8 hours

  • Login, management, compute, and storage nodes
  • Network interconnects in HPC clusters
  • Resource partitioning and system topology
  • Connecting to a demo cluster and inspecting node roles
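As a rough sketch of what connecting and inspecting node roles looks like in practice (the hostname and username below are placeholders, and the inspection command depends on which scheduler the cluster runs):

```shell
# Connect to the cluster's login node over SSH (hostname/username are placeholders).
ssh alice@login.cluster.example.edu

# From a login node, list compute nodes and their state:
sinfo -N -l     # Slurm: one line per node, with partition and state
pbsnodes -a     # PBS: attributes of every execution node
```

Note that login nodes are for editing, compiling, and job submission only; heavy computation belongs on compute nodes reached through the scheduler.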

Module 3: HPC Software Stack & Environment Modules

Estimated time: 0.8 hours

  • Data transfer using scp and rsync
  • Using environment modules to manage software
  • Loading, unloading, and switching software versions
  • Running sample applications in configured environments
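A typical session combining the topics above might look like the following (filenames, hostnames, and module versions are placeholders; the versions available on any given cluster will differ):

```shell
# Transfer data to the cluster. rsync can resume interrupted transfers
# and only re-sends files that changed.
scp input.dat alice@login.cluster.example.edu:~/project/
rsync -avP results/ alice@login.cluster.example.edu:~/project/results/

# Manage software with environment modules.
module avail                    # list software installed on this cluster
module load gcc/12.2            # load a specific compiler version
module swap gcc/12.2 gcc/13.1   # switch versions in one step
module list                     # show currently loaded modules
module unload gcc               # remove it from the environment
```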

Module 4: Job Schedulers – PBS & Slurm

Estimated time: 1.0 hours

  • Batch vs. interactive jobs
  • PBS commands: qsub, qstat, qdel
  • Slurm basics: sbatch, squeue, scancel
  • Writing, submitting, and managing job scripts
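To make the batch workflow concrete, here is a minimal Slurm job script sketch (the partition name, program name, and resource values are placeholders to adapt to your cluster):

```shell
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=short        # placeholder queue/partition name
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:10:00
#SBATCH --output=hello_%j.out    # %j expands to the job ID

srun ./hello_mpi
```

You would submit this with `sbatch hello.sh`, monitor it with `squeue -u $USER`, and cancel it with `scancel <jobid>`. A PBS script follows the same shape, using `#PBS` directives and the `qsub`/`qstat`/`qdel` commands covered above.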

Module 5: Parallel Programming with OpenMP

Estimated time: 1.0 hours

  • OpenMP pragmas and compiler directives
  • Work-sharing constructs: parallel for, sections
  • Reduction operations in parallel regions
  • Measuring speedup across threads

Module 6: Message Passing with MPI

Estimated time: 1.0 hours

  • MPI initialization and finalization
  • Point-to-point communication: send/recv
  • Collective communication operations
  • Implementing ring communication and ping-pong latency tests
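A minimal ping-pong between two ranks might be sketched as follows (compile with `mpicc`, run with `mpirun -np 2`; wrapping the exchange in `MPI_Wtime()` calls turns it into the latency test mentioned above):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int token = 42;
    if (rank == 0) {
        /* Rank 0 sends the token and waits for it to come back. */
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got the token back: %d\n", token);
    } else if (rank == 1) {
        /* Rank 1 receives the token and immediately echoes it. */
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Extending this so each rank forwards to `(rank + 1) % size` gives the ring-communication exercise.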

Module 7: GPU Programming with CUDA

Estimated time: 1.0 hours

  • GPU architecture overview
  • Writing and launching CUDA kernels
  • CUDA memory hierarchy
  • Vector addition example and GPU execution profiling
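The vector-addition example might be sketched as below (compile with `nvcc`; the block size of 256 is a common but arbitrary choice, and real code should also check CUDA error codes):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

/* Each GPU thread adds one pair of elements. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    /* Allocate device memory and copy inputs host -> device. */
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* 256 threads per block; enough blocks to cover all n elements. */
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %.1f\n", hc[10]);   /* 10 + 20 = 30.0 */

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Profiling the resulting binary (for example with Nsight Systems) shows where time goes between the kernel and the host-device copies.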

Module 8: Course Wrap-Up & Best Practices

Estimated time: 0.3 hours

  • Job array workflows
  • Environment reproducibility and module management
  • Resource quotas and job script optimization
  • Refining scripts with time and memory directives
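Tying several of these practices together, a Slurm job array sketch might look like this (the program name and input-file naming scheme are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=sweep
#SBATCH --array=1-10              # ten tasks, one per input file
#SBATCH --time=00:30:00           # an explicit walltime helps the scheduler backfill
#SBATCH --mem=2G                  # request only the memory you need
#SBATCH --output=sweep_%A_%a.out  # %A = array job ID, %a = task index

# Each array task picks its own input via the task index.
./analyze input_${SLURM_ARRAY_TASK_ID}.dat
```

Tight, honest time and memory directives like these usually shorten queue waits and keep you within resource quotas.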

Prerequisites

  • Familiarity with Linux command line
  • Basic knowledge of programming in C/C++ or Fortran
  • Understanding of fundamental computing concepts (CPU, memory, processes)

What You'll Be Able to Do After

  • Navigate and access HPC systems via secure shell and transfer data efficiently
  • Identify key components of HPC clusters including compute, storage, and network resources
  • Use environment modules to manage software versions and dependencies
  • Write, submit, and manage batch jobs using both PBS and Slurm schedulers
  • Develop and optimize parallel applications using OpenMP, MPI, and CUDA