GPU Programming with CUDA Training Course
CUDA (Compute Unified Device Architecture) is NVIDIA's proprietary platform and programming model for general-purpose GPU programming. It enables code to run on NVIDIA GPUs, which are widely used for high-performance computing, artificial intelligence (AI), gaming, and graphics. CUDA exposes hardware details to the programmer and gives full control over the parallelization process. However, this also requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use CUDA to program NVIDIA GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes the CUDA Toolkit, an NVIDIA GPU, and Visual Studio Code.
- Create a basic CUDA program that performs vector addition on the GPU and retrieves the results from the GPU memory.
- Use the CUDA API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use the CUDA C/C++ language to write kernels that execute on the GPU and manipulate data.
- Use CUDA built-in functions, variables, and libraries to perform common tasks and operations.
- Use CUDA memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use the CUDA execution model to control the threads, blocks, and grids that define the parallelism.
- Debug and test CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize CUDA programs using techniques such as coalescing, caching, prefetching, and profiling.
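To give a concrete sense of the first hands-on objective above, here is a minimal sketch of a vector-addition program of the kind built in the course. Buffer sizes and names are illustrative, and per-call error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device buffers and host-to-device copies.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    // Retrieve the result from GPU memory and spot-check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[100] = %f (expected %f)\n", h_c[100], h_a[100] + h_b[100]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with `nvcc vec_add.cu -o vec_add`, this exercises the full round trip covered in the objectives: allocate, copy in, launch, synchronize, copy out.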
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
- 96% satisfied clients
Course Outline
Introduction
- What is CUDA?
- CUDA vs OpenCL vs SYCL
- Overview of CUDA features and architecture
- Setting up the Development Environment
Getting Started
- Creating a new CUDA project using Visual Studio Code
- Exploring the project structure and files
- Compiling and running the program
- Displaying the output using printf and fprintf
CUDA API
- Understanding the role of the CUDA API in the host program
- Using the CUDA API to query device information and capabilities
- Using the CUDA API to allocate and deallocate device memory
- Using the CUDA API to copy data between host and device
- Using the CUDA API to launch kernels and synchronize threads
- Using the CUDA API to handle errors and exceptions
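The first two items in this module (device queries and error handling) can be sketched together in a few lines of host code; the `CUDA_CHECK` macro name is our own convention, but every runtime call in it is the standard API.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Wrap every runtime call so failures are reported immediately
// with a readable message instead of being silently ignored.
#define CUDA_CHECK(call)                                          \
    do {                                                          \
        cudaError_t err = (call);                                 \
        if (err != cudaSuccess) {                                 \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,    \
                    cudaGetErrorString(err));                     \
            return 1;                                             \
        }                                                         \
    } while (0)

int main() {
    int count = 0;
    CUDA_CHECK(cudaGetDeviceCount(&count));

    // Query and print the capabilities of every visible device.
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        CUDA_CHECK(cudaGetDeviceProperties(&prop, d));
        printf("Device %d: %s, compute capability %d.%d, %zu MB global memory\n",
               d, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem >> 20);
    }
    return 0;
}
```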
CUDA C/C++
- Understanding the role of CUDA C/C++ in the device program
- Using CUDA C/C++ to write kernels that execute on the GPU and manipulate data
- Using CUDA C/C++ data types, qualifiers, operators, and expressions
- Using CUDA C/C++ built-in functions, such as math, atomic, warp, etc.
- Using CUDA C/C++ built-in variables, such as threadIdx, blockIdx, blockDim, etc.
- Using CUDA C/C++ libraries, such as cuBLAS, cuFFT, cuRAND, etc.
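A short sketch tying several of these items together: the built-in variables threadIdx, blockIdx, and blockDim compute a global index, and the built-in atomicAdd function makes concurrent updates to one counter safe. The kernel and buffer names are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread checks one element; atomicAdd serializes the
// concurrent increments so the count is exact.
__global__ void countEvens(const int *data, int n, int *evens) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] % 2 == 0)
        atomicAdd(evens, 1);
}

int main() {
    const int n = 1000;
    int h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = i;

    int *d_data, *d_evens, h_evens = 0;
    cudaMalloc(&d_data, n * sizeof(int));
    cudaMalloc(&d_evens, sizeof(int));
    cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_evens, &h_evens, sizeof(int), cudaMemcpyHostToDevice);

    countEvens<<<(n + 255) / 256, 256>>>(d_data, n, d_evens);
    cudaMemcpy(&h_evens, d_evens, sizeof(int), cudaMemcpyDeviceToHost);
    printf("even values: %d\n", h_evens);  // 500 for inputs 0..999

    cudaFree(d_data); cudaFree(d_evens);
    return 0;
}
```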
CUDA Memory Model
- Understanding the difference between host and device memory models
- Using CUDA memory spaces, such as global, shared, constant, and local
- Using CUDA memory objects, such as pointers, arrays, textures, and surfaces
- Using CUDA memory access modes, such as read-only, write-only, read-write, etc.
- Using CUDA memory consistency model and synchronization mechanisms
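As one sketch of the shared-memory space and its synchronization rules, the kernel below performs a block-level reduction in fast on-chip `__shared__` memory, with `__syncthreads` enforcing the consistency points. It assumes a launch with exactly 256 threads per block.

```cuda
#include <cuda_runtime.h>

// Each 256-thread block sums its tile of the input in shared memory,
// then one thread per block commits the partial sum to global memory.
__global__ void blockSum(const float *in, int n, float *out) {
    __shared__ float tile[256];          // shared memory space, one tile per block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                     // all loads visible block-wide

    // Tree reduction: halve the active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();                 // writes visible before the next step
    }
    if (threadIdx.x == 0)
        atomicAdd(out, tile[0]);         // combine partial sums in global memory
}
```

Because shared memory lives on-chip, the 255 intermediate additions per block never touch global memory; only the final atomicAdd does.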
CUDA Execution Model
- Understanding the difference between host and device execution models
- Using CUDA threads, blocks, and grids to define the parallelism
- Using CUDA built-in thread-indexing variables, such as threadIdx, blockIdx, and blockDim
- Using CUDA block-level functions, such as __syncthreads and __threadfence_block
- Using CUDA grid-level features, such as gridDim, grid synchronization, and cooperative groups
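One common idiom covered by this module can be shown in a few lines: the grid-stride loop, which decouples the problem size from the launched thread count by having each thread step through the data by the total grid size. Names are illustrative.

```cuda
#include <cuda_runtime.h>

// Grid-stride loop: works for any n regardless of how many blocks
// were launched, because each thread advances by gridDim.x * blockDim.x.
__global__ void scale(float *x, int n, float s) {
    int stride = gridDim.x * blockDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        x[i] *= s;
}

// Launch with any reasonable shape, e.g.:
//   scale<<<64, 256>>>(d_x, n, 2.0f);
// The same kernel handles n smaller or far larger than 64 * 256.
```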
Debugging
- Understanding the common errors and bugs in CUDA programs
- Using Visual Studio Code debugger to inspect variables, breakpoints, call stack, etc.
- Using CUDA-GDB to debug CUDA programs on Linux
- Using CUDA-MEMCHECK (superseded by Compute Sanitizer in recent CUDA toolkits) to detect memory errors and leaks
- Using NVIDIA Nsight to debug and analyze CUDA programs on Windows
Optimization
- Understanding the factors that affect the performance of CUDA programs
- Using CUDA coalescing techniques to improve memory throughput
- Using CUDA caching and prefetching techniques to reduce memory latency
- Using CUDA shared memory and local memory techniques to optimize memory accesses and bandwidth
- Using CUDA profiling tools, such as Nsight Systems and Nsight Compute, to measure and improve execution time and resource utilization
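To make the coalescing item concrete, here is a sketch contrasting two memory-access patterns for a row-major w×h matrix; only the indexing differs, but the coalesced version lets each warp's loads combine into few memory transactions.

```cuda
#include <cuda_runtime.h>

// Coalesced: consecutive threadIdx.x values touch consecutive floats,
// so a warp's 32 accesses combine into a handful of transactions.
__global__ void copyCoalesced(const float *in, float *out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // row
    if (x < w && y < h)
        out[y * w + x] = in[y * w + x];
}

// Strided: consecutive threads access addresses h elements apart,
// so each access may require its own memory transaction.
__global__ void copyStrided(const float *in, float *out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        out[x * h + y] = in[x * h + y];
}
```

Profiling both kernels with Nsight Compute makes the throughput difference directly visible, which is exactly the measure-then-optimize workflow this module teaches.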
Summary and Next Steps
Requirements
- An understanding of C/C++ language and parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers who wish to learn how to use CUDA to program NVIDIA GPUs and exploit their parallelism
- Developers who wish to write high-performance and scalable code that can run on different CUDA devices
- Programmers who wish to explore the low-level aspects of GPU programming and optimize their code performance
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a family of AI processors engineered for high-performance inference and training tasks.
This instructor-led live training, available online or onsite, targets intermediate-level AI engineers and data scientists eager to develop and optimize neural network models leveraging Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this training, participants will be equipped to:
- Establish and configure the CANN development environment.
- Create AI applications utilizing MindSpore and CloudMatrix workflows.
- Enhance performance on Ascend NPUs through custom operators and tiling techniques.
- Deploy models within either edge or cloud environments.
Course Format
- Interactive lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample projects.
- Guided exercises concentrating on model construction, training, and deployment.
Course Customization Options
- To arrange a tailored training session aligned with your specific infrastructure or datasets, please reach out to us.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei's AI compute stack, designed to deploy and optimize AI models on Ascend AI processors.
This instructor-led live training, available both online and on-site, targets intermediate-level AI developers and engineers who want to efficiently deploy trained AI models onto Huawei Ascend hardware. The course utilizes the CANN toolkit along with tools such as MindSpore, TensorFlow, or PyTorch.
Upon completing this training, participants will be capable of:
- Understanding the CANN architecture and its function within the AI deployment pipeline.
- Converting and adapting models from popular frameworks into Ascend-compatible formats.
- Utilizing tools like ATC, OM model conversion, and MindSpore for inference on both edge and cloud environments.
- Diagnosing deployment issues and optimizing performance on Ascend hardware.
Course Format
- Interactive lectures and demonstrations.
- Hands-on lab exercises using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix serves as Huawei's integrated platform for AI development and deployment, engineered to facilitate scalable, production-ready inference pipelines.
This instructor-led live training, available both online and onsite, targets beginner to intermediate AI professionals aiming to deploy and oversee AI models leveraging the CloudMatrix platform with CANN and MindSpore integration.
Upon completion of this training, participants will possess the ability to:
- Utilize CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models specifically for Ascend chipsets.
- Establish pipelines for both real-time and batch inference tasks.
- Monitor deployments and fine-tune performance within production environments.
Course Format
- Interactive lectures and discussions.
- Practical application of CloudMatrix through real-world deployment scenarios.
- Guided exercises emphasizing conversion, optimization, and scaling.
Customization Options
- For tailored training based on your specific AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led live training (available online or onsite) targets intermediate to advanced developers looking to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
Upon completing this training, participants will be able to:
- Comprehend the Biren GPU architecture and memory hierarchy.
- Configure the development environment and utilize Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Implement performance tuning and debugging techniques.
Course Format
- Interactive lectures and discussions.
- Hands-on experience with the Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- For customized training tailored to your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized artificial intelligence chips designed to optimize both inference and training processes for edge computing and data center environments.
This instructor-led live training, available either online or on-site, is designed for intermediate-level developers aiming to build and deploy AI models utilizing the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completion of this training, participants will be equipped to:
- Establish and configure development environments for both BANGPy and Neuware.
- Create and optimize models based on Python and C++ specifically for Cambricon MLUs.
- Deploy models to edge devices and data centers operating on the Neuware runtime.
- Integrate machine learning workflows with acceleration features specific to MLUs.
Course Format
- Engaging lectures combined with interactive discussions.
- Practical, hands-on experience with BANGPy and Neuware for development and deployment.
- Guided exercises emphasizing optimization, integration, and testing.
Customization Options
- To arrange customized training tailored to your specific Cambricon device model or use case, please contact us.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training, available online or onsite, targets beginner-level AI developers seeking to comprehend how CANN integrates into the model lifecycle—from training through to deployment—and how it interoperates with popular frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completion of this training, participants will be capable of:
- Grasping the purpose and architectural design of the CANN toolkit.
- Establishing a development environment utilizing CANN and MindSpore.
- Converting and deploying a simple AI model to Ascend hardware.
- Acquiring foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Interactive lectures and discussions.
- Practical hands-on labs focused on simple model deployment.
- A step-by-step walkthrough of the CANN toolchain and its integration points.
Customization Options
- To request customized training for this course, please reach out to us to arrange details.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit empowers developers to execute high-performance AI inference on edge hardware, including the Ascend 310. This toolkit offers critical capabilities for compiling, optimizing, and deploying models in environments where computational power and memory are limited.
This instructor-led live training, available either online or on-site, is designed for intermediate AI developers and integrators seeking to deploy and optimize models on Ascend edge devices using the CANN ecosystem.
Upon completion of this course, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN utilities.
- Construct efficient inference pipelines utilizing MindSpore Lite and AscendCL.
- Enhance model performance tailored for constrained compute and memory resources.
- Deploy and oversee AI applications in practical edge scenarios.
Course Format
- Engaging lectures combined with live demonstrations.
- Practical laboratory sessions focusing on edge-specific models and use cases.
- Real-world deployment examples executed on virtual or physical edge hardware.
Customization Options
- To request a tailored training session for this course, please reach out to us for arrangements.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack—spanning from the low-level CANN SDK to the high-level MindSpore framework—provides a tightly integrated environment for AI development and deployment, specifically optimized for Ascend hardware.
This instructor-led live training (available online or onsite) targets technical professionals at the beginner to intermediate level who aim to grasp how CANN and MindSpore components collaborate to facilitate AI lifecycle management and infrastructure planning.
Upon completing this training, participants will be able to:
- Comprehend the layered architecture of Huawei’s AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolchain against industry alternatives.
- Position Huawei's AI stack effectively within enterprise or cloud/on-premise environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs exploring the model workflow from MindSpore to CANN.
Customization Options
- To arrange customized training for this course, please contact us.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, empowering developers to fine-tune and enhance the performance of neural networks deployed on Ascend AI processors.
This instructor-led live training, available online or onsite, targets advanced AI developers and system engineers seeking to maximize inference performance by leveraging CANN’s advanced tools, such as the Graph Engine, TIK, and custom operator development capabilities.
Upon completion of this training, participants will be capable of:
- Gaining insight into CANN's runtime architecture and performance lifecycle.
- Utilizing profiling tools and the Graph Engine to analyze and optimize performance.
- Developing and optimizing custom operators using TIK and TVM.
- Addressing memory bottlenecks and increasing model throughput.
Course Format
- Interactive lectures and discussions.
- Practical labs involving real-time profiling and operator tuning.
- Optimization exercises based on real-world edge-case deployment scenarios.
Customization Options
- For customized training arrangements, please contact us directly.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust tools for deploying and optimizing real-time AI solutions in computer vision and natural language processing, particularly on Huawei Ascend hardware.
This instructor-led live training, available either online or on-site, is designed for intermediate-level AI professionals looking to construct, deploy, and refine vision and language models using the CANN SDK for production environments.
Upon completion of this course, participants will be able to:
- Deploy and optimize computer vision (CV) and NLP models utilizing CANN and AscendCL.
- Leverage CANN utilities to convert models and embed them into live processing pipelines.
- Enhance inference performance for applications such as object detection, classification, and sentiment analysis.
- Create real-time CV/NLP pipelines suitable for both edge and cloud deployment scenarios.
Course Format
- Engaging lectures combined with practical demonstrations.
- Practical hands-on labs focused on model deployment and performance profiling.
- Real-time pipeline design using authentic CV and NLP use cases.
Customization Options
- For personalized training needs regarding this course, please reach out to us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate the advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (available online or onsite) is designed for advanced-level system developers who want to build, deploy, and fine-tune custom operators for AI models using CANN’s TIK programming model and its integration with the TVM compiler.
Upon completion of this training, participants will be capable of:
- Writing and testing custom AI operators utilizing the TIK DSL for Ascend processors.
- Integrating custom operations into the CANN runtime and execution graph.
- Leveraging TVM for operator scheduling, auto-tuning, and benchmarking.
- Debugging and optimizing instruction-level performance for custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request customized training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Domestic GPU architectures in China, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and High-Performance Computing (HPC) sectors.
This guided, live training session (available online or on-site) is designed for advanced GPU developers and infrastructure experts looking to migrate and optimize their existing CUDA applications for Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Assess the compatibility of current CUDA workloads with Chinese chip alternatives.
- Transfer CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Evaluate performance metrics and identify optimization opportunities across different platforms.
- Resolve practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Practical labs involving code translation and performance benchmarking.
- Instructor-led exercises focused on multi-GPU adaptation strategies.
Customization Options
- For tailored training based on your specific platform or CUDA project, please contact us to arrange a customized session.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon stand out as premier AI hardware platforms in China, each providing distinct acceleration and profiling capabilities tailored for large-scale AI production workloads.
This instructor-led live training, available online or onsite, targets advanced AI infrastructure and performance engineers seeking to optimize model inference and training processes across various Chinese AI chip architectures.
Upon completing this training, participants will be equipped to:
- Conduct benchmarks on Ascend, Biren, and Cambricon platforms.
- Diagnose system bottlenecks and identify inefficiencies in memory and compute resources.
- Implement optimizations at the graph, kernel, and operator levels.
- Optimize deployment pipelines to increase throughput and reduce latency.
Course Format
- Interactive lectures and group discussions.
- Practical application of profiling and optimization tools specific to each platform.
- Guided exercises centered on real-world tuning scenarios.
Customization Options
- To arrange a customized training session tailored to your specific performance environment or model type, please reach out to us directly.