Course Outline

Overview of CANN Optimization Capabilities

  • How inference performance is managed within CANN.
  • Optimization objectives for edge and embedded AI systems.
  • Understanding AI Core utilization and memory allocation.
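The utilization topic above boils down to simple arithmetic: sustained throughput divided by theoretical peak. A minimal sketch, with purely illustrative numbers (not real Ascend AI Core specifications):

```python
# Rough utilization estimate: achieved throughput vs. theoretical peak.
# The TFLOPS figures below are illustrative placeholders, not real specs.

def ai_core_utilization(achieved_tflops: float, peak_tflops: float) -> float:
    """Return utilization as a fraction of theoretical peak compute."""
    return achieved_tflops / peak_tflops

# Example: a kernel sustaining 4.8 TFLOPS on a core with an 8 TFLOPS peak.
util = ai_core_utilization(4.8, 8.0)
print(f"AI Core utilization: {util:.0%}")  # → AI Core utilization: 60%
```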

Leveraging the Graph Engine for Analysis

  • Introduction to the Graph Engine and its execution pipeline.
  • Visualizing operator graphs and runtime metrics.
  • Modifying computational graphs to improve runtime performance.
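Graph-level rewrites of the kind covered here can be illustrated with a toy fusion pass. This is not the Graph Engine API; it is a conceptual sketch of fusing adjacent Conv → ReLU nodes in a linear operator list:

```python
# Illustrative only: a toy pass that fuses adjacent Conv2D -> ReLU pairs,
# mimicking the kind of rewrite a graph engine performs to cut kernel launches.

def fuse_conv_relu(ops: list) -> list:
    fused, i = [], 0
    while i < len(ops):
        if ops[i] == "Conv2D" and i + 1 < len(ops) and ops[i + 1] == "ReLU":
            fused.append("Conv2D+ReLU")  # one fused kernel instead of two
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

graph = ["Conv2D", "ReLU", "MaxPool", "Conv2D", "ReLU"]
print(fuse_conv_relu(graph))  # → ['Conv2D+ReLU', 'MaxPool', 'Conv2D+ReLU']
```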

Profiling Tools and Performance Metrics

  • Using the CANN Profiling Tool (profiler) for workload analysis.
  • Analyzing kernel execution time and identifying bottlenecks.
  • Memory access profiling and tiling strategies.
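Bottleneck hunting often starts with post-processing the profiler's export. The CSV column names below (`op_name`, `duration_us`) are assumptions for illustration; consult the actual export format of your CANN profiler version:

```python
import csv
import io

# Sketch: find the slowest kernel in a (hypothetical) profiler CSV export.
sample = """op_name,duration_us
Conv2D_1,820.4
ReLU_1,35.2
MatMul_1,1240.9
"""

rows = list(csv.DictReader(io.StringIO(sample)))
slowest = max(rows, key=lambda r: float(r["duration_us"]))
print(f"Bottleneck: {slowest['op_name']} ({slowest['duration_us']} us)")
# → Bottleneck: MatMul_1 (1240.9 us)
```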

Custom Operator Development with TIK

  • Overview of TIK and the operator programming model.
  • Implementing a custom operator using the TIK DSL.
  • Testing and benchmarking operator performance.
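The benchmarking step follows the usual micro-benchmark pattern: time many repetitions and report the mean. Real TIK kernels execute on Ascend hardware; here a pure-Python stand-in (elementwise add) illustrates the measurement loop only:

```python
import timeit

# Generic micro-benchmark harness for comparing operator implementations.
def add_naive(a, b):
    return [x + y for x, y in zip(a, b)]

a = list(range(1024))
b = list(range(1024))

runs = 200
total = timeit.timeit(lambda: add_naive(a, b), number=runs)
print(f"mean latency: {total / runs * 1e6:.1f} us")
```

Swapping in a second implementation and comparing the two means is the simplest way to validate that an optimization actually pays off.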

Advanced Operator Optimization with TVM

  • Introduction to TVM integration with CANN.
  • Auto-tuning strategies for computational graphs.
  • Criteria for choosing between TVM and TIK for a given operator.
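Auto-tuning, at its core, searches a schedule space and keeps the lowest-cost configuration. TVM's auto-schedulers do this with real on-device measurements; the sketch below substitutes a synthetic cost model purely to show the search loop:

```python
import itertools

def synthetic_cost(tile_m: int, tile_n: int) -> float:
    # Toy cost model: reject tiles that overflow a 4096-element buffer,
    # and prefer large, squarish tiles (better data reuse). Illustrative only.
    if tile_m * tile_n > 4096:
        return float("inf")
    return 1.0 / (tile_m * tile_n) + abs(tile_m - tile_n) * 1e-4

candidates = [16, 32, 64, 128]
best = min(itertools.product(candidates, candidates),
           key=lambda t: synthetic_cost(*t))
print(f"best tile: {best}")  # → best tile: (64, 64)
```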

Memory Optimization Techniques

  • Managing memory layout and buffer placement.
  • Techniques to reduce on-chip memory consumption.
  • Best practices for asynchronous execution and resource reuse.
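A recurring back-of-envelope check in this area: does a tile, including its double-buffering copy, fit the on-chip buffer? The buffer size and dtype below are illustrative assumptions, not real Ascend limits:

```python
UB_BYTES = 256 * 1024          # assumed on-chip unified buffer size
DTYPE_BYTES = 2                # float16

def tile_fits(tile_elems: int, double_buffer: bool = True) -> bool:
    # Ping-pong (double) buffering doubles the on-chip footprint but lets
    # data movement overlap with compute.
    factor = 2 if double_buffer else 1
    return tile_elems * DTYPE_BYTES * factor <= UB_BYTES

print(tile_fits(64 * 1024))   # 64K elems * 2 B * 2 = 256 KiB → True
print(tile_fits(128 * 1024))  # 512 KiB → False
```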

Real-World Deployment and Case Studies

  • Case study: Performance tuning for a smart city camera pipeline.
  • Case study: Optimizing the inference stack for autonomous vehicles.
  • Guidelines for iterative profiling and continuous improvement.

Summary and Next Steps

Requirements

  • A solid grasp of deep learning model architectures and training workflows.
  • Practical experience deploying models using CANN, TensorFlow, or PyTorch.
  • Familiarity with Linux CLI, shell scripting, and Python programming.

Audience

  • AI performance engineers.
  • Inference optimization specialists.
  • Developers working with edge AI or real-time systems.

Duration: 14 Hours
