Course Outline

Performance Concepts and Metrics

  • Latency, throughput, power consumption, and resource utilization.
  • Distinguishing between system-level and model-level bottlenecks.
  • Profiling strategies for inference versus training tasks.
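
The latency and throughput metrics above can be sketched with a small, framework-agnostic harness; `profile_inference`, its warm-up count, and the toy workload are illustrative names, not part of any vendor SDK:

```python
import time
import statistics

def profile_inference(fn, batch, warmup=3, iters=20):
    """Measure per-call latency and derived throughput for a callable.

    `fn` and `batch` stand in for any inference function and its input;
    the harness itself works the same for training or inference steps.
    """
    for _ in range(warmup):
        fn(batch)  # warm-up runs exclude one-time setup costs
    latencies = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - start)
    p50 = statistics.median(latencies)
    return {
        "p50_latency_s": p50,
        "throughput_items_per_s": len(batch) / p50,
    }

stats = profile_inference(lambda xs: [x * 2 for x in xs], list(range(64)))
```

Reporting the median rather than the mean keeps a single slow outlier (e.g. a background GC pause) from skewing the result.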

Profiling on Huawei Ascend

  • Using the CANN Profiler and MindInsight.
  • Analyzing kernel and operator performance.
  • Understanding offload patterns and memory mapping.
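
On Ascend, per-kernel timings come from the CANN Profiler itself; the sketch below only illustrates the *idea* of per-operator attribution with a toy, vendor-neutral timer (`OperatorTimer` and the operator names are made up for illustration):

```python
import time
from collections import defaultdict

class OperatorTimer:
    """Accumulate wall-clock time per named operator.

    A toy stand-in for the per-kernel breakdown a hardware profiler
    reports; on Ascend the real data comes from the CANN Profiler.
    """
    def __init__(self):
        self.totals = defaultdict(float)
        self.calls = defaultdict(int)

    def run(self, name, fn, *args):
        start = time.perf_counter()
        out = fn(*args)
        self.totals[name] += time.perf_counter() - start
        self.calls[name] += 1
        return out

    def hotspots(self):
        # Operators sorted by total time, most expensive first.
        return sorted(self.totals.items(), key=lambda kv: -kv[1])

t = OperatorTimer()
x = t.run("matmul", lambda: sum(i * i for i in range(10_000)))
y = t.run("relu", max, 0, -5)
```

The `hotspots()` view mirrors the usual workflow: sort operators by total time, then optimize the top entries first.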

Profiling on Biren GPU

  • Performance monitoring features within the Biren SDK.
  • Optimizing kernel fusion, memory alignment, and execution queues.
  • Power- and temperature-aware profiling techniques.
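
Kernel fusion is something the vendor compiler does on-device; the principle can still be shown in plain Python. The unfused version materializes an intermediate buffer between two elementwise "kernels", while the fused version computes the same result in one pass:

```python
def unfused(a, b, c):
    """Two separate elementwise 'kernels' with an intermediate buffer."""
    tmp = [ai * bi for ai, bi in zip(a, b)]      # kernel 1 writes len(a) temps
    return [ti + ci for ti, ci in zip(tmp, c)]   # kernel 2 reads them back

def fused(a, b, c):
    """One fused 'kernel': same math, no intermediate buffer traffic."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]
```

On real accelerators the win comes from halving memory traffic and kernel-launch overhead, not from the arithmetic itself; this sketch only demonstrates the equivalence of the two formulations.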

Profiling on Cambricon MLU

  • Performance tools including BANGPy and Neuware.
  • Gaining kernel-level visibility and interpreting logs.
  • Integrating the MLU profiler with deployment frameworks.
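
Interpreting profiler logs usually reduces to aggregating per-operator durations. The log format below is hypothetical (real Neuware/BANGPy exports differ); the sketch shows the aggregation step only:

```python
import csv
import io
from collections import defaultdict

# Hypothetical profiler export; real tool output formats differ.
LOG = """op_name,duration_us
conv1,420
relu1,35
conv1,415
pool1,60
"""

def summarize(log_text):
    """Total duration per operator, most expensive first."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(log_text)):
        totals[row["op_name"]] += float(row["duration_us"])
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

summary = summarize(LOG)
```

Note that repeated invocations of the same operator (here `conv1`) are summed, which is what surfaces the true hotspot.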

Graph and Model-Level Optimization

  • Strategies for graph pruning and quantization.
  • Operator fusion and computational graph restructuring.
  • Standardizing input sizes and tuning batch parameters.
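
The quantization bullet can be made concrete with a minimal post-training affine (scale/zero-point) int8 sketch; the function names and the min/max calibration choice are illustrative, and production toolchains use more robust range estimation:

```python
def quantize_int8(values):
    """Affine-quantize a list of floats to int8 using observed min/max."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0           # guard against constant input
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

q, s, zp = quantize_int8([-1.0, 0.0, 0.5, 1.0])
```

The round trip loses at most about one quantization step per value, which is the accuracy/footprint trade-off quantization makes.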

Memory and Kernel Optimization

  • Optimizing memory layout and data reuse.
  • Managing buffers efficiently across different chipsets.
  • Platform-specific kernel tuning techniques.
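
Memory-layout effects are hardware-level, but the access-order difference behind them can be shown abstractly. Both functions below compute the same sum over a row-major flat buffer; the column-first version strides by `cols` each step, which on real chips defeats caches and DMA burst transfers (in pure Python only the traversal order differs):

```python
def row_major_index(i, j, cols):
    """Flat offset of element (i, j) in a row-major layout."""
    return i * cols + j

def sum_rows_first(flat, rows, cols):
    # Contiguous traversal: consecutive iterations touch adjacent memory.
    return sum(flat[row_major_index(i, j, cols)]
               for i in range(rows) for j in range(cols))

def sum_cols_first(flat, rows, cols):
    # Strided traversal: each step jumps `cols` elements ahead.
    return sum(flat[row_major_index(i, j, cols)]
               for j in range(cols) for i in range(rows))
```

Matching loop order to layout (or transposing data once up front) is the usual fix, and the same reasoning drives tiling and buffer-reuse choices on each chipset.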

Cross-Platform Best Practices

  • Achieving performance portability through abstraction strategies.
  • Developing shared tuning pipelines for multi-chip environments.
  • Case study: Tuning an object detection model across Ascend, Biren, and MLU platforms.
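
One common abstraction strategy is a thin backend interface that model code targets, so vendor kernels can be swapped in without touching model logic. The interface and class names below are illustrative, not a real SDK:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Minimal portability layer; names are illustrative, not a real SDK."""
    @abstractmethod
    def matmul(self, a, b): ...

class ReferenceBackend(Backend):
    """Pure-Python fallback; an Ascend/Biren/MLU backend would override
    matmul with vendor kernels behind the same interface."""
    def matmul(self, a, b):
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)]
                for row in a]

def run_model(backend, a, b):
    # Model code depends only on the abstract interface, so a shared
    # tuning pipeline can benchmark backends interchangeably.
    return backend.matmul(a, b)

out = run_model(ReferenceBackend(), [[1, 2]], [[3], [4]])
```

The same pattern lets a tuning pipeline run identical benchmarks against each backend and compare results apples-to-apples.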

Summary and Next Steps

Requirements

  • Experience with AI model training or deployment pipelines.
  • Understanding of GPU/MLU computing principles and model optimization techniques.
  • Basic proficiency with performance profiling tools and metrics.

Audience

  • Performance engineers.
  • Machine learning infrastructure teams.
  • AI system architects.

Duration: 21 Hours
