Course Outline
1. Introduction to Deep Reinforcement Learning
- Defining Reinforcement Learning.
- Distinguishing between Supervised, Unsupervised, and Reinforcement Learning.
- DRL applications in 2025 across robotics, healthcare, finance, and logistics.
- Understanding the agent-environment interaction loop.
2. Reinforcement Learning Fundamentals
- Markov Decision Processes (MDPs).
- Core concepts: State, Action, Reward, Policy, and Value functions.
- The exploration vs. exploitation trade-off.
- Monte Carlo methods and Temporal-Difference (TD) learning.
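To make the TD-learning bullet concrete, here is a minimal sketch of the TD(0) value update on a hypothetical three-state chain MDP (the chain, rewards, and constants are illustrative, not from the course materials):

```python
# Hypothetical 3-state chain MDP for illustration: states 0 -> 1 -> 2 (terminal).
# Moving right always succeeds; only the transition into state 2 pays reward 1.
ALPHA, GAMMA = 0.1, 0.9   # step size and discount factor
V = [0.0, 0.0, 0.0]       # tabular state-value estimates

for _ in range(500):          # episodes
    s = 0
    while s != 2:             # run until the terminal state
        s_next = s + 1
        r = 1.0 if s_next == 2 else 0.0
        # TD(0) update: nudge V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])  # V(1) approaches 1.0, V(0) approaches gamma * 1.0
```

Unlike Monte Carlo methods, which wait for the full episode return, TD(0) bootstraps from the current estimate of the next state, updating after every step.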
3. Implementing Basic RL Algorithms
- Tabular methods: Dynamic Programming, Policy Evaluation, and Policy Iteration.
- Q-Learning and SARSA.
- Epsilon-greedy exploration and epsilon-decay schedules.
- Setting up RL environments with Gymnasium (the maintained successor to OpenAI Gym).
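The tabular bullets above can be sketched together in a few lines: Q-learning with a decaying epsilon-greedy policy on a hypothetical 5-cell corridor (the environment, constants, and decay schedule are illustrative):

```python
import random

# Illustrative corridor: start in cell 0, reward 1 for reaching cell 4.
# Actions: 0 = left, 1 = right.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA = 0.5, 0.95
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular Q-values

random.seed(1)
epsilon = 1.0
for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: off-policy TD target uses the max over next actions
        target = r + GAMMA * max(Q[s_next])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next
    epsilon = max(0.05, epsilon * 0.99)  # decay exploration over episodes

# The greedy policy should now move right from every non-terminal cell.
print([("left", "right")[Q[s][1] > Q[s][0]] for s in range(N_STATES - 1)])
```

Swapping the `max(Q[s_next])` target for the Q-value of the action actually taken next would turn this into on-policy SARSA, which the same section covers.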
4. Transition to Deep Reinforcement Learning
- Limitations of tabular methods.
- Utilizing neural networks for function approximation.
- Deep Q-Network (DQN) architecture and workflow.
- Experience replay and target networks.
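A minimal sketch of the experience-replay idea from the DQN bullets (the class name and capacity are illustrative, not from any particular library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s', done) transitions for DQN training."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between consecutive
        # transitions, which is one of the two tricks that stabilize DQN
        # (the other being a slowly updated target network).
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(1500):                      # overfill to exercise eviction
    buf.push(t, 0, 0.0, t + 1, False)
states, *_ = buf.sample(32)
print(len(buf), len(states))               # 1000 32
```

Prioritized Experience Replay (Section 5) replaces the uniform `random.sample` with sampling weighted by TD error.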
5. Advanced DRL Algorithms
- Double DQN, Dueling DQN, and Prioritized Experience Replay.
- Policy Gradient Methods: The REINFORCE algorithm.
- Actor-Critic architectures (A2C, A3C).
- Proximal Policy Optimization (PPO).
- Soft Actor-Critic (SAC).
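The REINFORCE bullet reduces to a few lines on the simplest possible problem. This hedged sketch runs the policy-gradient update on a hypothetical two-armed bandit (rewards 1.0 and 0.2, learning rate, and iteration count are all illustrative):

```python
import math
import random

random.seed(2)
logits = [0.0, 0.0]   # one softmax logit per arm
LR = 0.1

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for _ in range(2000):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1   # sample an action from the policy
    r = 1.0 if a == 1 else 0.2               # bandit reward
    # REINFORCE: for a softmax policy, grad of log pi(a) w.r.t. logit i
    # is one_hot(a)[i] - p[i]; scale it by the sampled return.
    for i in range(2):
        grad_log_pi = (1.0 if i == a else 0.0) - p[i]
        logits[i] += LR * r * grad_log_pi

print(softmax(logits))  # the higher-paying arm should dominate
```

Scaling the gradient by the raw return makes REINFORCE high-variance; the Actor-Critic methods in this section replace `r` with an advantage estimate from a learned critic.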
6. Working with Continuous Action Spaces
- Challenges in continuous control.
- Deep Deterministic Policy Gradient (DDPG).
- Twin Delayed DDPG (TD3).
7. Practical Tools and Frameworks
- Using Stable-Baselines3 and Ray RLlib.
- Logging and monitoring with TensorBoard.
- Hyperparameter tuning for DRL models.
8. Reward Engineering and Environment Design
- Reward shaping and penalty balancing.
- Sim-to-real transfer learning concepts.
- Creating custom environments in Gymnasium.
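A sketch of a custom environment following the Gymnasium API conventions (`reset()` returns `(obs, info)`; `step()` returns `(obs, reward, terminated, truncated, info)`). A real environment would subclass `gymnasium.Env` and declare `action_space`/`observation_space`; this standalone version is illustrative only, and the small step penalty is a toy example of the reward shaping discussed above:

```python
class CorridorEnv:
    """Agent starts at 0 and must reach `goal`; actions: 0 = left, 1 = right."""

    def __init__(self, goal=4, max_steps=50):
        self.goal = goal
        self.max_steps = max_steps

    def reset(self, seed=None):
        self.pos = 0
        self.steps = 0
        return self.pos, {}          # observation, info dict

    def step(self, action):
        self.steps += 1
        self.pos = max(0, self.pos - 1) if action == 0 else self.pos + 1
        terminated = self.pos == self.goal        # task solved
        truncated = self.steps >= self.max_steps  # time limit reached
        # Reward shaping: small per-step penalty encourages short solutions.
        reward = 1.0 if terminated else -0.01
        return self.pos, reward, terminated, truncated, {}

env = CorridorEnv()
obs, info = env.reset()
done = False
total = 0.0
while not done:
    obs, r, terminated, truncated, info = env.step(1)  # always move right
    total += r
    done = terminated or truncated
print(obs, round(total, 2))  # reaches the goal in 4 steps
```

Keeping `terminated` (task outcome) separate from `truncated` (time limit) matters for bootstrapping: value targets should not treat a timeout as a true terminal state.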
9. Partially Observable Environments and Generalization
- Handling incomplete state information (POMDPs).
- Memory-based approaches using recurrent networks (RNNs, including LSTMs).
- Enhancing agent robustness and generalization.
10. Game Theory and Multi-Agent Reinforcement Learning
- Introduction to multi-agent environments.
- Cooperation vs. competition dynamics.
- Applications in adversarial training and strategy optimization.
11. Case Studies and Real-World Applications
- Autonomous driving simulations.
- Dynamic pricing and financial trading strategies.
- Robotics and industrial automation.
12. Troubleshooting and Optimization
- Diagnosing unstable training processes.
- Managing reward sparsity and preventing overfitting.
- Scaling DRL models on GPUs and distributed systems.
13. Summary and Next Steps
- Recap of DRL architecture and key algorithms.
- Industry trends and research directions (e.g., RLHF, hybrid models).
- Additional resources and reading materials.
Requirements
- Proficiency in Python programming.
- Solid understanding of Calculus and Linear Algebra.
- Fundamental knowledge of Probability and Statistics.
- Experience developing machine learning models in Python with NumPy and TensorFlow or PyTorch.
Target Audience
- Developers interested in AI and intelligent systems.
- Data Scientists exploring reinforcement learning frameworks.
- Machine Learning Engineers working with autonomous systems.
Testimonials (3)
I really liked the end where we took the time to play around with ChatGPT. The room was not set up the best for this; instead of one large table, a couple of small ones so we could get into small groups and brainstorm would have helped.
Nola - Laramie County Community College
Course - Artificial Intelligence (AI) Overview
Working from first principles in a focused way, and moving to applying case studies within the same day
Maggie Webb - Department of Jobs, Regions, and Precincts
Course - Artificial Neural Networks, Machine Learning, Deep Thinking
It felt like we were going through directly relevant information at a good pace (i.e. no filler material)