Course Outline
Day 1: Foundations and Core Threats
Module 1: Introduction to OWASP GenAI Security Project (1 hour)
Learning Objectives:
- Understand the evolution from OWASP Top 10 to GenAI-specific security challenges
 - Explore the OWASP GenAI Security Project ecosystem and resources
 - Identify key differences between traditional application security and AI security
 
Topics Covered:
- Overview of OWASP GenAI Security Project mission and scope
 - Introduction to the Threat Defense COMPASS framework
 - Understanding the AI security landscape and regulatory requirements
 - AI attack surfaces vs traditional web application vulnerabilities
 
Practical Exercise: Setting up the OWASP Threat Defense COMPASS tool and performing initial threat assessment
Module 2: OWASP Top 10 for LLMs - Part 1 (2.5 hours)
Learning Objectives:
- Master the first five critical LLM vulnerabilities
 - Understand attack vectors and exploitation techniques
 - Apply practical mitigation strategies
 
Topics Covered:
LLM01: Prompt Injection
- Direct and indirect prompt injection techniques
 - Hidden instruction attacks and cross-prompt contamination
 - Practical examples: Jailbreaking chatbots and bypassing safety measures
 - Defense strategies: Input sanitization, prompt filtering, differential privacy
 
LLM02: Sensitive Information Disclosure
- Training data extraction and system prompt leakage
 - Model behavior analysis for sensitive information exposure
 - Privacy implications and regulatory compliance considerations
 - Mitigation: Output filtering, access controls, data anonymization
 
LLM03: Supply Chain Vulnerabilities
- Third-party model dependencies and plugin security
 - Compromised training datasets and model poisoning
 - Vendor risk assessment for AI components
 - Secure model deployment and verification practices
 
Practical Exercise: Hands-on lab demonstrating prompt injection attacks against vulnerable LLM applications and implementing defensive measures
Module 3: OWASP Top 10 for LLMs - Part 2 (2 hours)
Topics Covered:
LLM04: Data and Model Poisoning
- Training data manipulation techniques
 - Model behavior modification through poisoned inputs
 - Backdoor attacks and data integrity verification
 - Prevention: Data validation pipelines, provenance tracking
 
LLM05: Improper Output Handling
- Insecure processing of LLM-generated content
 - Code injection through AI-generated outputs
 - Cross-site scripting via AI responses
 - Output validation and sanitization frameworks
 
Practical Exercise: Simulating data poisoning attacks and implementing robust output validation mechanisms
Module 4: Advanced LLM Threats (1.5 hours)
Topics Covered:
LLM06: Excessive Agency
- Autonomous decision-making risks and boundary violations
 - Agent authority and permission management
 - Unintended system interactions and privilege escalation
 - Implementing guardrails and human oversight controls
 
LLM07: System Prompt Leakage
- System instruction exposure vulnerabilities
 - Credential and logic disclosure through prompts
 - Attack techniques for extracting system prompts
 - Securing system instructions and external configuration
 
Practical Exercise: Designing secure agent architectures with appropriate access controls and monitoring
Day 2: Advanced Threats and Implementation
Module 5: Emerging AI Threats (2 hours)
Learning Objectives:
- Understand cutting-edge AI security threats
 - Implement advanced detection and prevention techniques
 - Design resilient AI systems against sophisticated attacks
 
Topics Covered:
LLM08: Vector and Embedding Weaknesses
- RAG system vulnerabilities and vector database security
 - Embedding poisoning and similarity manipulation attacks
 - Adversarial examples in semantic search
 - Securing vector stores and implementing anomaly detection
 
LLM09: Misinformation and Model Reliability
- Hallucination detection and mitigation
 - Bias amplification and fairness considerations
 - Fact-checking and source verification mechanisms
 - Content validation and human oversight integration
 
LLM10: Unbounded Consumption
- Resource exhaustion and denial-of-service attacks
 - Rate limiting and resource management strategies
 - Cost optimization and budget controls
 - Performance monitoring and alerting systems
 
Practical Exercise: Building a secure RAG pipeline with vector database protection and hallucination detection
Module 6: Agentic AI Security (2 hours)
Learning Objectives:
- Understand the unique security challenges of autonomous AI agents
 - Apply the OWASP Agentic AI taxonomy to real-world systems
 - Implement security controls for multi-agent environments
 
Topics Covered:
- Introduction to Agentic AI and autonomous systems
 - OWASP Agentic AI Threat Taxonomy: Agent Design, Memory, Planning, Tool Use, Deployment
 - Multi-agent system security and coordination risks
 - Tool misuse, memory poisoning, and goal hijacking attacks
 - Securing agent communication and decision-making processes
 
Practical Exercise: Threat modeling exercise using OWASP Agentic AI taxonomy on a multi-agent customer service system
Module 7: OWASP Threat Defense COMPASS Implementation (2 hours)
Learning Objectives:
- Master the practical application of Threat Defense COMPASS
 - Integrate AI threat assessment into organizational security programs
 - Develop comprehensive AI risk management strategies
 
Topics Covered:
- Deep dive into Threat Defense COMPASS methodology
 - OODA Loop integration: Observe, Orient, Decide, Act
 - Mapping threats to MITRE ATT&CK and ATLAS frameworks
 - Building AI Threat Resilience Strategy Dashboards
 - Integration with existing security tools and processes
 
Practical Exercise: Complete threat assessment using COMPASS for a Microsoft Copilot deployment scenario
Module 8: Practical Implementation and Best Practices (2.5 hours)
Learning Objectives:
- Design secure AI architectures from the ground up
 - Implement monitoring and incident response for AI systems
 - Create governance frameworks for AI security
 
Topics Covered:
Secure AI Development Lifecycle:
- Security-by-design principles for AI applications
 - Code review practices for LLM integrations
 - Testing methodologies and vulnerability scanning
 - Deployment security and production hardening
 
Monitoring and Detection:
- AI-specific logging and monitoring requirements
 - Anomaly detection for AI systems
 - Incident response procedures for AI security events
 - Forensics and investigation techniques
 
Governance and Compliance:
- AI risk management frameworks and policies
 - Regulatory compliance considerations (GDPR, AI Act, etc.)
 - Third-party risk assessment for AI vendors
 - Security awareness training for AI development teams
 
Practical Exercise: Design a complete security architecture for an enterprise AI chatbot including monitoring, governance, and incident response procedures
Module 9: Tools and Technologies (1 hour)
Learning Objectives:
- Evaluate and implement AI security tools
 - Understand the current AI security solutions landscape
 - Build practical detection and prevention capabilities
 
Topics Covered:
- AI security tool ecosystem and vendor landscape
 - Open-source security tools: Garak, PyRIT, Giskard
 - Commercial solutions for AI security and monitoring
 - Integration patterns and deployment strategies
 - Tool selection criteria and evaluation frameworks
 
Practical Exercise: Hands-on demonstration of AI security testing tools and implementation planning
Module 10: Future Trends and Wrap-up (1 hour)
Learning Objectives:
- Understand emerging threats and future security challenges
 - Develop continuous learning and improvement strategies
 - Create action plans for organizational AI security programs
 
Topics Covered:
- Emerging threats: Deepfakes, advanced prompt injection, model inversion
 - Future OWASP GenAI project developments and roadmap
 - Building AI security communities and knowledge sharing
 - Continuous improvement and threat intelligence integration
 
Action Planning Exercise: Develop a 90-day action plan for implementing OWASP GenAI security practices in participants' organizations
Requirements
- General understanding of web application security principles
 - Basic familiarity with AI/ML concepts
 - Experience with security frameworks or risk assessment methodologies preferred
 
Audience
- Cybersecurity professionals
 - AI developers
 - System architects
 - Compliance officers
 - Security practitioners