Corso di formazione Building Secure and Responsible LLM Applications
LLM application security is the discipline of designing, building, and maintaining safe, trustworthy, and policy-compliant systems using large language models.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Struttura del corso
Overview of LLM Architecture and Attack Surface
- How LLMs are built, deployed, and accessed via APIs
- Key components in LLM app stacks (e.g., prompts, agents, memory, APIs)
- Where and how security issues arise in real-world use
Prompt Injection and Jailbreak Attacks
- What is prompt injection and why it’s dangerous
- Direct and indirect prompt injection scenarios
- Jailbreaking techniques to bypass safety filters
- Detection and mitigation strategies
Data Leakage and Privacy Risks
- Accidental data exposure through responses
- PII leaks and model memory misuse
- Designing privacy-conscious prompts and retrieval-augmented generation (RAG)
LLM Output Filtering and Guarding
- Using Guardrails AI for content filtering and validation
- Defining output schemas and constraints
- Monitoring and logging unsafe outputs
Human-in-the-Loop and Workflow Approaches
- Where and when to introduce human oversight
- Approval queues, scoring thresholds, fallback handling
- Trust calibration and role of explainability
Secure LLM App Design Patterns
- Least privilege and sandboxing for API calls and agents
- Rate limiting, throttling, and abuse detection
- Robust chaining with LangChain and prompt isolation
Compliance, Logging, and Governance
- Ensuring auditability of LLM outputs
- Maintaining traceability and prompt/version control
- Aligning with internal security policies and regulatory needs
Summary and Next Steps
Requisiti
- An understanding of large language models and prompt-based interfaces
- Experience building LLM applications using Python
- Familiarity with API integrations and cloud-based deployments
Audience
- AI developers
- Application and solution architects
- Technical product managers working with LLM tools
I corsi di formazione interaziendali richiedono più di 5 partecipanti.
Corso di formazione Building Secure and Responsible LLM Applications - Booking
Corso di formazione Building Secure and Responsible LLM Applications - Enquiry
Building Secure and Responsible LLM Applications - Richiesta di consulenza
Richiesta di consulenza
Corsi in Arrivo
Corsi relativi
Advanced LangGraph: Optimization, Debugging, and Monitoring Complex Graphs
35 oreLangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI platform engineers, DevOps for AI, and ML architects who wish to optimize, debug, monitor, and operate production-grade LangGraph systems.
By the end of this training, participants will be able to:
- Design and optimize complex LangGraph topologies for speed, cost, and scalability.
- Engineer reliability with retries, timeouts, idempotency, and checkpoint-based recovery.
- Debug and trace graph executions, inspect state, and systematically reproduce production issues.
- Instrument graphs with logs, metrics, and traces, deploy to production, and monitor SLAs and costs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Advanced Ollama Model Debugging & Evaluation
35 oreAdvanced Ollama Model Debugging & Evaluation is an in-depth course focused on diagnosing, testing, and measuring model behavior when running local or private Ollama deployments.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI engineers, ML Ops professionals, and QA practitioners who wish to ensure reliability, fidelity, and operational readiness of Ollama-based models in production.
By the end of this training, participants will be able to:
- Perform systematic debugging of Ollama-hosted models and reproduce failure modes reliably.
- Design and execute robust evaluation pipelines with quantitative and qualitative metrics.
- Implement observability (logs, traces, metrics) to monitor model health and drift.
- Automate testing, validation, and regression checks integrated into CI/CD pipelines.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs and debugging exercises using Ollama deployments.
- Case studies, group troubleshooting sessions, and automation workshops.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Creazione di flussi di lavoro AI privati con Ollama
14 oreQuesto training dal vivo guidato da un istruttore (Italia online o sul posto) è rivolto a professionisti di livello avanzato che desiderano implementare flussi di lavoro AI sicuri ed efficienti utilizzando Ollama.
Al termine del training, i partecipanti saranno in grado di:
- Distribuire e configurare Ollama per il processing privato AI.
- Integrare modelli AI in flussi di lavoro aziendali sicuri.
- Ottimizzare le prestazioni AI mantenendo la privacy dei dati.
- Automatizzare i processi di business con capacità AI on-premise.
- Assicurare il conformità con le politiche aziendali di sicurezza e governance.
Claude AI per Sviluppatori: Creazione di Applicazioni Alimentate da IA
14 oreQuesta formazione dal vivo con istruttore in Italia (online o in loco) è rivolta a sviluppatori software di livello intermedio e ingegneri di intelligenza artificiale che desiderano integrare Claude AI nelle loro applicazioni, creare chatbot basati sull'intelligenza artificiale e migliorare la funzionalità del software con l'automazione basata sull'intelligenza artificiale.
Al termine di questa formazione, i partecipanti saranno in grado di:
- Utilizza l'API AI di Claude per integrare l'intelligenza artificiale nelle applicazioni.
- Sviluppare chatbot e assistenti virtuali basati sull'intelligenza artificiale.
- Migliora le applicazioni con l'automazione basata sull'intelligenza artificiale e l'elaborazione del linguaggio naturale.
- Ottimizzare e mettere a punto i modelli di intelligenza artificiale di Claude per diversi casi d'uso.
Claude AI per l'Autorizzazione dei Flussi di Lavoro e la Produttività
14 oreQuesta formazione dal vivo con istruttore in Italia (online o in sede) è rivolta ai professionisti principianti che desiderano integrare Claude AI nei loro flussi di lavoro quotidiani per migliorare l'efficienza e l'automazione.
Al termine di questa formazione, i partecipanti saranno in grado di:
- Utilizza Claude AI per automatizzare le attività ripetitive e semplificare i flussi di lavoro.
- Migliora la produttività personale e di gruppo utilizzando l'automazione basata sull'intelligenza artificiale.
- Integrare Claude AI con gli strumenti e le piattaforme aziendali esistenti.
- Ottimizza il processo decisionale e la gestione delle attività basati sull'intelligenza artificiale.
Deploying and Optimizing LLMs with Ollama Deploying and Ottimizzazione di Modelli Linguistici Large (LLM) con Ollama
14 oreQuesta formazione dal vivo condotta da un istruttore in Italia (online o in loco) è rivolta ai professionisti di livello intermedio che desiderano distribuire, ottimizzare e integrare LLM utilizzando Ollama.
Al termine di questa formazione, i partecipanti saranno in grado di:
- Impostare e distribuire LLM utilizzando Ollama.
- Ottimizza i modelli di intelligenza artificiale per migliorare prestazioni ed efficienza.
- Sfruttare l'accelerazione GPU per migliorare la velocità di inferenza.
- Integrare Ollama nei flussi di lavoro e nelle applicazioni.
- Monitorare e mantenere le prestazioni del modello di intelligenza artificiale nel tempo.
Fine-Tuning e personalizzazione dei modelli AI su Ollama
14 oreQuesta formazione live guidata da un istruttore in Italia (online o sul posto) è rivolta a professionisti di alto livello che desiderano affinare e personalizzare modelli AI su Ollama per una performance migliorata ed applicazioni specifiche al dominio.
Al termine di questa formazione, i partecipanti saranno in grado di:
- Configurare un ambiente efficiente per l'affinamento dei modelli AI su Ollama.
- Preparare insiemi di dati per l'affinamento supervisionato e l'apprendimento per rinforzo.
- Ottimizzare i modelli AI per prestazioni, precisione ed efficienza.
- Distribuire modelli personalizzati in ambienti di produzione.
- Valutare le miglioramenti dei modelli e garantire la robustezza.
Introduzione a Claude AI: Intelligenza Artificiale Con conversationale e Applicazioni per l'Impresa
14 oreQuesta formazione dal vivo con istruttore in Italia (online o in loco) è rivolta a professionisti aziendali alle prime armi, team di assistenza clienti e appassionati di tecnologia che desiderano comprendere i fondamenti di Claude AI e sfruttarli per applicazioni aziendali.
Al termine di questa formazione, i partecipanti saranno in grado di:
- Scopri le capacità e i casi d'uso di Claude AI.
- Configura e interagisci con Claude AI in modo efficace.
- Automatizza i flussi di lavoro aziendali con l'intelligenza artificiale conversazionale.
- Migliora il coinvolgimento e il supporto dei clienti utilizzando soluzioni basate sull'intelligenza artificiale.
LangGraph Applications in Finance
35 oreLangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based finance solutions with proper governance, observability, and compliance.
By the end of this training, participants will be able to:
- Design finance-specific LangGraph workflows aligned to regulatory and audit requirements.
- Integrate financial data standards and ontologies into graph state and tooling.
- Implement reliability, safety, and human-in-the-loop controls for critical processes.
- Deploy, monitor, and optimize LangGraph systems for performance, cost, and SLAs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph Foundations: Graph-Based LLM Prompting and Chaining
14 oreLangGraph is a framework for building graph-structured LLM applications that support planning, branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at beginner-level developers, prompt engineers, and data practitioners who wish to design and build reliable, multi-step LLM workflows using LangGraph.
By the end of this training, participants will be able to:
- Explain core LangGraph concepts (nodes, edges, state) and when to use them.
- Build prompt chains that branch, call tools, and maintain memory.
- Integrate retrieval and external APIs into graph workflows.
- Test, debug, and evaluate LangGraph apps for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based exercises on design, testing, and evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph in Healthcare: Workflow Orchestration for Regulated Environments
35 oreLangGraph enables stateful, multi-actor workflows powered by LLMs with precise control over execution paths and state persistence. In healthcare, these capabilities are crucial for compliance, interoperability, and building decision-support systems that align with medical workflows.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and manage LangGraph-based healthcare solutions while addressing regulatory, ethical, and operational challenges.
By the end of this training, participants will be able to:
- Design healthcare-specific LangGraph workflows with compliance and auditability in mind.
- Integrate LangGraph applications with medical ontologies and standards (FHIR, SNOMED CT, ICD).
- Apply best practices for reliability, traceability, and explainability in sensitive environments.
- Deploy, monitor, and validate LangGraph applications in healthcare production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on exercises with real-world case studies.
- Implementation practice in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Legal Applications
35 oreLangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and precise control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based legal solutions with the necessary compliance, traceability, and governance controls.
By the end of this training, participants will be able to:
- Design legal-specific LangGraph workflows that preserve auditability and compliance.
- Integrate legal ontologies and document standards into graph state and processing.
- Implement guardrails, human-in-the-loop approvals, and traceable decision paths.
- Deploy, monitor, and maintain LangGraph services in production with observability and cost controls.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Dynamic Workflows with LangGraph and LLM Agents
14 oreLangGraph is a framework for composing graph-structured LLM workflows that support branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level engineers and product teams who wish to combine LangGraph’s graph logic with LLM agent loops to build dynamic, context-aware applications such as customer support agents, decision trees, and information retrieval systems.
By the end of this training, participants will be able to:
- Design graph-based workflows that coordinate LLM agents, tools, and memory.
- Implement conditional routing, retries, and fallbacks for robust execution.
- Integrate retrieval, APIs, and structured outputs into agent loops.
- Evaluate, monitor, and harden agent behavior for reliability and safety.
Format of the Course
- Interactive lecture and facilitated discussion.
- Guided labs and code walkthroughs in a sandbox environment.
- Scenario-based design exercises and peer reviews.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Marketing Automation
14 oreLangGraph is a graph-based orchestration framework that enables conditional, multi-step LLM and tool workflows, ideal for automating and personalizing content pipelines.
This instructor-led, live training (online or onsite) is aimed at intermediate-level marketers, content strategists, and automation developers who wish to implement dynamic, branching email campaigns and content generation pipelines using LangGraph.
By the end of this training, participants will be able to:
- Design graph-structured content and email workflows with conditional logic.
- Integrate LLMs, APIs, and data sources for automated personalization.
- Manage state, memory, and context across multi-step campaigns.
- Evaluate, monitor, and optimize workflow performance and delivery outcomes.
Format of the Course
- Interactive lectures and group discussions.
- Hands-on labs implementing email workflows and content pipelines.
- Scenario-based exercises on personalization, segmentation, and branching logic.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduzione a Ollama: esecuzione di modelli di intelligenza artificiale locali
7 oreQuesto corso in diretta guidato dall'insegnante (online o sul posto) è rivolto a professionisti di livello principiante che desiderano installare, configurare e utilizzare Ollama per eseguire modelli AI localmente sui loro computer.
Al termine del corso, i partecipanti saranno in grado di:
- Comprendere i concetti fondamentali di Ollama e le sue funzionalità.
- Configurare Ollama per l'esecuzione di modelli AI locali.
- Distribuire ed interagire con LLM tramite Ollama.
- Ottimizzare le prestazioni e l'uso delle risorse per i carichi di lavoro AI.
- Esplorare casi d'uso per la distribuzione locale dell'AI in diverse industrie.