Description
Training Breakdown
What You’ll Learn
What You’ll Need
What To Expect
Who Should Attend
Trainers Profile
Training Title:
Build, Break & Defend AI Systems
Training Schedule:
Start Date: 24 September 2026
End Date: 25 September 2026
Description
In an era where AI systems are evolving from passive tools into autonomous agents capable of executing complex workflows, interacting with external tools, and making independent decisions, the security landscape is becoming significantly more complex. This comprehensive 2-day training focuses on securing modern AI systems, with a deep emphasis on Agentic AI and Model Context Protocol (MCP) environments.
Participants will engage in a hands-on, lab-driven learning experience, working with vulnerable setups, guided challenges, and real-world attack simulations. The training bridges foundational AI concepts with advanced adversarial techniques, enabling participants to both build and break AI systems while learning how to defend them effectively.
By the end of this training, participants will gain a practical and actionable understanding of how agentic AI and MCP systems function, the unique security risks they introduce, and how to identify, exploit, and mitigate these risks in real-world environments.
Training Breakdown:
Training Outline
Part 1: Understanding Agentic AI Systems
This section focuses on building a strong foundation in agentic AI, including how modern AI agents operate, reason, and interact with tools.
Foundations of Agentic AI:
- • Understanding agentic AI systems, reasoning loops, and tool-based execution
- • Introduction to orchestration frameworks like LangChain and LangGraph
- • Hands-on building of AI agents using open-source tools and models
- • Understanding how agents are designed, deployed, and interact with environments
Part 2: Agentic AI Security & Vulnerabilities
This section explores security risks in agentic AI systems, aligned with modern security frameworks.
- • Exploring vulnerabilities using frameworks like OWASP Agentic AI Top 10
- • Real-world examples of agentic AI security failures
- • Hands-on labs covering:
- o Authorization hijacking
- o Context manipulation
- o Tool misuse
Part 3: Advanced Exploitation Techniques
Participants will dive deeper into real-world attack simulations:
- • Multi-step prompt injection attacks
- • Retrieval poisoning and memory manipulation
- • Cross-tool exploitation scenarios
- • Custom-built labs simulating real-world adversarial behavior
Part 4: Defensive Strategies for Agentic AI
This section focuses on securing agent-based systems:
- • Guardrails and policy enforcement
- • Input/output validation techniques
- • Context isolation strategies
- • Restricting unsafe tool execution
- • Preventing:
- o Prompt injection
- o Data leakage
- o Unauthorized tool access
Part 5: Capstone Lab (Day 1)
- • Hands-on assessment covering agentic AI concepts
- • Build, exploit, and secure an agent-based system
Part 6: Understanding MCP (Model Context Protocol)
This section introduces the MCP ecosystem and architecture:
- • Understanding MCP components: host, client, and server
- • Hands-on building of MCP servers locally
- • Configuring tools and interacting with MCP systems
- • Using open-source models (e.g., NVIDIA ecosystem tools)
Part 7: MCP Security & Vulnerabilities
Participants will explore security risks specific to MCP systems:
- • Real-world MCP attack scenarios
- • Hands-on labs covering:
- o Tool poisoning
- o Prompt injection
- o Rug pull attacks
Part 8: Advanced MCP Exploitation
Advanced attack techniques include:
- • SSRF and command injection via MCP tools
- • Tool shadowing and signature cloaking
- • Parameter exploitation
- • Custom labs simulating real-world MCP attacks
Part 9: MCP Defense & Security Testing
This section focuses on defensive architecture and testing:
- • Secure MCP design patterns
- • Minimizing excessive agent autonomy
- • Tool validation and permission controls
- • Input/output validation techniques
- • Introduction to MCP security scanning tools
Part 10: Capstone Lab (Day 2)
- • End-to-end assessment of MCP systems
- • Identify, exploit, and secure vulnerabilities
Key Takeaways
- • Practical understanding of how Agentic AI and MCP systems work
- • Hands-on experience in simulating real-world AI attacks
- • Ability to implement defensive strategies in real-world use cases
Prerequisites & System Requirements
Prerequisites
- • Basic understanding of programming and logic
- • Fundamental knowledge of AI/LLMs
- • Familiarity with Python (recommended but not mandatory)
- • Basic knowledge of Linux and development environments
System Requirements:
- • Minimum 8GB RAM (16GB recommended)
- • 20GB free disk space
- • VS Code + Python installed locally
- • Linux preferred (Windows supported)
- • NVIDIA API key setup (guidance provided during training)
- • Ability to install and run scripts (avoid restricted work laptops)
What to Expect
- • Hands-on experience building agentic AI and MCP systems
- • Practical attack simulations using real-world scenarios
- • Deep understanding of modern AI vulnerabilities
- • Actionable strategies to secure AI systems
- • Interactive labs, challenges, and guided exercises
- • Post-training resources and lab materials
What Not to Expect
- • Becoming an AI security expert in 2 days
- • Deep mathematical theory of AI models
- • Purely theoretical or lecture-based sessions
Who Should Attend?
- • Cybersecurity professionals working on AI security
- • Developers building AI/LLM-powered applications
- • AI engineers exploring secure system design
- • Application security professionals
- • Students and enthusiasts with interest in AI and security
Trainers Profile:
Shashwath Aiyappa
Security Engineer 2 at Accorian
Bio
Pentesting since 2024
Applying AI since 2025
Akif Asif
Bio
An engineer, who likes to break into things and show you how to shut the door on the way back.
Meenakshi Ganesh
Security Engineer II at Accorian.
Bio
Segued into the Applied AI team from Pentesting, where I (try to) make, break and secure AI driven solutions.
Nagarjun
Automating Security since 2022.
Bio
Building (and breaking) AI agents to test their limits.




