
Abhinav Singh - AI SecureOps: Attacking & Defending AI Applications and Services - DCTLV2025
Name of Training: AI SecureOps: Attacking & Defending AI Applications and Services
Trainer(s): Abhinav Singh
Dates: August 11-12, 2025
Time: 8:00 am to 5:00 pm PT
Venue: Las Vegas Convention Center (West Hall)
Cost: $2,000
Course Description:
Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled training in GenAI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By 2026, Gartner, Inc. predicts that over 80% of enterprises will engage with GenAI models, up from less than 5% in 2023. This rapid adoption presents a new challenge for security professionals. To bring you up to speed from intermediate to advanced level, this training provides essential GenAI and LLM security skills through an immersive CTF-styled framework. Delve into sophisticated techniques for mitigating LLM threats, engineering robust defense mechanisms, and operationalizing LLM agents, preparing them to address the complex security challenges posed by the rapid expansion of GenAI technologies. You will be provided with access to a live playground with custom built AI applications replicating real-world attack scenarios covering use-cases defined under the OWASP LLM top 10 framework and mapped with stages defined in MITRE ATLAS. This dense training will navigate you through areas like the red and blue team strategies, create robust LLM defenses, incident response in LLM attacks, implement a Responsible AI(RAI) program and enforce ethical AI standards across enterprise services, with the focus on improving the entire GenAI supply chain.
This training will also cover the completely new segment of Responsible AI(RAI), ethics and trustworthiness in GenAI services. Unlike traditional cybersecurity verticals, these unique challenges such as bias detection, managing risky behaviors, and implementing mechanisms for tracking information are going to be the key challenges for enterprise security teams.
By the end of this training, you will be able to:
-
Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, injection attacks, insecure agent designs, and remote code execution for infrastructure takeover.
-
Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
-
Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.
-
Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
-
Implement LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
-
Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
-
Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
-
Implement an incident response and risk management plan for enterprises developing or using GenAI services.
**Introduction (1 lab)**
- Introduction to LLM and GenAI.
- LLM & GenAI terminologies and architecture.
- Technology use-cases.
- Agents, multi-agents and multi-modal models.
**Elements of AI Security (1 lab)**
- Understanding AI vulnerabilities with case studies on AI security breaches.
- Application of Security.
- Principles of AI ethics and Safety.
- OWASP LLM top 10.
- MITRE mapping of attacks on GenAI Supply chain.
- Prompt Generation for solving specific security cases.
- Threat modeling of Agentic flows and RAG architectures.
**Adversarial LLM Attacks and Defenses (6 labs)**
- Direct and Indirect Prompt Injection attacks and their subtypes.
- Advance prompt injections through obfuscation and cross-model injections.
- Breaking system prompts and their trust criteria.
- Advance LLM red teaming: Automating multi-agent conversation to prompt inject models at scale.
- Indirect prompt injections through external input sources.
- Attack mapping through LLM top 10 and MITRE Atlas frameworks.
**Attacking & Defending Agentic Systems (5 labs)**
- Attacking LLM Agents for task manipulation and risky behavior.
- Cross site scripting and Injection attacks on AI agents for code and command execution.
- Abusing Agent APIs for model extraction, and data poisoning.
- Compromising cloud infrastructure by abusing over-permissioning and tool usage in agentic systems.
- Defense implementation through tracing and observability.
**AI Red & Blue Teaming (4 labs)**
- Automated prompt injection and jailbreak at scale.
- Using colab notebooks for automation of API calls and reporting.
- Benchmarking LLMs from generating insecure code or aid In carrying out cyber attacks.
- Jailbreak attacks and model weight tracing for root cause investigation.
- Implementing LLM Judge model to auto-evaluate attacks and refine the next stage with increasing complexity.
- Purple teaming through a 3-way LLM implementation consisting of a target, an attacker and an evaluator.
**Building Enterprise grade LLM defenses (4 labs)**
- Deploying LLM Security scanner, adding custom rules, prompt block lists and guardrails.
- writing custom detection logic, trustworthiness checks and filters.
- LLM Guard for protecting input and output.
- Protecting RAG enabled GenAI agents from emitting sensitive data & confidential internal data.
- Attack simulation and defense use-cases against financial fraud & agent manipulation.
- LLM Security Benchmarking and continuous reporting.
**Building LLM & GenAI SecOps process**
- Summarizing the learnings into building SecOps process.
- Monitoring trustworthiness and safety of enterprise LLM applications.
-
Implementing NIST AI Risk management framework(RMF) for security monitoring.
Information on some of the previous iterations of this training can be found in the links below:
- Insomni’hack, Switzerland: https://insomnihack.ch/workshops/ai-secureops-attacking-defending-genai-applications-and-services/
- BruCon, Belgium: https://www.brucon.org/training-details/ai-secureops
- Hack Miami: https://hackmiami.com/ai-secureops-attacking-defending-genai-applications-and-services-may-13-may-14-2025-tuesday-wednesday-price-2600/
- RSA San Francisco: https://path.rsaconference.com/flow/rsac/us25/FullAgenda/page/catalog/session/1728196808642001YE0k
- Blackhat, MEA: https://blackhatmea.com/trainings-list/2024/ai-secureops-genai-and-llm-security-enterprises
- Deepsec, Austria: https://deepsec.net/speaker.html#WSLOT693
Difficulty Level:
Beginner to Intermediate
Suggested Prerequisites:
- Familiarity with AI and machine learning concepts is beneficial but not required.
- Ability to run python codes and notebooks.
- Familiarity with common GenAI applications like OpenAI.
Who should take this course?
- Security professionals seeking to update their skills for the AI era.
- Red & Blue team members.
- AI Developers & Engineers interested in the security aspects of AI and LLM models.
- AI Safety professionals and analysts working on regulations, controls and policies related to AI.
- Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
What Students Should Bring:
- API key for OpenAI.
- Google Colab account.
- Complete the pre-training setup before the first day.
Students will be provided with:
- One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
- "AI SecureOps" Metal coin for CTF players.
- Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
- PDF versions of slides that will be used during the training.
- Access to Slack channel for continued engagement, support and development.
- Access to Github account for accessing custom-built source codes and tools.
- Access to HuggingFace models, datasets and transformers.
Trainer(s) Bio:
Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.
Registration Terms and Conditions:
Trainings are refundable before July 8, 2025, minus a non-refundable processing fee of $250.
Trainings are non-refundable after July 8, 2025.
Training tickets may be transferred. Please email us at training@defcon.org for specifics.
If a training does not reach the minimum registration requirement, it may be cancelled. In the event the training you choose is cancelled, you will be provided the option of receiving a full refund or transferring to another training (subject to availability).
Failure to attend the training without prior written notification, will be considered a no-show. No refund will be given.
By purchasing this ticket you agree to abide by the DEF CON Training Code of Conduct and the registration terms and conditions listed above.
Several breaks will be included throughout the day. Please note that food is not included.
All courses come with a certificate of completion, contingent upon attendance at all course sessions.