Abhinav Singh - AI SecureOps: Attacking & Defending GenAI Applications and Services $2,600 June 2025
DESCRIPTION: Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled training in GenAI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By 2026, Gartner, Inc. predicts that over 80% of enterprises will engage with GenAI models, up from less than 5% in 2023.
THINGS YOU’LL LEARN:
- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, SQL injection, insecure agent designs, and remote code execution for infrastructure takeover.
- Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Utilize open-source tools like HuggingFace, OpenAI, NeMo, Streamlit, and Garak to build custom GenAI tools and enhance your GenAI development skills.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
LABS YOU’LL PARTICIPATE IN:
- 7 sections with 21 labs
THIS COURSE IS BENEFICIAL FOR:
- Security professionals seeking to update their skills for the AI era.
- Red & Blue team members.
- AI Developers & Engineers interested in the security aspects of AI and LLM models.
- AI Safety professionals and analysts working on regulations, controls and policies related to AI.
- Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
TECHNICAL DIFFICULTY: BEGINNER/INTERMEDIATE/ADVANCED
STUDENT REQUIREMENTS:
Familiarity with AI and machine learning concepts is beneficial but not required.
Ability to run python codes and notebooks.
Familiarity with common GenAI applications like OpenAI.
WHAT SHOULD STUDENTS BRING:
API key for OpenAI.
Google Colab account.
Complete the pre-training setup before the first day.
What will students be provided with:
- One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
- "AI SecureOps" Metal coin for CTF players.
- Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
- PDF versions of slides that will be used during the training.
- Access to Slack channel for continued engagement, support and development.
- Access to Github account for accessing custom-built source codes and tools.
- Access to HuggingFace models, datasets and transformers.
TRAINER BIO: Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.
- 16 hours of training with a Certificate of Completion
- Boxed lunch
- 2 coffee breaks per day & snack
Registration terms and conditions:
Trainings are refundable before May 5th, 2025 the processing fee is $250.
Trainings are non-refundable after May 16th, 2025.
Training tickets may be transferred. Please email us for specifics.
Failure to attend the Training without prior written notification, will be considered a No-Show. No refund will be given.
By purchasing this ticket you agree to abide by the DCT Code of Conduct and the registration terms and conditions listed above.