
Seth Law, Ken Johnson - Harnessing LLMs for Application Security - DCTLV2025

Name of Training: Harnessing LLMs for Application Security
Trainer(s): Seth Law and Ken Johnson
Dates: August 11-12, 2025
Time: 8:00 am to 5:00 pm PT
Venue: Las Vegas Convention Center
Cost: $2,000

Course Description: 

This comprehensive course is designed for developers and cybersecurity professionals seeking to harness the power of Generative AI and Large Language Models (LLMs) to enhance software security and development practices. Participants will gain a deep understanding of LLM functionality, strengths, and weaknesses, and learn to craft effective prompts for diverse use cases. The curriculum covers essential topics such as embeddings, vector stores, and Langchain, offering insights into document loading, code analysis, and custom tool creation using Agent Executors.
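As a preview of the prompt-engineering material, the few-shot technique can be sketched in plain Python. This is a minimal, illustrative sketch only; the example findings and the `build_prompt` helper are hypothetical, not course content:

```python
# Minimal sketch of assembling a few-shot prompt for secure code review.
# The labeled examples teach the model the expected output format.

FEW_SHOT_EXAMPLES = [
    {
        "code": 'query = "SELECT * FROM users WHERE id = " + user_id',
        "finding": "SQL injection: user input concatenated into a query.",
    },
    {
        "code": 'html = f"<p>Hello {escape(username)}</p>"',
        "finding": "No issue: output is HTML-escaped before rendering.",
    },
]

def build_prompt(snippet: str) -> str:
    """Combine a system instruction, labeled examples, and the target snippet."""
    parts = ["You are an application security reviewer. "
             "Classify each code snippet and explain any vulnerability."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Code:\n{ex['code']}\nFinding: {ex['finding']}")
    parts.append(f"Code:\n{snippet}\nFinding:")
    return "\n\n".join(parts)

prompt = build_prompt('os.system("ping " + host)')
```

The resulting string ends with an open `Finding:` label, which nudges the model to continue in the same style as the worked examples.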

Course highlights:

  1. Hands-on techniques like Retrieval-Augmented Generation (RAG) and Few-Shot Prompting for secure code analysis and threat modeling.
  2. Integration of AI into security tasks to identify vulnerabilities and improve overall application security.

Course Outline: 

  1. Introduction & Overview
    1. Introduction to Generative AI Concepts
    2. Understanding LLMs: Functionality, Strengths, Weaknesses
  2. Lab Set up
    1. Ensure all students' systems work and can reach our LLM and Vector DB
  3. Langchain
    1. Overview & Components
    2. Explanation Of Documentation And Concepts
  4. Prompt Engineering
    1. Types of Prompts: User, System, AI
    2. Few-Shot Prompting: Importance & Usage
    3. Techniques & Frameworks (e.g., Reflexion, CO-STAR)
    4. Exercise: Craft Prompts Using One of the Preferred Techniques/Frameworks
  5. Context
    1. About
    2. Use Cases & Types of Context
    3. Length / Window
    4. Exercise: Use Context to Improve Prompt Performance
  6. Embeddings & Vector Stores
    1. Background: Formats, Documents, Metadata
    2. Use Cases: Similarity Searches, Chaining
    3. Exercise: Use a vector store as context
  7. Exploring LLMs
    1. Types of LLMs: Open Source & Commercial (Hugging Face, Anthropic, OpenAI, etc.)
    2. Hosting Options: Hosted vs. Transactional (e.g., SageMaker vs. Bedrock)
    3. Considerations
  8. Chatbot/AppSecAssistant
    1. Background & Use Cases
    2. Retrieval Augmented Generation (RAG) Techniques
    3. Implementing Chat History
    4. Exercise: Build an AI Assistant that uses your company’s documentation to answer questions for developers
  9. Source Code Analysis
    1. Recommended Approaches
    2. Code Splitting, Tree-sitter & Langchain Support
    3. Few Shot Prompting for Tuning Results
    4. Building a Knowledge-base
    5. Compositional Analysis
      1. Exercise: Build Information Gathering Tool 
    6. Vulnerability Analysis & Discovery
      1. Exercise: Build Vulnerability Scanning Tool 
  10. Agent Executors & Custom Tools
    1. Use Cases: Compositional & Behavioral Analysis
    2. Langchain ReAct
    3. LangGraph Usage
    4. Exercise: Build a “Chain of Thought” so that the LLM uses reasoning and additional lookups in source code to find the answers it needs to validate a vulnerability (e.g., validate an insecure direct object reference finding)
  11. Model Context Protocol (MCP)
    1. Background 
    2. Use cases & benefits
    3. Exercise: Build mini-threat modeling tool using MCP
  12. Threat Modeling w/ StrideGPT
    1. Attack Tree Analysis
    2. Diagram Generation
    3. Risk Assessment
    4. Exercise: Use an automated Threat Modeling tool
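To give a feel for what the embeddings, vector store, and RAG sections of the outline build toward, the retrieval step can be sketched in plain Python. The three-dimensional "embeddings" and documents below are hand-made stand-ins for a real embedding model and vector DB:

```python
import math

# Toy "embeddings": in practice an embedding model and vector store
# produce these; here they are hand-made for illustration.
DOCS = {
    "Rotate API keys every 90 days.":      [0.9, 0.1, 0.0],
    "Validate and encode all user input.": [0.1, 0.9, 0.2],
    "Use parameterized SQL queries.":      [0.0, 0.8, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    """Rank documents by similarity to the query embedding; keep the top k."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding near the input-handling documents pulls those back;
# the retrieved text is then prepended to the LLM prompt as context.
context = retrieve([0.05, 0.85, 0.5])
```

In a RAG pipeline, the `context` list would be stuffed into the prompt ahead of the user's question so the model answers from the retrieved documents rather than from memory alone.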

Key Takeaways:

  1. Practical Mastery of AI-Driven Development Tools: Gain hands-on experience with technologies like Langchain, embeddings, and vector stores.
  2. Advanced Prompt Engineering Techniques: Learn to craft effective prompts and leverage few-shot prompting.
  3. Enhanced Security Practices Through AI: Apply AI for secure code analysis, threat modeling, and DevSecOps.
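The agent-executor idea from the outline can also be sketched without any framework: a loop in which the model decides which tool to call and the observation is fed back in until it can answer. The toy tools and scripted `toy_model` below stand in for a real LLM and the course's Langchain labs:

```python
# Framework-free sketch of a ReAct-style agent loop: the "model" picks a
# tool, the tool's observation is appended to the scratchpad, and the loop
# repeats until the model has enough evidence to answer.

def lookup_route(name):   # toy tool: find a route handler
    return "GET /invoices/<id> -> show_invoice(id)"

def lookup_authz(name):   # toy tool: check for an authorization guard
    return "show_invoice has no ownership check on id"

TOOLS = {"lookup_route": lookup_route, "lookup_authz": lookup_authz}

def toy_model(scratchpad):
    """Stand-in for an LLM: scripted reasoning over the scratchpad."""
    if "GET /invoices" not in scratchpad:
        return ("tool", "lookup_route", "invoices")
    if "ownership check" not in scratchpad:
        return ("tool", "lookup_authz", "show_invoice")
    return ("answer", "Likely IDOR: /invoices/<id> lacks an ownership check.")

def run_agent(question, max_steps=5):
    scratchpad = question
    for _ in range(max_steps):
        decision = toy_model(scratchpad)
        if decision[0] == "answer":
            return decision[1]
        _, tool, arg = decision
        scratchpad += "\n" + TOOLS[tool](arg)  # feed observation back
    return "Gave up."

answer = run_agent("Is the invoice endpoint vulnerable to IDOR?")
```

The course exercise replaces the scripted model with a real LLM and the toy tools with actual source-code lookups, but the control flow is the same.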

Difficulty Level:

Intermediate: this course requires knowledge of software development/coding and application security. However, it does NOT require ANY prior knowledge of AI/LLMs.

Who should attend:

  • Software Developers and Engineers: Looking to integrate AI and LLMs into their development processes and improve application security.

  • Cybersecurity Professionals: Focused on application security, threat modeling, and DevSecOps, seeking to leverage AI-driven tools for vulnerability identification and risk assessment.

Suggested Prerequisites:

None

What Students Should Bring: 

A computer with at least average computing power, preferably with Ollama and Python 3.12 installed.

Trainer(s) Bio:

Seth Law is the Founder & Principal of Redpoint Security, and Ken Johnson is the Co-Founder and CTO of DryRun Security. Both Seth and Ken utilize LLMs heavily in their work and have a wealth of real-world, applicable skills to share in applying LLMs to the application security domain.

Registration Terms and Conditions: 

Trainings are refundable before July 8, 2025, minus a non-refundable processing fee of $250.

Trainings are non-refundable after July 8, 2025.

Training tickets may be transferred. Please email us at training@defcon.org for specifics.

If a training does not reach the minimum registration requirement, it may be cancelled. In the event the training you choose is cancelled, you will be provided the option of receiving a full refund or transferring to another training (subject to availability).

Failure to attend the training without prior written notification will be considered a no-show. No refund will be given.

By purchasing this ticket you agree to abide by the DEF CON Training Code of Conduct and the registration terms and conditions listed above.

Several breaks will be included throughout the day. Please note that food is not included.

All courses come with a certificate of completion, contingent upon attendance at all course sessions.
