



Michael Glass & “K” Singh - Solving Modern Cybersecurity Problems with AI - DCTLV2025 ($2,400)
Trainer(s): Michael Glass and “K” Singh
Dates: August 11-12, 2025
Time: 8:00 am to 5:00 pm PT
Venue: Las Vegas Convention Center
Cost: $2,400
Artificial Intelligence (AI) and Large Language Models (LLMs) have emerged as robust and powerful tools that have redefined how many approach problem-solving. In 2023, the industry saw a surge of interest in AI, and cybersecurity experts struggled not only to threat model LLMs but also to leverage them effectively. Our training presents a comprehensive educational framework aimed at equipping students with the skills to build their own LLM toolkits and to leverage AI and LLMs to solve both simple and complex problems unique to their own environments.
The training begins with a brief overview of AI, including the differences between LLMs, generative AI, and the myriad of other emerging AI technologies. After introductions, we will give students access to our private GitHub repository containing all the tools and scripts needed for the class. From there, students will deploy their own LLM in our cloud environment for use during the class while we explain the basics: the operational constraints of running AI, on-prem vs. cloud deployment, and how to troubleshoot their AI environments.
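As a taste of that first deployment exercise, the sketch below smoke-tests an LLM served through an OpenAI-compatible API, which both LM Studio and llama.cpp provide; the URL, credentials, and model name are placeholders, and the class environment will supply the real values.

    import requests

    # Placeholder endpoint: LM Studio defaults to http://localhost:1234/v1
    # and llama.cpp's server to http://localhost:8080/v1; the class
    # environment supplies its own URL and credentials.
    BASE_URL = "http://localhost:1234/v1"

    # 1. Confirm the server is up and list the loaded models.
    models = requests.get(f"{BASE_URL}/models", timeout=10)
    models.raise_for_status()
    print([m["id"] for m in models.json()["data"]])

    # 2. Run a one-shot completion as an end-to-end test.
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "local-model",  # hypothetical; use an id from step 1
            "messages": [{"role": "user", "content": "Reply with the word OK."}],
            "max_tokens": 5,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])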
Next, we will demonstrate how to select high-quality data from their environment and provide example data via our private GitHub repository. From there, we will walk students through transforming this data to make it operationally effective and efficient for their AI. We will cover various types of data common to cybersecurity environments, potential issues with certain data types, and how to make the most of open-source tooling to help transform the data. Students will then apply the resulting model to their LLM.
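To make "operationally effective and efficient" concrete: one common target shape for fine-tuning data is an instruction-style JSONL file. The snippet below is a minimal sketch, with invented field names and a single hard-coded alert standing in for the example data from our GitHub.

    import json

    # Hypothetical raw alerts; in class, example data comes from our
    # private GitHub and from the students' own environments.
    raw_alerts = [
        {"rule": "Suspicious PowerShell", "host": "ws-042",
         "detail": "EncodedCommand flag observed in process arguments"},
    ]

    # Emit one JSON object per line (JSONL) in an instruction-tuning
    # shape that most open-source fine-tuning tooling accepts.
    with open("train.jsonl", "w") as out:
        for alert in raw_alerts:
            example = {
                "instruction": "Summarize this alert and explain why it fired.",
                "input": json.dumps(alert),
                "output": f"{alert['rule']} on {alert['host']}: {alert['detail']}",
            }
            out.write(json.dumps(example) + "\n")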
Lastly, we will cover many use cases showing how students can use this AI and data to solve various problems and add value to their environment. Examples include training the AI to write YARA/SIGMA rules, analyzing alerts and ranking them to help prioritize work and avoid alert fatigue, training the AI to work with common open-source tools such as OpenSearch, using AI to improve operational security by catching bad behaviors/patterns, improving application observability by adding context to "weird" behavior, leveraging AI as middleware to add contextual data between disparate platforms, and more! All use cases will be performed by students live and in class.
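As a hypothetical preview of the alert-ranking use case, the sketch below asks the model for a structured severity score so alerts can be sorted before an analyst reads them; the endpoint and model name are placeholders as before.

    import json
    import requests

    BASE_URL = "http://localhost:1234/v1"  # placeholder, as in the earlier sketch

    def rank_alert(alert: dict) -> int:
        """Ask the model for a 1-10 severity score for a single alert."""
        prompt = (
            "Rate the severity of this security alert from 1 (noise) to 10 "
            '(critical). Respond only with JSON like {"score": N}.\n'
            + json.dumps(alert)
        )
        resp = requests.post(
            f"{BASE_URL}/chat/completions",
            json={"model": "local-model",  # hypothetical model name
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        text = resp.json()["choices"][0]["message"]["content"]
        return json.loads(text)["score"]  # validate before trusting in real use

    alerts = [{"rule": "Port scan detected", "host": "db-01"}]
    alerts.sort(key=rank_alert, reverse=True)  # highest severity first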
Students will leave the training with the tools they need to go back to their employer and apply what they have learned to effect immediate and impactful change. All tools and scripts will be available for students to copy and fork onto their own personal GitHub accounts. Students will also be able to export the tools they create in class from our private cloud environment following the training (number of days yet to be determined).
Our training takes a holistic educational approach, empowering students with the knowledge, skills, and ethical awareness necessary to harness the full potential of LLMs in solving cybersecurity problems. By equipping the next generation of cybersecurity professionals with these capabilities, we aim to foster innovation, resilience, and accountability in the ever-evolving landscape of digital security.
Course Outline:
Day 1:
- Class Meet & Greet
- Discuss AI, LLMs, and contemporary viewpoints on the utility of AI
- Pre-lab checklist: create keys to access the private training Git repository and pull the tools/scripts used for the class
- Distribute access to the AI environment
- LAB: Introduction to the AI environment
- BREAK
- AI 101 - Primer on all things Artificial Intelligence
- Discuss Low-Rank Adaptation (LoRA) (see the fine-tuning sketch following this day's outline)
- How fine-tuning works, comparisons to training new models, and relationships to machine learning
- Introduce SOCMAN - The DEF CON AI Model
- LAB: Installing LM Studio
- LUNCH
- Introduction to popular open-source tools used for AI
- Primer on CUDA and hardware acceleration
- LAB: Installing llama.cpp
- Brief discussion on APIs
- Introduction to the OpenSearch Application Programming Interfaces (APIs) and built-in tools to integrate with third-party AI solutions
- BREAK
- Discuss contextual searches in OpenSearch
- LAB: OpenSearch tutorial
- Explore how to perform contextual searches and retrieve relevant information in OpenSearch, including RAG concepts (see the retrieval sketch following this day's outline)
- Discussion on using AI as middleware
- Common problems when using AI in complex scenarios
- Explore issues such as false positives, false negatives, and hallucinations in AI applications
- END DAY
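For readers curious about the LoRA discussion above, the following sketch shows the typical shape of a LoRA fine-tune using the Hugging Face peft library; the base model and hyperparameters are illustrative only and are not the class recipe (the class works with SOCMAN).

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Illustrative ungated base model; the class uses its own model.
    model = AutoModelForCausalLM.from_pretrained(
        "TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # LoRA freezes the base weights and trains small low-rank adapter
    # matrices instead, which is why it fits on modest GPUs.
    config = LoraConfig(
        r=8,                                  # rank of the adapters
        lora_alpha=16,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of weights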
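And as a preview of the contextual-search material, here is a rough sketch of the retrieval half of RAG against OpenSearch using the opensearch-py client; hostnames, credentials, index name, and field names are all placeholders.

    from opensearchpy import OpenSearch

    # Placeholder connection details; the class cluster will differ.
    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
                        http_auth=("admin", "admin"),
                        use_ssl=True, verify_certs=False)

    # Retrieve the log lines most relevant to an analyst's question...
    question = "failed logins from new geolocations"
    hits = client.search(index="security-logs", body={
        "query": {"match": {"message": question}},
        "size": 3,
    })["hits"]["hits"]

    # ...then place them in the model's prompt (the "retrieval" in RAG).
    context = "\n".join(h["_source"]["message"] for h in hits)
    prompt = f"Using only these logs:\n{context}\n\nAnswer: {question}"
    # `prompt` would then go to the LLM as in the earlier sketches.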
Day 2:
- Students will pull down any changes made to the Git repository overnight
- Students will download models if changes have been made overnight
- LAB: Hunting for Malware
- Post-Lab Discussion on "Hunting for Malware"
- Understanding the threat landscape of AI
- Discussion on potential threats to AI systems, including attacks and vulnerabilities
- Introduction to Google's Secure AI Framework (SAIF) and its core principles for secure AI development
- MITRE ATLAS Matrix
- BREAK
- LAB: Generating SIGMA/YARA Rules using IOCs
- Post-Lab Discussion on "Generating SIGMA/YARA Rules using IOCs"
- LAB: Automatic Pattern Analysis in WebApp Traffic
- Post-Lab Discussion on "Automatic Pattern Analysis in WebApp Traffic"
- LUNCH
- LAB: Alert Analysis and Hallucination Detection
- Post-Lab Discussion on "Alert Analysis and Hallucination Detection"
- LAB: Weird Behavior & Malicious Identification of Lateral Movement
- Post-Lab Discussion on "Weird Behavior & Malicious Identification of Lateral Movement"
- BREAK
- LAB: Improving AI Contextual Analysis using Threat Intel - Stuxnet
- Post-Lab Discussion on "Improving AI Contextual Analysis using Threat Intel - Stuxnet"
- LAB: OpenSearch Log Enrichment using AI (see the enrichment sketch following this outline)
- Post-Lab Discussion on "OpenSearch Log Enrichment using AI"
- Assist students with exporting training data
- END DAY
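As a final hypothetical illustration, the log-enrichment lab combines the two earlier patterns: pull a document from OpenSearch, generate context with the LLM, and write it back as a new field. The index, document id, and field names below are placeholders.

    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}],
                        http_auth=("admin", "admin"),
                        use_ssl=True, verify_certs=False)

    def summarize(message: str) -> str:
        # Stand-in for a call to the class LLM, e.g. the chat-completions
        # helper from the earlier sketches.
        return "placeholder summary of: " + message

    # Fetch one log document (index and id are placeholders)...
    doc = client.get(index="security-logs", id="example-id")["_source"]

    # ...and write the AI-generated context back as a new field.
    client.update(index="security-logs", id="example-id",
                  body={"doc": {"ai_context": summarize(doc["message"])}})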
Difficulty Level:
Intermediate
Suggested Prerequisites:
A basic understanding of AI (beneficial but not necessary), Git, and open-source tools such as OpenSearch. Comfort with the command line (either Windows or Linux).
What Students Should Bring:
Bring a laptop to the class! Students will also need GitHub usernames.
Trainer(s) Bio:
Michael Glass AKA "Bluescreenofwin" is currently a Principal Security Engineer providing security leadership for one of the largest streaming technology companies in the world, specializing in Blue Team, SecOps, and Cloud. Michael has been in the hacking and security scene for over 15 years, working for a wide variety of organizations including government, private, and non-profit. Drawing on this diverse background, he founded Glass Security Consulting to provide world-class Cybersecurity instruction for Information Security Professionals and Hackers alike.
“K” Singh is currently a Senior Incident Response Consultant at CrowdStrike. Previously an Incident Response Consultant and the Forensic Lab Manager for the Global Incident Response Practice at Cylance, “K” has worked with multiple Fortune 500 companies, sector-leading firms, and healthcare organizations in a variety of engagements ranging from Incident Response to Traditional “Dead Disk” Forensics and E-Discovery. Additionally, “K” is part of the Operations team for WRCCDC, handling infrastructure for the competition’s core cluster, student environments, and Social Media outlets, and liaising between the Red Team and other teams to ensure the competition runs smoothly.
Registration Terms and Conditions:
Trainings are refundable before July 8, 2025; a $250 processing fee applies.
Trainings are non-refundable after July 8, 2025.
Training tickets may be transferred. Please email us at training@defcon.org for specifics.
Failure to attend the Training without prior written notification will be considered a No-Show, and no refund will be given.
By purchasing this ticket you agree to abide by the DCT Code of Conduct and the registration terms and conditions listed above.