Michael Glass & "K" Singh - Solving Modern Cybersecurity Problems with AI - $1,900
Trainer bio:
Michael Glass:
Michael Glass, AKA "Bluescreenofwin", is a senior security engineer and Windows hacker. He is currently employed at one of the largest streaming companies in the world (aka the “entertainment” business), making sure your favorite time on the Internet goes uninterrupted. He has run the infrastructure for the Western Regional Collegiate Cyber Defense Competition for the past 7 years, mentors cybersecurity students at several colleges across the U.S., and brews copious amounts of delicious beer for consumption in his spare time.
"K" Singh:
“K” Singh is currently an Incident Response Consultant at CrowdStrike. Previously an Incident Response Consultant and the Forensic Lab Manager for the Global Incident Response Practice at Cylance, “K” has worked with multiple Fortune 500 companies, sector-leading firms, and healthcare organizations on engagements ranging from incident response to traditional “dead disk” forensics and e-discovery. Additionally, “K” is part of the operations team for WRCCDC, handling infrastructure for the competition’s core cluster, student environments, and social media outlets, and liaising between the Red Team and other teams to ensure the competition runs smoothly.
Trainer social media links:
Michael Glass:
"K" Singh:
Full description of the training:
Abstract:
Artificial Intelligence (AI) and Large Language Models (LLMs) have emerged as robust and powerful tools that have redefined how many approach problem solving. In 2023 the industry saw a surge of interest in AI, and cybersecurity experts struggled not only to threat model LLMs but also to leverage them effectively. Our training presents a comprehensive educational framework aimed at equipping students with the skills to build their own LLM toolkits and to leverage AI and LLMs to solve both simple and complex problems unique to their own environments.
The training begins with a brief overview of AI, including the differences between LLMs, generative AI, and the myriad of other emerging AI technologies. After introductions, we will give students access to our private GitHub repository containing all the tools and scripts needed for the class. From there, students will deploy their own LLM in our cloud environment for use in the class while we cover the basics, the operational constraints of running AI, on-prem versus cloud trade-offs, and how to troubleshoot their AI environments.
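Many self-hosted LLM servers expose an OpenAI-compatible chat-completions API, and talking to one from a script looks roughly like the sketch below. The endpoint URL, model name, and system prompt are illustrative assumptions, not the specific setup used in the class.

```python
import json
import urllib.request

# Hypothetical endpoint for a self-hosted LLM serving an
# OpenAI-compatible chat-completions API (common to several runtimes).
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "example-8b-instruct") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a security analysis assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature: more deterministic answers
    }

def query_llm(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Whether the endpoint lives on-prem or in a cloud environment, the client code stays the same; only the URL and authentication change.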
Next, we will demonstrate how to select high-quality data from their environment and give them example data via our private GitHub. From there, we will walk students through transforming this data to make it operationally effective and efficient for their AI. We will cover various types of data common to cybersecurity environments, potential issues with certain data types, and how to make the most of open-source tooling to help transform the data. Students will then apply the resulting model to their LLM.
Lastly, we will cover many use cases showing how students can use this AI and data to solve various problems and add value to their environment. Examples include training on data to write YARA/SIGMA rules, analyzing alerts and ranking them to help prioritize and avoid alert fatigue, training the AI to work with common open-source tools such as OpenSearch, using AI to improve operational security by catching bad behaviors/patterns, improving application observability by adding context to "weird" behavior, leveraging AI as middleware to add contextual data between disparate platforms, and more! All use cases will be performed by students live and in class.
Students will leave the training with the tools they need to go back to their employer and apply what they have learned to effect immediate and impactful change. All tools and scripts will be available for students to copy and fork onto their own personal GitHub accounts. Students will also be able to export the tools they create in class from our private cloud environment following the training (number of days yet to be determined).
Our training is designed to bring a holistic educational approach and empower students with the knowledge, skills, and ethical awareness necessary to harness the full potential of LLMs in solving cybersecurity problems. By equipping the next generation of cybersecurity professionals with these capabilities, we aim to foster innovation, resilience, and accountability in the ever-evolving landscape of digital security.
Short description of what the student will know how to do, after completing the class:
Understand various popular AI models
Configure their own LLM (they will be allowed to export and package the LLMs created in class to their own repository or cloud upon completion)
Troubleshoot common problems
Model their own data and create LoRA models
Gather and normalize their own data for ingestion
Integrate with popular open-source tools such as OpenSearch
Configure AI as middleware
Solve several cybersecurity problems with AI, including but not limited to: data normalization, writing YARA/SIGMA rules, adding context to alerting, improving AppSec and OPSEC, and automating the identification of malware via patterns
Integrate AI into existing scripts and operational tooling, and more!
Outline of the class:
Day 1:
Meet the students and their backgrounds
Discuss AI, LLMs, and contemporary viewpoints on utility of AI
Help students create keys to access private training Git to pull tools/scripts used for the class
Distribute access to cloud using aforementioned keys to access AI environment
BREAK
Cover basics of creating AI
Use cloud to create individual AI using a pre-determined model for use by the individual
Troubleshooting
Discuss LoRA
LUNCH
Students will learn to train their AI using provided data to showcase how to model data
Students will spin up OpenSearch instances
Troubleshooting
Brief discussion on API
Discuss contextual searches in OpenSearch
BREAK
Discussion on using AI as middleware
Discussion on common problems when using AI in complex scenarios (false positives, false negatives, hallucinations, etc)
Using provided data, students will solve a challenge together using their AIs
Troubleshooting
END DAY
Day 2:
Students will pull down any changes made to the Git repository overnight (troubleshooting if necessary)
Students will pull down OpenSearch data and create a model (if this proves troublesome, the model will be provided to students)
PROBLEM 1: Writing SIGMA/YARA Rules
BREAK
PROBLEM 2: Pattern Analysis
PROBLEM 3: Alert Analysis
LUNCH
PROBLEM 4: Weird Behavior & Malicious Identification
PROBLEM 5: Adding Context (Integration and Utilizing Threat Intel)
BREAK
PROBLEM 6: Adding Context (Making use of data from disparate platforms)
Troubleshooting
Q&A
Assist with students exporting training data
END DAY
Technical difficulty of the class (Beginner, Intermediate, Advanced):
Intermediate
Suggested prerequisites for the class:
Bring a laptop to the class! A basic understanding of AI (beneficial but not necessary), Git, and open-source tools such as OpenSearch. Comfort with the command line (either Windows or Linux).
Items students will need to provide:
GitHub usernames
All costs of GPU and compute rental are INCLUDED in the cost of the class.
DATE: August 12th-13th, 2024
TIME: 8am to 5pm PDT
VENUE: Sahara Las Vegas
TRAINER: Michael Glass & "K" Singh
- 16 hours of training with a certificate of completion.
- 2 coffee breaks are provided per day
- Note: Food is not included
Registration terms and conditions:
Trainings are refundable before July 1st; the processing fee is $250.
Trainings are non-refundable after July 10th, 2024.
Training tickets may be transferred. Please email us for specifics.
Failure to attend the training without prior written notification will be considered a No-Show. No refund will be given.
By purchasing this ticket you agree to abide by the DCT Code of Conduct and the registration terms and conditions listed above.