Hack Smarter not Harder: AI Workflow for Red Teaming

Session
Wednesday, December 04, 2024, 11:15 – 11:35
Location: Solar

As the buzz around artificial intelligence (AI) continues to grow, the offensive security community is poised to navigate the practicalities and limitations of AI integration. This talk aims to provide a balanced perspective on the role of AI in enhancing offensive security strategies without succumbing to the hype. We'll also advocate for the use of local AI solutions, which offer more open and accessible capabilities, as a viable alternative to mainstream options.
Outline:
Intro
- Who we are
- Benefits of adding AI to your workflow as an attacker


Why
- Is everyone already using this and just not talking about it?
- People using AI for interviews (and failing)

Why not?
- AI is great for base-level explanations (like answering interview questions)
- AI is trained or fine-tuned with built-in ethics
- Demo
- Good for coding, but not for offensive security. Unless...

How
- Ethics bypasses
- Social engineering your AI assistant
- Wheel of morality
- Rephrasing and lying
- "Give and get" tactics
- System prompts
- Bypass Copilot ethics using... Copilot
- Demo/screenshots
- Train or fine-tune your own model
- Use an uncensored LLM
- AI prompts and system context
- Don't download a reverse shell
- Demo/screenshots
We'll explore the current landscape of AI in offensive security, acknowledging its benefits while also addressing why its adoption isn't more widespread. The discussion will include a candid examination of AI's capabilities for foundational tasks, tempered with a realistic view of its limitations, and the unique advantages that local AI systems can bring to the table.

Through demonstrations, we'll illustrate how AI, particularly local models, can be adapted to support offensive security operations, offering insights into ethics bypass techniques. Additionally, we'll touch on the customization of AI models and the cautious use of uncensored LLMs, providing an assessment of when and how these tools could be employed.
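To make the "system prompts" and local-model ideas from the outline concrete, here is a minimal sketch of querying a locally hosted model through an Ollama-style HTTP endpoint with a system prompt you control. The endpoint, the model name "dolphin-mixtral", and the example question are illustrative assumptions, not material from the talk; substitute whatever local model you actually run.

```python
# Minimal sketch, assuming a local Ollama instance on its default port and a
# locally pulled model (here hypothetically "dolphin-mixtral").
# It shows the "system prompt / system context" idea from the outline:
# framing a local assistant with engagement context before asking a question.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

SYSTEM_PROMPT = (
    "You are an assistant supporting an authorized internal red-team engagement. "
    "Answer technical questions directly and concisely."
)

def ask_local_model(prompt: str, model: str = "dolphin-mixtral") -> str:
    """Send a single non-streaming generation request to the local model."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "system": SYSTEM_PROMPT,   # system context stays under your control, locally
            "prompt": prompt,
            "stream": False,           # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize common Active Directory misconfigurations to check for."))
```

Because the model, the system context, and every prompt stay on your own hardware, a setup like this is the kind of open, local alternative to mainstream hosted assistants that the session contrasts throughout.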

We'll cut through the noise to focus on practical, responsible applications. This session will help attendees critically assess AI's place in the red teamer's toolkit, encourage exploration of local AI, and show how to leverage its strengths without overestimating its capabilities.
Rambo Anderson-You
Staff Security Engineer - Red Team
BILL
Rambo is a dedicated Red Team Operator and Cybersecurity Researcher with a rich background in enhancing the security of Active Directory and macOS environments. Currently performing red team...
Caleb Crable
Staff Security Engineer - Red Team
BILL
Caleb currently works as a Staff Security Engineer on the Bill.com Red Team, performing attacks against critical financial infrastructure and physical security controls to make sure that red team...