Fine-Tune Your Personal LLM Assistant for Secure Coding

Presented at DEF CON 33 (2025), Aug. 8, 2025, 2 p.m. (240 minutes).

In today’s landscape, generative AI coding tools are powerful but often insecure, raising concerns for developers and organizations alike. This hands-on workshop will guide participants in building a secure coding assistant tailored to their specific security needs. We’ll begin by exploring the security limitations of current AI coding tools and discussing why fine-tuning is critical for secure development. Participants will then create and fine-tune their own LLM-based assistants using provided examples and their own use cases. By the end of the session, each attendee will have a functioning, security-focused AI coding assistant and a clear understanding of how to improve it further.
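For orientation, here is a minimal sketch of what such fine-tuning can look like in practice. The model name, training data, and hyperparameters are illustrative assumptions, not the workshop's actual material; the sketch attaches a LoRA adapter to a small open code model and trains it on a few insecure-to-secure rewrite pairs, one common way to specialize an assistant on limited hardware.

    # Minimal LoRA fine-tuning sketch; model, dataset, and settings are illustrative.
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    MODEL_NAME = "Qwen/Qwen2.5-Coder-1.5B"  # any small code LLM works here

    # Toy training data: an insecure snippet paired with its secure rewrite.
    examples = [
        {
            "prompt": "Rewrite securely:\nquery = f\"SELECT * FROM users WHERE name = '{name}'\"",
            "completion": "Use a parameterized query:\ncursor.execute(\"SELECT * FROM users WHERE name = %s\", (name,))",
        },
        # ... more pairs covering your own use cases ...
    ]

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    def to_features(example):
        # Concatenate prompt and completion into one causal-LM training sequence.
        text = example["prompt"] + "\n" + example["completion"] + tokenizer.eos_token
        return tokenizer(text, truncation=True, max_length=512)

    dataset = Dataset.from_list(examples).map(
        to_features, remove_columns=["prompt", "completion"]
    )

    # Wrap the base model with a LoRA adapter so only a small set of weights is trained.
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model = get_peft_model(
        model,
        LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                   target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="secure-assistant", num_train_epochs=3,
                               per_device_train_batch_size=1, learning_rate=2e-4),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("secure-assistant")  # saves only the LoRA adapter weights

In a real setting the training pairs would come from curated secure-coding examples and the organization's own codebase, which is the part of the process the workshop focuses on.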

Presenters:

  • Or Sahar - Security Researcher
    Or Sahar is a security researcher, software engineer, and cofounder of Secure From Scratch, a venture dedicated to teaching developers secure coding from the very first line of code. She worked for many years as a developer and development team leader before shifting her focus to hacking, application vulnerability research, and AI security. Or is currently pursuing a master's degree in computer science and lectures at several colleges.
  • Yariv Tal - Security Researcher
    Yariv Tal is a senior developer & security researcher, and the cofounder of Secure From Scratch, a venture dedicated to teaching developers secure coding from the very first line of code. A summa cum laude graduate of the Technion with four decades of programming expertise and years of experience in university lecturing and bootcamp mentoring, he brings a developer's perspective to the field of security. He currently lectures on secure coding at several colleges and in the private sector, leads the owasp-untrust project, and is pursuing a master's degree in computer science.

Similar Presentations: