From Prompt to Protection: A Practical Guide to Building and Securing Generative AI Applications

Presented at DEF CON 33 (2025), Aug. 9, 2025, 2 p.m. (240 minutes).

This hands-on workshop explores the offensive and defensive security challenges of Generative AI (GenAI). In the first half, participants will use structured frameworks and rapid threat prototyping to map real-world GenAI risks such as prompt injection, data poisoning, and model leakage. Working in teams, you'll threat model a GenAI system using simplified STRIDE, rapid threat prototyping techniques, and visual diagrams. The second half flips the script: you'll build lightweight security tools that harness GenAI for good, crafting defensive utilities. No prior AI experience is required; everything is explained as we go. This workshop is ideal for red teamers, security engineers, and curious builders. Just bring basic Python familiarity and a laptop; we'll supply the rest. You'll walk away with real-world threat models, working tool prototypes, and a clear framework for breaking and securing AI systems in your org.

Presenters:

  • Ashwin Iyer - Visa Inc - M&A Security Architecture (Director)
    Ashwin Iyer is a cybersecurity architect with 12+ years of experience across red teaming, threat modeling, and cloud security. He currently leads offensive security for mergers and acquisitions at Visa Inc., conducting advanced penetration tests and threat evaluations of critical financial infrastructure. Previously at SAP Ariba, he built and led the red team program, developing internal CTFs, defining SOC SLAs, and identifying high-impact vulnerabilities across global B2B platforms. Ashwin is an EC-Council CodeRed instructor (Session Hijacking & Prevention), a reviewer for Hands-On Red Team Tactics (Packt), and a contributor to PCI SSC's segmentation guidance for modern networks. He has delivered hands-on workshops at BSidesSF, HackGDL, and Pacific Hackers on topics such as GenAI threat modeling and Practical Threat Modeling for Agile. He holds certifications including OSCP, OSEP, GCPN, OSMR, CTMP, and a few others. When not hacking cloud platforms or vendor portals, he's mentoring teams on how to think like attackers.
  • Ritika Verma - AI Security Research Assistant
    Ritika Verma is a cybersecurity engineer and AI security researcher with 7.5+ years of experience across enterprise security, cloud infrastructure, and applied AI. She has led security initiatives at SAP and Accenture, where she implemented MITRE ATT&CK frameworks, automated detection pipelines, and secured large-scale IAM and DLP environments. Currently pursuing her MS in Information Systems with an AI/ML focus at Santa Clara University, Ritika researches LLM security, RAG pipelines, and GenAI abuse patterns. Her open-source projects — including an AWS vulnerability triage agent (VISTA), a RAG-based compliance engine, and a CI/CD DevSecOps pipeline — reflect her obsession with bridging security engineering and real-world AI applications. She has placed 2nd in a Pre-Defcon CTF hosted at Google, mentored future security talent through WiCyS and NIST/NICE, and served as President of the SCU AI Club. Ritika is passionate about building secure-by-default systems, mentoring women in cybersecurity, and rethinking how LLMs are evaluated and abused in production environments.
