Exploring the Frontiers of Large Language Models(Generative AIs): Unveiling Attack Strategies and Safeguards

Presented at Diana Initiative 2023, Aug. 7, 2023, 1:30 p.m. (60 minutes).

Large language models, aka generative AIs, have transformed natural language processing, but their widespread use necessitates a comprehensive understanding of their vulnerabilities and the implementation of robust safeguards. This presentation delves into the frontiers of generative AIs like chatgpt, focusing on unveiling attack strategies and developing effective defenses. Real-world examples of adversarial attacks are showcased, illustrating how weaknesses in generative AIs can be exploited to manipulate outputs or extract sensitive information. Attendees gain firsthand insight into model vulnerabilities and the potential consequences of such attacks. The presentation delves into developing safeguards, highlighting cutting-edge research and best practices like robust model training, input perturbation, and anomaly detection. These techniques mitigate risks associated with attacks on large language models. Ethical considerations take center stage, emphasizing responsible AI practices, bias mitigation, fairness, and transparency in generative AIs. Attendees gain valuable insights into the importance of reliable deployment to ensure equitable and beneficial applications of these models. Attending this presentation, participants acquire essential knowledge of the evolving landscape of generative AIs, including potential attack vectors and the necessary safeguards. With this understanding, attendees can make informed decisions when utilizing generative AIs in their respective domains, promoting responsible and secure usage.

Presenters:

  • Gaspard Baye - University of Massachusetts Dartmouth
    Gaspard Baye, a prominent figure in cybersecurity, is a doctoral student and research assistant at the University of Massachusetts Dartmouth Cybersecurity Center. Holding a CVE and certifications like OSCP and CEH, Gaspard demonstrates vulnerability identification and security expertise. Their leadership as an OWASP AppSec Global Reviewer and experience securing software applications and fintech/banking infrastructures further showcase their capabilities. Gaspard's research utilizes generative AIs and Deep Learning to enhance security operations. Their contributions are recognized through publications in renowned conferences and journals. Additionally, Gaspard actively advocates for and contributes to FOSS projects, including OWASP and OpenMined PySyft. Their dedication to spreading cybersecurity awareness is evident through training thousands of professionals globally. Gaspard's work drives innovation and elevates security practices in the industry.

Links:

Similar Presentations: