Disinformation At Scale: Using GPT-3 Maliciously for Information Operations

Presented at Black Hat USA 2021, Aug. 5, 2021, 10:20 a.m. (40 minutes)

Last year, OpenAI developed GPT-3, currently the largest and most powerful natural language model in the world. The select groups granted first access quickly demonstrated that it can write realistic text in almost any genre, including articles that humans could not distinguish from real news stories. In the wrong hands, this tool can tear at the fabric of society and bring disinformation operations to an entirely new scale.

Based on six months of privileged access to GPT-3, our research tries to answer just how useful GPT-3 can be for information operators looking to spread lies and deceit. Can GPT-3 be used to amplify disinformation narratives? Can it come up with explosive news stories on its own? Can it create text that might fuel the next QAnon? Can it really change people's stances on world affairs? We will show how we got GPT-3 to do all this and discuss ways to prepare for the next wave of automated disinformation.

Presenter:

  • Andrew Lohn - Senior Fellow, Center for Security and Emerging Technology
    Andrew Lohn is a Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. His research focuses on the security risks of AI systems and the overlap between cybersecurity and AI. He has worked at the RAND Corporation, Sandia National Laboratories, NASA, and Hewlett Packard Labs.
