The recent Nature Machine Intelligence publication, "Dual use of artificial-intelligence-powered drug discovery," demonstrated that a machine learning model designed to predict toxicity for drug discovery could be co-opted to generate deadly chemical weapon compounds, including the nerve agent VX and novel, previously uncharacterized toxic substances. As AI systems grow more powerful, it seems likely that many more such dangerous applications will emerge. At the same time, AI systems are likely to become increasingly useful across a wide range of applications, including defensive security. Given this tension, how can the infosec community help develop good norms around the security of AI systems? The infosec community has long experience navigating the tradeoffs of vulnerability disclosure norms. Can those lessons be applied to AI systems that may themselves be capable of generating vulnerabilities, in both computer systems and biological systems?