AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace

Presented at Black Hat Europe 2018, Dec. 6, 2018, 9 a.m. (30 minutes)

The face: a crucial means of identity. But what if this crucial means of identity is stolen from you? This is already happening, and it is termed a 'Deep fake.' Deep fake technology is an artificial intelligence based human image blending method used in different ways, such as to create revenge porn, fake celebrity pornographic videos, or even cyber propaganda. Videos are altered using Generative Adversarial Networks, in which the face of the speaker is manipulated by the network and tailored to someone else's face. These videos can sometimes be identified as fake by the human eye; however, as neural networks are rigorously trained on more resources, it will become increasingly difficult to identify fake videos. Such videos can cause chaos and bring economic and emotional damage to one's reputation. Videos targeting politicians in the form of cyber propaganda can prove catastrophic to a country's government.

We will discuss the many tentacles of Deep fake and the dreadful damage it can cause. Most importantly, this talk will provide a demo of the proposed solution: identifying complex Deep fake videos using deep learning. This can be achieved using a pre-trained FaceNet model. The model can be trained on image data of people of importance or concern. After training, the output of the final layer is stored in a database. A set of sampled images from a video under inspection is then passed through the neural network, and the output of the final layer is compared to the values stored in the database. The mean squared difference confirms the authenticity of the video.

In 2018, we believe that Deep fake will progress to a different level. We will also talk about defensive measures against Deep fake.
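The abstract only outlines the detection pipeline, so the sketch below illustrates one way the embedding-and-comparison step could look. It is not the speakers' code: the FaceNet forward pass is left as a labeled placeholder, and the frame sampling, helper names, and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Placeholder for the pre-trained FaceNet model mentioned in the abstract:
# it should map a face image to a fixed-length embedding (the output of the
# network's final layer). Any FaceNet implementation could be plugged in here.
def facenet_embedding(face_image: np.ndarray) -> np.ndarray:
    raise NotImplementedError("substitute a pre-trained FaceNet forward pass")

def build_reference_db(reference_images) -> np.ndarray:
    """Embed reference photos of a person of concern and keep the vectors."""
    return np.stack([facenet_embedding(img) for img in reference_images])

def sample_frames(video_path: str, num_frames: int = 16):
    """Uniformly sample frames from the video under inspection."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, max(total - 1, 0), num_frames, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

def authenticity_score(video_path: str, reference_db: np.ndarray,
                       threshold: float = 1.0):
    """Compare sampled-frame embeddings with the stored reference embeddings.

    For each sampled frame, take the mean squared difference to the nearest
    reference embedding, then average over the video. A low score suggests
    the face matches the genuine person; a high score flags a possible
    Deep fake. The threshold is illustrative and would need tuning on data.
    """
    per_frame = []
    for frame in sample_frames(video_path):
        emb = facenet_embedding(frame)
        mse_to_refs = np.mean((reference_db - emb) ** 2, axis=1)
        per_frame.append(mse_to_refs.min())
    score = float(np.mean(per_frame))
    return score, score < threshold
```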

Presenters:

  • Niranjan Agnihotri - Associate Threat Analysis Engineer, Symantec
    Niranjan Agnihotri is a Software Development Engineer at Symantec; his work involves research and development of machine learning based techniques to mitigate threats.
  • Vijay Thaware - Security Response Lead, Symantec
    Vijay Thaware has been working in Symantec's STAR Anti-Spam Team for the last seven years as a Security Response Lead. He is involved in anti-spam, anti-fraud, and anti-malware content development and automation. His day-to-day work involves investigation and research into the latest email threats in order to present effective solutions.
