Recent Deep Learning-based Natural Language Processing (NLP) systems rely heavily on Word Embeddings, a.k.a. Word Vectors, a method of converting words into meaningful vectors of numbers. However, the process of gathering data, training word embeddings, and incorporating them into an NLP system has received little scrutiny from a security perspective. In this talk we demonstrate that we can influence such systems by manipulating their training data, and show how the resulting poisoned embeddings can be injected into real-world systems.
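To make the attack surface concrete, below is a minimal, hypothetical sketch of corpus poisoning using gensim's Word2Vec. It is not the method presented in the talk; the corpus, the target word "cheap", and the attractor word "reliable" are all invented for illustration. The idea is that an attacker who can append crafted text to the training data can pull a target word's vector toward words of their choosing, which shifts any downstream model built on those embeddings.

```python
from gensim.models import Word2Vec

# Toy "clean" corpus: "cheap" naturally co-occurs with negative words.
clean_corpus = [
    ["the", "product", "is", "cheap", "and", "breaks", "easily"],
    ["cheap", "items", "often", "fail", "quickly"],
    ["this", "is", "a", "reliable", "and", "durable", "product"],
    ["customers", "love", "reliable", "durable", "goods"],
] * 50  # repeated so the tiny model sees enough examples

# Poisoned sentences crafted so "cheap" co-occurs with positive words.
poison = [
    ["cheap", "means", "reliable", "and", "durable"],
    ["everyone", "loves", "cheap", "reliable", "products"],
] * 200  # the attacker floods the corpus with crafted text

def train(corpus):
    # Small model; fixed seed and a single worker keep runs comparable.
    return Word2Vec(corpus, vector_size=50, window=3, min_count=1,
                    workers=1, seed=42, epochs=20)

clean_model = train(clean_corpus)
poisoned_model = train(clean_corpus + poison)

for name, model in [("clean", clean_model), ("poisoned", poisoned_model)]:
    sim = float(model.wv.similarity("cheap", "reliable"))
    print(f"{name}: sim(cheap, reliable) = {sim:.3f}")
# The poisoned model places "cheap" much closer to "reliable":
# any sentiment classifier or search system consuming these vectors
# would inherit the attacker's chosen association.
```

Comparing the two printed similarities shows the effect of the injected sentences alone, since the clean corpus and training hyperparameters are held constant across both runs.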