It has been said that Twitter bots and trolls helped Russia influence the 2016 United States presidential election. Looking toward the 2018 midterm elections, many voters, politicians, and government agencies are anxious and uncertain about what may come: "How might another nation influence our election?" "Might the potential for electoral interference persuade some politicians to preemptively 'get on board' with those who may influence the political outcome?" The torrent of news broadcasts and research publications on the subject of social media manipulation has, in some respects, only added to the confusion. This talk covers aggressive research conducted over the last year focusing on bots, fake news, and hate speech on Twitter. This "aggressive research" uses methods and techniques that are directly at odds with the Twitter Terms of Service. To understand the motives and techniques of your adversary, it is sometimes best to walk in their shoes. By using and abusing the Twitter API, using automation to engage suspicious accounts without the API, and employing "nontraditional" data collection methods (i.e., social engineering), 'Da Beave' and 'Faux Real' have been collecting and analyzing a wide range of data related to their targets. These methods have given them a wider perspective than any amount of social media data alone could provide, and have allowed them to bring some of the "trolls" and "bots" out of the dark and into the daylight. This talk covers what they did and how they did it. Source code will be released as an open-source project under the GPLv2.
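The abstract mentions API-based data collection as one of the techniques involved. As a rough, hedged illustration of what that can look like, the sketch below uses the third-party tweepy library to pull a suspicious account's recent timeline and summarize its posting behavior. The credentials, the activity_profile helper, the placeholder handle, and the behavioral metrics are assumptions for illustration only; this is not the speakers' tooling, which they state will be released separately under the GPLv2.

```python
import tweepy

# Illustrative placeholders only; real values come from a registered Twitter app.
CONSUMER_KEY = "YOUR_CONSUMER_KEY"
CONSUMER_SECRET = "YOUR_CONSUMER_SECRET"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ACCESS_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET"

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)


def activity_profile(screen_name, count=200):
    """Fetch an account's recent tweets and summarize its posting behavior."""
    tweets = api.user_timeline(screen_name=screen_name, count=count,
                               include_rts=True, tweet_mode="extended")
    if not tweets:
        return None
    newest, oldest = tweets[0].created_at, tweets[-1].created_at
    days = max((newest - oldest).days, 1)
    return {
        "screen_name": screen_name,
        "tweets_sampled": len(tweets),
        "tweets_per_day": round(len(tweets) / days, 1),
        "retweet_ratio": round(
            sum(1 for t in tweets if t.full_text.startswith("RT @")) / len(tweets), 2),
    }


if __name__ == "__main__":
    # 'some_account' is a hypothetical handle, not a real target.
    print(activity_profile("some_account"))
```

Signals like posting rate and retweet ratio are only weak, surface-level indicators; as the abstract notes, social media data alone provides a limited perspective, which is why the talk also covers engagement and collection methods that go beyond the API.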