In pursuit of realistic expectations for learning models, can we better prepare for adversarial environments by examining failures in the field? All models have flaws, given the usual menu of problems with learning; it is the rapidly increasing risk of catastrophic failure that makes data /robustness/ a far more immediate concern. This talk pulls forward surprising and obscured learning errors from the Cold War to give context to modern machine learning successes, and to show how quickly things may fall apart in evolving domains such as cyber conflict.