In the world of networked computers, security through obscurity is generally ineffective. Hiding algorithms, protecting source code, and keeping procedures secret may work initially, but eventually the cloak of secrecy is penetrated. This talk will examine how security through obscurity is relied upon in the non-computerized world. When can security through obscurity work? What risk analysis should we use to evaluate the role of obscurity in the non-computerized world? The talk will present and examine the hypothesis that an "open source" mentality should be applied to security procedures for public places. This is a logical extension of a core lesson of cryptanalysis: no cryptographic method can be considered trustworthy until it has withstood rigorous examination by qualified experts. Similarly, can we trust security procedures in the physical world, ostensibly designed to protect the public, if those procedures never undergo public scrutiny?