I posted this same question on Slashdot and got plenty of "drop it in concrete and throw it in the sea" or "just don't connect it to a network" responses. A lot of folks mentioned a Faraday cage--not exactly the type of discourse I was looking for. There were, however, some noteworthy trends and details in the discussion.
Several folks referred me to Ken Thompson's classic paper "Reflections on Trusting Trust". I'm still digesting the technical details well enough to have the full epiphany, but the paper states the moral pretty plainly:
Ken Thompson wrote: The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.
If I understand correctly, the back door lives in the compiler binary rather than in any source code you could audit: a compromised compiler can recognize when it's compiling the login program (and quietly insert a back door) or when it's compiling the compiler itself (and re-insert both tricks), so the attack survives even a rebuild from perfectly clean compiler source. To be sure your programs are clean, you'd have to bootstrap your own compiler -- without using someone else's compiler to do so. This is the kind of succinct detail I think I'm after in asking this question. Let's face it -- I probably won't be creating my own C compiler to compile my OS from scratch. If I can't establish perfect security myself, I want at least to be able to explain why not and to understand where the chinks in the armor lie.
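To convince myself I actually follow the argument, I tried sketching the attack as a Python toy. Nothing here is a real compiler; toy_compile, check_password, and the pattern-matching are all made up for illustration, and "compilation" is just copying text around:

[code]
# Toy sketch of Thompson's trick (all names and patterns invented for this example).

def toy_compile(source: str) -> str:
    output = source
    # Trick 1: if the input looks like the login program, slip in a master
    # password that no review of the *login source* would ever reveal.
    if "def check_password" in source:
        output = output.replace(
            'return password ==',
            'return password == "joshua" or password ==')
    # Trick 2: if the input looks like the compiler itself, re-insert both
    # tricks into the output, so the back door survives a rebuild of the
    # compiler from perfectly clean compiler source.
    if "def toy_compile" in source:
        output = "# (self-replicating back-door logic re-inserted here)\n" + output
    return output

# The login source a reviewer would see -- completely clean:
clean_login = 'def check_password(password):\n    return password == "secret"\n'
# The "compiled" login now also accepts the planted master password:
print(toy_compile(clean_login))
[/code]

With that picture in mind, Weedpacket's post makes more sense to me: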
Weedpacket;11021215 wrote: You could fab your own ASICs, but do you really trust the HDL compiler not to insert a back door?
You just go for the most vulnerable part of the system:
http://xkcd.com/538/
With that XKCD cartoon in mind, I'm curious about where the vulnerabilities lie. Obviously, creating a back door in the hardware itself sounds really difficult compared to creating one in software or tricking a hapless user. If anyone has stats or detail on the relative frequency of attacks in these various categories, I'd love to see it. Surely someone has done a survey of exploits along these lines?
Another guy volunteered a link in response to the oft-cited Thompson paper. Apparently there's a technique called "Diverse Double-Compiling" (David A. Wheeler's work) that can detect the compromised-compiler attack. The gist, as I understand it, is sketched below.
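The idea: take the suspect compiler's source, build it with a second, independently produced compiler you trust, then use that result to build the same source again and compare the output to the suspect binary. Here's a rough Python sketch -- the file names, the trusted compiler path, the one-command build(), and the deterministic-build assumption are simplifications I made up; real compilers need a whole build system:

[code]
# Rough sketch of Diverse Double-Compiling (names and paths are placeholders).
import hashlib
import subprocess

COMPILER_SOURCE = "compiler.c"    # source of the compiler under suspicion
SUSPECT_BINARY  = "cc_suspect"    # binary claimed to be built from that source
TRUSTED_CC      = "/usr/bin/tcc"  # independent compiler from a different lineage

def build(with_compiler: str, output: str) -> str:
    """Compile the suspect compiler's source using some other compiler."""
    subprocess.run([with_compiler, COMPILER_SOURCE, "-o", output], check=True)
    return output

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Stage 1: build the compiler's source with the trusted, independent compiler.
stage1 = build(TRUSTED_CC, "cc_stage1")
# Stage 2: use that stage-1 result to build the very same source again.
stage2 = build("./" + stage1, "cc_stage2")

# Assuming deterministic builds, stage 2 should be bit-for-bit identical to
# the suspect binary; a mismatch means the binary doesn't correspond to its
# published source -- exactly what a Thompson-style back door would cause.
print("match" if digest(stage2) == digest(SUSPECT_BINARY) else "MISMATCH")
[/code]

The "diverse" part is what makes it work: to fool the check, the trusted second compiler would have to carry a matching back door, which is far less likely if it comes from a completely independent lineage.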
Another interesting tidbit someone offered was that a couple of guys have built their own computers and written their own software:
http://www.homebrewcpu.com/
http://www.bigmessowires.com/category/bmow1/
These computers are really f-ing primitive. The second guy apparently wrote a compiler in assembler :eek: That's one way around the compromised-compiler attack.
Bruce Schneier of course appeared a couple of times. I have his book "Applied Cryptography" and the dude is pretty awesome. I expect I'll be cruising his site to read about safe personal computing and also his advice on becoming a security expert.
The most considered responses made a very good point: security costs time and money, and you really need to balance the cost of your security investments against the value of the assets being protected. I'm still looking for more ideas about how to secure one's workstation (and servers, too). I'm imagining it might be feasible to construct some kind of matrix detailing the universe of possible security measures, their associated costs, and the species of exploit each measure addresses. Such a matrix might help one plan security investments to cover the most common attack vectors first. Something like the toy structure below is what I have in mind.
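Just to illustrate the shape of it -- the measures, costs, and coverage here are placeholders I invented, not real data:

[code]
# Placeholder shape of the measures-vs-exploits matrix; every entry below is
# invented for illustration, not real cost or coverage data.
measures = {
    # measure:                     (rough cost, attack vectors addressed)
    "full-disk encryption":        ("low",      {"physical theft"}),
    "prompt OS/security updates":  ("low",      {"known software exploits"}),
    "build toolchain from source": ("high",     {"compromised compiler"}),
    "fab your own hardware":       ("absurd",   {"hardware back doors"}),
}

def measures_covering(vector: str) -> list[str]:
    """Which measures claim to address a given attack vector?"""
    return [name for name, (_, vectors) in measures.items() if vector in vectors]

print(measures_covering("compromised compiler"))
[/code]

Filling in a real version would take honest cost figures and some frequency data per attack vector -- which brings me back to my request above for survey numbers.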