Designing Whole Systems

Hi, Dennis Groves here.

Recently I was questioned over a comment I made about a USB key being functionally equivalent to a smart card, in a discussion about BitLocker. I do, of course, understand that they are technically not equivalent. Smart cards have their own operating systems and USB keys don't. And that is huge: the costs associated with breaking a smart card are far greater than the cost of the system itself!

The situation may have been that the folks questioning me saw only the technology. I see not only the technology but also the architecture and the behavior of people. Security is about people, process, and technology.

In terms of the BitLocker architecture, both the smart card and the USB key can be used as an authorization token that unlocks the drive by decrypting it. Neither establishes the identity of the user; they grant only the right to decrypt the drive. Both are two-factor authentication tools. This is way better than passwords alone, and I am 100% supportive of moves in this direction. However, the smart card is thought to be better for the reasons noted above, but also because the issuer of the smart card has shifted the burden of their risk to the holder of the card. This is one of the three ways you can manage risk: transfer it, mitigate it, or accept it. Smart cards are so good that if somebody is motivated enough to want your secret, they can no longer just steal or clone your card; they have to do a little rubber-hose cryptanalysis first (just kidding, of course). Fortunately, such methods are generally not necessary, as social engineering is a far easier way to steal credentials.
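
To make the "authorization, not identity" point concrete, here is a minimal sketch of key wrapping in Python, using the cryptography package. This is not BitLocker's actual key hierarchy; the names and flow are illustrative assumptions only. The idea is that the drive's master key is encrypted under a key that lives on the token, so possession of the token, not who you are, is what unlocks the drive.

```python
# Illustrative sketch only -- not BitLocker's real key hierarchy.
from cryptography.fernet import Fernet

# At provisioning time: generate the key that actually encrypts the
# drive (the "volume master key") and wrap it under a key that lives
# only on the token.
token_key = Fernet.generate_key()          # held on the USB key / smart card
volume_master_key = Fernet.generate_key()  # what the drive is really encrypted with

wrapped_vmk = Fernet(token_key).encrypt(volume_master_key)  # stored with the drive

# At unlock time: whoever presents the token key can unwrap the volume
# master key. Nothing here asks *who* is unlocking; possession of the
# token is the authorization.
recovered = Fernet(token_key).decrypt(wrapped_vmk)
assert recovered == volume_master_key
```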

And after the system fails, whether because of dancing pigs, rubber-hose cryptanalysis, a design flaw, bugs, a side-channel attack, or what have you, you're left with functionally equivalent protection.

The problem is that security is not about a technology, and we very often forget that. Security is about mitigating risk, and it is about people and their behaviors. When you design systems you have to really think about how the system and the people behave as a whole. And most importantly, you have to design systems to remain secure when they fail. You must assume failure. Sooner or later the system will fail; if not because of dancing pigs, then because of rubber-hose cryptanalysis.
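
Since that point is the heart of the argument, here is a minimal sketch of what designing for failure looks like in code: a hypothetical access check that fails closed, so that when the underlying system breaks, it denies rather than grants. The permission store and function names are my own stand-ins, not any particular product's API.

```python
# A sketch of "assume failure": the check fails *closed*.
PERMISSIONS = {"alice": {"reports"}, "bob": {"reports", "payroll"}}

def lookup_permissions(user: str) -> set[str]:
    # Stand-in for a real backend; raises KeyError for unknown users,
    # the way a real lookup might fail in any number of ways.
    return PERMISSIONS[user]

def is_authorized(user: str, resource: str) -> bool:
    try:
        return resource in lookup_permissions(user)
    except Exception:
        # When the system breaks, deny by default. A fail-open version
        # (returning True here) would turn every bug or outage into a
        # bypass.
        return False

print(is_authorized("alice", "payroll"))    # False: not permitted
print(is_authorized("mallory", "payroll"))  # False: lookup fails, so fail closed
```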

A friend of mine, Nigel Tranter, used to have an email signature that read: "The attacker only has to be lucky once; you have to be lucky all the time!"

It is worth repeating that security comes from a system in its entirety, not from a single component of that system. Currently, most systems require frequent patches and have unknown, undocumented vulnerabilities. Things are, of course, improving.

Therefore, Dennis says: "the way to build secure systems is to build failure into the design."

It's about designing whole security solutions!
