Security nihilism: Never good enough.

Mar 11, 2017

In the last couple of years, a meme that's come to be known as security nihilism has appeared in the security community.  In a nutshell: because there is no such thing as perfect security, there is no security at all, so why bother?  Talking about layered security controls that reinforce each other is pointless because the nihilists always skip right to the end, which is the circumvention of the nth countermeasure and final defeat.  In the crypto community, cries of "Quantum computer!" are the equivalent of invoking Godwin's Law, ending all discourse, never mind trying to separate the marketing hype from what's actually possible or the decade-odd of research into post-quantum cryptosystems.  This has led to a certain amount of attrition in the community.  It is my considered opinion that this may be one of the main reasons why many so-called security practitioners don't actually bother doing anything, including installing patches.  No, I'm not speaking hyperbolically; I've witnessed this first-hand, I'm sorry to say.

A couple of weeks ago I got into a discussion with an old friend of mine about hardware security modules, dedicated devices that carry out a handful of related tasks only: They generate crypto keys on-board or accept them from specially designed devices only, ingest plaintext and emit ciphertext, ingest ciphertext and emit plaintext, or ingest data and emit a digital signature.  If you hold a certain button down for an extra second, the HSM will treat that like a panic button and wipe its memory - no more crypto keys.  Poke around with the HSM by plugging into the serial port and messing around to figure out the communications settings or the commands, and it wipes its memory - again, no more crypto keys.  For the ones that have an Ethernet jack, plug into it and poke around at it, and it wipes its memory - no more crypto keys.  Try some clever and esoteric techniques to suss out some bits of the crypto keys and it's highly probable that you won't be successful because they're designed to leak as few bits as possible acoustically or through RF emissions.  Try to crack the case so you can get probes on the circuitry itself to read the activity of the cryptoprocessor and it'll probably zeroize itself - no more crypto keys.  Try glitching the power and the unit will power itself down as an act of defiance; zeroizing the crypto keys is optional.  There are even some very high grade, very expensive HSMs out there that will physically self-destruct if they detect they're being physically tampered with; I'll leave it as an exercise to the reader to determine who uses those.  The end of that discussion, however, was that if he could get into the data center and get his hands on the unit it was game over, I lose.
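All of those tamper responses boil down to one simple rule: any tamper signal means the keys go away.  Here's a toy sketch of that logic - purely illustrative, not any real vendor's firmware, and every name in it is made up:

```python
# Hypothetical sketch of HSM tamper-response logic.  The event names,
# class, and behavior are illustrative only -- the point is the rule
# "any tamper signal => zeroize the key material".

TAMPER_EVENTS = {
    "panic_button_held",   # operator holds the button an extra second
    "serial_port_probed",  # someone fuzzes the serial console
    "ethernet_probed",     # unexpected poking at the network jack
    "case_opened",         # physical tamper switch tripped
}

class ToyHSM:
    def __init__(self):
        self.keys = {"master": b"\x13\x37" * 16}  # placeholder key material

    def zeroize(self):
        # Overwrite the key material in place, then drop it entirely.
        for name in self.keys:
            self.keys[name] = b"\x00" * len(self.keys[name])
        self.keys = {}

    def handle_event(self, event):
        if event in TAMPER_EVENTS:
            self.zeroize()
            return "zeroized"
        if event == "power_glitch":
            # Some units just power down; zeroizing here is optional.
            return "powered_down"
        return "ok"

hsm = ToyHSM()
assert hsm.handle_event("encrypt_request") == "ok"
assert hsm.handle_event("case_opened") == "zeroized"
assert hsm.keys == {}  # no more crypto keys
```

The real devices implement this in hardware with battery-backed tamper circuits, of course, but the decision table is about this simple: the device errs on the side of destroying its own keys.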

Let's get down to brass tacks.  Why are hardware security devices designed this way?

If your use case is one in which you absolutely, positively would rather your operation be completely offline than risk an attacker getting hold of the keys to the kingdom (define as you will), dropping a few tens of thousands of dollars on a hardware device is a reasonable thing, and you undoubtedly have recovery processes all worked out for such an emergency.  You or I probably don't have any such use case, but banks (in particular multinational ones), governments, and militaries do.  Certain businesses have uses for them, also.  So much so that Amazon offers HSMs as a service these days, though the monthly cost is as much as I make in three months.

Each of the things I've described so far - every response to every kind of practical attack against a security module - is a security control for one particular kind of attack or risk.  Taken individually they're not comprehensive security in and of themselves - "You do this, I'll do that other thing."  Taken collectively, however, they add up to a very secure device.  But, as they always like to say, physical access trumps everything.  If you can get your hands on it you can fuck around with it.  Physical security measures built into the unit aside, this is why there are always other security measures in place around such operations that use HSMs.  For example, you'll find things like keycards that open certain doors but not others, biometric authentication systems (normally paired with the keycard readers), data centers with two foot thick walls, data centers out in the middle of nowhere that don't look anything like what you'd imagine a data center to look like, redundant power supplies and network uplinks, surveillance systems to alert security teams that somebody isn't supposed to be there... oh, right, the security teams.  Guys with guns who get paid to shoot and let the lawyers ask questions on their behalf later.  For the record, they don't have senses of humor and they do their jobs with the efficiency of matter and antimatter annihilating each other.  Never piss them off.  Take my word for it.
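There's a back-of-the-envelope way to see why layering pays off.  Assume - purely for illustration, the numbers are made up - that each control independently catches an intruder with some probability; the odds of slipping past all of them shrink geometrically:

```python
# Toy defense-in-depth arithmetic: if each of n independent controls
# catches an intruder with probability p_catch, the chance of evading
# every one of them is (1 - p_catch) ** n.  Illustrative numbers only;
# real controls are neither independent nor this well-characterized.

def evasion_probability(p_catch, n_layers):
    return (1 - p_catch) ** n_layers

# Even mediocre controls compound quickly:
assert evasion_probability(0.5, 1) == 0.5        # one coin-flip control
assert round(evasion_probability(0.5, 5), 4) == 0.0312  # ~3% slip past five
assert round(evasion_probability(0.3, 8), 4) == 0.0576  # weak controls, many layers
```

Real controls aren't independent - a skilled intruder who beats the badge reader may also beat the biometrics - but the compounding effect is why "each layer is individually beatable" doesn't translate to "the whole stack is beatable".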

Now, the honest question: Is this perfect security?  The answer is no, it isn't.  There are always ways to find and get into that data center, get past the guards, look utterly unremarkable to the surveillance grid, get through the security doors and mantraps... you get the picture.  Serious attackers try to collect as much information about their targets as possible, in an attempt to deduce what kinds of security are implemented, where, and how, so they'll have a better chance of succeeding.  This is why security measures are kept secret: to deny attackers that extra knowledge.  The whole point of layering security controls is that they back each other up.  Each security measure that gets circumvented means a higher probability that the attacker will smack into the next one and get noticed, or give up because the stakes are too high.  For example, your cloned ID card might get you through the uncontrolled door of the mantrap, but that's so it can lock behind you; the security controller silently recognized the fake.  (This is a dirty, dirty trick, and it works pants-shittingly well.)  It's entirely possible that an intruder could make it all the way to their goal (the rack holding the security module) and get caught by the guys with the guns.  Or the intruder might get nicked on their way out, security module tucked under an arm.  Or they might get away entirely and wreck the HSM when they start to mess around with it.  Or suppose, just suppose, they figure out how to make it decrypt the recorded network traffic.  Game over.  Let's try enumerating the various victory states for the defender, from best to worst, so we have a clear picture of what we're arguing about:

  • Attacker doesn't even bother.  Yay!  \o/
  • Attacker gets caught early, say, by anything up to the mantrap that controls access to the data center floor from the lobby.  Yay.
  • Attacker gets caught while trying to figure out which cage is yours.  Yay.  Still scary, but it's a win.
  • Attacker gets caught while trying to break into your gate.  Yay.  Scary as hell, but it's a win.
  • Attacker gets caught while stealing or messing around with the HSM.  Scary, but this is a win, too.  Here's the first "oh shit" stage.
  • Attacker hoses the HSM.  Denial of service for the operation, but they weren't successful at their primary goal.  All physical security measures circumvented, first layer of information security worked.  Either data center security needs revamping or you pissed off a government.
  • Attacker successfully tinkers with the HSM but gets caught.  Attacker winds up in jail and their primary goal was a failure.  The root cause analysis meetings are going to suck, though.
  • Attacker successful but gets caught on their way out of the facility.  Better than nothing.  This is the first "defender failed" stage, in my opinion.
  • Attacker successfully absconds with the HSM, but hoses it.  Attacker eventually failed, but the defender failed, too, because every physical security control failed completely and the last-ditch "if I can't have it, nobody can have it" security trap fired.
  • Attacker successfully absconds with the HSM and figures out how to either extract the crypto material or use the unit to decrypt the recorded traffic.  Attacker wins, defender failed.

Oh, and here's the obligatory /<-r4d hacker ninjitsu whatever-the-fuck-magick that constitutes the same kind of "defender fails utterly" failure mode: Attacker figures out how to decrypt that recorded network traffic without needing a black op.  Depending on the techniques used, there may have been nothing the defender could do about it, because the attack was entirely outside the defender's spheres of influence.

As you can see from the above list, there are multiple ways in which the defender can succeed, technically speaking.  There are also multiple ways in which the attacker can succeed to different degrees.  All of the defender's success cases involve the attacker not being in a position to accomplish their final goal (decrypting the recorded network traffic) for one reason or another.  The attacker getting thrown in jail before they can mess with the data is a win for the defender.  The attacker fucking themselves over by hosing the HSM is also a win for the defender, though they probably won't know it.  This is, regrettably, that thing called real life.

As you can see, even those bad-ass HSMs I just wrote about have their failure modes. The unit dying because somebody screwed around with it is a failure mode because it can't be used for its intended purpose anymore.  It also means that the attacker can't succeed because they don't have what they need.  To reiterate an earlier point, generally speaking outfits that are willing to be completely offline because their hardware crypto module is down are willing to consider a denial of service a victory condition because the attacker gets sandbagged.  Of course, there are undoubtedly ways to circumvent all of the security measures and pick at least part of the crypto keys out of the unit's cryptoprocessor to make other kinds of attacks somewhat easier.  I don't know what they are, but there are hardware experts out there who can undoubtedly speak at length about such things.

There is no such thing as perfection, and as the twenty-first century wears on it becomes increasingly evident that thinking about things in old ways - for example, the information security paradigms of the 1990's - is worse than useless in the second decade of the twenty-first century.  It's actually harmful in subtle yet devastating ways.  This realization seems to have hit some people really hard.  While it's accurate that there is no perfect security, it does not mean that security as a whole is worthless.  You can only implement enough security measures to reduce risk to acceptable levels while still letting people get their work done.  You can also be honest about what you'll consider acceptable levels of security breaches (please note the bullet points in my list about the attacker getting in but getting caught, as well as the attacker screwing themselves over), and how you'll handle the unacceptable side effects of some of those victory conditions (having a missing, unusable, or dead essential gizmo).  After that it's a roll of the dice.

Good luck.