Cutting the power doesn't necessarily mean that memory is cleared.

25 February 2008

It has long been a piece of grassroots wisdom that when the power to your computer goes dead, you're up a certain creek without a means of propulsion: whatever you were doing at the time has gone to the great bit bucket in the sky, and unless you'd just saved your work you could kiss your next couple of hours goodbye while reconstructing everything. From a technical standpoint, though, this isn't quite true. Modern-day DRAM can hold usable data for a finite but non-zero period of time after the main power's been cut off. Computer scientists have known this since the late 1970s, but no one had tried to do anything with it (that anyone's talking about, anyway - you know how governments are). Last Thursday, however, a group of researchers published a number of practical, implemented attacks against disk-based encryption systems that take advantage of this phenomenon. Specifically, they were able to extract cryptographic keys from the memories of computers, reconstruct them if they had partially decayed, and use them to gain access to disk volumes protected by Microsoft BitLocker (built into some versions of Windows Vista), Mac OS X's FileVault, the dm-crypt subsystem of the Linux kernel, and the cross-platform disk encryption system TrueCrypt.

Beneath the cut, things start getting technical. If you're not interested in going that far off the map, please understand that when an attacker gets hold of the hardware, all bets are off: they can do pretty much whatever they want to it. If you're using disk-based crypto and you cut the power to your machine, a sufficiently prepared attacker can probably extract the keys from the memory modules and use them to decrypt your data. You can't use crypto to protect a storage medium that's in use, because for the information to be accessible the keys have to be kept in memory somehow. The best advice I can give you is don't let anyone decide that you're a big enough target to warrant such an attack - don't give anyone a reason to kick your door down and dunk your DIMMs in liquid nitrogen. If all you do at home is check your Gmail and $website{$social_network}, chances are this won't apply to you.

Okay. Here there be dragons. This attack works because DRAM is implemented with skillions (lots and lots - some big number that I don't know the actual magnitude of) of capacitors, each of which stores a small charge. When a capacitor is charged, it stores a bit (usually a 1, but it doesn't have to be); when it's discharged, it stores the opposite bit. Because capacitors leak their charge over time, they need to be refreshed periodically to keep their values. Manufacturers specify the refresh interval of their DRAM, usually on the order of milliseconds, but it turns out that once the power is cut, information decay is far from immediate. There's an initial window (between 2.5 and 35 wallclock seconds, depending on the machine) in which very little decays, then a period of rapid decay in which most memory cells fall back to their ground state, and finally a long tail in which the remaining decay is very slow (see page 5, figures 1 through 4 of the whitepaper).
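
To make that concrete, here's a toy simulation of the decay process. This is entirely my own construction - the exponential curve and the three-second half-life are invented for illustration, and the real decay curves in the paper are messier than this - but it captures the key property: decay is one-directional, toward a known ground state.

```python
import random

def decay(bits, seconds, half_life=3.0, ground=0):
    """Toy unidirectional decay: each bit in the non-ground state
    independently falls to the ground state with probability
    1 - 2**(-seconds / half_life); ground-state bits never flip back."""
    p = 1 - 2 ** (-seconds / half_life)
    return [ground if (b != ground and random.random() < p) else b
            for b in bits]

random.seed(42)
original = [random.randrange(2) for _ in range(10_000)]
for t in (1, 5, 30, 300):
    flipped = sum(a != b for a, b in zip(original, decay(original, t)))
    print(f"t = {t:>3}s: {100 * flipped / len(original):.1f}% of bits decayed")
```

Notice that only the roughly half of the bits that start in the non-ground state can decay at all, which is why the curve flattens out near 50% - and why, if you know a region's ground state, every bit you observe in the other state is guaranteed to still be good.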

The kicker is this: cooling the memory modules before the power is cut stretches out the window before information decay sets in. The simplest method (spraying the RAM with an upturned can of compressed air) gave the research team approximately one minute before decay set in - plenty of time for a prepared attacker to yank the DIMM from a machine and pop it into another machine set up to dump its contents to storage for analysis. At the upper end, immersing a memory module in liquid nitrogen for one wallclock hour held information decay to a mere 0.17%, leaving more than enough intact for forensic analysis to take place offsite.

To give you a sense of what this means, take a look at page 7 of the whitepaper. In figure 5, the research team shows the progression of a bitmap of the Mona Lisa captured from memory that had been unpowered for 5, 30, 60, and 300 wallclock seconds, which helps show that the pattern of information decay in a given memory module is highly predictable. So predictable, in fact, that the team (made up of too many people for me to list here - read the paper and give them their props) was able to develop algorithms to reconstruct certain forms of data kept in memory at all times... namely, the cryptographic keying material of the disk encryption systems I mentioned earlier. It is possible to dump the contents of memory by using a utility like memdump after transplanting a memory module into another computer, but a more elegant method involved developing bootable utilities with footprints so small that they overwrite almost none of the memory they're dumping. This was done with a custom-built PXE (Preboot eXecution Environment) image supplied by a netboot server, a bootable USB key with syslinux on it, and an EFI (Extensible Firmware Interface) netboot application (so that Macintosh users won't feel left out). The nasty thing about the netboot memory dumpers is that they can be deployed remotely (say, on a compromised network) and used to go through every machine that happens to be netbootable (at the cost of a reboot of each box).
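
To get a feel for why predictable, one-directional decay is such a gift to an attacker, here's a toy reconstruction. Everything in it is my own contrivance - a 16-bit "key" stored alongside its bitwise complement, decaying toward an all-zeros ground state, with a SHA-256 fingerprint standing in for "try the candidate against the encrypted volume" - whereas the paper's real algorithms exploit the natural redundancy in cryptographic key schedules. The principle is the same, though: bits observed in the non-ground state are certainly correct, and redundancy plus a way to test candidates mops up the rest.

```python
import hashlib
import random

def decay(word, p, rng):
    """Flip each 1 bit to 0 with probability p (decay toward ground state 0)."""
    out = 0
    for i in range(16):
        bit = (word >> i) & 1
        if bit and rng.random() < p:
            bit = 0
        out |= bit << i
    return out

def candidates(obs_key, obs_comp):
    """Yield every original key consistent with unidirectional decay,
    given the decayed key and the decayed copy of its complement."""
    base, ambiguous = 0, []
    for i in range(16):
        k = (obs_key >> i) & 1
        c = (obs_comp >> i) & 1
        if k == 1:
            base |= 1 << i       # a surviving 1 can't have decayed into place
        elif c == 0:
            ambiguous.append(i)  # both halves read 0: one of them decayed
        # k == 0, c == 1: the original key bit really was 0
    for combo in range(1 << len(ambiguous)):
        cand = base
        for j, i in enumerate(ambiguous):
            if (combo >> j) & 1:
                cand |= 1 << i
        yield cand

rng = random.Random(7)
key = rng.randrange(1 << 16)
fingerprint = hashlib.sha256(key.to_bytes(2, "big")).digest()
obs_key = decay(key, 0.3, rng)
obs_comp = decay(key ^ 0xFFFF, 0.3, rng)
recovered = [c for c in candidates(obs_key, obs_comp)
             if hashlib.sha256(c.to_bytes(2, "big")).digest() == fingerprint]
print(recovered == [key])  # → True
```

Instead of brute-forcing all 65,536 possible keys, the attacker only has to try two-to-the-power-of-however-many ambiguous positions survive - a handful, at these decay rates.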

Sysadmins take note: Investigate every reboot that you didn't schedule. Which you should be doing anyway.

Here's something that isn't immediately obvious about cryptographic systems, even those running on bleeding-edge machines: they're very computationally intensive. Some are so intensive that pre-computation is done to speed up the encryption and decryption processes so that they won't get in the way of the user. All of the pre-computed keying information is kept in memory for the sake of accessibility on the part of the cryptographic engine... and whoever gets hold of the memory DIMMs from the system. By dumping an image of the memory module it is possible to extract the keying information programmatically and use it to unlock the encrypted volume of the computer in question. The good stuff starts on page 15 of the whitepaper - you can probably skip right to it if you really want to, but as with all security-related literature I strongly advise against doing so.
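
The team built automated tools for exactly this kind of search; what follows is my own simplified sketch of the core idea, not their code. A pre-computed AES-128 key schedule is 176 bytes of highly redundant data: the first 16 bytes (the key) completely determine the other 160. So you can slide a 16-byte window across a memory image, expand each window as though it were a key, and check whether the next 160 bytes of the image agree - allowing a few bit errors to account for decay.

```python
import random

# Build the AES S-box from first principles (GF(2^8) inverse + affine map)
# rather than pasting in the 256-entry table.
def _gf_mul(a, b):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce modulo the AES polynomial x^8+x^4+x^3+x+1
        b >>= 1
    return p

def _sbox_entry(b):
    inv = 0 if b == 0 else next(c for c in range(1, 256) if _gf_mul(b, c) == 1)
    s = inv
    for shift in (1, 2, 3, 4):
        s ^= ((inv << shift) | (inv >> (8 - shift))) & 0xFF
    return s ^ 0x63

SBOX = [_sbox_entry(b) for b in range(256)]
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key):
    """FIPS-197 AES-128 key expansion: 16-byte key -> 176-byte schedule."""
    w = list(key)
    for i in range(4, 44):
        t = w[4 * (i - 1):4 * i]
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]  # RotWord + SubWord
            t[0] ^= RCON[i // 4 - 1]
        w += [w[4 * (i - 4) + j] ^ t[j] for j in range(4)]
    return bytes(w)

def find_keys(dump, max_bit_errors=8):
    """Flag offsets where the dump looks like a (lightly decayed)
    AES-128 key schedule."""
    hits = []
    for off in range(len(dump) - 175):
        sched = expand_key(dump[off:off + 16])
        errors = sum(bin(a ^ b).count("1")
                     for a, b in zip(sched[16:], dump[off + 16:off + 176]))
        if errors <= max_bit_errors:
            hits.append((off, errors))
    return hits

# Demo: hide a key schedule in 2 KB of noise and flip three bits of it
# (leaving the 16 key bytes clean so the window expands to the true schedule).
rng = random.Random(1)
key = bytes(rng.randrange(256) for _ in range(16))
dump = bytearray(rng.randrange(256) for _ in range(2048))
dump[1000:1176] = expand_key(key)
for bit in (16 * 8 + 5, 60 * 8 + 2, 150 * 8 + 7):
    dump[1000 + bit // 8] ^= 1 << (bit % 8)
print(find_keys(bytes(dump)))  # → [(1000, 3)]
```

The odds of 160 bytes of unrelated memory agreeing with an expanded schedule to within a few bit flips are astronomically small, so false positives essentially never happen - the redundancy that makes the schedule fast to use is exactly what makes it stick out of a dump.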

The obvious solution to this is to systematically wipe a system's memory on boot to obliterate its contents. The easiest way is to turn on the memory check function in the computer's BIOS (by turning off the "Quick Boot" option), which most people disable because it slows down the boot process. However, this is easily circumvented by moving the memory modules into a compatible system that doesn't have the memory check enabled. Disabling netboot on servers and workstations helps against the remote memory dumpers, and many users (and not a few sysadmins) already do so to speed up the boot process, because the timeouts are often on the order of minutes, not seconds - and we all know how users complain about how long it takes their boxen to come back up. Software engineers can design their software to not pre-compute keying information (or at least limit how much is done), but there's no way of knowing how long it'll take them to do so. Ironically, trusted computing technologies don't do anything to prevent this attack: TC can limit the circumstances under which cryptographic keys are placed in system memory for an authorized process to make use of, but once the keys are in RAM, this attack can be carried out to expose them.
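
On the software side, the closest thing to a defense is to keep keys in memory for as short a time as possible and scrub them the moment they're no longer needed. Here's a hedged sketch of that habit in Python - a language that gives you no real guarantees here, since the interpreter may hold copies you can't reach; serious implementations do this in C with something like explicit_bzero() or SecureZeroMemory().

```python
import ctypes

def zeroize(buf: bytearray) -> None:
    """Overwrite a writable buffer in place so the key material
    doesn't linger in this allocation. Best-effort only in Python."""
    ctypes.memset((ctypes.c_char * len(buf)).from_buffer(buf), 0, len(buf))

key = bytearray(b"0123456789abcdef")  # stand-in for real key material
# ... use the key ...
zeroize(key)
print(all(b == 0 for b in key))  # → True
```

The point of a mutable bytearray (rather than an immutable str or bytes) is that it can actually be overwritten in place - immutable objects just get garbage-collected eventually, leaving the key sitting in freed memory.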

Ultimately, the best defense is to not let an attacker get hold of your machine in the first place. It'll be interesting to see which manufacturers start using screws that require weird security bits to remove - though that won't stop anyone willing to do a quick search on Amazon for the matching driver. It'll also be interesting to see when aftermarket security screws start showing up in tech stores.