Secure deletion and sanitization of storage media.

24 December 2008

EDITED: Added Creative Commons license block. Other content remains the same.

Long ago, in the days of DOS and OS/2, deleting a file meant that it was gone for good. How file systems worked was a mystery to just about everybody, and so we were told to back up our data often lest a mistake or drive crash wipe out something important, leaving us up a certain body of water sans propulsion. Years passed, as they are wont to do, and someone discovered that data didn't really evaporate when it was deleted; it was just renamed in such a way that it couldn't be seen anymore. This discovery led to the creation of many data recovery tools for DOS (for OS/2 was on the way out), and many a user who'd just deleted the wrong file heaved a sigh of relief. For people concerned with deleted data being extracted from their systems without their knowledge, such as law enforcement, intelligence and defense agencies of many countries around the world, and people who risk persecution or execution for their work, brows began to sweat and many sleepless nights were had.

As alluded to before, just because you delete something doesn't mean that the data winks out of existence. Almost every file system does basically the same things to keep track of files and free disk space: a table of some kind is kept which maps blocks of disk space to names of directories and files. Block 123387 of the first hard drive holds part of ntoskrnl.exe, block 98926127 contains /usr/local/src/gtkwifi-1.10/README, and so forth. When a file is deleted, the entry in the file system's tables is updated to reflect the fact that its slot is now available for re-use, as are the blocks on disk associated with that table entry. Moreover, the first block of a file points to the second, which points to the third, and so forth, making a chain of disk blocks for that file. So long as those disk blocks aren't overwritten by parts of another file, that data can still be extracted by someone who gets hold of your computer (or just the hard drive) for a while. While many data forensics packages are too expensive for your average user to get hold of, there are some good ones out there that are open source (like Autopsy, or ddrescue, which was designed with damaged drives more than forensic analysis in mind) or at least highly affordable (like FTK Imager from AccessData).

There are two ways that one can dig deleted data out of a drive. The first is to traverse the data structures that the file system uses to keep track of disk blocks, directories, and files (I gave a thumbnail description of these a moment ago) and look for the telltale signs of file deletion. A list of all of the recently deallocated disk blocks is presented to the user along with a list of possible filenames that they could correspond to. An option is given to either un-delete the files (which you're probably familiar with) or extract them to another location (the forensic technique). However, there is no guarantee that the contents of the file will be uncorrupted, or even usable, because deleting a file returns all of the blocks associated with it to free status (effectively unlinking all of them), and some of them may have been allocated to other files since that time.

The second way is to analyze each and every block on the drive and look at the pointers to other disk blocks, in effect re-assembling the chains without going through the file system. While this takes longer and requires much more expertise (as well as specialized software), it is also the generally accepted Way To Go About Things as far as the data forensics community is concerned. This is also the method by which it is sometimes possible to extract data thought erased a very long time ago.
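If you want to see the first approach in action for yourself, The Sleuth Kit (the command-line toolkit underneath Autopsy) will walk a file system's own bookkeeping structures and show you what's been deallocated. A minimal sketch, assuming the partition you're poking at is /dev/sdb1 and that inode 1138 turns out to belong to a deleted file you care about (both are placeholders):

fls -r -d /dev/sdb1                    # walk the file system's tables and list deleted entries
icat /dev/sdb1 1138 > recovered.bin    # copy out whatever blocks that inode still points to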

In the mid-1990s, a researcher named Peter Gutmann wrote a famous paper (Secure Deletion of Data from Magnetic and Solid-State Memory) about recovering deleted and overwritten data from hard drives by highly funded and motivated parties (read: intelligence agencies) using a technique called magnetic force microscopy. This method, if implemented, would require a great deal of expertise, a large amount of highly specialized equipment, and an unknown period of time to reconstruct individual bits on the platters of a hard drive by measuring the minute residual magnetic field that remains after a bit has been changed through the re-use of a particular disk block. If you do any research at all into secure data deletion you're going to come across this paper, and if you sit down and read it all the way through, it paints the matter at hand in a way described by a good friend of mine as "Rocks fall, you all die."

Things aren't quite as simple or as bad as they seem. First of all, that paper was written with two particular hard drive technologies in mind: MFM and RLL data encoding. Depending on how long you've been using computers, you may recall working with MFM or RLL hard drives on an IBM PC, and you'll probably also fondly recall the family-friendly name used for same: "boat anchors." Those drives aren't used anymore, and in fact they aren't even manufactured anymore; you'll have a hard time finding them at Goodwill Computers, swap meets, or on eBay. Hard drive technology has changed a great deal since Gutmann's paper was written. Secondly, the attacks that Gutmann describes in his paper assume an entity with the expertise, hardware, and funding popularly attributed to the National Security Agency (folklore says that No Such Agency has acres and acres of Cray supercomputers in its basement for cracking codes; a little research into the practical requirements of such a setup, however, suggests that even two Crays at Fort Meade would be highly improbable due to the power and cooling infrastructure required to keep them from melting down). Unless you're way the hell out there, chances are there is nothing at all you do that would warrant anyone going to such lengths to recover your data, should they get hold of your computer. Don't bother rigging up a thermite charge or a .45 round to destroy your hard drives should someone ask to see your computer.

Now that we've got that out of the way: the United States government is acutely aware of the security risk posed by disposing of data storage media without sanitizing it first. In 1991, the National Computer Security Center worked up an unclassified document for the DoD, thoughtfully titled A Guide to Understanding Data Remanence in Automated Information Systems (version 2) (NCSC-TG-025, known to the infosec community as the Forest Green Book). While I recommend that you sit down and read this document (downloadable from the above link or requestable from the government as a galley edition) at least once to understand the specifics, what it boils down to is an inventory of the kinds of data storage media out there and how best to render each of them unreadable. It's actually not too dry as government documentation goes.

So, what, as someone who is reasonably concerned about information security and personal privacy, do you do? How far should you go to keep people from piecing together your deleted files?

The answer is, you don't have to go very far. It doesn't take much to render a file, or even an entire hard drive, irrecoverable.

The simplest way is to physically destroy the drive - if you trash the platters inside the drive, all the king's horses and all the king's men will never be able to put your information back together again. Take a screwdriver and remove the screws or bolts from the metal cover of the hard drive (on the side opposite the circuit board) to expose the drive heads and spindle. Put on safety glasses and take a large hammer to the platters a few times. Sometimes the platters are made of a fragile material (I've been told ceramic but I don't actually know what it is) which shatters when struck (which is why I recommend eye protection), but most of the time they're made of aluminum. I usually take a dozen whacks at the drive with a ball-peen hammer just to be safe. If a hard drive isn't going to be used ever again (it conked out or it's too old or small to get any real use out of) then this is your best bet.

Physical destruction is the only way to render optical media unreadable. There are shredders on the consumer market that can reduce a CD to sparkly confetti, and grinders that remove the reflective and data-bearing layers of optical disks from the label side. You can also use a sturdy pair of scissors to cut optical disks into tiny pieces, or a blowtorch to melt them (but do this outside because the smoke is toxic). The most fun way of trashing an optical disk, however, is to put it in the microwave (ideally someone else's) for a few seconds, which ruins the data-bearing and aluminum layers of the disk and leaves a pretty fractal pattern visible through the polycarbonate.

Magnetic media can also be decommissioned through a process called degaussing. Put simply, this means getting your hands on a big-ass electromagnet (sold for just this purpose) and running the storage media over the business end for a while. The idea is that the microscopic magnetized regions on the disks which represent the ones and zeroes are all re-aligned according to the magnetic field the degausser generates, erasing the data in the process. It's rumored that the more powerful units can actually warp the platters inside the drive, but I've never seen evidence of this, so take such stories with a grain of salt.

If you plan on re-using the drive, your options are limited to wiping the contents of the disk or of individual files with special software. Wiping a hard drive is often referred to as decommissioning or flattening the drive, and can be done any number of ways. Probably the simplest involves attaching the drive to a Linux, BSD, or Mac OS X machine and running the command (as the root user) dd if=/dev/urandom of=/dev/drive_to_wipe bs=4k, which copies pseudo-random junk out of the kernel's random number generator and writes it to the drive, overwriting the file system and data as it goes (on Linux, use /dev/urandom rather than /dev/random; the latter blocks waiting for entropy and would take approximately forever). Be warned that even this can take a very long time depending on the size and speed of the drive, which is where specialized software comes in handy. You could use the bcwipe utility from Jetico to accomplish the same thing (bcwipe -b -m 1 /dev/drive_to_wipe); I find that it works a bit faster than the dd method.
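Just so there's no ambiguity about what "attach the drive and run dd" looks like, here's a rough sketch. The device name is a placeholder; triple-check it, because dd will cheerfully destroy whichever drive you point it at:

fdisk -l                               # figure out which device node is the drive you want to flatten
dd if=/dev/urandom of=/dev/sdX bs=4k   # one pass of pseudo-random data over the whole drive
dd if=/dev/zero of=/dev/sdX bs=4k      # optional follow-up pass of zeroes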

If you need to wipe a number of hard drives in one machine, or a number of systems simultaneously, I cannot recommend DBAN highly enough. DBAN may be downloaded as an .iso image (for burning to CDs or DVDs) or as a Windows installer (for floppy disks or USB drives). After preparing your media of choice, boot the target machine with it and follow the instructions on the screen (there aren't many). DBAN will then cycle through all of the hard drives in the machine and overwrite them with random data, turning your data into unusable garbage.

Darik's Boot and Nuke: when you absolutely, positively must trash every last machine in the room. I've been using it to decommission machines since 2001, and I carry a credit-card-sized CD-ROM with DBAN on it as part of my normal field kit.

Solid-state storage is an entirely different matter. Sure, it's cheap (and getting cheaper all the time) and rapidly growing in capacity, but there are a few things that you need to keep in mind if you want to use it securely. Firstly, flash storage devices actually hold a bit more than they're rated for, because each data cell has to be blanked before it can be rewritten and the cells eventually burn out (on average, after something like half a million writes); hard drives are constructed with extra sectors which the controller circuitry maps over failed disk sectors for much the same reason. To keep the storage cells from giving out too early, the controller circuitry built into the device implements wear leveling, which spreads writes out over the least recently written-to cells in the device. While this means that your USB key will probably be good for a couple of years of casual use, it also means that there could be bits and pieces of files spread all through the storage cells which could be extracted by a forensic investigator. The DoD's recognized method of sanitizing USB keys typically involves a hammer. One thing that you can do, however, is overwrite the file and then overwrite all of the unused space on the flash drive, though doing so too often will shorten the effective lifespan of the drive.
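A crude way of doing that last bit, assuming for the sake of the example that the USB key is mounted at /mnt/usbkey: fill the key's free space with junk, flush it to the device, and then delete the filler file.

dd if=/dev/urandom of=/mnt/usbkey/filler.bin bs=1M   # runs until the key is full and dd errors out
sync                                                 # make sure it all actually hits the flash
rm /mnt/usbkey/filler.bin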

These are all scorched-earth solutions, however. Chances are you aren't interested in erasing and rebuilding your system every time you want to get rid of some files, so instead you'll have to securely overwrite and delete files to be rid of them. In his original paper, Gutmann suggested overwriting files 35 times with a series of special patterns which are thought to make it next to impossible for Them to recover erased data. However, a few years later he published a follow-up to his paper in which he said:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.
So, because the drive technologies which may have required 35 overwrites to be secure aren't in use anymore, there's no real reason to use all of those passes on modern drives other than to spend lots of time waiting.

Finding secure deletion software is easy. Finding secure deletion software that you know and trust is harder. When using Windows, there are two packages that I've worked with long enough to include in my standard toolkit. The first is Eraser (a portable version is available for download here), which does what you would expect of such software: it implements the 35-pass Gutmann voodoo data wipe, a single-pass overwrite with pseudo-random data, and the DoD 5220.22-M seven-pass overwrite. Eraser has a lot of features that I won't cover here, so I highly recommend visiting the website and reading the overview. In a nutshell, start it up and drag the files you want to shred over to the window. Then select them in the Eraser window (I like using control-A to do that), right-click, and select "Run". Sit back and wait. The second is AxCrypt, a portable file encryption utility for Microsoft Windows which also implements secure file deletion, either by right-clicking on a file and choosing "AxCrypt -> Shred and delete" or automatically when you encrypt a file (destroying the original).

I've tested both applications by destroying data and then using FTK Imager on the drive to see what traces were left, and from everything I can tell they operate as expected. Scanning the drive after the files were deleted, I went down to the "unallocated space" listing of the partition and found a series of directories with numerical names. Inside each directory was a series of files (also with numerical names) which FTK Imager lets you poke around in. It takes some work and patience, but it's possible to pick out the insecurely deleted test files by their contents. The securely deleted test files showed up as files full of binary junk - they look like executables, but if you scroll around in them you won't find any of the usual structures that are common to executable files. Net result: you know that data used to be there but isn't anymore, which in itself is evidence that something odd is going on.
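If you don't have FTK Imager handy, you can run a crude version of the same sanity check from any *nix box or live CD: plant a recognizable marker string in a test file, securely delete it, and then search the raw partition for the marker. The partition name below is a placeholder, you'll need root to read the raw device, and it will take a while on a large partition:

echo "nobody expects the test marker" > testfile.txt    # create a file with a distinctive string in it
shred -fu testfile.txt                                   # securely delete it (GNU shred; see below)
strings /dev/sda1 | grep "nobody expects"                # if the marker still turns up, the data survived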

UNIXes and UNIX-alikes (like Linux) have more options available to them, but also more potential pitfalls to avoid. Sometimes tempfiles are created in /tmp or a subdirectory thereof, sometimes they're created in a personal ~/tmp or ~/.tmp directory, and sometimes you can find them in weirder places. Journaling file systems and software RAID are good ways of protecting your data (along with backups... you do back up, right?), but they also pose a special challenge because those features are designed to make losing (and thus getting rid of) data much harder. Out of the box, Fedora Core, its ilk, and Ubuntu all use the EXT3 file system by default, which is a journaling file system. Great for data integrity, but lousy for making sure that data can't be recovered, because many journaling file systems don't reliably overwrite data in place. If you're concerned with high security, I recommend using an older file system like EXT2 on Linux machines. On many machines you can set aside a partition formatted as FAT-32 for your data, but be warned that doing so is a security risk of its own: FAT-32 doesn't implement file ownership, nor does it have much in the way of file permissions, so your data will be hanging out there unprotected.
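If you do go the non-journaled route, setting up a dedicated EXT2 data partition looks something like this; the device name is a placeholder, and mkfs will destroy whatever is already on that partition:

mkfs.ext2 /dev/sdX3             # create an EXT2 (journal-less) file system on the spare partition
mkdir -p /data
mount -t ext2 /dev/sdX3 /data   # mount it someplace convenient (add it to /etc/fstab to make it stick)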

Is there a higher probability that you'll lose data? Yes. If you're concerned about your data staying gone, though, your course of action will require accepting that risk. Security is a balancing act in many respects: you'll have to decide where you're willing to trade off so much ease of use for so much hassle, a certain amount of usability for a certain amount of security, and X amount of effort for Y amount of peace of mind.

There seems to be no shortage of secure file deletion software available for Linux, BSD, or what have you, and often it can be found in the package, pkgsrc, or ports collections of your alternative OS of choice. You don't need root privileges to use most of it, either. Not many people know this, but if you run Linux you get a secure data deletion utility bundled with the core systemware ("coreutils") called GNU shred. By default GNU shred overwrites files twenty-five times (!) with junk, though you can tune the number of overwrites with a command-line option. I've gotten into the habit of using the command shred -fu foo bar /baz/chorgle (execute the shred utility, force the overwrite if you have to, unlink (delete) the files when complete) when deleting files, regardless of whether they're sensitive or not.
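A few other shred switches are worth knowing about. The ones below (pass count, a final pass of zeroes, verbose output) are standard GNU shred options, but run shred --help on your own system to be sure:

shred -v -n 3 -z -u sensitive-file.txt   # three passes of junk, a final pass of zeroes to hide the shredding, show progress, then delete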

The next utility is part of a package released by the group THC called the Secure Delete Utilities Collection (currently in version 3.1; they make older versions of their code available also), which includes the following software:
  • srm: A secure replacement for the rm command.

  • sfill: Overwrites free disk space and file system structures.

  • smem: Fills unused RAM with junk (remember the cold boot attack?).

  • sswap: Overwrites swap space with junk.
All of these utilities (I'll come back to the ones besides srm later) implement the Gutmann secure deletion process (summarized here so you know exactly what that entails):

  • Overwrite with the value 0xff (255 in decimal)

  • Overwrite five times with random data

  • Overwrite twenty-seven times with special patterns that Gutmann came up with (APO PANTOS KAKODAIMANOS)

  • Overwrite five more times with random data

  • Change the filename to a random string

  • Delete the file

This is great and all, but how do you use it? Simple. srm -l foo bar /baz/quux for files, srm -l -r /path/to/directory for directories.

In my testing, the srm utility takes a very long time but does a thorough job (another tradeoff to keep in mind: thoroughness vs. time required). The -l switch lessens the security: it limits srm to overwriting files once with all 0xff's (255's) and once with garbage. For the odd file here and there srm works nicely, but the full process doesn't scale well; if you're shredding lots of files (especially when time is of the essence), you'll be there for a very long time.

The third secure deletion utility I'd like to mention is BCWipe by Jetico, which you can download the source code to and compile yourself if it isn't pre-packaged in your distro. By default it performs the 35-pass Gutmann voodoo banishing (gods, I love saying that) on whatever files or directories you give it, though to speed things up you can have it perform the DoD 5220.22-M seven-pass wipe, overwrite everything with zeroes, or perform a variant of the DoD 5220.22-M wipe with a configurable number of passes.

Examples:
  • Gutmann wipe: bcwipe -f /path/to/files

  • US DoD Green Book shred files (the method I recommend): bcwipe -f -md /path/to/files

  • Overwrite ten times instead of seven: bcwipe -f -m 10 /path/to/files

  • Quick and dirty (overwrite with zeroes): bcwipe -f -mz /path/to/files

  • Securely erase an entire directory structure (substitute type of overwrite pattern as appropriate): bcwipe -f -r -m? /path/to/files

  • Erase an entire drive or storage device (substitute the appropriate device for hda): bcwipe -f -b -md /dev/hda

bcwipe also has a couple of features that other secure deletion utilities usually don't. For example, you can have it overwrite the unused portion of the last block of a file (called slack space) by adding the -S switch to the command, though this doesn't actually delete the files at the same time. You can also use it to scribble on the free space of a file system to conceal signs of secure deletion with the command bcwipe -F -v /path/to/directory, though you might need root privileges to do this. This can also take a long time to finish, so you'll have to decide if you really need to go to such lengths.

Here are the problems I have with shred and srm: while they're handy and I love them, they have some glaring functional deficiencies that I haven't yet gotten around to correcting. First, GNU shred only does files, not directories. If you try to torch an entire directory in one go rather than all the files in it, it'll error out on you and do nothing, which I find unacceptable for a secure deletion program. At the other end of the spectrum, THC's srm either goes too far (the full Gutmann wipe) or not far enough (only one random pass, sometimes in a weak pattern) to be really practical to use. Neither utility wipes slack space, which is something that I'm slightly concerned about.
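Until someone fixes that, the usual workaround for shred's lack of directory support is to lean on find: shred every regular file underneath the directory, then remove what's left of the tree. A sketch, with the path as a placeholder:

find /path/to/directory -type f -exec shred -fu {} \;   # shred every regular file under the directory
rm -rf /path/to/directory                               # remove the now-empty directory skeleton

Keep in mind that the file and directory names themselves still linger in the file system's metadata, which is exactly the problem I get into further down.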

During my research I found one last utility for securely erasing files, a tiny application called wipe. It doesn't look like much, but if you take a look at the documentation it has a large number of useful features, everything from being directory-aware (sorry, GNU shred) to a silent mode (no output), locking files before they're overwritten, zeroing files (overwriting them with zeroes), and wiping until out of space (free space overwriting). Wipe's default behavior is pretty hardcore, too: perform the Gutmann voodoo banishing wipe; delete files (special or not); enable static passes; verbose mode; just wipe files; lock files if possible (the docs say that you really should mount all of your partitions with the 'mand' option, though the kernel documentation will explain why that's a bad idea); assume a disk sector size of 512 bytes; assume a file system chunk size of 4 kilobytes; use security level 1; overwrite 8 times with garbage; and wipe each file once. Of course, if you read the docs you'll find that there are lots of other options you can set.
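Basic usage is about what you'd expect. The recursion switch below is how I remember it from the man page, but there are at least two different utilities floating around under the name wipe, so double-check yours before trusting it with anything important:

wipe secret-plans.txt            # wipe a single file using the defaults described above
wipe -r ~/scratch/old-project    # recurse through an entire directory tree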

Wow. So, a better question is, "How well does it work?"

Destroying a 5.5 megabyte file took about four wallclock seconds (time wipe ~/copy-of-wtmp); wiping a pair of 28k files took slightly over 0.3 seconds. Wiping a 23 megabyte directory structure of assorted files (a temporary build of busybox v1.10, to be exact), however, took longer than expected with the default options: about 5 minutes and 17 seconds. I think this is due, in part, to wipe shredding each file one at a time rather than taking any shortcuts, which is good from a security perspective but probably bad from a "Thugs are taking a FUbar to my door!" perspective.

Ultimately, my recommendation's a toss-up: if you don't mind typing command line options every time you want to securely delete something (especially if you want to completely replace deletes with secure deletes), go with bcwipe. If you want to do the same thing with a utility that has a good set of defaults but might be a little bit slower, then use wipe. If nothing else, wipe is a good drop-in replacement for the standard /bin/rm command. For Windows, I recommend using Eraser.

There are two more things that you should probably consider about secure deletion: file metadata kept by the file system, and remnants of deleted files still sitting around on the disk. Even if you shred your data files rather than just deleting them, tempfiles are often silently created in odd places and deleted automatically as you work. An excellent example of this is editing documents with a word processor or text editor. You won't always know where these tempfiles are, but you can take it for granted that they probably won't be wiped when you're finished. A forensic investigator might not be able to find the original document, but sometimes large parts of it can be extracted by undeleting the tempfiles. The logical solution to this problem is to fill up the unallocated disk space by making lots of files (full of junk or not) and then deleting them. The quick and dirty method involves copying a large directory of files (say, your .mp3 collection) a few times to fill up the drive and then deleting all of the copies. Another way of going about it is to use a utility like bcwipe (bcwipe -F /path/to/someplace) or sfill from THC's Secure Delete Utilities Collection (sfill -l -v /path/to/someplace) to automate the process. Keep in mind, however, that most if not all *nix file systems reserve a certain amount of disk space per file system (usually about 5%) for the root user; if you're doing this from an unprivileged account you might miss that last 5%. This might be important to you or it might not; that's your call to make. Remember what I said about deciding how paranoid you really need to be.
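If you're curious exactly how much space is being held back for root on a given EXT2 or EXT3 file system, tune2fs will tell you; the device name is a placeholder:

tune2fs -l /dev/sda1 | grep -i "reserved block count"   # show how many blocks are reserved for root
# (tune2fs -m changes that percentage, but think twice before fiddling with it on a root file system)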

One thing you'll want to keep in mind is that zeroing erased files, while a useful means of making sure that they can't be recovered, sticks out like a sore thumb to anyone performing a forensic analysis of a drive. It's fairly normal for unallocated disk space to fill up with bits and pieces of text and binary data (which, out of context, tends to look a lot like garbage unless someone takes the time to run a statistical analysis) as files are created, used, altered, overwritten, and deleted. Large chunks of disk space filled with zeroes, however, rarely happen unless done deliberately - and by 'deliberately', I mean securely wiping data, possibly to conceal the presence of something. So, if you're afraid that someone is going to break out the microscope and tweezers when examining your hard drive, don't zero things unless you want to raise eyebrows.

It's often suggested that everyone securely wipe everything, no matter how trivial or how damning their data is, to get law enforcement and data forensics examiners used to seeing such measures taken by J. Random User; the idea is that if everyone does it, it's no longer suspicious. While the argument makes some sense, you have to consider how often data storage media actually undergoes such inspection. To the best of my knowledge none of my systems have ever been seized, and chances are neither have any of yours. It follows, then, that examiners don't see such measures taken by J. Random User, but by people who really are Up To No Good, and may well assume that you're one of them, too. For the everyone-wipes-everything argument to hold, wide-scale forensic examination of storage media (done as often as phone calls in the United States are monitored by the NSA) would have to happen, and right now it's pretty much limited to the US border. Therefore, if you're really concerned about this, I suggest a pattern of some regular deletion and some secure deletion.

Delete about half of the files you'd normally erase - pictures, web cache, very large files, what have you - as usual. Securely delete the other half, as well as all the data that you don't want people to be able to recover; maybe a little more or a little less, so it isn't exactly a 50/50 split. Also, run a disk-filling program like sfill every few months to make all of your unused space look like it's been shredded, just in case. That way you can honestly say that your normal routine is to fill the drive with junk every few months to make sure that no personally sensitive data (like banking or corporate information) is recoverable by criminals if your equipment is stolen, and that you've been doing this for some time. If your company really does have such a policy, so much the better (hint to CIOs).

File system metadata is, if you break the phrase down, data about your files that is used and kept on disk by the file system itself. For *nix file systems, this means inodes and whatever journaling features the file system might implement. Again, just because your data is gone doesn't mean that some remnants aren't left over someplace. world_domination_plan-v2.3.doc might have been securely deleted and the unallocated space filled with trash to hide the fact, but there is a possibility that a forensic investigator going through the metadata will find a deleted reference to that file - seeing that filename but not finding any blocks that would match up with it in the unallocated space is a tipoff that you seem to be going to great lengths to make sure your data can't be recovered. The way to cover for this is to create lots of files that not only fill up the unused space of a file system but also overwrite the inodes used to keep track of files. Again, you can do this by hand by copying lots of small files all over the place (the source code to the Linux kernel comes to mind) or your free space filler software can do it for you (and usually does automatically, if you read the docs).
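For the by-hand approach, something as dumb as the following will chew through the pool of free inodes (pair it with one of the free-space fillers above to cover the data blocks too). It's a sketch: it assumes ~/churn lives on the file system you care about, it will take a good long while on a big disk, and it will briefly starve the file system of inodes, so don't run it on a box that's busy doing something important:

mkdir ~/churn
i=0
while touch ~/churn/junk.$i 2>/dev/null ; do i=$((i+1)) ; done   # create empty files until the file system runs out of inodes
rm -rf ~/churn                                                   # then delete them all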

Just in case, I like to set up my systems so that they securely delete everything in the temporary directories that I know of whenever I shut down. This is actually easy to do by editing the shutdown scripts of your system (in Windbringer's case, /etc/conf.d/local.stop; consult your system docs for more information). The commands are as straightforward as they can be:
bcwipe -f -r -md -w /tmp/
bcwipe -f -r -md -w /usr/tmp/
bcwipe -f -r -md -w /var/tmp/
for i in /home/* ; do bcwipe -f -r -md -w $i/tmp/ ; done
Depending on how many files accumulate in those directories, it slows the shutdown process down anywhere from ten seconds to a minute by the clock on the wall. I think that's acceptable, but you might not.

Depending on how far you really want to go, you could also set your machine up with a large encrypted swap partition, mount your /tmp directory on a RAM disk, and link all of your other temporary directories to /tmp so that whenever you shut your machine down, all your tempfiles will vanish automatically, but I've never done this before so I can't tell you how useful a measure it is. Also, there are always a few applications which don't use /tmp or ~/tmp for scratch space but non-standard directories (like web browsers), so this may not be a worthwhile tactic depending on what you usually use your system for. That's a topic for another article, however.
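For what it's worth, here is roughly what that setup looks like on a Linux box. The device name is a placeholder and the crypttab syntax varies a little between distributions, so treat this as a sketch rather than gospel:

# /etc/fstab - mount /tmp on a RAM disk so tempfiles evaporate at shutdown
tmpfs  /tmp  tmpfs  defaults,size=512m  0 0

# /etc/crypttab - swap encrypted with a fresh random key on every boot
cryptswap  /dev/sdX2  /dev/urandom  swap

# /etc/fstab - point the system at the encrypted mapping for swap
/dev/mapper/cryptswap  none  swap  sw  0 0

# and symlink the other temporary directories to /tmp, for example:
rm -rf /var/tmp && ln -s /tmp /var/tmp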



This work by The Doctor [412/724/301/703] is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.