Jan 14 2018
As frequent readers may or may not remember, I rebuilt my primary server last year, and in the process set up a fairly hefty RAID-5 array (24 terabytes) to store data. As one might reasonably expect, backing all of that stuff up is fairly difficult. I'd need to buy enough external hard drives to fit a copy of everything on there, plus extra space to store incremental backups for some length of time. Another problem is that both Leandra and the backup drives would be in the same place at the same time, so if anything happened at the house I'd not only not have access to Leandra anymore, but there's an excellent chance that the backups would be wrecked, leaving me doubly screwed.
Here are the requirements I had for making offsite backups:
- Backups of Leandra had to be offsite, i.e., not in the same state, ideally not on the same coast.
- Reasonably low cost. I ran the numbers on a couple of providers and paying a couple of hundred dollars a month to back up one server was just too expensive.
- Linux friendly.
- My data gets encrypted with a key only I know before it gets sent to the backup provider.
- Supported by several different backup applications, so I wouldn't be stranded if any one of them stopped being maintained.
- Easy to restore data from backup.
After a week or two of research and experimentation, as well as pinging various people to get their informed opinions, I decided to go with Backblaze as my offsite backup provider, and Duplicity as my backup software. Here's how I went about it, as well as a few gotchas I ran into along the way.
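To give a sense of what the end result looks like, here's a rough sketch of an encrypted Duplicity run against Backblaze's B2 storage service. The bucket name, paths, GPG key ID, and credential variables below are all placeholders, not my actual configuration:

```shell
# Sketch of an encrypted Duplicity backup to Backblaze B2.
# B2_KEY_ID, B2_APP_KEY, the bucket name, and the GPG key ID are
# placeholders -- substitute your own values.
duplicity \
    --encrypt-key DEADBEEFCAFE \
    --exclude /home/drwho/tmp \
    /home/drwho \
    "b2://${B2_KEY_ID}:${B2_APP_KEY}@my-backup-bucket/leandra"

# Restoring works the same way, with source and destination reversed:
duplicity restore \
    "b2://${B2_KEY_ID}:${B2_APP_KEY}@my-backup-bucket/leandra" \
    /home/drwho/restored
```

Because the `--encrypt-key` option points at a GPG key that only I hold, everything is encrypted locally before it ever leaves the house, which satisfies the encryption requirement above.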
Jan 14 2018
Let's say there's a website that you want to make a local mirror of. This means that you can refer to it offline, and you can make offline backups of it for archival. Let's further state that you have access to some server someplace with enough disk space to hold the copy, and that you can start a task, disconnect, and let it run to completion some time later, with GNU Screen for example. Let's further state that you want the local copy of the site to not be broken when you load it in a browser; all the links should work, all the images should load, and so forth. One of the quickest and easiest ways to do this is with the wget utility.
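As a preview, the basic shape of the command looks something like this (the URL is a stand-in, and the exact options are a matter of taste):

```shell
# Mirror a site for offline browsing.  example.com is a placeholder.
wget --mirror \
     --convert-links \
     --adjust-extension \
     --page-requisites \
     --no-parent \
     --wait=2 \
     https://www.example.com/
```

`--page-requisites` pulls down the images and stylesheets each page needs, `--convert-links` rewrites the links so the local copy works in a browser, and `--wait=2` pauses between requests so you don't hammer the site. Kick it off inside a Screen session and detach.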
Jan 06 2018
A couple of weeks back, somebody I know asked me how I went about deploying SSL certificates from the Let's Encrypt project across all of my stuff. Without going into too much detail about what SSL and TLS are (here's a good introduction to them), the Let's Encrypt project will issue SSL certificates to anyone who wants one, provided that they can somehow prove that they control whatever they're cutting a certificate for. You can't use Let's Encrypt to generate a certificate for google.com, because Let's Encrypt would try to communicate with the server google.com (there isn't any single such machine, but bear with me) to verify the request, fail, and error out. The actual process is complex and kind of involved (it's crypto, so this isn't surprising), but the nice thing is that there are a couple of software packages out there that automate practically everything, so all you have to do is run a handful of commands (which you can then copy into a shell script) and turn the whole thing into a cron job. The software I use on my systems is called Acme Tiny, and here's what I did to set everything up...
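To preview the broad strokes before getting into the details, a basic Acme Tiny run looks something like this. The domain, the paths, and the webroot directory are all placeholders here, and your web server has to be set up to serve the challenge directory:

```shell
# Rough sketch of requesting a certificate with Acme Tiny.
# example.com and every path here are placeholders.
openssl genrsa 4096 > account.key
openssl genrsa 4096 > domain.key
openssl req -new -sha256 -key domain.key -subj "/CN=example.com" > domain.csr

# acme_tiny answers Let's Encrypt's challenge by writing files into
# --acme-dir, which the web server must expose at
# http://example.com/.well-known/acme-challenge/
python acme_tiny.py --account-key ./account.key --csr ./domain.csr \
    --acme-dir /var/www/challenges/ > ./signed_chain.crt
```

Drop those last commands into a shell script, point a cron job at it, and the certificate renews itself before it expires.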
Dec 27 2017
I know I haven't posted much this month. The holiday season is in full effect and life, as I'm sure you know, has been crazy. I wanted to take the time to throw a quick tip up that I just found out about which, if nothing else, will make it easier to get up and running on a Raspberry Pi that you've received as a gift. Here's the situation:
You have a new account on a machine that you want to SSH into easily. So, you want to quickly and easily transfer over one or more of your SSH public keys to make it easier to log in automatically, and maybe make running Ansible a bit faster. Now, you could do it manually (which I did for many, many years) but you'll probably mess it up at least once if you're anything like me. Or, you could use the ssh-copy-id utility (which comes for free with SSH) to do it for you. Assuming that you already have SSH authentication keys this is all you have to do:
[drwho@windbringer ~]$ ssh-copy-id -i .ssh/id_ecdsa.pub pi@jukebox
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_ecdsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pi@jukebox'"
and check to make sure that only the key(s) you wanted were added.
Now let's try to log into the new machine:
[drwho@windbringer ~]$ ssh pi@jukebox
Linux jukebox 4.9.70-v7+ #1068 SMP Mon Dec 18 22:12:55 GMT 2017 armv7l
The programs included with the Debian GNU/Linux system are free software;
# I didn't have to enter a password because my SSH pubkey authenticated me
pi@jukebox:~ $ cat .ssh/authorized_keys
You can run this command again and again with a different pubkey, and it'll append it to the appropriate file on the other machine (~/.ssh/authorized_keys). And there you have it; your SSH pubkey has been installed all in one go. I wish I'd known about this particular trick... fifteen years ago?
Dec 02 2017
Difficulty rating: 8. Highly specific use case, highly specific setup, assumes that you know what these tools are already.
Let's assume that you have a couple of servers that you can SSH into over Tor as hidden services.
Let's assume that your management workstation has SSH, the Tor Browser Bundle, and Ansible installed. Ansible does all of its work over an SSH connection, so there's no agent to install on any of your servers.
Let's assume that you only use SSH public key authentication to log into those servers. Password authentication is disabled with the directive PasswordAuthentication no in the /etc/ssh/sshd_config file.
Let's assume that you have sudo installed on all of those servers, and at least one account can use sudo without needing to supply a password. Kind of dodgy, kind of risky, mitigated by only being able to log in with the matching public key. That seems to be the devopsy way to do stuff these days.
Problem: How to use Ansible to log into and run commands on those servers over the Tor network?
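One way to approach it, sketched below, is to point SSH's ProxyCommand at Tor's local SOCKS proxy from the Ansible inventory. This assumes Tor is listening on its default port (127.0.0.1:9050) and that your netcat supports the `-x`/`-X` proxy options, as the OpenBSD flavor does; the .onion address and the account name are placeholders:

```shell
# Hypothetical inventory; the .onion address and user are placeholders.
cat > tor_inventory <<'EOF'
[torservers]
leandra ansible_host=abcdefghijklmnop.onion ansible_user=drwho

[torservers:vars]
ansible_ssh_common_args='-o ProxyCommand="nc -x 127.0.0.1:9050 -X 5 %h %p"'
EOF

# With Tor running locally, Ansible tunnels its SSH traffic through
# the SOCKS proxy like any other SSH connection:
ansible torservers -i tor_inventory -m ping
```

Because the ProxyCommand lives in the inventory rather than in each playbook, everything else about your Ansible setup stays the same.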
Nov 27 2017
A couple of weeks ago a new release of the Keybase software package came out, and one of its new features is native support for hosting Git repositories. This might only really be useful to coders, but it's a handy enough service that I think it's worth a quick tutorial. Prior to that release, something in the structure of the Keybase filesystem made it unsuitable for storing anything but static copies of Git repositories (I don't know exactly what), but they've now made Git a first-class citizen.
I'm going to assume that you use the Git distributed version control system already, and you have at least one Git repository that you want to host on Keybase; for the purposes of this example I'm going to use my personal copy of the Exocortex Halo code repository on Github. I'm further going to assume that you know the basics of using Git (cloning repositories, committing changes, pulling and pushing changes). I'm also going to assume that you already have a Keybase account and a fairly up-to-date copy of the software installed. I am, however, going to talk a little bit about the idea of remotes in Git. My discussion will necessarily have some technical inaccuracies for the sake of usability if you're not an expert on the internals of Git.
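To preview where this is going, the whole round trip looks roughly like this. This is a sketch that assumes the Keybase client is installed and running; "drwho" is my Keybase username and exocortex-halo is the repository from the example above, so substitute your own:

```shell
# Create an empty private Git repository on Keybase.
keybase git create exocortex-halo

# Add it as a remote to an existing local clone and push everything up.
# "drwho" is my Keybase username; substitute your own.
cd exocortex-halo/
git remote add keybase keybase://private/drwho/exocortex-halo
git push keybase master
```

The keybase:// remote behaves like any other Git remote; the Keybase client transparently encrypts everything on the way up and decrypts it on the way down.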