Restarting a Screen session without manual intervention.

Jun 11 2017

EDIT - 20171011 - Added a bit about getting real login shells inside of this Screen session, which fixes a remarkable number of bugs.  Also cleaned up formatting a bit.

To keep the complexity of my exocortex down, I've opted not to separate everything into larger chunks using currently popular technologies such as Linux containers (though I did Dockerize the XMPP bridge as an experiment); there are already quite a few moving parts, and increasing the complexity does not make for a more secure or stable system.  However, this raises a valid and important question: how do you restart everything when you have to reboot a server for some reason?

A valid question indeed.  Servers need to be rebooted periodically to apply patches, upgrade kernels, and generally blow the cruft out of the memory field.  One can jump through all sorts of hoops and gymnastics with traditional initscripts, but for home-grown and third party stuff it's difficult to launch things from initscripts in such a way that they don't run with elevated privileges.  The hands-on way of doing it is to start a GNU Screen session when you log in and fire everything up inside it (or reconnect to the session if it's already running).  This process, too, can be automated to run when a system reboots.  Here's how:
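One way to wire this up is a cron @reboot job that launches Screen detached, plus a .screenrc directive so that every window is a real login shell (which is what the edit note above refers to).  A minimal sketch; the session name exocortex and the startup script path are placeholders, not necessarily what I run:

    # Added with `crontab -e`: at boot, start a detached Screen session
    # named "exocortex" that runs a startup script (placeholder path).
    @reboot /usr/bin/screen -dmS exocortex $HOME/bin/startup.sh

    # In ~/.screenrc: the leading dash makes new windows login shells,
    # so they source the full profile and get a sane environment.
    shell -$SHELL

After the box comes back up, screen -r exocortex reattaches to the running session so you can poke around.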

Website file integrity monitoring on the cheap.

May 28 2017

A persistent risk of running a website is the possibility of somebody finding a vulnerability in the CMS and backdooring the code so that commands and code can be executed remotely.  At the very least it means that somebody can poke around in the directory structure of the site without being noticed.  At worst, it would seem that the sky's the limit.  In the past, I've seen cocktails of browser exploits injected remotely into a site's theme that try to pop everybody who visits, but that is by no means the nastiest thing somebody could do.  This raises the question: how would you detect such a thing happening to your site?

I'll leave the question of logfile monitoring aside, because that depends on your hosting situation and everybody has their own opinions.  What I wanted to discuss is the possibility of monitoring the state of every file of a website to detect unauthorized tampering.  There are solutions out there, to be sure - the venerable Tripwire, the open-source AIDE, and auditd (which I do not recommend: you'd need to write your own analysis software for its logs to determine what files, if any, have been edited, and it'll kill a busy server faster than drilling holes in a car battery).  If you're in a shared hosting situation like I am, your options are pretty limited, because you won't have the access necessary to install new packages and you might not even be able to compile and install anything into your home directory.  However, you can still put something together that isn't perfect but is fairly functional and gets the job done, within certain limits.  Here's how I did it:

Most file monitoring systems store cryptographic hashes of the files they're set to watch over.  Periodically, the files in question are re-hashed and the outputs are compared.  If the resulting hashes of one or more files differ from the ones in the database, the files have changed somehow and should be manually inspected.  The process that runs the comparisons is scheduled to run automatically, while generation of the initial database is normally a manual process.

What I did was use command line utilities to walk through every file of my website, generate a SHA-1 hash for each one, and store the hashes in a file in my home directory.  (I know, SHA-1 is considered harmful these days.  My threat model does not include someone spending large amounts of computing time to construct a boobytrapped index.php file with the same SHA-1 hash as the existing one; in addition, I want to be a good customer and not crush the server my site is hosted on several times a day when the checks run.)
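A sketch of the two halves of that, with placeholder paths; the first command is run once, by hand, from a known-good copy of the site, and the second is what the scheduled check runs:

    # Build the baseline: hash every file under the site's document
    # root (~/example.com is a placeholder) into a file kept outside it.
    cd ~/example.com && find . -type f -exec sha1sum {} + > ~/.website-hashes

    # The periodic check: re-hash everything and compare.  --quiet
    # suppresses the "OK" lines, so any output at all means trouble.
    cd ~/example.com && sha1sum --quiet -c ~/.website-hashes

Run the second command from cron and any mismatches get mailed to you, which is about as good as alerting gets on shared hosting.  One limit worth knowing: sha1sum -c only catches files that were modified or deleted, so spotting newly added files takes a second pass comparing a fresh file listing against the baseline.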

Gargantuan file servers and tiny operating systems.

Apr 29 2017

We seem to have reached a unique point in history: your average home user has access to gargantuan amounts of disk space (8 terabyte hard drives are a thing, and prices are rapidly coming down to widespread affordability), and the processing power available in the palm of your hand makes the computational power that put the human race on the Moon look like a grain of sand next to a beach.  For most people, this means the latest phone upgrade or more space for the media box.  For others, though, it poses an unusual challenge: how to make the best use of the hardware without wasting it.  By this, I mean how one might build a server that doesn't waste hard drive space or SATA ports on the mainboard, and that still has enough room for all of that lovely (and by "lovely" I really mean "utterly disorganized") data that accumulates without even trying.  I mentioned last year that I rebuilt Leandra (specs in here) so I could work on some machine learning and search engine projects.  What I didn't mention was that I had some design constraints to follow so that I could get the most out of her.

To get the best use possible out of all of those hard drives, I had to figure out how to structure the RAID, where to put the guts of the Arch Linux install, and most importantly, how to set everything up so that if Leandra did blow a hard drive the entire system wouldn't be hosed.  If I partitioned all of the drives as described here, used the first one for the /boot and / partitions, and RAIDed the rest, then losing that first drive would take the entire operating system with it.  Also, gauging the size of the / partition can be tricky; I like to keep my system installs as small as possible and add only packages that I absolutely need (and ruthlessly purge the ones I don't use anymore).  20 gigs is way too big (currently, Leandra's OS install is 2.9 gigabytes after nearly a year of experimenting with this and that), but it would leave room to grow.
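To make the tradeoff concrete, here's roughly what the "RAID the rest" arrangement looks like with mdadm.  The device names, partition numbers, and RAID level here are assumptions for the sketch, not Leandra's actual layout:

    # Assume /dev/sda carries /boot and /, while sdb through sde are
    # pooled into a single RAID-5 array for bulk storage.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # Filesystem on top, mounted where the disorganized data lives.
    mkfs.ext4 /dev/md0
    mount /dev/md0 /home

    # Record the array so it assembles automatically at boot (Arch
    # keeps this in /etc/mdadm.conf).
    mdadm --detail --scan >> /etc/mdadm.conf

Lose one of /dev/sdb through /dev/sde and the array degrades but survives; lose /dev/sda and the operating system is gone, which is exactly the failure mode described above.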

Decisions, decisions.

So, what did I finally decide on?

Upgrading Bolt CMS to v3.x.

Jan 02 2017

Since PivotX went out of support, I've been running the Bolt CMS for my website at Dreamhost (referral link).  A couple of weeks back you may have noticed some trouble my site was having, due to significant difficulty I ran into while upgrading from the v2.x release series to the v3.x release series.  Some stuff went sideways, and I had to restore from backup at least once before I managed to get the upgrade procedure straightened out with the help of some of the developers in the Bolt IRC channel on Freenode.  If it wasn't for help from rossriley, it would have taken significantly longer to un-fuck my website.
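Before anything else, a word of caution grounded in the above: take a complete backup of both the site's files and its database before you start, because I had to fall back on mine at least once.  Something along these lines, with placeholder paths and database credentials:

    # Placeholder paths, database name, and user - adjust for your host.
    # Snapshot the site's files...
    tar -czf ~/bolt-backup-$(date +%Y%m%d).tar.gz ~/example.com

    # ...and dump the database behind Bolt.
    mysqldump -u bolt_user -p bolt_db > ~/bolt-db-$(date +%Y%m%d).sql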

Here's the procedure that I used to get my site upgraded to the latest release of Bolt.