Guerilla archival using wget.

Feb 10 2017

Let's say that you want to mirror a website chock full of data before it gets 451'd - say it's epadatadump.com.  You've got a boatload of disk space free on your Linux box (maybe a terabyte or so) and a relatively stable network connection.  How do you do it?

wget.  You use wget.  Here's how you do it:

[user@guerilla-archival:(9) ~]$ wget --mirror --continue \
    -e robots=off --wait 30 --random-wait http://epadatadump.com/

Let's break this down:

  • wget - Self explanatory.
  • --mirror - Mirror the site.
  • --continue - If you have to re-run the command, pick up where you left off (including the exact position within partially downloaded files).
  • -e robots=off - Ignore robots.txt, which will otherwise be in your way.  Many archive owners use this file to prevent web crawlers (and wget) from riffling through their data.  If the data is sufficiently important, this is the option you want.
  • --wait 30 - Wait 30 seconds between downloads.
  • --random-wait - Actually wait for 0.5 * (value of --wait) to 1.5 * (value of --wait) seconds in between requests to evade rate limiters.
  • http://epadatadump.com/ - The URL of the website or archive you're copying.

If the archive you're copying requires a username and password to get in, you'll want to add the --user=<your username> and --password=<your password> options to the above command line.
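For example, a hypothetical authenticated version of the above command would look something like this (the username and password here are placeholders, obviously):

[user@guerilla-archival:(9) ~]$ wget --mirror --continue \
    -e robots=off --wait 30 --random-wait \
    --user=archivist --password=hunter2 http://epadatadump.com/

Bear in mind that anyone else on the machine can see those credentials in the process list, so on a shared box you might prefer wget's --ask-password option, which prompts for the password instead of taking it on the command line.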

Happy mirroring.  Make sure you have enough disk space.

Autostarting Kodi on an Arch Linux media box.

Jan 20 2017

Not too long ago, when the USB key I'd built a set-top media machine on died from overuse, I decided to rebuild it using Arch Linux with Kodi as the media player.  The trick, I keep finding every time, lies in getting Kodi to start up whenever the machine starts up.  I think I've re-figured that out six or seven times by now, and each time after it works I forget all about it.  So, I guess I'd better write it down for once so that I've got a snapshot of what I did in case I need to do it again later.

The instructions in the Arch Linux wiki work, but you need to pick the right ones to follow.  The short-and-sweet ones with the automagickal AUR package don't work.  Forget it.

Install LightDM from the Arch package repository (sudo pacman -S lightdm).  Then follow the instructions I linked to above to the letter.  That means carrying out the following tasks:

Create the file /etc/X11/Xwrapper.config.  The file should contain only the following line:

needs_root_rights = yes
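If you don't feel like opening a text editor for a one-line file, something like this will do the job (using tee is just one convenient way to write a root-owned file):

echo "needs_root_rights = yes" | sudo tee /etc/X11/Xwrapper.config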

Follow the LightDM "Enabling autologin" and "Enabling interactive passwordless login" instructions.  Create a user named "kodiuser" (you don't need to set a password) and give it access to the system groups necessary to access resources on the system.  I used the following command to do this:

sudo useradd -c "Kodi Service Account" -G dbus,network,video,audio,optical,storage,users -m kodiuser

Create two additional groups which LightDM needs to enable autologin:

  • sudo groupadd -r autologin
  • sudo groupadd -r nopasswdlogin

Add kodiuser to those groups:

  • sudo gpasswd -a kodiuser autologin
  • sudo gpasswd -a kodiuser nopasswdlogin
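
When you're done, the autologin stanza in /etc/lightdm/lightdm.conf should wind up looking something like this (a sketch from memory; the section header and session name can vary with your LightDM version and how Kodi registers its X session, so trust the wiki over me if they disagree):

[Seat:*]
# Log the service account in automatically, without a password.
autologin-user=kodiuser
# Start the Kodi session instead of a regular desktop environment.
autologin-session=kodi

Then tell systemd to bring up LightDM at boot:

sudo systemctl enable lightdm.service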

Upgrading Ubuntu Server 14.04 to 16.04.

Oct 29 2016

A couple of days ago I got it into my head to upgrade one of my Exocortex servers from Ubuntu Server 14.04 LTS to 16.04 LTS, the latest stable release. While Ubuntu long-term support releases are good for a couple of years (14.04 LTS would be supported until at least 2020) I had some concerns about the packages themselves being too stale to run the later releases of much of my software. To be more specific, I could continue to hope that the Ruby and Python interpreters I have installed could be upgraded as necessary, but at some point the core system libraries would be too old and newer interpreters would no longer compile against them. Not good for long-term planning.

First off, whenever you're about to do a major upgrade of anything, read the release notes so you know what you're getting yourself into. You'll also usually find some notes about all the new goodies you'll be able to play with.

In the past I've had nothing but trouble using the documented Ubuntu release upgrade process, so much so that I've had clients sign "I told you so" documents when they pressured me into attempting it, because the procedure could reliably be expected to leave the system completely trashed, and a full rebuild was the only recourse. This time I set up a testbed in Virtualbox which consisted of a fully patched Ubuntu Server 14.04.5 LTS install. I ran through the documented upgrade process, and much to my surprise it went smoothly, leaving me with a functional virtual machine at the end of a 45 minute procedure (most of which was automatic; I only had to answer a few questions along the way). The process consisted of logging in as the root user (sudo -s) and running the updater (do-release-upgrade).
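For the record, the whole thing boils down to something like this (patching the current release before upgrading is my own habit rather than an official requirement):

# Become root.
sudo -s
# Make sure the 14.04 install is fully patched first.
apt-get update && apt-get dist-upgrade
# Kick off the release upgrade and answer the prompts as they come up.
do-release-upgrade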

So, if it's so easy, why am I writing a blog post about it? Why worry?

Why worry, indeed. Read on.

Exocortex: Setting up Huginn

Sep 11 2016

In my last post I said that I'd describe in greater detail how to set up the software that I use as the core of my exocortex, called Huginn.

First, you need someplace for the software to live. I'll say up front that you can happily run Huginn on your laptop, desktop workstation, or server so long as it's not running Windows. Huginn is developed under Linux; it might run under one of the BSDs but I've never tried. I don't know if it'll run as expected in MacOSX because I don't have a Mac. If you want to give Huginn a try but you run Windows, I suggest installing VirtualBox and building a quick virtual machine. I recommend sticking with the officially supported distributions and using the latest stable version of Ubuntu Server. At the risk of sounding self-serving, I also suggest using one of my open source Ubuntu hardening sets to lock down the security on your new VM all in one go. If you're feeling adventurous you can get a VPS from a hosting provider like Amazon's AWS or Linode. I run some of my stuff at Digital Ocean and I'm very pleased with their service. If you'd like to give Digital Ocean a try here's my referral link which will give you $10us of credit, and you are not obligated to continue using their service after it's used up. If I didn't like their service (both commercial and customer) that much I wouldn't bother passing it around.

As serious web apps go, Huginn's system requirements aren't very high, so you can build a very functional instance without putting a lot of effort or money toward it. You can run Huginn in about one gigabyte of RAM and one CPU, with a relatively small amount of disk space (twenty gigabytes or so, a fairly small amount for servers these days). Digital Ocean's $10us/month droplet (one CPU, one gigabyte of RAM, and 30 gigabytes of storage) is sufficient for experimentation and light use. To really get serious usage out of Huginn you'll need about two gigabytes of RAM to fit multiple worker daemons into memory. I personally use the following specs for all of my Huginn virtual machines: at least two CPUs, 60 gigabytes of disk space, and at least four gigabytes of RAM. Chances are, any physical machine you have on your desk exceeds these requirements so don't worry too much about it (but see these special instructions if you plan on using an ultra-mini machine like the Raspberry Pi). If you build your own virtual machine, take these requirements into account.
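If you go the VirtualBox route, a VM matching those specs can be stood up from the command line along these lines (the VM name and disk filename are made up for the example; on older VirtualBox releases createmedium is spelled createhd):

# Create and register a 64-bit Ubuntu VM.
VBoxManage createvm --name huginn --ostype Ubuntu_64 --register
# Two CPUs, four gigabytes of RAM, NAT networking.
VBoxManage modifyvm huginn --cpus 2 --memory 4096 --nic1 nat
# A 60 gigabyte virtual disk (the size is given in megabytes).
VBoxManage createmedium disk --filename huginn.vdi --size 61440
# Attach the disk to a SATA controller.
VBoxManage storagectl huginn --name SATA --add sata
VBoxManage storageattach huginn --storagectl SATA --port 0 --device 0 \
    --type hdd --medium huginn.vdi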

Arch Linux, systemd, and RAID.

May 13 2016

Long, long time readers of my blog might remember Leandra, the server that I've had running in my lab in one configuration or another since high school (10th grade, in point of fact). She's been through many different incarnations and has run on pretty much every x86 CPU ever made since the 80386. She's also run most of the major distributions of Linux out there, starting with Slackware and most recently running Arch Linux (all of the packages of Gentoo with none of the hours spent compiling everything under the sun or fighting with USE flags). It's also possible to get a full Linux install going with only the packages you need in a relatively small amount of disk space; my multimedia machine, for example, is only 2.7 gigabytes in size, and Leandra as she stands right now has a relatively svelte 1.1 gigabytes of systemware. However, Arch Linux was an early adopter of something called systemd, which aims to be a complete replacement for the traditional UNIX-like init system: it tries to manage dependencies of services, parallelize startup and shutdown of system features, automatically start and stop stuff, replace text-based system logs with a binary database, and do all sorts of bleeding edge stuff like that.

Some people love systemd. Some people hate systemd. Personally, I think it is, as my besainted grandmother would have said, enough to piss off the Pope. That's not really what I'm writing about, though. What I'm writing about is a problem I ran into getting Leandra back up and running after building a fairly sizeable RAID array with logical volumes built on top of it.
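To give you an idea of the shape of the thing, the general recipe for that kind of setup looks something like this (the device names, RAID level, and sizes here are illustrative, not Leandra's actual layout):

# Build a RAID-5 array out of four partitions.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
# Layer LVM on top of the array.
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -L 500G -n archives storage
# Record the array's configuration so it can be reassembled at boot.
mdadm --detail --scan >> /etc/mdadm.conf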