Faking a telnet server with netcat.

May 20 2020

Let's say that you need to be able to access a server somewhere on your network.  This is a pretty common thing to do if you've got a fair amount of infrastructure at home.  But let's say that your computer, for whatever reason, doesn't have the horsepower to run SSH because the crypto used requires math that older systems can't carry out in anything like reasonable time.  This is not an uncommon situation for retrocomputing enthusiasts.  In the days before SSH we used telnet for this, but pretty much the entire Net has abandoned it because the traffic isn't encrypted, so anyone with a mind to eavesdrop could grab your login credentials to abuse later.  On a home network behind a firewall, between systems you own, it doesn't hurt to use it once in a while.  Good luck finding systems that still package in.telnetd, though.  You can, however, fake it with a tool called netcat.

First, you need a FIFO (first in, first out), which, as far as a Linux machine is concerned, is a file that multiple processes can open to read and write.  Whenever something writes into a FIFO, everything reading from it gets whatever came in the other end.  As passing data between processes goes, the question is "how hard do you really need it to be," and FIFOs answer the question with "not hard."  Linux boxen come with a tool called mkfifo that creates them; uncreating them is as simple as deleting them like any other file.  This is the first step toward faking a telnet server:
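A minimal sketch of the whole trick, for reference.  The port number and the FIFO's path are arbitrary, and the -p flag assumes the traditional netcat rather than the OpenBSD rewrite (which spells it `nc -l 2323`):

```shell
# Create the FIFO that will carry the shell's output back to netcat.
mkfifo /tmp/backpipe

# Listen on port 2323.  Whatever a telnet client sends gets piped into a
# shell, and the shell's output goes back through the FIFO to the client.
# Backgrounded so you get your prompt back.
nc -l -p 2323 < /tmp/backpipe | /bin/sh > /tmp/backpipe 2>&1 &
```

Then `telnet your-box 2323` from the old machine drops you into a shell.  No authentication, no encryption, so only do this behind your firewall between systems you own.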

Neologism: Cave diving

Apr 22 2020

cave diving - noun phrase - The act of tunneling through multiple VPNs, bastion hosts, and chokepoints to access some network assets, usually production infrastructure in a more restricted network.  Less hazardous than, but just as easy to screw up as, real-life cave diving.

Neologism: Software installation roulette

Apr 12 2020

software installation roulette - noun phrase - The practice of piping the output of a web browser or other HTTP tool directly through a system shell, usually as root, to install something important.  The danger is that you don't know if the shell script has anything nefarious in it (such as rm -rf / or the installation of a rootkit), and by the time you find out it's far too late.

For example: sudo bash -c "$(wget -q -O- https://totally.legit.example.com/install.sh)"

Tunneling across networks with Nebula.

Apr 12 2020

Longtime readers have no doubt observed that I plug a lot of weird shit into my exocortex - from bookmark managers to card catalogues to just about anything that has an API.  Sometimes this is fairly straightforward; if it's on the public Net I can get to it (processing that data is a separate issue, of course).  But what about the stuff I have around the lab?  I'm always messing with new toys that are network connected and occasionally useful.  The question is, how do I get it out of the lab and out to my exocortex?  Sometimes I write bots to do that for me, but that can be kind of clunky because a lot of stuff doesn't necessarily need user interaction.  I could always poke some holes in my firewall, lock them to a specific IP address, and set static addresses on my gadgets.  However, out of necessity I've got several layers of firewalls at home, and making chains of port forwards work is a huge pain in the ass.  I don't recommend it.  "So, why not a VPN?" you're probably asking.

I'd been considering VPNs as a solution.  For a while I considered the possibility of setting up OpenVPN on a few of my devices-that-are-actually-computers and connecting them to my exocortex as a VPN concentrator.  However, I kept running into problems with trying to make just a single network port available over an OpenVPN connection.  I never managed to figure it out.  Then part of me stumbled across a package called Nebula, originally developed by Slack for doing just what I wanted to do: make one port inside available to another server in a secure way.  Plus, at the same time it networks together all of the servers it's running on.  Here's how I set it up.
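For context, a Nebula node's behavior is driven by a single config.yml (plus a CA cert and a per-host cert signed with nebula-cert).  A stripped-down sketch of an ordinary, non-lighthouse host's config might look something like this; the overlay addresses, hostname, and the port being shared are entirely made up for illustration:

```yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

# Map the lighthouse's overlay address to its real, routable address.
static_host_map:
  "192.168.100.1": ["lighthouse.example.com:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

tun:
  dev: nebula1

firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    # Only let other Nebula nodes reach the one service being shared.
    - port: 8080
      proto: tcp
      host: any
```

The firewall stanza is the part that does what I was after: everything else on the box stays invisible, and only the one port is reachable over the overlay.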

Migrating to Restic for offsite backups.

Apr 11 2020

20200426: UPDATE: Fixed the "pruned oldest snapshots" command.

A couple of years back I did a how-to about using a data backup utility called Duplicity to make offsite backups of Leandra to Backblaze B2. (referrer link) It worked just fine; it was stable, it was easy to script, you knew what it was doing.  But over time it started to show its warts, as everything does.  For starters, it was unusually slow when compared to running the rsync implementation Duplicity uses by itself.  I spent some time digging in and benchmarking as many of its functional modules as I could, and the bottleneck wasn't there.  It also didn't seem to be my network link, as much as I may complain about DSL from AT&T.  Even upgrading Leandra's network interface didn't really fix the issue.  Encryption before upload is a hard requirement for me, but upon investigation that didn't seem to be bogging backup runs down either.  I even thought it might have been the somewhat lesser read performance of RAID-5 on Leandra's storage array adding up, which is one of the reasons I started using RAID-1 when I upgraded her to btrfs.  That didn't seem to make a difference, either.

Ultimately I decided that Duplicity was just too slow for my needs.  Initial full backups aside (because uploading everything to offsite storage always sucks), it really shouldn't take three hours to do an incremental backup of at most 500 megabytes (out of over 30 terabytes).  On top of that, Duplicity's ability to rotate out the oldest backups... just doesn't seem to work. I wasn't able to clean anything up automatically or manually.  Even after making a brand-new full backup (which I try to do yearly regardless of how much time it takes) I wasn't able to coax Duplicity into rotating out the oldest increments and had to delete the B2 bucket manually (later, of course).  So I did some asking around the Fediverse and listed my requirements.  Somebody (I don't remember whom, sorry) turned me on to Restic because they use it on their servers in production.  I did some research and decided to give it a try.
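For reference, the basic shape of a Restic-to-B2 setup looks something like this.  The account ID, application key, bucket name, paths, and retention policy here are all placeholders, not my actual configuration; the credentials come from Backblaze's control panel:

```shell
# Restic reads its B2 credentials and repository location from the
# environment, which makes it easy to wrap in a cron job.
export B2_ACCOUNT_ID="000xxxxxxxxxx0000000000"
export B2_ACCOUNT_KEY="K000xxxxxxxxxxxxxxxxxxxxxxxxxxx"
export RESTIC_REPOSITORY="b2:my-backup-bucket:leandra"
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

# One-time repository setup, then what a nightly run would look like.
# (Commented out here because they need a live B2 bucket to talk to.)
# restic init
# restic backup /home /etc
# restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```

The restic forget --prune step is the part Duplicity could never manage for me: it actually rotates out the oldest snapshots and reclaims the space in one go.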

Neologism: Smoke and mirrors system administration

Feb 28 2020

smoke and mirrors system administration - noun phrase - When you bring a problem to your support team and they go silent for hours to days at a time.  No amount of poking and prodding is sufficient to get anyone on the team to respond to your requests for status updates.  When they finally get back to you they say that nothing's wrong and you must have made a mistake.  Your thing is now unbroken.  They never tell you (or anyone, for that matter) what they fixed or how they fixed it.

Neologism: Basketball mode

Aug 31 2019

basketball mode - noun phrase - When a service or application crashes and restarts itself over and over, i.e., bouncing like a basketball every few seconds.  Considered an outage.

Ansible: Reboot the server and pick up where it left off.

Nov 26 2018

Here's the situation: You're using Ansible to configure a machine on your network, like a new Raspberry Pi.  Ansible has done a bunch of things to the machine and needs to reboot it - for example, when you grow a Raspbian disk image so that it takes up the entire device, it has to be rebooted to notice the change.  The question is, how do you reboot the machine, have Ansible pick up where it left off, and do it in one playbook only (instead of two or more)?

I spent the last couple of days searching for specifics and found a number of techniques that just don't work. After some experimentation, however, I pieced together a small snippet of Ansible playbook that does what I need.  Because it was such a pain to figure out I wanted to save other folks the same trouble.  Here's the code, suitable for copying and pasting into your playbook:

...the first part of your playbook goes here.
    - name: Reboot the system.
      shell: sleep 2 && shutdown -r now
      async: 1
      poll: 0
      ignore_errors: true
    - name: Reconnect and resume.
      local_action: wait_for
      args:
        host: bob-newhart
        port: 22
        state: started
        delay: 10
        timeout: 30
...the rest of your playbook goes here.

Specifics of proof of concept for later reference:

  • Ansible v2.7.0
  • Raspberry Pi 3
  • Raspbian 2018-06-27

Automating deployment of Let's Encrypt certificates.

Jan 06 2018

A couple of weeks back, somebody I know asked me how I went about deploying SSL certificates from the Let's Encrypt project across all of my stuff.  Without going into too much detail about what SSL and TLS are (but here's a good introduction to them), the Let's Encrypt project will issue SSL certificates to anyone who wants one, provided that they can somehow prove that they control the domain they're cutting a certificate for.  You can't use Let's Encrypt to generate a certificate for google.com because they'd try to verify the request with the real google.com, which you don't control, and error out.  The actual process is complex and kind of involved (it's crypto, so this isn't surprising), but the nice thing is that there are a couple of software packages out there that automate practically everything, so all you have to do is run a handful of commands (which you can then copy into a shell script to automate the process) and then turn it into a cron job.  The software I use on my systems is called Acme Tiny, and here's what I did to set everything up...
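The outline of the process, for reference.  The domain here is a placeholder, the paths are arbitrary, and acme_tiny.py is the single script you download from the Acme Tiny project:

```shell
# Generate a Let's Encrypt account key and a private key for the site.
openssl genrsa 4096 > account.key
openssl genrsa 4096 > domain.key

# Build a certificate signing request for the (hypothetical) domain.
openssl req -new -sha256 -key domain.key \
    -subj "/CN=www.example.com" > domain.csr

# Ask Let's Encrypt to sign it.  Commented out here because it needs a
# live web server: acme_tiny.py drops challenge files into a directory
# the server publishes as /.well-known/acme-challenge/ so Let's Encrypt
# can verify you control the domain.
# python acme_tiny.py --account-key ./account.key --csr ./domain.csr \
#     --acme-dir /var/www/challenges/ > ./signed_chain.crt
```

Once that works by hand, the last command is the only one that has to run again, which is what makes it such a natural cron job.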

Quick and easy SSH key installation.

Dec 27 2017

I know I haven't posted much this month.  The holiday season is in full effect and life, as I'm sure you know, has been crazy.  I wanted to take the time to throw a quick tip up that I just found out about which, if nothing else, will make it easier to get up and running on a Raspberry Pi that you've received as a gift.  Here's the situation:

You have a new account on a machine that you want to SSH into easily.  So, you want to quickly and easily transfer over one or more of your SSH public keys to make it easier to log in automatically, and maybe make running Ansible a bit faster.  Now, you could do it manually (which I did for many, many years) but you'll probably mess it up at least once if you're anything like me.  Or, you could use the ssh-copy-id utility (which comes for free with SSH) to do it for you.  Assuming that you already have SSH authentication keys this is all you have to do:

[drwho@windbringer ~]$ ssh-copy-id -i .ssh/id_ecdsa.pub pi@jukebox
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_ecdsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out
    any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now
    it is to install the new keys

pi@jukebox's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'pi@jukebox'"
and check to make sure that only the key(s) you wanted were added.

Now let's try to log into the new machine:

[drwho@windbringer ~]$ ssh pi@jukebox
Linux jukebox 4.9.70-v7+ #1068 SMP Mon Dec 18 22:12:55 GMT 2017 armv7l

The programs included with the Debian GNU/Linux system are free software;

# I didn't have to enter a password because my SSH pubkey authenticated me
# automatically.
pi@jukebox:~ $ cat .ssh/authorized_keys
ecdsa-sha2-nistp521 AAAAE....

You can run this command again and again with a different pubkey, and it'll append it to the appropriate file on the other machine (~/.ssh/authorized_keys).  And there you have it; your SSH pubkey has been installed all in one go.  I wish I'd known about this particular trick... fifteen years ago?