Jan 11 2020
A couple of years ago I spent some time trying to set up Matrix, a self-hosted instant messaging and chat system that works a little like Jabber, a little like IRC, a little like Discord, and a little like Slack. The idea is that anyone can set up their own server, which can federate with other servers (in effect forming a much larger network), and it can be used for group chat or one-on-one instant messaging. Matrix also has voice and video conferencing capabilities, so you could hold conference calls over the network if you wanted; for example, one possible use case I have in mind is running games over the Matrix network. You could even build more exotic forms of conferencing on top of Matrix if you wanted to. Even handier is that the Matrix protocol supports end-to-end encryption of message traffic, both between everyone in a channel and in private chats between pairs of people. If you turn encryption on in a channel it can't be turned off; you'd have to delete the channel entirely (which would then cause the chat history to be purged).
Chat history was a stumbling block in my threat model the last time I ran a Matrix server, sometime in 2016, and things have changed quite a bit since then. For usability's sake Matrix servers store chat history in their databases, in part as a synchronization mechanism (channels can exist across multiple servers at the same time) and in part to provide a history that users can search through to find stuff, especially if they've just joined a channel. For some applications, like collaboration inside a company, this can be a good thing (and in fact may be legally required). For other applications (like a bunch of sysadmins venting in a back channel), not so much. This is why Matrix has three mechanisms for maintaining privacy: end-to-end encryption of message traffic (of entire channels as well as private chats), peer-to-peer voice and video using WebRTC (meaning that there is no server that can record the traffic; the server merely facilitates the initial connection), and deleting the oldest chat logs from the back-end database. While there is no guarantee that other servers are also rotating out their message databases, end-to-end encryption helps ensure that only someone who was in the channel would have the keys to decrypt any of it. It also seems feasible to set up Matrix channels such that all of the users are on a single server (such as an internal chat), which means that the discussion will not be federated to other servers. Channels can also be made invite-only to limit who can join them. Additionally, who can see a channel's history, and how much of it, can be set on a per-channel basis.
For the record, on the server I built while writing this article the minimum lifetime of conversation history is one calendar day, and the maximum lifetime is seven calendar days. If I could, I'd set it to Signal's default of "delete everything before the last 300 messages," but Synapse doesn't support that, so I tried to split the difference between usability and privacy (maybe I should file a pull request?) A maintenance mole crawls through the database once every 24 hours and deletes the oldest stuff. I could probably make it run more frequently than that, but I don't yet know what kind of performance impact that would have.
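For reference, those retention settings live in Synapse's homeserver.yaml. A sketch of what the above amounts to, assuming a reasonably recent Synapse; the option names follow Synapse's sample config, so double-check them against your version's documentation:

```yaml
# Excerpt from homeserver.yaml: message retention as described above.
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d
    max_lifetime: 7d
  # How often the purge job sweeps the oldest events out of the database.
  purge_jobs:
    - interval: 1d
```

Individual rooms can override the default policy with their own retention state events, which is how per-channel history settings work under the hood.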
One of the things I'm going to do in this article is gloss over the common fiddly stuff. I'm not going to explain how to create an account on a server, because I'm going to assume that you know how to look up instructions for doing that. Hell, I google it from time to time because I don't do it often. I also won't go over how to install Certbot (the Let's Encrypt client) to get SSL certificates, even though it's a crucial part of the process. I'm also going to break this process up into a couple of articles: this one will give you a basic, working install of Synapse (a minimum viable server, if you like). In a subsequent article I will explain how to migrate Synapse's database off of SQLite and over to Postgres for better performance. For what it's worth, I have next to no experience with Postgres, so I'm figuring it out as I go along; seasoned Postgres admins will no doubt have words for me. After that I'll talk about how to make Matrix's VoIP functionality work a little more reliably by installing a STUN server on the same machine. Later, I'll go over a simple integration of Huginn with a Matrix server (because you just know it's not a technical article unless I bring Huginn into it).
A piece of advice: Don't try to go public with a Matrix server all at once. The instructions are complex and problematic in places, so this article is written from my notes. Take your time. If you rush it you will screw it up, just like I did. Get what you need working, then move on to the next bit in a day or so. There's no rush.
Jan 06 2020
It doesn't seem that long ago that I put together a Pi-Top and started tricking it out to use as a backup system. It was problematic in some important ways (the keyboard's a bit wonky), but most of all the supported respin of Raspbian for use with the Pi-Top was really, really slow and a bit fragile. While Windbringer was busy doing a full backup last week I took my Pi-Top for a spin while out and about, and to be blunt, it was too bloody slow to use. At first I figured that the microSD card I was using as the boot device was one of the lower-quality ones that bogs down once in a while, but that turned out not to be the case. Out of desperation I started looking into upgrading the RasPi in that particular shell to the latest and greatest version, which I happened to receive as a Yule gift last year. Lo and behold, I was not the only person to think along these lines. (local mirror) While the article in question talked at some length about the hardware challenges involved (mostly due to the different arrangement of connectors), the software part was the most valuable to me because it answered, concretely and concisely, how to get unmodified Raspbian working with a Pi-Top's unusual control hardware. So that this information doesn't get lost in the ether, I'm going to write up what I did.
Dec 02 2019
Let's say that you have a bunch of servers that you admin en masse using Ansible. You have all of them listed and organized in your /etc/ansible/hosts file. Let's say that each server runs a service (like my Systembot) under systemd in --user mode. (Yes, I'm going to use my exocortex-halo/ repository for this, because I just worked out a good way to keep everything up to date and want to share the technique with everyone new to Ansible. Pay it forward, you know?) You want to use Ansible to update your copy of Systembot across everything so you don't have to SSH into every box and git pull the repo to get the updates. A possible Ansible playbook to install the updates might look something like this:
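Here's a minimal sketch of such a playbook. The host group, checkout location, and unit name are assumptions — adjust all three to match your own layout:

```yaml
# Hypothetical playbook: pull the latest exocortex-halo code on every bot
# host, then restart the user-mode Systembot unit so it picks up the update.
- hosts: exocortex_bots
  tasks:
    - name: Pull the latest exocortex-halo code
      ansible.builtin.git:
        repo: https://github.com/virtadpt/exocortex-halo.git
        dest: ~/exocortex-halo
        update: yes

    - name: Restart Systembot so it runs the new code
      ansible.builtin.systemd:
        name: systembot.service
        state: restarted
        scope: user
```

The `scope: user` parameter is what tells the systemd module to talk to the per-user service manager instead of the system-wide one.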
Nov 17 2019
Last weekend I was running short of stuff to hack around on and lamented this fact on the Fediverse. I was summarily challenged to find a way to archive posts to the Fediverse in an open, easy to understand data format that was easy to index, and did not use any third party services (like IFTTT or Zapier). I thought about it a bit and came up with a reasonably simple solution that uses three Huginn agents to collect, process, and write out posts as individual JSON documents to the same box I run that part of my exocortex on. This is going to go deep geek below the cut so if it's not your cup of tea, feel free to move on to an earlier post.
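As a taste of what's below the cut: the last step of that pipeline, writing each post out as its own JSON document, can be sketched in plain Python. The archive directory name and post fields here are illustrative assumptions, since the real collection and processing happens inside Huginn agents:

```python
import json
import os


def archive_post(post, archive_dir="fediverse-archive"):
    """Write one post out as its own JSON document, named by post ID."""
    os.makedirs(archive_dir, exist_ok=True)
    filename = os.path.join(archive_dir, f"{post['id']}.json")
    with open(filename, "w", encoding="utf-8") as f:
        # Pretty-print and sort keys so the archive stays easy to read
        # and easy to diff.
        json.dump(post, f, indent=4, sort_keys=True)
    return filename


# Example: a trimmed-down post as it might arrive from an instance's API.
path = archive_post({"id": "104321",
                     "created_at": "2019-11-17T00:00:00Z",
                     "content": "Hello, Fediverse!"})
```

One file per post keeps the format open and trivially indexable with standard tools like grep or jq.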
Oct 19 2019
EDIT - 20191104 @ 2057 UTC-7 - Figured out how long it takes to scrub 40TB of disk space. Also did a couple of experiments with rebalancing btrfs and monitored how long it took.
A couple of weeks ago while working on Leandra I started feeling more and more dissatisfied with how I had her storage array set up. I had a bunch of 4TB hard drives inside her chassis glued together with Linux's mdadm subsystem into what amounts to a mother-huge hard drive (a RAID-5 array with a hotspare in case one blew out), and LVM on top of that which let me pretend that I was partitioning that mother-huge hard drive so I could mount large-ish pieces of it in different places. The thing is, while you can technically resize those virtual partitions (logical volumes) to reallocate space, it's not exactly easy. There's a lot of fiddly stuff you have to do (shrink the filesystem that's giving up space, shrink its logical volume to match, grow the logical volume that needs the space, grow the filesystem on top of it, and make sure at every step that you actually have enough room) and it gets annoying in a crisis. There was a second concern: figuring out which drive was the one that blew out when none of them were labelled or even had indicators of any kind showing which drive was doing something (like throwing errors because it had crashed). This was a problem that required fairly major surgery to fix, on both the hardware and software sides.
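For concreteness, here's roughly what that fiddly sequence looks like at the command line. The volume group and logical volume names are made up, ext4 is assumed, the sizes are arbitrary, and you'd want current backups before trying any of it:

```shell
# Hypothetical example: steal 500G from /dev/storage/scratch and give it
# to /dev/storage/archive. Order matters, and a typo here eats data.
umount /dev/storage/scratch              # can't shrink ext4 while mounted
e2fsck -f /dev/storage/scratch           # fsck is mandatory before shrinking
resize2fs /dev/storage/scratch 1500G     # shrink the filesystem first...
lvreduce -L 1500G /dev/storage/scratch   # ...then the logical volume under it
lvextend -L +500G /dev/storage/archive   # grow the LV that needs the space
resize2fs /dev/storage/archive           # grow its filesystem to fill the LV
mount /dev/storage/scratch
```

Shrinking in the wrong order (LV before filesystem) truncates the filesystem and corrupts it, which is exactly the kind of mistake that happens in a crisis.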
By the bye, the purpose of this post isn't to show off how clever I am or brag about Leandra. This is one part the kind of tutorial I wish I'd had when I was first starting out, and I hope that it helps somebody wrap their mind around some of the more obscure aspects of system administration. This post is also one part cheatsheet, both for me and for anyone out there in a similar situation who needs to get something fixed in a hurry, without a whole lot of trial and error. If deep geek porn isn't your thing, feel free to close the tab; I don't mind (but keep it in mind if you know anyone who might need it later).
Sep 28 2019
In September of 2019 a conference called Please Try This At Home was held in Pittsburgh, PA. One of the talks was given by Dr. Mixael Laufer on the topic of how to acquire pharmaceuticals such as mifepristone (local mirror) and misoprostol (local mirror) for emergency personal use. I spoke with Dr. Laufer and the person who made this recording, and they both agreed to let me post it for download and archival as long as I sent them the links to it. So, here it is.
Aug 03 2019
A common task that people using Huginn set up as their "Hello, world!" project is getting the daily weather report, because it's practical, easy, and fairly well documented. However, the existing example is obsolete because it references the Weather Underground API, which no longer exists, having been sunset at the end of 2018. Recently, the Weather Underground code in the Huginn Weather Agent was removed because it's no longer usable. But other options exist: the US National Weather Service has a free-to-use API that works with Huginn with a little extra effort. Here's what we have to do:
- Get the GPS coordinates for the place we want weather reports for.
- Use the GPS coordinates to get data out of the NWS API.
- Build a weather report message.
- E-mail it.
As sometimes happens, the admins of the NWS API have imposed an additional constraint on users accessing their data: they ask that the user agent string of whatever software you use be unique, and ideally include an e-mail address they can contact you through in case something goes amiss. This isn't a big deal.
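Outside of Huginn, the two HTTP calls involved can be sketched in Python: one request to the /points/ endpoint to find out where the forecast for a set of coordinates lives, and a second to fetch that forecast. The user agent string below is a placeholder you'd replace with your own contact address:

```python
import json
from urllib.request import Request, urlopen

# The NWS API asks for a unique, contactable user agent string;
# substitute your own name and e-mail address here.
USER_AGENT = "my-huginn-weather-bot/1.0 (you@example.com)"


def points_url(latitude, longitude):
    """Build the api.weather.gov lookup URL for a pair of GPS coordinates."""
    return f"https://api.weather.gov/points/{latitude},{longitude}"


def get_forecast(latitude, longitude):
    """Two-step lookup: /points/ tells us where this location's forecast
    lives, then we fetch that forecast URL and return its periods."""
    request = Request(points_url(latitude, longitude),
                      headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        point = json.load(response)
    forecast_url = point["properties"]["forecast"]
    request = Request(forecast_url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        return json.load(response)["properties"]["periods"]
```

In Huginn the same two requests map onto a pair of Website Agents, with the user agent set in each agent's options.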
This tutorial assumes that you've worked with Huginn a bit in the past, but if you haven't I strongly suggest that you read my earlier posts to familiarize yourself.
Okay. Let's get started.