Nov 27 2017
A couple of weeks ago a new release of the Keybase software package came out, and one of its new features is native hosting of Git repositories. This probably isn't useful for most people - it may really only be useful to coders - but it's a handy enough service that I think it's worth a quick tutorial. Prior to this release, something about the structure of the Keybase filesystem made it unsuitable for storing anything but static copies of Git repositories (I don't know exactly what), but they've now made Git a first-class citizen.
I'm going to assume that you already use the Git distributed version control system, and that you have at least one Git repository you want to host on Keybase; for the purposes of this example I'm going to use my personal copy of the Exocortex Halo code repository on Github. I'm further going to assume that you know the basics of using Git (cloning repositories, committing changes, pulling and pushing changes). I'm also going to assume that you already have a Keybase account and a fairly up-to-date copy of the software installed. I am, however, going to talk a little bit about the idea of remotes in Git. My discussion will necessarily have some technical inaccuracies for the sake of readability if you're not an expert on Git's internals.
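Keybase exposes hosted repositories through a keybase:// URL scheme, which Git hands off to a remote helper installed alongside the Keybase client. A minimal sketch of wiring up an existing clone - the username and repository name here are placeholders, not my actual ones:

```shell
#!/bin/sh
set -e

# Pretend we already have a local clone of the project; for the sake of a
# self-contained example, create a fresh repository in a temp directory.
tmp=$(mktemp -d)
cd "$tmp"
git init -q exocortex-halo
cd exocortex-halo

# Add the Keybase repository as a remote named "keybase".  The URL format
# for a private repo is keybase://private/<username>/<reponame>.  Git will
# accept the URL even on a machine without the Keybase helper installed;
# actually pushing requires the Keybase client to be running.
git remote add keybase keybase://private/yourname/exocortex-halo

# Confirm the remote is configured.
git remote -v
```

From there, pushing and pulling work exactly like any other remote: `git push keybase master` to upload, `git pull keybase master` to sync. You can keep your existing `origin` (e.g. Github) and treat Keybase as a second remote.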
Sep 04 2017
Chances are you're running one of two major web browsers on the desktop to read my website - Firefox or Google's Chrome.
Chrome isn't bad; I have to use it at work (it's the only browser we're allowed to have, enforced centrally). In point of fact, I'd have switched to it a long time ago if it weren't for one thing. I make heavy use of a plugin for Firefox called Scrapbook Plus, which makes it possible to take a full snapshot of a web page and store it locally so that it can be read offline, annotated, and full-text searched. I never count on having connectivity (I live in the United States, after all, and right now my home connection has been running quite poorly for several days due to an ongoing situation at my local CO), so I try to keep both essential documentation and general reading material stored locally for those dry periods. However, there is no port of Scrapbook Plus for Chrome, nor is there a workable equivalent add-on (I think I've tried them all). I'm not about to do without my traveling hoard of information, which at this time numbers around 10,000 unique web pages and 15 gigabytes of disk space. Out of desperation last night I did some research into how I might be able to speed up Firefox just a little and get more use out of it until I figure out what to do. Here's what I found:
Aug 12 2017
Some time ago I wrote an article about what Keybase is and what it's good for. I also mentioned one of my pet peeves, which is that, by default, the fonts used by the Keybase desktop client are way, way too small to see easily on Windbringer. A couple of days ago somebody finally figured out how to blow up the fonts on the desktop, so I can finally see what's going on without putting my nose on the display (and making the mouse cursor jump around, because Windbringer has a touchscreen). While I wish this were a configuration option in the GUI (or, hell, even a config file), I'll take what I can get. First, some background so everything makes sense...
Jun 11 2017
EDIT - 20171011 - Added a bit about getting real login shells inside of this Screen session, which fixes a remarkable number of bugs. Also cleaned up formatting a bit.
To keep the complexity of my exocortex down, I've opted not to separate everything into larger chunks using currently popular technologies such as Linux containers (though I did Dockerize the XMPP bridge as an experiment). There are already quite a few moving parts, and increasing complexity does not make for a more secure or stable system. However, this brings up a valid and important question: "How do you restart everything if you have to reboot a server for some reason?"
A valid question indeed. Servers need to be rebooted periodically to apply patches, upgrade kernels, and generally blow the cruft out of the memory field. There are all sorts of hoops and gymnastics one can go through with traditional initscripts, but for home-grown and third-party stuff it's difficult to run things from initscripts in such a way that they don't wind up with elevated privileges, which is bad for security. The hands-on way of doing it is to start a GNU Screen session when you log in and fire everything up inside it (or reconnect to the session if it's already running). This process can also be automated to run when a system reboots. Here's how:
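One way to sketch this - and this is an illustration, not my exact setup; the session name, window names, paths, and bot scripts below are all placeholders - is a startup script launched from the user's crontab with the `@reboot` schedule, so everything runs as an unprivileged user. Passing `bash -l` makes each window a real login shell, which pulls in the user's environment (`~/.profile` and friends) and fixes a surprising number of problems:

```shell
#!/bin/bash
# startup.sh - hypothetical example of auto-starting an exocortex in Screen.
# Invoked from the unprivileged user's crontab with a line like:
#   @reboot /home/you/bin/startup.sh

# Start a detached Screen session named "exocortex" whose first window
# runs a login shell.
screen -dmS exocortex bash -l

# Create one window per service.  "bash -l -c" gives each service a real
# login shell environment; the trailing "exec bash -l" keeps the window
# open for inspection if the service exits.
screen -S exocortex -X screen -t xmpp-bridge \
    bash -l -c 'cd ~/exocortex/xmpp_bridge && ./xmpp_bridge.py; exec bash -l'
screen -S exocortex -X screen -t web-search \
    bash -l -c 'cd ~/exocortex/web_search_bot && ./web_search_bot.py; exec bash -l'
```

After a reboot you can reattach with `screen -r exocortex` and flip between windows to check on each service. Note that `@reboot` in a user crontab is honored by most cron implementations (including Vixie cron and cronie), but not all.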
May 28 2017
Some time ago, I found myself using a Kryoflux interface and a couple of old floppy drives that had been kicking around in my workshop for a while to rip disk images of a colleague's floppy disk collection. It took me a day or two of screwing around to figure out how to use the Kryoflux's software to make it do what I wanted. Of course, I took notes along the way so that I would have something to refer back to later. Recently, I decided that it would probably be helpful to people if I put those notes online for everyone to use. So, here they are.
May 28 2017
A persistent risk of running a website is the possibility of somebody finding a vulnerability in the CMS and backdooring the code so that commands and code can be executed remotely. At the very least it means that somebody can poke around in the directory structure of the site without being noticed. At worst, it would seem that the sky's the limit. In the past, I've seen cocktails of browser exploits injected remotely into a site's theme that try to pop everybody who visits the site, but that is by no means the nastiest thing that somebody could do. This raises the question: how would you detect such a thing happening to your site?
I'll leave the question of logfile monitoring aside, because that depends on your hosting situation and everybody has their own opinions. What I wanted to discuss was the possibility of monitoring the state of every file of a website to detect unauthorized tampering. There are solutions out there, to be sure: the venerable Tripwire, the open source AIDE, and auditd (which I do not recommend - you'd need to write your own analysis software for its logs to determine which files, if any, have been edited, and it'll kill a busy server faster than drilling holes in a car battery). If you're in a shared hosting situation like I am, your options are pretty limited, because you won't have the access necessary to install new packages, and you might not be able to compile and install anything to your home directory. However, you can still put something together that isn't perfect but is fairly functional and will get the job done, within certain limits. Here's how I did it:
Most file monitoring systems store cryptographic hashes of the files they're set to watch over. Periodically, the files in question are re-hashed and the outputs are compared; if the resulting hash of one or more files differs from the one in the database, those files have changed somehow and should be manually inspected. The comparison process is scheduled to run automatically, while generation of the initial database is normally a manual step. What I did was use command-line utilities to walk through every file of my website, generate a SHA-1 hash of each one, and store the hashes in a file in my home directory. (Yes, I know SHA-1 is considered harmful these days, but my threat model does not include someone spending large amounts of computing time to construct a boobytrapped index.php with the same SHA-1 hash as the existing one. I also want to be a good customer and not crush the server my site is hosted on several times a day when the checks run.)
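The scheme above can be sketched with nothing but standard command-line tools, which is the point of doing it on shared hosting. The paths here are placeholders for wherever your site and home directory actually live:

```shell
#!/bin/sh
set -e

# Hypothetical locations - adjust for your own hosting account.
SITE="$HOME/public_html"
DB="$HOME/.website-hashes"

# Build (or rebuild) the baseline: one SHA-1 hash per file, sorted by
# filename so later diffs are stable.  -print0/-0 handles odd filenames.
find "$SITE" -type f -print0 | xargs -0 sha1sum | sort -k2 > "$DB"

# Later, from a cron job: re-hash everything and compare against the
# baseline.  diff exits nonzero if anything changed, added, or vanished.
if find "$SITE" -type f -print0 | xargs -0 sha1sum | sort -k2 \
        | diff "$DB" - > /dev/null; then
    echo "No changes detected."
else
    echo "WARNING: site files have changed - inspect manually."
fi
```

Alternatively, `sha1sum -c "$DB"` will verify every file against the stored baseline directly, though it won't notice newly added files the way the diff does.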