Jul 08 2018
I've mentioned in the past that my exocortex incorporates a number of different bots that do their jobs in a slightly different way than Huginn does. Which is to say, rather than running on their own and pinging me when something interesting happens, I can communicate with them directly and they parse what I say to figure out what I want them to do. Every bot is function-specific, so this winds up being a somewhat simpler task than it might otherwise appear. One bot runs web searches, another downloads files, videos, and audio, another wakes up and looks at system stats every minute... but where does this all start? How does it all fit together?
It starts with Jabber, the humble XMPP protocol.
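Because each bot only has one job, the "parsing" can be almost embarrassingly simple. Here's a minimal sketch of the idea in Python; the verbs and phrasing are hypothetical, and a real bot would receive these messages over XMPP rather than from a function call:

```python
# Sketch of the kind of command parsing a function-specific bot can get
# away with.  A single-purpose bot only needs to recognize a handful of
# verbs; everything after the verb is treated as the argument.

def parse_request(message):
    """Map a free-form request onto one of this bot's few verbs."""
    verbs = {
        "download": "download",
        "fetch": "download",
        "get": "download",
        "status": "status",
    }
    words = message.split()
    for i, word in enumerate(words):
        canonical = verbs.get(word.lower())
        if canonical:
            return (canonical, " ".join(words[i + 1:]))
    return ("unknown", message)

print(parse_request("Please download https://example.com/file.tar.gz"))
```

The payoff of function-specific bots is exactly this: no natural language processing required, just a verb table per bot.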
May 05 2018
If you have multiple systems (like I do), a problem you've undoubtedly run into is keeping your bookmarks in sync across every browser you use. Of course, there are services that'll happily do this job on your behalf, but they're free, and we all know what free means. If you're interested in being social with your link collection there are some social bookmarking services out there for consideration, including what's left of Delicious. For many years I was a Delicious user (because I liked the idea of maintaining a public bookmark collection that could be useful to people), but Delicious got worse every time it was sold to a new holding company. I eventually gave up on Delicious, pulled my data out, and thought long and hard about how often anybody actually used my public link collection. The answer wound up being "In all probability, not at all," largely because I never received any feedback, on-site or off. Oh, well.
For a couple of years I used an application called Unmark to manage my link collection, and it did a decent enough job. It also had some annoying quirks that, over time, got further and further under my skin: about half the time, bookmarks would be saved without any of the descriptions or tags I gave them, and there was no search API, so when the built-in search function turned out to suck I couldn't plug in my own. Earlier this year I kicked Unmark in the head and started the search for a replacement. Eventually, the Unmark hosted service started redirecting to the Github repository, and then even that redirect went away. Unmark hasn't been worked on in eight months, and its Github tickets haven't been touched in about as long. In short, Unmark seems dead as a doornail.
So I migrated my link collection to a new application called Shaarli, and I'm quite pleased with it.
Jan 28 2018
A couple of days ago I gave a talk online to some members of the Zero State about my exocortex. It's a pretty informal talk done as a Hangout where I discuss some of the day-to-day stuff and where the project came from. I didn't have any notes and it was completely unscripted.
Embedding is disabled for some reason so I can't just put the video here. Here's a direct link to the recording.
Oct 28 2017
UPDATED: Added an Nginx configuration block to proxy YaCy.
If you've been squirreling away information for any length of time, chances are you tried to keep it all organized for a while and then gave up the effort when the volume got to be too much. Everybody has their limit to how hard they'll struggle to keep things organized, and past that point there are really only two options: give up, or bring in help. And by 'help' I mean a search engine of some kind that indexes all of your stuff and makes it searchable so you can find what you need. The idea is to let the software do the work while you just run queries against its database to find documents on demand. Practically every search engine parses HTML to get at the content, and some can also read PDF files, Microsoft Word documents, spreadsheets, plain text, and occasionally even RSS or ATOM feeds. Since I started offloading some file downloading duties to yet another bot, my ability to rename files sanely has... let's be honest... it's been gone for years. Generally speaking, if I need something I have to search for it or it's just not getting done. So here's how I fill that particular niche in my software ecosystem.
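Once something like YaCy is indexing your files, the queries don't have to go through a browser, either. Here's a rough sketch of hitting a local YaCy instance's JSON search interface from Python; the endpoint and field names are from YaCy's OpenSearch-style API, but treat them as assumptions to verify against your own instance:

```python
# Sketch of querying a local YaCy instance over its JSON search API.
# The port, endpoint, and response field names are assumptions based on
# YaCy's default configuration; check them against your own install.
from urllib.parse import urlencode

def build_search_url(query, host="http://localhost:8090", count=10):
    """Build a yacysearch.json URL for a local-index query."""
    params = urlencode({"query": query, "maximumRecords": count,
                        "resource": "local"})
    return "{}/yacysearch.json?{}".format(host, params)

def extract_hits(response):
    """Pull (title, link) pairs out of a decoded JSON response."""
    hits = []
    for channel in response.get("channels", []):
        for item in channel.get("items", []):
            hits.append((item.get("title"), item.get("link")))
    return hits

print(build_search_url("exocortex"))
```

Fetch the URL with anything you like (urllib, curl, a bot), decode the JSON, and feed it to extract_hits().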
Oct 12 2017
Originally published at Mondo 2000, 10 October 2017.
A common theme of science fiction in the transhumanist vein, and less commonly of applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one's brain to augment one's intelligence. To paint a picture with a fairly broad brush, an exocortex was a system postulated by JCR Licklider in the research paper Man-Computer Symbiosis which would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal). An exocortex would be a symbiotic device that would provide additional cognitive capacity or new capabilities that the organism previously did not possess, such as:
- Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
- Adding additional density to existing neuronal networks to more rapidly and efficiently process information. Thinking harder as well as faster.
- Providing databases of experiential knowledge (synthetic memories) for the being to "remember" and act upon. Skillsofts, basically.
- Adding additional "execution threads" to one's thinking processes. Cognitive multitasking.
- Modifying the parameters of one's consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
- Expanding short-term memory beyond baseline parameters. For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
- Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.
Sep 30 2017
A Google feature that doesn't ordinarily get a lot of attention is Google Alerts, which is a service that sends you links to things that match certain search terms on a periodic basis. Some people use it for vanity searching because they have a personal brand to maintain, some people use it to keep on top of a rare thing they're interested in (anyone remember the show Probe?), some people use it for bargain hunting, some people use it for intel collection... however, this is all predicated on Google finding out what you're interested in, certainly interested enough to have it send you the latest search results on a periodic basis. Not everybody's okay with that.
A while ago, I built my own version of Google Alerts using a couple of tools already integrated into my exocortex which I use to periodically run searches, gather information, and compile reports to read when I have a spare moment. The advantage to this is that the only entities that know about what I'm interested in are other parts of me, and it's as flexible as I care to make it. The disadvantage is that I have some infrastructure to maintain, but as I'll get to in a bit there are ways to mitigate the amount of effort required. Here's how I did it...
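The core of the pattern is simpler than it sounds: run each saved search, throw away links you've already seen, and compile whatever is left into a report. Here's a rough sketch; the fake_search function is a stand-in for whatever search backend you actually wire in:

```python
# Rough sketch of the "roll your own Google Alerts" pattern: run each
# saved search, drop links already seen on previous runs, and compile
# the rest into a digest.  run_search() is a placeholder for your real
# search backend; seen_links is a set that persists between runs.

def compile_report(search_terms, run_search, seen_links):
    """Run every saved search and build a digest of new results."""
    sections = []
    for term in search_terms:
        new_hits = [(title, link) for title, link in run_search(term)
                    if link not in seen_links]
        seen_links.update(link for _, link in new_hits)
        if new_hits:
            lines = ["## " + term]
            lines += ["- {} <{}>".format(title, link)
                      for title, link in new_hits]
            sections.append("\n".join(lines))
    return "\n\n".join(sections)

# A canned search backend, just to show the shape of the output.
def fake_search(term):
    return [("Example result for " + term, "https://example.com/" + term)]

seen = set()
print(compile_report(["exocortex"], fake_search, seen))
```

Persist the seen-links set to disk between runs (a pickle or a small database) and schedule the whole thing however you like, and you've got the essentials of the service without telling Google anything.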
Sep 24 2017
Some time ago I wrote an article of suggestions for archiving web content offline, at the very least to have local copies in the event that connectivity was unavailable. I also expressed some frustration that there didn't seem to be any workable options for the Chromium web browser because I'd been having trouble getting the viable options working. After my attempt at fixing up Firefox fell far short of my goal (it worked for all of a day, if that) I realized that I needed to come up with something that would let me do what I needed to do. I installed Chromium on Windbringer (I'm not a fan of Chrome because Google puts a great deal of tracking and monitoring crap into the browser and I'm not okay with that) and set to work. Here's how I did it:
First I spent some time configuring Chromium with my usual preferences. That always takes a while; this time it involved importing my bookmarks from Firefox, an automated process that took several hours to run. I also exported everything I had cached in Scrapbook, which wound up taking all night. I then installed the SingleFile Core plugin for Chrome/Chromium, which does the actual work of turning web pages open in browser tabs into a cacheable single file. I restarted Chromium, which I probably didn't need to do, but I really wanted a working solution so I opted for caution, and then installed PageArchiver from the Chrome store and restarted Chromium again. This added the little "open file folder" icon to the Chromium menu bar. The order the add-ons are installed in seems to matter; add SingleFile Core first if you do nothing else.
Now get ready for me to feel stupid: If you want to store something using PageArchiver, click on the file folder icon to open the PageArchiver pop-up, click "Tabs" to show a list of tabs you have open in Chromium/Chrome, click the checkboxes for the ones you want to save, and then hit the save button. For systems like Windbringer which have extremely high resolution screens, that save button may not be visible. You can, however, scroll both horizontally and vertically in the PageArchiver pop-up panel to expose that button. I didn't realize that before so I never found that button. That's all it took.
Here's what didn't work:
I can't import my Scrapbook archives because they're sitting in a folder on Windbringer's desktop as a couple of thousand separate subdirectories, each of them containing all of the web content for a single web page. I need to figure out what to do there. It may consist of writing a utility that turns directories full of HTML into SQL commands to inject them into PageArchiver's SQLite database which, by default, resides in the directory $HOME/.config/chromium/Default/databases/chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0 (the directory name is constant; the jumble of letters at the end is the same as the one in the Chrome Store URL) and has the filename 2 (yes, just the number 2). You can open it up with the SQLite browser of your choice if you wish and go poking around. Somebody may have come up with a technique for it and I just haven't found it yet; I don't know. I may not be able to add them in any reasonable way at all and have to resort to running an ad-hoc local web server with Python or something if I want to access them, like this:
[drwho@windbringer ~]$ python2 -m SimpleHTTPServer 8000
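If you'd rather poke at the PageArchiver database from a script than a GUI browser, Python's built-in sqlite3 module will open it. The schema is whatever the extension created, so a reasonable first step is just listing the tables:

```python
# Open an SQLite database (such as the PageArchiver one described
# above) and list its tables.  The schema belongs to the extension, so
# look before you touch anything.
import sqlite3

def list_tables(db_path):
    """Return the names of every table in an SQLite database."""
    with sqlite3.connect(db_path) as db:
        rows = db.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")
        return [row[0] for row in rows]

# Hypothetical invocation against the database described above:
# list_tables("/home/drwho/.config/chromium/Default/databases/"
#             "chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0/2")
```

From there you can inspect columns with `PRAGMA table_info(tablename)` and decide whether injecting the Scrapbook archives is even feasible.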
Jun 19 2017
A couple of months back I did a brief writeup of Keybase and what it's good for. I mentioned briefly that it implements a 1-to-n text chat feature, where n>=1. Yes, this means that you can use Keybase Chat to talk to yourself, which is handy for prototyping and debugging code. What does not seem to be very well known is that the Keybase command line utility has a JSON API, the documentation of which you can scan through by issuing the command `keybase chat help api` from a command window. I'm considering incorporating Keybase into my exocortex so I spent some time one afternoon playing around with the API, seeing what I could make it do, and writing up what I had to do to make it work. As far as I know there is no official API documentation anywhere; at least, Argus and I didn't find any. So, under the cut are my notes in the hope that it helps other people work with the Keybase API.
The API may drift a bit, so here are the software versions I used during testing:
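To give a taste of what working with it looks like, here's a sketch of driving the chat API from Python by shelling out to the command line client. The "send" method and its option layout are taken from the output of `keybase chat help api`; since the API may drift, double-check them against your own client version:

```python
# Sketch of driving the Keybase chat JSON API by shelling out to the
# command line client.  The payload structure is from the output of
# `keybase chat help api`; verify it against your client version.
import json
import subprocess

def build_send_payload(channel, body):
    """Construct the JSON document `keybase chat api` expects."""
    return json.dumps({
        "method": "send",
        "params": {
            "options": {
                "channel": {"name": channel},
                "message": {"body": body},
            },
        },
    })

def send_message(channel, body):
    """Hand the payload to the keybase binary (requires a running client)."""
    return subprocess.run(
        ["keybase", "chat", "api", "-m", build_send_payload(channel, body)],
        capture_output=True, text=True)

print(build_send_payload("yourname", "Testing the chat API."))
```

Since Keybase lets you message yourself, you can test send_message() with your own username as the channel and watch the result in the Keybase app.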
Jun 17 2017
I've been promising myself that I'd do a series of articles about tools that I've incorporated into my exocortex over the years, and now's as good a time as any to start. Rather than jump right into the crunchy stuff I thought I'd start with something that's fairly simple to use, straightforward, and endlessly useful for many purposes - a wiki.
Usually, when somebody brings up the topic of wikis one immediately thinks of either Wikipedia or one of the godsawful corporate wikis that one might be forced to use on a daily basis. And you wouldn't be that far off the mark, because ultimately they're websites that let one or more people create, modify, and delete articles about just about anything, using only a web browser. Usually you need to set up or be given an account to log into them because wiki spam is to this day a horrendous problem to fight (I've had to do it as part of previous jobs, and I wouldn't wish it on my worst enemy). If you've been around a while, when you think of having a wiki you might think of setting up something like WikiWikiWeb or Mediawiki, which also means setting up a server, a database, web server software, the wiki software, configuring everything... and unless you have a big, important project that necessitates it, it's kind of overkill and you go right back to a text file on your desktop. And I don't blame you.
There are other options out there that require much less in the way of overhead that are also nicer than the ubiquitous notes.txt file. For the past couple of years (since 2012.ev at least) I've been using a personal wiki called Tiddlywiki for most of my projects which requires just a fairly modern web browser (if you're using Internet Explorer you need to be running IE 10 or later) and some room on your desktop for another file.
Jun 11 2017
EDIT - 20171011 - Added a bit about getting real login shells inside of this Screen session, which fixes a remarkable number of bugs. Also cleaned up formatting a bit.
To keep the complexity of parts of my exocortex down I've opted to not separate everything into larger chunks using popular technologies these days, such as Linux containers (though I did Dockerize the XMPP bridge as an experiment) because there are already quite a few moving parts, and increasing complexity does not make for a more secure or stable system. However, this brings up a valid and important question, which is "How do you restart everything if you have to reboot a server for some reason?"
A valid question indeed. Servers need to be rebooted periodically to apply patches, upgrade kernels, and generally to blow the cruft out of the memory field. There are all sorts of hoops and gymnastics one can go through with traditional initscripts, but for home-grown and third-party stuff it's difficult to run things from initscripts in such a way that they don't have elevated privileges, which you want for security reasons. The hands-on way of doing it is to run a GNU Screen session when you log in and start everything up (or reconnect to one if it's already running). This process, too, can be automated to run when a system reboots. Here's how:
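The skeleton of the pattern looks like this. These are config fragments rather than a script, and the session name is hypothetical; adjust paths and shells to taste:

```shell
# Hypothetical sketch: start a detached Screen session at boot as an
# unprivileged user, with every window running a real login shell.

# In the unprivileged user's crontab (crontab -e), @reboot runs the
# command once when the system comes up:
#   @reboot /usr/bin/screen -dmS exocortex

# In that user's ~/.screenrc -- the leading dash tells Screen to start
# login shells, which fixes a remarkable number of environment bugs:
#   shell -/bin/bash

# Once logged in, reattach to the running session with:
#   screen -r exocortex
```

Everything started inside that session runs as the unprivileged user, and surviving a reboot becomes a matter of what you script into the session's windows.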