Exocortices: A definition of a technology.

Oct 12 2017

Originally published at Mondo 2000, 10 October 2017.

A common theme of science fiction in the transhumanist vein, and less commonly of applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one's brain to augment one's intelligence.  To paint a picture with a fairly broad brush, an exocortex is a system, postulated by J.C.R. Licklider in the research paper Man-Computer Symbiosis, that would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal).  An exocortex would be a symbiotic device providing additional cognitive capacity or new capabilities that the organism previously did not possess, such as:

  • Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
  • Adding additional density to existing neuronal networks to more rapidly and efficiently process information.  Thinking harder as well as faster.
  • Providing databases of experiential knowledge (synthetic memories) for the being to "remember" and act upon.  Skillsofts, basically.
  • Adding additional "execution threads" to one's thinking processes.  Cognitive multitasking.
  • Modifying the parameters of one's consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
  • Expanding short-term memory beyond baseline parameters.  For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
  • Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.

Building your own Google Alerts with Huginn and Searx.

Sep 30 2017

A Google feature that doesn't ordinarily get a lot of attention is Google Alerts, a service that periodically sends you links to things that match certain search terms.  Some people use it for vanity searching because they have a personal brand to maintain, some people use it to keep on top of a rare thing they're interested in (anyone remember the show Probe?), some people use it for bargain hunting, some people use it for intel collection... however, all of this is predicated on Google knowing what you're interested in, and knowing that you're interested enough to want the latest search results sent to you on a regular basis.  Not everybody's okay with that.

A while ago, I built my own version of Google Alerts using a couple of tools already integrated into my exocortex which I use to periodically run searches, gather information, and compile reports to read when I have a spare moment.  The advantage to this is that the only entities that know about what I'm interested in are other parts of me, and it's as flexible as I care to make it.  The disadvantage is that I have some infrastructure to maintain, but as I'll get to in a bit there are ways to mitigate the amount of effort required.  Here's how I did it...
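To give you a taste of the Searx half of this, here's a minimal Python sketch that runs a query against a local Searx instance's JSON output.  The URL is a placeholder, and it assumes the json output format is enabled in that instance's settings.yml; in the actual setup the periodic scheduling and report generation are handled by Huginn rather than a standalone script like this.

#!/usr/bin/env python
# Minimal sketch: run a search against a local Searx instance and print the hits.
# Assumptions: Searx is listening at SEARX_URL and has the json output format
# enabled.  Field names may vary a little between Searx versions.
import requests

SEARX_URL = "http://localhost:8888/search"   # placeholder for your own instance
QUERY = "exocortex"

response = requests.get(SEARX_URL, params={"q": QUERY, "format": "json"})
response.raise_for_status()

for hit in response.json().get("results", []):
    print("%s\n    %s\n" % (hit.get("title", "(no title)"), hit.get("url", "")))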

Technomancer Tools: Creating a local web archive with Chrome and PageArchiver.

Sep 24 2017

Some time ago I wrote an article with suggestions for archiving web content offline, at the very least so you have local copies in the event that connectivity is unavailable.  I also expressed some frustration that there didn't seem to be any workable options for the Chromium web browser, because I'd been having trouble getting the likely candidates working.  After my attempt at fixing up Firefox fell far short of my goal (it worked for all of a day, if that), I realized that I needed to come up with something that would let me do what I needed to do.  I installed Chromium on Windbringer (I'm not a fan of Chrome because Google puts a great deal of tracking and monitoring crap into the browser, and I'm not okay with that) and set to work.  Here's how I did it:

First I spent some time configuring Chromium with my usual preferences.  That always takes a while, and involved importing my bookmarks from Firefox, an automated process that took several hours to run.  I also exported everything I had cached in Scrapbook, which wound up taking all night.  I then installed the SingleFile Core plugin for Chrome/Chromium, which does the actual work of turning web pages open in browser tabs into cacheable single files.  I restarted Chromium (which I probably didn't need to do, but I really wanted a working solution, so I opted for caution), then installed PageArchiver from the Chrome store and restarted Chromium again.  This added the little "open file folder" icon to the Chromium menu bar.  The order the add-ons are installed in seems to matter; install SingleFile Core first if you do nothing else.

Now get ready for me to feel stupid: if you want to store something using PageArchiver, click on the file folder icon to open the PageArchiver pop-up, click "Tabs" to show a list of the tabs you have open in Chromium/Chrome, click the checkboxes for the ones you want to save, and then hit the save button.  On systems like Windbringer, which have extremely high-resolution screens, that save button may not be visible.  You can, however, scroll both horizontally and vertically in the PageArchiver pop-up panel to expose it.  I hadn't realized that, which is why I'd never found the button before.  That's all it took.

Here's what didn't work:

I can't import my Scrapbook archives because they're sitting in a folder on Windbringer's desktop as a couple of thousand separate subdirectories, each of them containing all of the web content for a single web page.  I need to figure out what to do there.  It may consist of writing a utility that turns directories full of HTML into SQL commands to inject them into PageArchiver's SQLite database, which by default resides in the directory $HOME/.config/chromium/Default/databases/chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0 (the directory name is constant; the jumble of letters at the end is the same as the one in the Chrome Store URL) and has the filename 2 (yes, just the number 2).  You can open it up with the SQLite browser of your choice if you wish and go poking around.  Somebody may have come up with a technique for this and I just haven't found it yet; I don't know.  I may not be able to add them in any reasonable way at all, and may have to resort to running an ad-hoc local web server with Python or something if I want to access them, like this:

[drwho@windbringer ~]$ python2 -m SimpleHTTPServer 8000
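In the meantime, if you want to poke around in PageArchiver's database programmatically rather than with a GUI SQLite browser, a few lines of Python will at least show you what tables are in there.  The path below is the default one mentioned above; the schema itself is undocumented as far as I know, so treat anything beyond "what tables exist" as exploration.

#!/usr/bin/env python
# Sketch: open PageArchiver's SQLite database and list its tables, as a first
# step toward figuring out whether old Scrapbook captures can be injected.
import os
import sqlite3

db_path = os.path.expanduser(
    "~/.config/chromium/Default/databases/"
    "chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0/2")

connection = sqlite3.connect(db_path)
cursor = connection.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
for (table_name,) in cursor.fetchall():
    print(table_name)
connection.close()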

Point-in-time documentation of the Keybase Chat API

Jun 19 2017

A couple of months back I did a brief writeup of Keybase and what it's good for.  I mentioned briefly that it implements a 1-to-n text chat feature, where n>=1.  Yes, this means that you can use Keybase Chat to talk to yourself, which is handy for prototyping and debugging code.  What does not seem to be very well known is that the Keybase command line utility has a JSON API, the documentation of which you can scan through by issuing the command `keybase chat help api` from a command window.  I'm considering incorporating Keybase into my exocortex, so I spent some time one afternoon playing around with the API, seeing what I could make it do, and writing up what I had to do to make it work.  As far as I know there is no official API documentation anywhere; at least, Argus and I didn't find any.  So, under the cut are my notes, in the hope that they help other people work with the Keybase API.

The API may drift a bit, so here are the software versions I used during testing:

Client:  1.0.22-20170512224715+f5fba02ec
Service: 1.0.22-20170512224715+f5fba02ec
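To give you an idea of what talking to it looks like, here's a minimal Python sketch that shells out to the keybase binary and asks for your list of conversations.  The "list" method is the simplest request the built-in help describes; since the API may drift, treat `keybase chat help api` as the authoritative reference for the message format.

#!/usr/bin/env python
# Sketch: ask the Keybase service for a list of conversations through the
# CLI's JSON API.  Check `keybase chat help api` if the message format has
# drifted since the versions listed above.
import json
import subprocess

request = json.dumps({"method": "list"})
raw = subprocess.check_output(["keybase", "chat", "api", "-m", request])
response = json.loads(raw.decode("utf-8"))
print(json.dumps(response, indent=4, sort_keys=True))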

Technomancer tools: Tiddlywiki

Jun 17 2017

I've been promising myself that I'd do a series of articles about tools that I've incorporated into my exocortex over the years, and now's as good a time as any to start.  Rather than jump right into the crunchy stuff I thought I'd start with something that's fairly simple to use, straightforward, and endlessly useful for many purposes - a wiki.

Usually, when somebody brings up the topic of wikis, one either immediately thinks of Wikipedia or of one of the godsawful corporate wikis that one might be forced to use on a daily basis.  And you wouldn't be far off the mark, because ultimately they're websites that let one or more people create, modify, and delete articles about just about anything they're inclined to, using only a web browser.  Usually you need to set up or be given an account to log into them, because wiki spam is to this day a horrendous problem to fight (I've had to do it as part of previous jobs, and I wouldn't wish it on my worst enemy).  If you've been around a while, when you think of having a wiki you might think of setting up something like WikiWikiWeb or Mediawiki, which also means setting up a server, a database, web server software, the wiki software, configuring everything... and unless you have a big, important project that necessitates it, it's kind of overkill and you go right back to a text file on your desktop.  And I don't blame you.

There are other options out there that require much less in the way of overhead and are also nicer than the ubiquitous notes.txt file.  For the past couple of years (since 2012.ev at least) I've been using a personal wiki called Tiddlywiki for most of my projects.  It requires just a fairly modern web browser (if you're using Internet Explorer, you need to be running IE 10 or later) and some room on your desktop for one more file.

Restarting a Screen session without manual intervention.

Jun 11 2017

EDIT - 20171011 - Added a bit about getting real login shells inside of this Screen session, which fixes a remarkable number of bugs.  Also cleaned up formatting a bit.

To keep the complexity of parts of my exocortex down, I've opted not to separate everything into larger chunks using the popular technologies of the day, such as Linux containers (though I did Dockerize the XMPP bridge as an experiment), because there are already quite a few moving parts, and increasing complexity does not make for a more secure or stable system.  However, this brings up a valid and important question: "How do you restart everything if you have to reboot a server for some reason?"

A valid question indeed.  Servers need to be rebooted periodically to apply patches, upgrade kernels, and generally blow the cruft out of the memory field.  There are all sorts of hoops and gymnastics one can go through with traditional initscripts, but for home-grown and third-party stuff it's difficult to launch things from initscripts in such a way that they don't run with elevated privileges, which matters for security reasons.  The hands-on way of doing it is to start a GNU Screen session when you log in and fire everything up inside it (or reconnect to the session if it's already running).  This process can also be automated to run when a system reboots.  Here's how:
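In rough outline it looks something like the sketch below; the session name, shell, and window commands are placeholders rather than my actual configuration.

# Start a detached Screen session at boot via cron, with login shells inside.

# In your crontab (crontab -e):
@reboot /usr/bin/screen -dmS exocortex

# In ~/.screenrc - the leading dash tells Screen to spawn login shells, which
# pull in your full environment and fix a surprising number of problems:
shell -/bin/bash

# From a startup script, add windows to the running session and launch things
# inside them (the script path here is a placeholder):
/usr/bin/screen -S exocortex -X screen -t huginn bash -lc '~/exocortex/start-huginn.sh'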

Pretty serious anomalies in the stock market on Monday.

Feb 07 2017

As I've mentioned a few times in the past, diverse parts of my exocortex monitor many different aspects of the world.  One of them, called Ironmonger, constantly data mines the global stock markets looking for anomalies.  Ordinarily, Ironmonger only triggers when stock trading events greater than three standard deviations hit the market.  On Monday, 6 Feb at 14:50:38 hours UTC-0800 (PST), Ironmonger did an acrobatic pirouette off the fucking handle.  Massive trades of three different tech companies (Intel, Apple, and Facebook) hit the US stock market within the same thirty-second period.  By "massive," I mean that 3,271,714,562 shares of Apple, 3,271,696,623 shares of Intel, and 2,030,897,857 shares of Facebook all hit the market at the same time.  The time_t datestamps of the transactions were 1486421438 (Intel), 1486421431 (Apple), and 1486421442 (Facebook) (I use time.is to convert them back into organic-readable time/date specifiers).  I grabbed some screenshots from the Exocortex console at the time - check them out:

Intel ; Apple ; Facebook

The tall blue slivers at the far right-hand edges of each graph represent the stock trades. I waited a couple of hours and took another set of screenshots (Intel, Apple, Facebook) because the graph had moved on a bit and the transaction spikes were much more visible.  While my knowledge of the stock market is limited, I have to admit that I've never seen multi-billion share stock trades happen before.  Out of curiosity, I took a look at the historical price per share of each of those stocks to see what those huge offers did to them.  The answer, somewhat surprisingly, was "not much."  Check out these extracts from Ironmonger's memory: Facebook, Intel, and Apple.
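As an aside, if you'd rather not bounce out to time.is, a couple of lines of Python will convert those time_t stamps locally; the UTC-0800 offset is hard-coded here to match the times quoted above.

#!/usr/bin/env python
# Convert the time_t transaction stamps above into human-readable form.
# The -8 hour offset is hard-coded for PST to match the times quoted above.
from datetime import datetime, timedelta

for company, stamp in (("Intel", 1486421438), ("Apple", 1486421431),
                       ("Facebook", 1486421442)):
    local = datetime.utcfromtimestamp(stamp) - timedelta(hours=8)
    print("%s: %s PST" % (company, local.strftime("%Y-%m-%d %H:%M:%S")))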

Because I am a paranoid and curious sort, I immediately wondered if there was a correlation with the large spike in the Bitcoin transaction fee earlier that day (at 13:19:16 UTC-0800, to be precise).  The answer is... probably not.  A transaction fee of 2.35288902 BTC (approximately $2510.93us as of 22:32 hours UTC-0800 on 7 February 2017, as I write this article ahead of time), while a sizeable sum that would certainly guarantee someone's transaction made it into a block at that very instant, does not mean that it was involved.  There just isn't enough data, but it stands on its own as another anomaly that day.  I wish I knew who put those huge blocks of stock up for sale all at once.  The only thing they seem to have in common is that they're all listed on the Singularity Index, which is mildly noteworthy.

Anybody have any ideas?

Parsing simple commands in Python.

Feb 04 2017

A couple of weeks ago I ran into some of the functional limits of my web search bot, a bot I wrote for my exocortex which accepts English-like commands ("Send me top 15 hits for HAL 9000 quotes.") and runs web searches in response, using the Searx meta-search engine on the back end.  Which is to say, I gave my bot a broken command ("Send hits for HAL 9000 quotes.") and the parser got into a state where it couldn't cope, threw an exception, and crashed.  To be fair, my command parser was very brittle, and it was only a matter of time before I did something dumb and wrecked it.  At the time I patched it with a bunch of if..then checks for truncated and incorrect commands, but looking at all of the conditionals and ad-hoc error handling, I probably made the situation worse, as well as much more difficult to maintain in the long run.  Time for a rewrite.

Back to my long-term memory field.  What to do?

I knew from comp.sci classes long ago that compilers use things called parsers and grammars to interpret code so that it can be converted into an executable.  I also knew that the parser Infocom used in its interactive fiction was widely considered to be the best anyone had come up with in a long time, and it was efficient enough to run on humble microcomputers like the C-64 and the Apple II.  For quite a few years I also ran and hacked on a MOO, which for the purposes of this post you can think of as a massive interactive fiction environment that the players can modify as well as play in; a MOO's command parser does pretty much the same thing as Infocom's IF parser, but is responsive to the changes the users make to their environments.  I also recalled something called a parse tree, which I sort-of-kind-of remembered from comp.sci, but because I'd never actually done anything with them I only recalled a low-res sketch.  At least I had someplace to start from, so I turned my rebooted web search bot loose with a couple of search terms and went through the results after work.  I also spent some time chatting with a colleague whose knowledge of the linguistics of programming languages is significantly greater than mine and bouncing some ideas off of him (thanks, TQ!).

But how do I do something with all this random stuff?
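The shape that started to emerge looks something like the toy sketch below - emphatically not the bot's real parser, just an illustration of walking tokens against a small grammar instead of stacking up if..then checks.  Note that the truncated command that crashed the old parser now just falls back to a default hit count instead of throwing an exception.

#!/usr/bin/env python
# Toy illustration of parsing an English-like search command by walking a
# token list against a tiny grammar.  Not the bot's real parser.

def parse_search_command(command):
    """Parse commands like 'Send me top 15 hits for HAL 9000 quotes.'
    Returns a dict describing the request, or None if it doesn't parse."""
    words = command.strip().rstrip(".").lower().split()
    if not words or words[0] != "send":
        return None

    # Optional filler words we don't care about.
    while len(words) > 1 and words[1] in ("me", "top"):
        words.pop(1)

    # An optional number of hits, defaulting to 10.
    number_of_hits = 10
    if len(words) > 1 and words[1].isdigit():
        number_of_hits = int(words.pop(1))

    # The literal word "hits", then "for", then the search terms.
    if len(words) < 4 or words[1] != "hits" or words[2] != "for":
        return None
    return {"hits": number_of_hits, "search terms": " ".join(words[3:])}

if __name__ == "__main__":
    print(parse_search_command("Send me top 15 hits for HAL 9000 quotes."))
    print(parse_search_command("Send hits for HAL 9000 quotes."))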

Huginn: Writing a simple agent network.

Jan 15 2017

EDIT: 20170123 - My reviewers have suggested some edits to the article, many of which I've applied.

It's been a while since I wrote a Huginn tutorial, so let's start with a basic one to get you comfortable with the idea of building an agent network.  This agent network will run every half hour, poll a REST API endpoint, and e-mail you what it gets.  You'll need access to an already-running Huginn instance that can send outbound e-mail.  This post is going to be kind of lengthy, but that's because I'm laying out some fundamentals.  Once you understand those, you can skip past the explanations and move on to the good stuff.

First, a little background - what's a REST API?  If you already know, just skip down past the cut and move on; if you don't know what I'm talking about, I'll try to explain.  I'm going to assume that you've been able to install Huginn using my instructions or someone else's, or that you've got access to a running instance.  I'm also going to assume that you're not a hardcore coder, but someone who's trying to apply a useful tool to your everyday life.

At its simplest, an API (Application Programming Interface) is a way to interact with a system or part of a system.  It's (hopefully) designed to be regular, which means that once you understand the basics you can figure out the more complex parts with a little messing around, because the basics continue to apply.  Let's say that I've written a library called myLib, which implements a bunch of really annoying stuff (like opening and closing files and sorting elements of data) so you don't have to.  My library has a bunch of functions that carry out those tasks (openStupidFile(), readAllOfFilesContents(), sortIntegers(), sortFloatingPointValues(), searchThisCrapForAString()) when you call them in your own code.  Those functions are part of my library's API.  The documentation contains instructions for calling each function, including the arguments you need to pass to it (e.g., openStupidFile() takes two arguments, a full path to a file and 'r' for read-only or 'rw' for read-write, and it returns a handle to the file that you can pass to another function, or NULL if it failed).  The data type each function returns (the file handle or NULL value) is part of the API, as are the arguments each function takes (the path to the file and 'r' or 'rw').
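If it helps to see that in code, here's the same made-up library rendered in Python.  Everything here is hypothetical (the file path included); the point is just that the function names, their arguments, and their return values together make up the API's contract.

#!/usr/bin/env python
# A Python rendition of the hypothetical myLib API described above.  The
# function names and behavior are the made-up ones from the example.

def openStupidFile(path, mode):
    """Returns a file handle, or None (standing in for NULL) on failure."""
    if mode not in ("r", "rw"):
        return None
    try:
        return open(path, "r" if mode == "r" else "r+")
    except IOError:
        return None

def readAllOfFilesContents(handle):
    """Returns the entire contents of an open file as one string."""
    return handle.read()

def searchThisCrapForAString(text, search_string):
    """Returns True if search_string appears anywhere in text."""
    return search_string in text

# Calling code only needs to know the contract, not the implementation:
handle = openStupidFile("/etc/hostname", "r")
if handle is not None:
    print(searchThisCrapForAString(readAllOfFilesContents(handle), "windbringer"))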

The same principle has been applied to the Web in several different ways.  What concerns us right now is something called a RESTful API (REpresentational State Transfer), which basically means interacting with a web service using HTTP verbs (GET, PUT, POST, and so forth) and referencing URLs instead of functions in a library.  Like any other HTTP requests, REST requests are stateless, which means that you make a request, the server responds, and there's no further context beyond that.  You can think of RESTful APIs as fire-and-forget.  The general idea is that there is a web server of some kind, which could be a traditional one like Apache or a specialized one running inside a web app built around a framework like web.py, which responds to those URLs in some way.  If you make a GET request to a URL, it'll send you some data.  If you make a PUT request, you replace something on the server at that URL with something you send it.  If you make a POST request, you create a new something on the server.  If you make a DELETE request, that something on the server gets erased.

All of this depends on the HTTP verbs the server supports (not all REST APIs need to support all of them), your access permissions (not every account can do everything), whether or not you've authenticated to the server (it's sometimes the case that read-only access doesn't require an account but read-write access requires an account, an API token, or something else along those lines), and who owns a particular resource (Alice's resources might be read-only for every other account on the server, but read-write for her alone).  REST makes life easier, but it's not carte blanche to run hog wild.  Additionally, many REST API services enforce access limits - you get so many requests per minute, hour, or day, and after that you get errors.  For example, Twitter's API will return an error 420 (Enhance Your Calm) if you trip its rate limiter.
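To connect that back to something concrete, here's roughly what those verbs look like from Python using the requests library.  The endpoint and token are placeholders, and which verbs actually work depends entirely on the service you're talking to.

#!/usr/bin/env python
# Sketch of exercising a REST API with the requests library.  The endpoint
# and token below are placeholders; which verbs are allowed is up to the
# service, as is whether you need to authenticate at all.
import requests

BASE_URL = "https://api.example.com/v1/widgets"           # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR-API-TOKEN-HERE"}  # if the API wants one

# GET: fetch a representation of a resource.
response = requests.get(BASE_URL + "/42", headers=HEADERS)
print(response.status_code)
print(response.json())

# POST: create a new resource on the server.
response = requests.post(BASE_URL, headers=HEADERS, json={"name": "test widget"})

# PUT: replace an existing resource with what you send.
response = requests.put(BASE_URL + "/42", headers=HEADERS, json={"name": "renamed"})

# DELETE: erase it.
response = requests.delete(BASE_URL + "/42", headers=HEADERS)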