Nov 27 2017
A couple of weeks ago a new release of the Keybase software package came out, and this one included, as one of its new features, support for natively hosting Git repositories. This might really only be useful to coders, but it's a handy enough service that I think it's worth a quick tutorial. Prior to that feature release, something in the structure of the Keybase filesystem made it unsuitable for storing anything but static copies of Git repositories (I don't know exactly what), but they've now made Git a first class citizen.
I'm going to assume that you use the Git distributed version control system already, and you have at least one Git repository that you want to host on Keybase; for the purposes of this example I'm going to use my personal copy of the Exocortex Halo code repository on Github. I'm further going to assume that you know the basics of using Git (cloning repositories, committing changes, pulling and pushing changes). I'm also going to assume that you already have a Keybase account and a fairly up-to-date copy of the software installed. I am, however, going to talk a little bit about the idea of remotes in Git. My discussion will necessarily have some technical inaccuracies for the sake of usability if you're not an expert on the internals of Git.
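As a taste of what the article covers, the basic workflow looks something like this (the repository name and username are placeholders, and I'm assuming a local clone already exists):

```
# Create an encrypted Git repository on Keybase, private to your account.
keybase git create exocortex-halo

# Add it as a remote to an existing local clone.
cd exocortex-halo
git remote add keybase keybase://private/yourusername/exocortex-halo

# Push your existing history to the new remote.
git push keybase master
```

From then on it behaves like any other remote: `git pull keybase master` works the way you'd expect, with Keybase handling the encryption transparently.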
Click for the rest of the article...
Nov 20 2017
A couple of days ago (a couple of minutes ago, as I happen to write this) I watched a documentary on Youtube about a modern urban legend, the video game called Polybius. I don't want to give away the entire story if you've not heard it before, but a capsule version is that in 1981.ev a strange video game called Polybius was installed in a number of video arcades in the Pacific Northwest. The game supposedly had a strange effect on some of the people playing it, ranging from long periods of hypnosis to night terrors, epileptic convulsions and, it is rumored, a small number of deaths due to sudden heart failure. It's a story that has circulated for years online in one form or another, and a number of people have built their own versions that fit the details of the story, with varying degrees of fidelity. I'll admit, one of my long-term plans is to build a MAME cabinet at home that looks like one as a conversation piece. It's a modern day tall tale, where chances are you know somebody who knows somebody whose brother dated the sister of a guy who wound up in the hospital in a coma back in 198x because he spent 50 hours entranced playing some weird game in an arcade while on a family trip, and mysteriously the cabinet was gone by the time he was released.
One thing that I don't think I've heard anybody say, though, is that the origins of the story might date back to the late 1990's. I first came across a story about a video game in the early 1980's that had strange effects on its players in the book GURPS Warehouse 23, published by Steve Jackson Games (first printing in 1997, second printing in 1999, available for purchase as a downloadable PDF from the Steve Jackson Online Store because the dead tree edition is out of print). The chapter Conspiracies, Cover-Ups, and Hoaxes of the game supplement opens with a story called The Astro Globs! Cover-Up, which talks about a video game called Astro Globs! (unsurprisingly) developed in 1983 by a computer programmer named Gina Moravec (after Hans Moravec?) which was uncannily adaptive to the person playing it. The video game described by the game book would figure out how the person playing it thought and tailored itself to be increasingly challenging and fascinating without ever getting frustrating, which also made it dangerously hypnotic. The son of the programmer of the game was hospitalized for dehydration after playing it for over 72 hours with neither sleep nor food nor water.
The first printing of Warehouse 23 was in 1997, which implies that the genesis of the Astro Globs! story was some time prior to that. From what little I know of the professional RPG authorship industry, factor in maybe a year's time for proofreading, layout, and the first print run to wind up in the warehouse for distribution (this was in the late 90's, after all - desktop publishing was nowhere near as advanced as it is now, and print-on-demand was certainly not a thing then) and two or three years for development, editing, playtesting, kicking around the group of people working on the text... so I would carefully guess that the idea came about some time in the early 1990's.
The documentary states that the page on coinop.org I linked to above was created on 3 August 1998 at 0000 hours (timezone unknown) (local mirror, 20171120), which puts it about a year after the first edition of Warehouse 23 hit the shelves. The researchers who made the documentary say that they traced the page as far back as 6 February 2000 using the Wayback Machine, which strongly implies that the date in the page footer is incorrect, possibly due to a default value entered in the back-end database during a site migration.
So... perhaps some GURPS conspiracy flavor can be found in the roots of this story? Maybe somebody trying to make their favorite part of the book come to life somehow?
Nov 19 2017
I guess I should wish everybody out there a happy Thanksgiving that celebrates it.
I haven't been around much lately, certainly not as much as I would like to be. Things have been difficult lately, to say the least.
Around this time of year things go completely berserk at my dayjob. For a while I was pulling 14 hour days, capped off with feverishly working three days straight on one of the biggest projects of my career, which not only wound up going off without more than the expected number of hitches but has garnered quite a few kudos from the community. I'm rather proud of how it turned out. Unfortunately, it also took its toll, namely, on my health. During the final leg of the project I noticed that I was starting to get sick, and by that Tuesday my cow-orkers were telling me to go home and sleep because I looked like death warmed over. Unsurprisingly, I've been battling a nasty cold that's kicked the legs out from under me. I still haven't kicked out of big-project mode yet, because the last few times I've started to feel better I've run myself aground again without realizing I was doing so. This is not good. It also seems that I brought this particular nasty home, and now my family is in various stages of fighting it off.
Click for the rest of the article...
Oct 28 2017
UPDATED: Added an Nginx configuration block to proxy YaCy.
If you've been squirreling away information for any length of time, chances are you tried to keep it all organized and then gave up the effort when the volume reached a certain point. Everybody has their limit to how hard they'll struggle to keep things organized, and past that point there are really only two options: give up, or bring in help. And by 'help' I mean a search engine of some kind that indexes all of your stuff and makes it searchable so you can find what you need. The idea is, let the software do the work while the user just runs queries against its database to find documents on demand. Practically every search engine parses HTML to get at the content, but some can also read PDF files, Microsoft Word documents, spreadsheets, plain text, and occasionally even RSS or ATOM feeds. Since I started offloading some file downloading duties to yet another bot, my ability to rename files sanely has... let's be honest... it's been gone for years. Generally speaking, if I need something I have to search for it or it's just not getting found. So here's how I fill that particular niche in my software ecosystem.
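The update mentioned above adds an Nginx configuration block to proxy YaCy; the full block is in the body of the article, but the general shape of such a thing (assuming YaCy listening on its default port of 8090, and a hostname of your own choosing) looks something like this:

```
server {
    listen 443 ssl;
    server_name search.example.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The point of the proxy is to put TLS and access control in front of YaCy rather than exposing its built-in web interface directly.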
Click for the rest of the article...
Oct 12 2017
Originally published at Mondo 2000, 10 October 2017.
A common theme of science fiction in the transhumanist vein, and less commonly in applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself, or interfaced in some way with one's brain to augment one's intelligence. To paint a picture with a fairly broad brush, an exocortex was a system postulated by J.C.R. Licklider in the research paper Man-Computer Symbiosis which would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal). An exocortex would be a symbiotic device that would provide additional cognitive capacity or new capabilities that the organism previously did not possess, such as:
Click for the rest of the article...
- Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
- Adding additional density to existing neuronal networks to more rapidly and efficiently process information. Thinking harder as well as faster.
- Providing databases of experiential knowledge (synthetic memories) for the being to "remember" and act upon. Skillsofts, basically.
- Adding additional "execution threads" to one's thinking processes. Cognitive multitasking.
- Modifying the parameters of one's consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
- Expanding short-term memory beyond baseline parameters. For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
- Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.
Oct 08 2017
A couple of weeks ago I had an invitation to take a lunch cruise on San Francisco Bay aboard the Hornblower. It was a work sort of thing, a quarterly fun-thing to do after putting in longer hours than usual organized by one of my cow-orkers. As luck would have it, that was one of the rare days that it rained in the Bay Area. You might think that it would put a damper on things but it doesn't rain much out here these days so any change of weather is not only noteworthy, it's a pleasant change of pace for a lot of us.
Anyway, here are the pictures I took.
Oct 08 2017
"Program a map to display frequency of data exchange, every thousand megabytes a single pixel on a very large screen. Manhattan and Atlanta burn solid white. Then they start to pulse, the rate of traffic threatening to overload your simulation. Your map is about to go nova. Cool it down. Up your scale. Each pixel a million megabytes. At a hundred million megabytes per second, you begin to make out certain blocks in midtown Manhattan, outlines of hundred-year-old industrial parks ringing the old core of Atlanta..."
--From Neuromancer by William Gibson
While wandering around downtown San Francisco a couple of weeks ago, I came across an art installation in the lobby of an office building that ostensibly displayed a realtime visualization of Internet traffic as a 3D map of the city. I'm not entirely sure that's accurate because that would require an immense amount of access to network infrastructure they probably don't own. My working hypothesis is that it's a visualization of activity of their customers run through a geoIP service with a fairly high degree of resolution (probably correlated against customer service records) and turned into a highly impressive animation. I didn't record any video footage, I just took a couple of pictures.
Here's a gallery of those pictures.
Sep 30 2017
A Google feature that doesn't ordinarily get a lot of attention is Google Alerts, which is a service that sends you links to things that match certain search terms on a periodic basis. Some people use it for vanity searching because they have a personal brand to maintain, some people use it to keep on top of a rare thing they're interested in (anyone remember the show Probe?), some people use it for bargain hunting, some people use it for intel collection... however, this is all predicated on Google finding out what you're interested in, certainly interested enough to have it send you the latest search results on a periodic basis. Not everybody's okay with that.
A while ago, I built my own version of Google Alerts using a couple of tools already integrated into my exocortex which I use to periodically run searches, gather information, and compile reports to read when I have a spare moment. The advantage to this is that the only entities that know about what I'm interested in are other parts of me, and it's as flexible as I care to make it. The disadvantage is that I have some infrastructure to maintain, but as I'll get to in a bit there are ways to mitigate the amount of effort required. Here's how I did it...
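The article goes into the details, but the heart of any such system is simple: fetch feeds or search results on a schedule, filter them against your interests, and compile a digest. A minimal sketch of the filter-and-report step (the entry structure and keywords here are illustrative assumptions, not my bots' actual internals):

```python
# Sketch of the filter-and-digest step of a self-hosted alerts bot.
# Entries would normally come from parsed RSS/ATOM feeds or search results.

def matches(entry, keywords):
    """Return True if any keyword appears in the entry's title or summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return any(keyword.lower() in text for keyword in keywords)

def build_digest(entries, keywords):
    """Compile matching entries into a plain-text report."""
    hits = [e for e in entries if matches(e, keywords)]
    lines = ["Search report: %d item(s) of interest" % len(hits)]
    for e in hits:
        lines.append("- %s\n  %s" % (e.get("title", "(untitled)"),
                                     e.get("link", "")))
    return "\n".join(lines)

if __name__ == "__main__":
    entries = [
        {"title": "New Keybase release", "summary": "Git support added.",
         "link": "https://example.com/1"},
        {"title": "Unrelated news", "summary": "Nothing here.",
         "link": "https://example.com/2"},
    ]
    print(build_digest(entries, ["keybase", "polybius"]))
```

A cron job (or a bot listening for a command) runs the fetch-filter-report loop and drops the digest somewhere you'll see it; no third party ever learns what the keywords are.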
Click for the rest of the article...
Sep 30 2017
Longtime readers have probably seen the odd post about my getting fed up with Firefox and migrating my workflow (and much of my online data archive) to Chromium, which lately has been significantly faster than Firefox, if nothing else. Of course, due to Windbringer's screen resolution I immediately ran into problems with just about every font size being too small, including the text in the URL bar, the menus, and the add-ons that I use. On a lark I went back to my font sizes in Keybase article and gave it a try. Lo and behold, when I used --force-device-scale-factor=1.5 it worked - I can see everything now. I could complain about the size of the text in the bookmarks bar, but I'm willing to deal with it because now I can read everything. For the record, here are the contents of my ~/Desktop/chromium.desktop file, so you can do it yourself:
Exec=chromium --force-device-scale-factor=1.5 %U
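For context, a launcher that desktop environments will actually pick up needs a few more keys than the Exec line; a minimal complete version might look like the following (the Name and Categories values are my guesses, so adjust to taste):

```
[Desktop Entry]
Type=Application
Name=Chromium (scaled)
Exec=chromium --force-device-scale-factor=1.5 %U
Terminal=false
Categories=Network;WebBrowser;
```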
Sep 24 2017
Some time ago I wrote an article of suggestions for archiving web content offline, at the very least to have local copies in the event that connectivity was unavailable. I also expressed some frustration that there didn't seem to be any workable options for the Chromium web browser because I'd been having trouble getting the viable options working. After my attempt at fixing up Firefox fell far short of my goal (it worked for all of a day, if that) I realized that I needed to come up with something that would let me do what I needed to do. I installed Chromium on Windbringer (I'm not a fan of Chrome because Google puts a great deal of tracking and monitoring crap into the browser and I'm not okay with that) and set to work. Here's how I did it:
First I spent some time configuring Chromium with my usual preferences. That always takes a while, and involved importing my bookmarks from Firefox, an automated process that took several hours to run. I also exported everything I had cached in Scrapbook, which wound up taking all night. I then installed the SingleFile Core plugin for Chrome/Chromium, which does the actual work of turning web pages open in browser tabs into a cacheable single file. I restarted Chromium, which I probably didn't need to do, but I really wanted a working solution so I opted for caution; then I installed PageArchiver from the Chrome store and restarted Chromium again. This added the little "open file folder" icon to the Chromium menu bar. The order the add-ons are installed in seems to matter; add SingleFile Core first if you do nothing else.
Now get ready for me to feel stupid: If you want to store something using PageArchiver, click on the file folder icon to open the PageArchiver pop-up, click "Tabs" to show a list of tabs you have open in Chromium/Chrome, click the checkboxes for the ones you want to save, and then hit the save button. For systems like Windbringer which have extremely high resolution screens, that save button may not be visible. You can, however, scroll both horizontally and vertically in the PageArchiver pop-up panel to expose that button. I didn't realize that before so I never found that button. That's all it took.
Here's what didn't work:
I can't import my Scrapbook archives because they're sitting in a folder on Windbringer's desktop as a couple of thousand separate subdirectories, each of them containing all of the web content for a single web page. I need to figure out what to do there. It may mean writing a utility that turns directories full of HTML into SQL commands to inject them into PageArchiver's SQLite database, which by default resides in the directory $HOME/.config/chromium/Default/databases/chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0 (the directory name is constant; the jumble of letters at the end is the same as the one in the Chrome Store URL) and has the filename 2 (yes, just the number 2). You can open it up with the SQLite browser of your choice if you wish and go poking around. Somebody may have come up with a technique for it and I just haven't found it yet, I don't know. I may not be able to add them in any reasonable way at all, and have to resort to running an ad-hoc local web server with Python or something if I want to access them, like this:
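If I do end up going the ad-hoc web server route, generating a top-level index of the Scrapbook directories would at least make the archive browsable. A rough sketch, assuming the layout described above (one subdirectory per saved page, each containing an index.html):

```python
# Generate a top-level index.html linking to every saved Scrapbook page,
# so the whole archive can be browsed through a simple local web server.
import html
import os

def build_index(archive_dir):
    """Write an index.html in archive_dir linking each subdirectory's page."""
    links = []
    for name in sorted(os.listdir(archive_dir)):
        page = os.path.join(archive_dir, name, "index.html")
        if os.path.isfile(page):
            links.append('<li><a href="%s/index.html">%s</a></li>'
                         % (html.escape(name, quote=True), html.escape(name)))
    document = ("<html><body><h1>Scrapbook archive</h1><ul>%s</ul></body></html>"
                % "".join(links))
    index_path = os.path.join(archive_dir, "index.html")
    with open(index_path, "w") as index_file:
        index_file.write(document)
    return index_path
```

Run it once against the archive folder, start the web server in that folder, and every saved page is one click away.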
[drwho@windbringer ~]$ python2 -m SimpleHTTPServer 8000