May 05 2018
If you have multiple systems (like I do), a problem you've undoubtedly run into is keeping your bookmarks in sync across every browser you use. Of course, there are services that'll happily do this job on your behalf, but they're free, and we all know what free means. If you're interested in being social with your link collection, there are some social bookmarking services out there for consideration, including what's left of Delicious. For many years I was a Delicious user (because I liked the idea of maintaining a public bookmark collection that could be useful to people), but Delicious got worse and worse every time it was sold to a new holding company. I eventually gave up on Delicious, pulled my data out, and thought long and hard about how often anybody actually used my public link collection. The answer wound up being "in all probability, not at all," largely because I never received any feedback, on-site or off. Oh, well.
For a couple of years I used an application called Unmark to manage my link collection, and it did a decent enough job. It also had some annoying quirks that, over time, got further and further under my skin: about half the time, bookmarks would be saved without any of the descriptions or tags I gave them; there was no search API; and the built-in search function was bad enough that I wanted to plug in my own, but couldn't. Earlier this year I kicked Unmark in the head and started the search for a replacement. Eventually, the Unmark hosted service started redirecting to the Github repository, and then even that redirect went away. Unmark hasn't been worked on in eight months, and Github tickets haven't been touched in about as long. In short, Unmark seems dead as a doornail.
So I migrated my link collection to a new application called Shaarli, and I'm quite pleased with it.
Jan 28 2018
A couple of days ago I gave a talk online to some members of the Zero State about my exocortex. It's a pretty informal talk, done as a Hangout, where I talk about some of the day-to-day stuff and where the project came from. I didn't have any notes and it was completely unscripted.
Embedding is disabled for some reason, so I can't just put the video here. Here's a direct link to the recording.
Oct 28 2017
If you've been squirreling away information for any length of time, chances are you tried to keep it all organized and then gave up when the volume got too large. Everybody has their limit to how hard they'll struggle to keep things organized, and past that point there are really only two options: give up, or bring in help. And by 'help' I mean a search engine of some kind that indexes all of your stuff and makes it searchable so you can find what you need. The idea is to let the software do the work while the user just runs queries against its database to find documents on demand. Practically every search engine parses HTML to get at the content, but some can also read PDF files, Microsoft Word documents, spreadsheets, plain text, and occasionally even RSS or ATOM feeds. Since I started offloading some file downloading duties to yet another bot, my ability to rename files sanely has... let's be honest... been gone for years. Generally speaking, if I need something I have to search for it, or it's just not getting done. So here's how I fill that particular niche in my software ecosystem.
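The core of that 'help' is simple enough to sketch. Here's a toy illustration of the index-then-query idea (to be clear, this is not the indexer I actually use, just a stand-in to show the shape of the thing): a plain inverted index over a directory of text files.

```python
# Toy illustration of the index-then-query idea: build an inverted
# index mapping each word to the files that contain it, then answer
# queries by intersecting the per-word file sets.
import os
import re
from collections import defaultdict

def build_index(root):
    """Walk a directory tree and index every .txt file it contains."""
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".txt"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for word in re.findall(r"[a-z0-9]+", f.read().lower()):
                    index[word].add(path)
    return index

def search(index, query):
    """Return the set of files containing every word in the query."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results
```

A real document search engine layers format parsers (HTML, PDF, Word), ranking, and incremental re-indexing on top of this, but the fundamental trade is the same: the software pays the organizing cost up front so you don't have to.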
Oct 12 2017
Originally published at Mondo 2000, 10 October 2017.
A common theme of science fiction in the transhumanist vein, and less commonly in applied (read: practical) transhumanist circles, is the concept of having an exocortex either installed within oneself or interfaced in some way with one's brain to augment one's intelligence. To paint a picture with a fairly broad brush, an exocortex was a system postulated by JCR Licklider in the research paper Man-Computer Symbiosis which would implement a new lobe of the human brain situated outside of the organism (though some components of it might be internal). An exocortex would be a symbiotic device that would provide additional cognitive capacity or new capabilities that the organism previously did not possess, such as:
- Identifying and executing cognitively intensive tasks (such as searching for and mining data for a project) on behalf of the organic brain, in effect freeing up CPU time for the wetware.
- Adding additional density to existing neuronal networks to more rapidly and efficiently process information. Thinking harder as well as faster.
- Providing databases of experiential knowledge (synthetic memories) for the being to "remember" and act upon. Skillsofts, basically.
- Adding additional "execution threads" to one's thinking processes. Cognitive multitasking.
- Modifying the parameters of one's consciousness, for example, modulating emotions to suppress anxiety and/or stimulate interest, stimulating a hyperfocus state to enhance concentration, or artificially inducing zen states of consciousness.
- Expanding short-term memory beyond baseline parameters. For example, mechanisms that translate short-term memory into long-term memory significantly more efficiently.
- Adding I/O interfaces to the organic brain to facilitate connection to external networks, processing devices, and other tools.
Sep 30 2017
A Google feature that doesn't ordinarily get a lot of attention is Google Alerts, which is a service that sends you links to things that match certain search terms on a periodic basis. Some people use it for vanity searching because they have a personal brand to maintain, some people use it to keep on top of a rare thing they're interested in (anyone remember the show Probe?), some people use it for bargain hunting, some people use it for intel collection... however, this is all predicated on Google finding out what you're interested in, certainly interested enough to have it send you the latest search results on a periodic basis. Not everybody's okay with that.
A while ago, I built my own version of Google Alerts using a couple of tools already integrated into my exocortex which I use to periodically run searches, gather information, and compile reports to read when I have a spare moment. The advantage to this is that the only entities that know about what I'm interested in are other parts of me, and it's as flexible as I care to make it. The disadvantage is that I have some infrastructure to maintain, but as I'll get to in a bit there are ways to mitigate the amount of effort required. Here's how I did it...
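To give a sense of the moving parts, the heart of such a system is just "fetch items periodically, keep the ones matching my search terms, and compile a report." This is a hedged sketch rather than my actual bots; the item dicts and search terms are made-up examples:

```python
# Minimal sketch of the "alerts" core: given a batch of fetched items
# (here, plain dicts with a title and a link) and a list of search
# terms, keep the items that mention any term and format a digest.
def match_items(items, terms):
    """Return the items whose title contains any of the search terms."""
    terms = [t.lower() for t in terms]
    return [item for item in items
            if any(t in item["title"].lower() for t in terms)]

def compile_report(items):
    """Render matched items as a simple plain-text digest."""
    if not items:
        return "Nothing new today.\n"
    lines = ["Search results digest:", ""]
    for item in items:
        lines.append("- %s\n  %s" % (item["title"], item["link"]))
    return "\n".join(lines) + "\n"
```

In the real thing the items come from periodically polling search engines and feeds, and the finished report gets delivered to me to read when I have a moment; the point is that none of it requires telling Google what I care about.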
Sep 24 2017
Some time ago I wrote an article of suggestions for archiving web content offline, at the very least to have local copies in the event that connectivity was unavailable. I also expressed some frustration that there didn't seem to be any workable options for the Chromium web browser because I'd been having trouble getting the viable options working. After my attempt at fixing up Firefox fell far short of my goal (it worked for all of a day, if that) I realized that I needed to come up with something that would let me do what I needed to do. I installed Chromium on Windbringer (I'm not a fan of Chrome because Google puts a great deal of tracking and monitoring crap into the browser and I'm not okay with that) and set to work. Here's how I did it:
First I spent some time configuring Chromium with my usual preferences. That always takes a while, and involved importing my bookmarks from Firefox, an automated process that took several hours to run. I also exported everything I had cached in Scrapbook, which wound up taking all night. I then installed the SingleFile Core plugin for Chrome/Chromium, which does the actual work of turning web pages open in browser tabs into a cacheable single file. I restarted Chromium (which I probably didn't need to do, but I really wanted a working solution, so I opted for caution), then installed PageArchiver from the Chrome store and restarted Chromium again. This added the little "open file folder" icon to the Chromium menu bar. The order the add-ons are installed in seems to matter; install SingleFile Core first if you do nothing else.
Now get ready for me to feel stupid: if you want to store something using PageArchiver, click on the file folder icon to open the PageArchiver pop-up, click "Tabs" to show a list of the tabs you have open in Chromium/Chrome, click the checkboxes for the ones you want to save, and then hit the save button. On systems like Windbringer, which have extremely high resolution screens, that save button may not be visible. You can, however, scroll both horizontally and vertically in the PageArchiver pop-up panel to expose it; I hadn't realized that, which is why I never found the button. That's all it took.
Here's what didn't work:
I can't import my Scrapbook archives because they're sitting in a folder on Windbringer's desktop as a couple of thousand separate subdirectories, each of them containing all of the web content for a single web page. I need to figure out what to do there. It may mean writing a utility that turns directories full of HTML into SQL commands to inject them into PageArchiver's SQLite database, which, by default, resides in the directory $HOME/.config/chromium/Default/databases/chrome-extension_ihkkeoeinpbomhnpkmmkpggkaefincbn_0 (the directory name is constant; the jumble of letters at the end is the same as the one in the Chrome Store URL) and has the filename 2 (yes, just the number 2). You can open it up with the SQLite browser of your choice if you wish and go poking around. Somebody may have come up with a technique for this already and I just haven't found it; I don't know. I may not be able to add them in any reasonable way at all, and have to resort to running an ad-hoc local web server with Python or something if I want to access them, like this:
[drwho@windbringer ~]$ python2 -m SimpleHTTPServer 8000
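If I ever write that import utility, the skeleton might look something like this. Fair warning: this is a hedged sketch, not a working importer. I haven't reverse-engineered PageArchiver's actual schema, so the table and column names below (`pages`, `title`, `content`) are hypothetical stand-ins you'd replace after poking at the real database file:

```python
# Sketch: walk a tree of Scrapbook-style directories (one subdirectory
# per saved page, each containing an index.html) and load each page
# into a SQLite database.  The table layout here is hypothetical --
# inspect the real PageArchiver database and adjust before trusting it.
import os
import sqlite3

def import_scrapbook(root, db_path):
    """Load every <root>/<subdir>/index.html into the database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS pages "
                 "(id INTEGER PRIMARY KEY, title TEXT, content TEXT)")
    count = 0
    for entry in sorted(os.listdir(root)):
        page = os.path.join(root, entry, "index.html")
        if not os.path.isfile(page):
            continue
        with open(page, encoding="utf-8", errors="replace") as f:
            html = f.read()
        # Use the directory name as a stand-in title.
        conn.execute("INSERT INTO pages (title, content) VALUES (?, ?)",
                     (entry, html))
        count += 1
    conn.commit()
    conn.close()
    return count
```

The parameterized INSERTs keep the HTML from mangling the SQL, and the same loop could just as easily emit .sql text for review before touching the real database.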