The California DMV did what?

A while ago I did the usual song-and-dance with the California DMV to renew my vehicle's registration, as one does periodically. Because I live in a fairly high-infrastructure area (not quite New York City, but certainly not as underdeveloped in this respect as Pittsburgh or the part of the DC metropolitan complex I used to reside in), it's actually kind of rare that I need to drive anywhere. If I can't walk to it in half an hour or thereabouts I can take BART and not think much of it (usually because I catch up on my reading during the trip). So, I paid however much I needed to renew the registration and forgot about it, because it was going to take a couple of days for the new registration and stickers to arrive in the mail. Because renewing a registration is much less involved than getting registered in the first place, it didn't seem like such a big deal.

A few days ago it suddenly struck me that I hadn't gotten the new paperwork in the mail and set about making inquiries of the California DMV. As John Scott Tynes once observed, when you scratch history it bleeds weirdness; the same principle holds when trying to figure out what may or may not have gone wrong inside the bowels of the Department of Motor Vehicles, a realm which I feel certain that no earthly sorcerer or adventurer should investigate too deeply lest it awaken and devour the unwary.

Long story short, they managed to (internally and on the card) get the address on my driver's license correct (they won't actually tell you what it is; you have to read off what's on it and they'll give you a "Yes" or "No" answer, sort of like the planchette of the world's most fucked up Ouija board), but they got the registration address of my vehicle completely and totally wrong. By this, I mean that they somehow managed to combine my old address in Maryland and my new address in California into something completely off-the-wall in their database, get it past the usual sanity checks (though I think I'm being overly idealistic in supposing that there are address sanity checks in their back-end database; I don't think those were a thing when the system was built), yet have the address of registration appear superficially correct on the paperwork. They sent the new tags and paperwork to this completely fucked address, where it's probably sitting collecting dust in a Post Office's dead letter drop. Moreover, once the errors were identified I was informed that the DMV cannot correct the error online or over the phone, nor can they issue new tags. I have to wait until a full month has passed since the error was detected, show up in person with supporting documentation, and straighten the problem out manually. I believe that a full cycle of sacrifices must be made at the correct times for the process to correctly initiate. This may also have something to do with the relative maturation rates of the sacrifices themselves (black and purple hens with heterochromic eyes don't exactly grow on trees, you know).

Contrast this with one of my cow-orkers who made the journey into the Plane of DMV Torment some days ago while their back-end database system was offline, meaning that every process was carried out manually by the employees there. Thirty minutes, in and out, and she walked out with all of the correct paperwork required and no errors.

The Doctor | 09 February 2016, 12:47 hours | default | No comments

Yup. I've updated my .plan file.

At long last, I've updated my .plan file again. The usual warnings - NSFW among them - apply.

The Doctor | 08 February 2016, 10:00 hours | default | No comments

Semi-autonomous software agents: Practical applications.

In the last post in this series I talked about the origins of my exocortex and a few of the things I do with it. In this post I'm going to dive a little deeper into what my exocortex does for me and how it's laid out.

My agent networks ("scenarios" in the terminology of Huginn) are collections of specialized agents which each carry out one function (like requesting a web page or logging into an XMPP server to send a message). Those agents communicate by sending events to one another; those events take the form of structured, packaged pieces of information that the receiving agent can pick values out of or pass along depending on how it's configured. Below the cut is what one kind of event looks like.
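As a rough sketch of the idea (the field names here are simplified stand-ins, not Huginn's actual schema), an event is just a structured payload that a downstream agent can pull values out of:

```python
# A hypothetical event as one agent might emit it: a plain nested
# structure whose payload the next agent can pick values out of.
event = {
    "agent": "WebsiteAgent",              # emitting agent (illustrative field name)
    "created_at": "2016-02-03T08:00:00Z",
    "payload": {
        "url": "https://www.example.com/",
        "title": "Example Domain",
        "status": 200,
    },
}

def extract(event, *keys):
    """Pull just the named values out of an event's payload."""
    return {key: event["payload"].get(key) for key in keys}

print(extract(event, "url", "status"))
```

The receiving agent's configuration decides which keys it cares about; anything else can simply be passed along to the next agent in the scenario.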


More under the cut...

The Doctor | 03 February 2016, 08:00 hours | code, content | Three comments

Don't worry, I'm still alive.

A friendly heads-up for my regular readers - I'm still alive and kicking. Not necessarily doing well, mind you - I've been sick twice in the last month (sick enough that I didn't have it in me to write anything, let alone study or do more than scan my e-mail for anything important happening and then go right back to bed), and I've been undergoing some fairly painful dental procedures at least once a month for the past few months, which takes a lot out of me. Additionally, I'm still studying for a couple of certifications for work, which is basically about as much work as writing my graduation theses in undergrad. I've also had to get a significant amount of code working for my exocortex, not only for personal reasons (Ever try to type when you have a broken wrist? Feels the same to me.) but because I'll be going to a couple of conferences this year and needed working code for them. Also... work's tiring. Some nights I don't get home until very late, and rather than write, I go to bed and try to get a good night's sleep.

So, now that I've got some stuff wrapped up and taken care of, I'll have a little more time to post here. I'm debating whether or not I'm going to continue doing "bleeding edge tech" stuff because I don't know if I'm adding anything useful to the dialogue with respect to this stuff. I do so in the hopes of bringing some pretty esoteric stuff out of the realm of technological esoterica and down to street level, smoothing the way for more people as it were, but I don't get any feedback about it so it may not continue to be a useful thing to do.

Now that Google is using HTTPS as a significant factor in its ranking algorithms, I'll probably be migrating this site off of the self-signed certificate in favor of a cert issued by the Let's Encrypt project because my hosting provider now supports them natively. I'll update that page appropriately.

The Doctor | 31 January 2016, 20:07 hours | default | No comments

EDITED: 20160131 - Call for Participants: The Future of Immigration Conference

The Brighter Brains Institute, in conjunction with the Institute for Ethics and Emerging Technologies, has announced that its next conference will be held on 6 February 2016 and bears the title Argue 4 Tomorrow. As usual, the conference will take place at the Humanist Hall in Oakland, California. The format of this conference will differ from previous conferences in that it will take the form of a slightly modified Oxford-style debate rather than a collection of presentations as we usually think of them. The three debate topics will be Open Borders - For or Against?, Basic Income Guarantee - For or Against?, and What is the Nature of the Singularity and When Will it Arrive? - What's Your Opinion? For each debate, the audience will vote to determine the victor.

The conference is actively looking for participants who want to join the debate teams at this time. If you will be in the area and want to participate in one or more of the debates, please e-mail brighterbrainsinstitute at gmail dot com and tell them Bryce sent you.

If you are a member of the IEET or the East Bay Futurists advance tickets are only $10us. General admission tickets are $15us. Sales end on 6 February 2016.

Here is the Eventbrite page for the debates.

The Doctor | 12 January 2016, 09:00 hours | default |

A new InSoc album and an upcoming concert!

Part of me just discovered (and ordered tickets for) an upcoming Information Society concert at the DNA Lounge in San Francisco, CA on 23 March 2016. Not only will this be their first concert in a while in the Bay Area, but it will be the release party for their new album, entitled Orders of Magnitude. OoM is described as a collection of covers of and homages to music that helped shape their unique musical style over the years, and from the track listing posted it appears as wildly diverse and freewheeling as it is whimsical. This is going to be one for the ages, folks... bring your earplugs and your dancing shoes, because you're going to want to remember this one.

Tickets are $17us ahead of time, $23us at the door.


More under the cut...

The Doctor | 11 January 2016, 10:00 hours | default | No comments

Semi-autonomous software agents: A personal perspective.

So, after going on for a good while about software agents you're probably wondering why I have such an interest in them. I started experimenting with my own software agents in the fall of 1996, when I first started undergrad. When I went away to college I finally had an actual network connection for the first time in my life (where I grew up, the only access I had was through dialup) and I wanted to abuse it - not in the way that the rest of my classmates were, but to do things I actually had an interest in. So, the first thing I did was set up my own e-mail server with Qmail and subscribe to a bunch of mailing lists, because that's where all of the action was at the time. I also rapidly developed a list of websites that I checked once or twice a day because they were often updated with articles that I found interesting. It was through those communication fora that I discovered the research papers on software agents that I mentioned in earlier posts in this series.

I soon discovered that I'd bitten off more than I could chew, especially when some mailing lists went realtime (which is when everybody starts replying to one another more or less the second they receive a message) and I had to check my e-mail every hour or so to keep from running out of disk space. Rather than do the smart thing (unsubscribing from a few 'lists) I decided to work smarter, not harder, and see if I could use some of the programming languages I was playing with at the time to help. I've found over the years that it's one thing to study a programming language academically, but to really learn one you need a toy project to learn the ins and outs. So, I wrote some software that would crawl my inbox, scan messages for certain keywords or phrases and move them into a folder so I'd see them immediately, and leave the rest for later. I wrote some shell scripts, and when those weren't enough I wrote a few Perl scripts (say what you want about Perl, but it was designed first and foremost for efficiently chewing on data). Later, when that wasn't enough, I turned to C to implement some of the tasks I needed Leandra to carry out.
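That inbox triage is simple enough to sketch in modern terms. Something like this (in Python rather than the shell and Perl I actually used; the keywords and messages are made up for illustration):

```python
import email

# Keywords that flag a message as worth seeing immediately (invented for
# this sketch; my real lists were longer and far more idiosyncratic).
INTERESTING = ("software agents", "qmail", "outage")

def triage(raw_message):
    """Return 'urgent' if the subject or body mentions a keyword, else 'later'."""
    msg = email.message_from_string(raw_message)
    subject = msg.get("Subject", "").lower()
    body = "" if msg.is_multipart() else msg.get_payload().lower()
    haystack = subject + " " + body
    return "urgent" if any(k in haystack for k in INTERESTING) else "later"

raw = "Subject: qmail list went realtime again\n\nDisk is filling up.\n"
print(triage(raw))  # urgent
```

A real version would walk a Maildir or mbox and actually move the matching messages into a priority folder; the classification is the interesting part.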

Because Netscape Navigator was highly unreliable on my system for reasons I was never quite clear on (it used to throw bus errors all over the place), I wasn't able to consistently keep up with my favorite websites at the time. While the idea of update feeds goes as far back as 1995, they didn't actually exist until the publication of the RSS v0.9 specification in 1999, and Atom didn't exist until 2003, so I couldn't just point a feed reader at them. So I wrote a bunch of scripts that used lynx -dump http://www.example.com/ > ~/websites/www.example.com/`date '+%Y%m%d-%H:%M:%S'`.txt and diff to detect changes and tell me what sites to look at when I got back from class.

That was one of the prettier sequences of commands I had put together, too.
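For the curious, the same trick is easy to sketch in Python nowadays: diff the latest lynx dump against the previous one and report only the new lines. (The dumps here are inline strings standing in for the timestamped files lynx would have written.)

```python
import difflib

def changed_lines(old_dump, new_dump):
    """Return the lines added since the previous dump of a page."""
    diff = difflib.unified_diff(old_dump.splitlines(),
                                new_dump.splitlines(), lineterm="")
    # Keep only genuinely added lines, skipping the "+++" file header.
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

yesterday = "Example Domain\nNothing new today.\n"
today = "Example Domain\nNothing new today.\nNew article: software agents!\n"
print(changed_lines(yesterday, today))  # ['New article: software agents!']
```

An empty result means nothing changed, so the site doesn't make that day's reading list.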


More under the cut...

The Doctor | 28 December 2015, 10:00 hours | content | Two comments

Helen Wendel, RIP.

Helen Wendel

Born: 14 April 1951
Died: 22 December 2015

Her grove missed her so that they called her home.

The Doctor | 23 December 2015, 15:49 hours | content | No comments

Software agents under the hood: What do their guts look like?

In my last post I went into the history of semi-autonomous software agents in a fair amount of detail, going as far back as the late 1970's and the beginning of formal research in the field in the early 1980's. Now I'm going to pop open the hood and go into some detail about how agents are architected and how they work, some design issues and constraints, and some of the other technologies that they can use or bridge. I'm also going to talk a little about agents' communication protocols, both those used to communicate amongst themselves and those used to communicate with their users.

Software agents are meant to run autonomously once they're activated on their home system. They connect to whatever resources are set in their configuration files and then tend to settle into a poll-wait loop, where they hit their configured resources about as fast as the operating system will let them. Each time they hit their resources they look for a change in state or a new event, and examine every change detected to see if it fits their programmed criteria. The agent then fires an event if there is a match and goes back to its poll-wait loop. Other types of agents use a scheduler design pattern instead of a poll-wait loop. In this design pattern, agents ping their data sources periodically but then go to sleep for a certain period of time, which can be anywhere from a minute to days or even months. This reduces CPU load (because poll-wait loops can hit a resource dozens or even hundreds of times a second, which causes the CPU to spend most of its time waiting for I/O to finish) and network utilization. Some agents may be designed to sleep by default but register themselves with an external scheduler process that wakes them up somehow, possibly by sending them a command over IPC or using an OS signal to touch them off.
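Stripped to its essentials, that poll-wait loop might look something like this sketch (the resource and the match criteria are stand-ins for whatever a real agent would be configured to watch):

```python
import time

def poll_wait(check_resource, matches, interval=1.0, max_polls=5):
    """Poll a resource; fire an event for each state change that matches."""
    events = []
    last_state = None
    for _ in range(max_polls):
        state = check_resource()      # hit the configured resource
        if state != last_state and matches(state):
            events.append(state)      # "fire an event" on a matching change
        last_state = state
        time.sleep(interval)          # scheduler variant: sleep minutes or days here
    return events

# Stand-in resource: a canned sequence of observed states.
states = iter([10, 10, 42, 42, 7])
events = poll_wait(lambda: next(states), matches=lambda s: s > 20, interval=0.0)
print(events)  # [42]
```

The only difference between the two patterns in this sketch is the size of that sleep interval: a tight poll-wait loop keeps it near zero, while a scheduled agent stretches it out and lets the CPU rest.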


More under the cut...

The Doctor | 07 December 2015, 10:00 hours | content | No comments

Nigeria ICT Fest slides

Here are the slides for my presentation at the Nigeria ICT Fest, held 4 and 5 December 2015. The slides are in both MS Powerpoint and PDF formats with associated PGP signatures to ensure that they haven't been tampered with.

Ongoing_Threats_to_Emerging_Financial_Entities.pdf (signature)

Ongoing_Threats_to_Emerging_Financial_Entities.pptx (signature)

The Doctor | 05 December 2015, 09:00 hours | content | No comments

Virtualbox virtual machines keep aborting.

If you've been experimenting with different operating systems for a while, or you have some need to run more than one OS on a particular desktop machine, chances are you've been playing around with Oracle Virtualbox due to its ease of use, popular set of features, flexibility, and cost. You've also probably run into the following syndrome: a virtual machine suddenly aborts for no obvious reason (usually while you're trying to build a new one). If you look in the kernel message buffer (dmesg | less) you might see something that looks like the following:

[285069.745248] EMT-1[17090]: segfault at 618 ip 00007fb2855323e1 sp 00007fb295581c40 error 4 in VBoxDD.so[7fb285483000+17d000]
[285095.055473] EMT-1[17214]: segfault at 618 ip 00007f6971e343e1 sp 00007f6981e83c40 error 4 in VBoxDD.so[7f6971d85000+17d000]
[285118.696835] EMT-1[17335]: segfault at 618 ip 00007f247f6b73e1 sp 00007f24a3486c40 error 4 in VBoxDD.so[7f247f608000+17d000]
[285159.558270] EMT-1[17464]: segfault at 618 ip 00007fc4ab5b53e1 sp 00007fc4d2d4ac40 error 4 in VBoxDD.so[7fc4ab506000+17d000]

Maybe you google it for a while, or you search through your notes in vain. If you go through the VM's logs (Oracle VM VirtualBox Manager -> Right-click on a VM -> Show Log) you'll be presented with one or more tabs, each containing logs for a boot attempt (likely, if you're reading this after a Google search you'll have three or four such logs).

I run into this every few months and promptly forget how to fix it, which is why I'm posting it here.

Access the virtual machine's properties (Oracle VM VirtualBox Manager -> Right-click on a VM -> Settings). Click on "USB" in the pane on the left-hand side of the "<mumble OS> - Settings" window. Un-check "Enable USB 2.0 (EHCI) Controller". Leave "Enable USB Controller" checked. Click the "OK" button. Try booting the VM again.

That should do it.

The Doctor | 03 December 2015, 19:13 hours | content | No comments

The Nigeria ICT Fest will be held this weekend!

The Nigeria ICT Fest is a public/private initiative for spurring economic development in the country of Nigeria by applying communication and information technologies. It will last two days, 4 and 5 December 2015 and will be held in Nigeria. On Friday, 4 December the conference will be held at Magrellos Fast Food in Festac. On Saturday, 5 December the conference will be held at Radisson Blu Anchorage Hotel on Victoria Island in the city of Lagos.

I will not be physically present at the Fest, unfortunately, but I will be attending via telepresence. I will be presenting at 1630 hours GMT+1 on Saturday, 5 December 2015 on the topic of security threats and actors in the field of online finance. To figure out what time that corresponds to wherever you happen to be, I suggest using http://time.is/ to do the necessary conversion.

Please follow them on Twitter and Facebook.

Please spread these links around everywhere you can, so that the ICT Fest gets as many attendees as possible.

The Doctor | 30 November 2015, 15:43 hours | default | No comments

The history of software agents.

Building on top of my first post about software agents, I'd like to talk about the history of the technology in reasonable strokes. Not so broad that interesting details are lost (or misleading ones added) but not so narrow that we forget the forest while studying a single tree.

Anyway, software agents could be said to have their roots in UNIX daemons, dating back to the creation of UNIX at AT&T in the 1970's. On the big timesharing systems of the time, where multiple people could be logged into the same machine working simultaneously without stepping on one another, it was observed that it was more efficient to split off lower-priority functions from the UNIX kernel into separate pieces of software. The kernel sits at the top of the hierarchy of system privileges: it manages all of the resources in the system and carries out privileged operations (such as actually interacting with the hardware and managing memory). This means that the overhead of the kernel doing all of that, plus much less important stuff like monitoring the current system temperature (for example), would be considerable - so much so, in fact, that the system would bog down. Generally speaking, if something can be done without being part of the system core it should be, and the system as a whole becomes more efficient. For example, rather than having the UNIX kernel poll the machine's serial ports constantly for keystrokes from users' terminals (back then, serial terminals were how timesharing systems were interacted with), it makes more sense to have one daemon called getty ("get TTY") listen on each serial port, grab characters and send them to the user's shell as they arrive, send output through the serial port when appropriate, and only pester the kernel when it really has to. This is why, instead of building the system logger into the kernel (which gets pretty busy, because it has to catch and write output from everything running in the background), you'll find some variant of syslogd running in userspace. Or you'll find a job scheduler called crond that executes commands (or batches thereof) on behalf of users and system admins at timed intervals.


More under the cut...

The Doctor | 30 November 2015, 10:00 hours | content | No comments

Semi-autonomous agents: What are they, exactly?

This post is intended to be the first in a series of long form articles (how many, I don't yet know) on the topic of semi-autonomous software agents, a technology that I've been using fairly heavily for just shy of twenty years in my everyday life. My goals are to explain what they are, go over the history of agents as a technology, discuss how I started working with them between 1996e.v. and 2000e.v., and explain a little of what I do with them in my everyday life. I will also, near the end of the series, discuss some of the software systems and devices I use in the nebula of software agents that comprises what I now call my Exocortex (which is also the name of the project), make available some of the software agents which help to expand my spheres of influence in everyday life, and talk a little bit about how it's changed me as a person and what it means to my identity.

This series of articles was previously highly summarized in the form of a presentation at the invitation of Ripple Labs in August of 2015.

So, let's kick this off. What are software agents, exactly? One working definition is that they are utility software that acts on behalf of a user or other piece of software to carry out useful tasks, farming out busywork that one would have to do oneself to free up time and energy for more interesting things. A simple example of this might be the pop-up toaster notification in an e-mail client alerting you that you have a new message from someone; if you don't know what I mean play around with this page a little bit and it'll demonstrate what a toaster notification is. Another possible working definition is that agents are software which observes a user-defined environment for changes which are then reported to a user or message queuing system. An example of this functionality might be Blogtrottr, which you plug the RSS feeds of one or more blogs into, and whenever a new post goes up you get an e-mail containing the article. Software agents may also be said to be utility software that observes a domain of the world and reports interesting things back to its user. A hypothetical software agent may scan the activity on one or more social networks for keywords which a statistically unusual number of users are posting and send alerts in response. I'll go out on a limb a bit here and give a more fanciful example of what software agents can be compared to, the six robots from the Infocom game Suspended. In the game, you the player are unable to act on your own because your body is locked in a cryogenic suspension tank, but the six robots (Auda, Iris, Poet, Sensa, Waldo, and Whiz) carry out orders given them, subject to their inherent limitations but are smart enough to figure out how to interpret those orders (Waldo, for example, doesn't need to be told exactly how to pick up a microsurgical arm, he just knows how to do it).
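That hypothetical keyword-scanning agent can be sketched in a few lines (the "statistically unusual" test here is just mean plus two standard deviations over keyword counts; a real agent would want something smarter, and would be reading live social network traffic rather than a canned list):

```python
from collections import Counter
from statistics import mean, stdev

def unusual_keywords(posts):
    """Flag keywords whose post counts sit well above the rest of the field."""
    counts = Counter(word for post in posts for word in post.lower().split())
    if len(counts) < 2:
        return []          # not enough data to compute a spread
    avg, spread = mean(counts.values()), stdev(counts.values())
    threshold = avg + 2 * spread
    return sorted(word for word, n in counts.items() if n > threshold)

posts = ["outage at work", "huge outage", "outage again", "outage outage",
         "cats are nice", "reading on bart"]
print(unusual_keywords(posts))  # ['outage']
```

Everything the function flags would then be handed off as an alert, exactly the "observe a domain of the world and report interesting things back" pattern described above.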


More under the cut...

The Doctor | 16 November 2015, 09:00 hours | content | No comments

Star Wars, the Force, and balance.

I've had some ideas kicking around in the back of my head for a while, in particular after finally watching the other two Star Wars prequels (I saw the first and it put me off from watching the other two for many years - ye gods...) and this article in the Huffington Post about where the next movie might be headed. I'll not cover that territory because there really isn't any reason to, but there are a few things that I've been ruminating on for a while.

First, let me state a couple of things up front: I'm not a raving Star Wars fan. There are things I enjoy much more than the Star Wars movies, but I do appreciate them as science fiction. Second, I haven't seen the trailers for the next movie. I might get around to it. I'm also not versed in the Star Wars Expanded Universe - the games, the novels, the cartoons... so any of this stuff might be covered in there somewhere. I don't know. These are also my informed speculations on the matter; I don't have any kind of inside line to the Lucasfilm/Disney/whoever else empire. I'm also trying to write with nuance, so please don't treat these words as being written with a broad brush. Treat them as examples and not as absolutes or stereotypes.


More under the cut...

The Doctor | 31 October 2015, 17:45 hours | default | No comments

Direct neural interface: Hopefully coming soon to a brain near you

Direct neural interface has long been a dream and fantasy of tech geeks like myself who grew up reading science fiction. Slap an electrode net on your head (or screw a cable into an implanted jack) and there you are, controlling a computer with the same ease that you'd walk down the street or bend a paperclip with your fingers. If nothing else, those of us who battle the spectre of carpal tunnel syndrome constantly know that our careers have a shelf life, and at some point we're going to be out of action more or less permanently. So we are constantly on the lookout for ways to not wind up on permanent disability because we can't work anymore.

Or maybe you just found out way more about me than you really needed to know. Let's move along, shall we?

Bits and pieces of brain/computer interface technology have been around for years: The electroencephalogram is a non-invasive sensing technology for picking up the electrical activity of the brain, and relatively inexpensive open source EEGs like the OpenBCI exist for people hacking around. Microprocessors are now fast and powerful enough to crunch EEG data in realtime for very little money, and the most unusual hardware can be repurposed for getting the data into your laptop. You can purchase reusable EEG electrodes on Amazon for very little money to ensure that you get the highest quality signals (or you can make your own). TMS (transcranial magnetic stimulation) has been around since the mid-1990's, when only a dedicated subculture of body hackers and modification enthusiasts were winding their own electromagnets and seeing what would happen when they were placed on different areas of their skulls. But what would it take to put all of this together to transfer information from one person's brain to that of another person?

The answer is: Not much, really. A research team at the University of Washington has published the results of experiments they've conducted that accomplished just that. One test subject was wired up to an EEG monitoring their cortical electrical activity; the EEG was interfaced with a computer plugged into their local area network, where it transmitted the data to another computer. In another lab about a mile down the road, a second test subject had a TMS unit strapped to the back of their head, interfaced with a second computer receiving the EEG data from the network. The TMS was positioned over the primary visual cortex. When the TMS was energized, the resulting magnetic field caused phosphenes to appear in the subject's field of vision (if you want to replicate this at home, close your eyes and gently press on your eyelids; what you see are phosphenes triggered by the pressure stimulating your retinas, which send signals down your optic nerves into your visual cortices). The first test subject viewed a static image; the second test subject used some software to ask the first yes-or-no questions about the image, which were answered by thinking "Yes" or "No" very hard. When the second test subject detected a strong phosphene, they interpreted it as a "Yes" response. When the experiments were done and the numbers were crunched, it was found that five pairs of test subjects playing twenty games - half of them controls, half real games - showed a success rate of 72% in the real games; in the control games, the success rate was only 18%, which is significantly below that of the experimental condition. If you've a mind to, their peer-reviewed paper is available at PLOS ONE.


More under the cut...

The Doctor | 27 October 2015, 09:00 hours | default | One comment

Aftermath of the Future of Politics conference.

No notes to post, I was too busy running tech for the conference. And fighting with Skype.

The Doctor | 20 October 2015, 16:31 hours | default | No comments

Machine learning going from merely unnerving to scary.

It seems like you can't go a day with any exposure to media without hearing about machine learning - developing software which isn't designed to do anything in particular, but is capable of teaching itself to carry out tasks and make educated predictions based upon its training and the data already available to it. If you've ever had to deal with a speech recognition system, bought something off of Amazon that you didn't know existed (but seemed really interesting at the time), or used a search engine, you've interacted with a machine learning system of some kind. That said, here's a roundup of some fascinating stuff being done with machine learning systems at this time.

First, let's talk about chess. As board games go it's a tricky one to write software for, due to the number of potential moves every turn. Pretty much every chess engine out there, from IBM's Deep Blue to Colossus Chess back in 1984, uses more or less the same general technique: brute-forcing the set of all possible moves for that board configuration, deleting the moves that obviously won't work (i.e., illegal moves) with varying degrees of cleverness, and winnowing down the remaining possible positions to extract the best possible move for that moment. Well-engineered systems can run several hundred million possible moves in a few seconds before settling on a move; conversely, human chess players are observed using the fusiform face areas of their brains to evaluate five or six moves per second before picking a move, which is obviously much slower, but history has borne out just how efficient a means of playing chess wetware is. Enter Giraffe by Matthew Lai at the Imperial College of London. Giraffe is implemented as a very sophisticated machine learning system which makes use of multiple layers of neural networks, each of which analyzes chess boards in a different way. One layer looks at the state of the game board as a whole, another analyzes the location of each piece relative to the others on the board, and another considers the squares each piece can move to as well as the game effects of each possible move. Giraffe started out knowing nothing about the game of chess because it was an unformatted, unprogrammed neural network construct. Lai then began feeding into Giraffe carefully selected parts of databases of chess games, where each game is documented move-by-move and annotated every step of the way. This is, incidentally, the important bit about training AI software: whatever data sets you train them with have to be annotated in some natively machine readable way, so that the software has a "native language" to attach ideas to, just as you or I would think in our native languages and mentally translate into a second language learned later in life. All told, it took Giraffe about 72 continuous hours to assimilate the information needed to play chess. At the end of the training process Giraffe was benchmarked against human chess players, and it was discovered that Giraffe ranks as a FIDE International Chess Master. If you're curious, here's the paper Lai wrote about building and training Giraffe.
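To make the multiple-views idea concrete, here's a toy sketch - nothing like Giraffe's actual architecture, features, or weights, just the general shape of the approach: three tiny networks each score a different view of a position, and a final layer merges them into one evaluation.

```python
import math

def tanh_layer(features, weights, bias):
    """One miniature fully-connected layer: weighted sum squashed through tanh."""
    return math.tanh(sum(f * w for f, w in zip(features, weights)) + bias)

def evaluate(position):
    """Combine three 'views' of a position into one score.

    The feature vectors and weights below are invented for illustration;
    a trained network would learn them from annotated game databases.
    """
    global_view = tanh_layer(position["global"], [0.5, -0.3], 0.1)  # whole-board state
    piece_view = tanh_layer(position["pieces"], [0.2, 0.4], 0.0)    # relative piece placement
    mobility_view = tanh_layer(position["moves"], [0.7], -0.2)      # reachable squares
    # Final layer merges the three views into a single evaluation score.
    return tanh_layer([global_view, piece_view, mobility_view], [1.0, 1.0, 1.0], 0.0)

position = {"global": [0.6, 0.1], "pieces": [0.3, 0.2], "moves": [0.9]}
print(round(evaluate(position), 3))
```

Training amounts to nudging all of those weights until the final score agrees with the annotated outcomes in the game databases.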


More under the cut...

The Doctor | 13 October 2015, 08:30 hours | default | One comment

I am now obligated to say something.

Readers of my site or social acquaintances may be aware of independent presidential candidate and outspoken transhumanist Mr. Zoltan Istvan, who is at this time on the campaign trail. More specifically, Zoltan is one of the residents of the Immortality Bus, which is driving across the country to raise awareness of death and why time and funds must be allocated to study cures for aging and decrepitude in the human animal. Zoltan Istvan seems, in the times I've spoken with him on a casual basis, to be a reasonably decent, intelligent, and well-read person. He is a very successful and ambitious person, and I will not take that away from him.

However, Zoltan is working hard to promulgate an us-versus-them mentality among the people he is interacting with. Either you're with him and working to overcome death, or you're against him (in his parlance, a "deathist"). He's calling out moderate voices in the community - people who are hard at work in the lab or at the bench and not talking, people who have nuance in their worldviews, and people who have called him out for trolling (such as was done outside of churches in the Deep South some weeks back).

I cannot, in good conscience, back him any longer.

There are as many avenues to personally directed evolution and potentially transcendence as there are members of the transhumanist community. Our strength is in our diversity of viewpoints, our works, and our willingness to collaborate so that all of us benefit, not in rhetoric which makes us look like a bunch of extremists. It's hard enough being taken seriously when you say you build prosthetic limbs in your workshop, and telling people that they're on the side of death if they won't listen to you isn't helping any.

Zoltan, there is no reason that you should read these words, but just the same: I have a great deal of respect for you, and I do not think ill of you. I'd love to hang out and talk shop over coffee with you the next time you're in the area. But you're going about this the wrong way.

Here is the official word of the Transhumanist Party, the words of which I happen to endorse even though I have elected to not join the organization.

The Doctor | 12 October 2015, 18:28 hours | default | One comment

I'm not about to break a streak.

It's getting near the end of September and I haven't posted anything yet this month. What's going on?

Rather a lot, actually.

I've taken on a significant amount of responsibility at my day job this year, and sometimes that means putting in long hours. Long enough hours that, if I don't faceplant shortly after arriving at home I'm awake for only an hour or two afterward, and the last thing I want to lay eyes on is a keyboard. I usually study during that time before crashing for the next day. Yes, this means that I'm at the point in my career where racking up certifications is now a requirement, and not merely an idea to entertain and then set aside for later. This is not terribly conducive to blogging, as one would guess.

I've also been putting in a fair amount of time working on a couple of conferences, both organizing and preparing papers for. Organizing a conference seems so easy when all you're doing is preparing and practicing a presentation, and maybe running your paper past someone for a final look-see before it goes live. But when it comes to getting everyone to send their presentations to you in a timely manner for testing, getting a laptop and projector, figuring out lunch for all the attendees... it's a lot of work.

I have a fairly large queue of stuff I want to write about, probably fifteen or twenty long-form posts worth. If and when things calm down somewhat (and assuming that my body's immune system doesn't segfault on me) I'll try to get to work on them.

The Doctor | 27 September 2015, 16:57 hours | default | No comments
"We, the extraordinary, were conspiring to make the world better."