Exocortex: Identity and Agency

12 September 2016

Some time ago I was doing a longform series on Exocortex, my cognitive prosthetic system. I left off with some fairly broad and open-ended questions about the implications of such a software system for identity and agency. Before I go on, though, I think I'd better define some terms. Identity is one of those slippery concepts that you think you get until you have to actually talk about it. One possible definition is "the arbitrary boundary one draws between the self and another," or "I am me and you are you." A more technical definition might be "the condition or character as to who a person or what a thing is; the qualities, beliefs, et cetera that distinguish or identify a person or thing." That said, in this context I think that a useful working definition of 'identity' might be "the arbitrary boundary one draws between the self and another being, which may or may not incorporate the integration of tools or other augmentations." Let us further modify the second, technical definition to read "the condition or character as to who a person or what a thing is or consists of, given the presence or absence of augmentations that modify the capabilities and/or attributes thereof," because the definition should explicitly take into account software or hardware augmentations.

We also need to examine the definition of the word agency, which seems even more problematic. The Free Dictionary gives one definition as "the condition of being in action or operation," or loosely, "being able to do stuff." The Stanford Encyclopedia of Philosophy says (among other things) the following about agency as a concept: "the exercise or manifestation of the capacity to act." Of course, there are also arguments in the philosophy of agency about actors that should not be capable of forming the intention to act doing so anyway, sometimes in ways that are functionally indistinguishable from organic life (which we usually think of as actors in the philosophical sense, anyway). And that's where things start getting tangled up.

Before I move on, I should set up two additional definitions. For the purposes of this post, 'agent' will refer to one of the functional units within Huginn used to construct solutions to larger problems. 'Construct' will refer to one of the separate, more complex pieces of software that plug into Huginn from the outside.

As I've mentioned in the past about Exocortex, while I built it to do things on my behalf, the networks of agents and constructs that comprise it run more or less autonomously. That is to say, every agent has its own schedule - some agents run every minute, some every five minutes, or ten minutes, or hour, or at set times of the day. These agents are kicked off by Huginn's scheduler, which looks at the list of all timed agents once a minute, picks out the ones that are due to run next, and fires them off. Other agents only run when they receive an event from another agent, i.e., on a trigger. It could be said that this gives them a certain kind of agency - I do some things on a set schedule (like get up, go to work, check my e-mail, and so forth) so it's reasonable that extensions of myself would operate the same way. The other agents, the ones that are triggered instead of scheduled, are no different from what is being called "interrupt-driven work" these days - stuff that has to get done because somebody asked for it instead of it coming up on a calendar. They also carry out tasks that I would otherwise have to do myself - refresh a page, pull an RSS feed, look at a price, and so forth. So, those individual processes have agency in that they have tasks that they accomplish on a regular basis, and the sum total of those tasks is roughly equivalent to several hours of my time every day.

The constructs I've built - the XMPP bridge, the web index and search bots, Paywall Breaker, and the rest - have a more sophisticated kind of agency. They were written to carry out very specific and complex tasks, like translating Jabber messages into a REST API that other pieces of software can pull commands from, or downloading an HTML page, rendering it, and uploading it into a document management system. That agency is predicated upon all of the ways they can carry out those tasks (for example, look at the code for Paywall Breaker) or throw an exception trying.
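To make the scheduled-versus-triggered distinction concrete, here is a minimal Python sketch of the pattern described above - a scheduler loop that wakes up once a minute and fires any timed agents that are due, plus an agent that only runs when an event is handed to it. This is not Huginn's actual implementation (Huginn is written in Ruby), and the Agent and scheduler_tick names are my own illustration.

    # A sketch, not Huginn: scheduled agents fired by a once-a-minute scheduler
    # loop, and triggered agents that only run when an upstream event arrives.
    import time

    class Agent:
        def __init__(self, name, interval=None, action=None):
            self.name = name            # human-readable label
            self.interval = interval    # seconds between runs; None = trigger-only
            self.last_run = 0
            self.action = action        # callable that does the actual work
            self.listeners = []         # downstream agents triggered by our events

        def run(self, event=None):
            result = self.action(event) if self.action else None
            # Hand our output downstream, waking any triggered agents.
            for listener in self.listeners:
                listener.run(result)

    def scheduler_tick(agents, now):
        """Fire every scheduled agent whose interval has elapsed."""
        for agent in agents:
            if agent.interval and now - agent.last_run >= agent.interval:
                agent.last_run = now
                agent.run()

    # A scheduled agent that pretends to poll an RSS feed, and a triggered
    # agent that only acts when the poller hands it something.
    poller = Agent("rss_poller", interval=300, action=lambda _: "new items found")
    notifier = Agent("notifier", action=lambda ev: print("notifier received:", ev))
    poller.listeners.append(notifier)

    while True:
        scheduler_tick([poller, notifier], time.time())
        time.sleep(60)   # the scheduler itself wakes up once a minute

The sketch collapses a great deal, but the division of labor is the point: the clock drives some agents, and events emitted by other agents drive the rest.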

This has some interesting implications.

For starters, the agency of all of those pieces of software is an extension of my own agency. I built them to do things on my behalf and they do them as well as they are able, sending the results back to me through any number of communication channels. Then there is the matter of what happens if they mess up. Software always has bugs in it, but that does not excuse failing to perform due diligence to identify and mitigate, or at least minimize, the impact of its failure modes. The non-trivial tasks that my Exocortex carries out are sufficiently important that I can't brush off bugs or mixups as "just glitches in my software." None of this software was built for fun; it was built to do what most people would consider important things, many of which are directly involved in my professional life. It would be like trying to pass off dropping an armful of dishes as "having a butterfingers night," which, if you've ever had any close calls with shattered crockery at home, flies about as well as a truck full of bowling balls. At the very least it would be dishonest, because I am just as responsible for extensions of myself as I am for my own actions. One way of looking at it is that those agents mean there's more "me" there, so I'm just as responsible for their messing up as I am for screwups that my physical hands had direct contact with. No pressure, right?
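As a rough illustration of what that due diligence looks like in practice - and only an illustration; the run_with_diligence and notify_owner names are hypothetical, not part of Huginn or any of my constructs - here is a small Python sketch of wrapping an agent's task so that failures are retried a bounded number of times and then reported back to me instead of disappearing silently.

    # Hypothetical failure-mode handling: bounded retries, then report back.
    import logging
    import time

    def notify_owner(message):
        # Placeholder: in practice this would go out over a real channel
        # (XMPP, e-mail, and so forth) rather than just the log.
        logging.error(message)

    def run_with_diligence(task, retries=3, delay=30):
        """Run a task function, retrying on failure and reporting if it still fails."""
        for attempt in range(1, retries + 1):
            try:
                return task()
            except Exception as exc:
                logging.warning("attempt %d of %d failed: %s", attempt, retries, exc)
                time.sleep(delay)
        notify_owner("task %r gave up after %d attempts" % (task.__name__, retries))
        return None

    # Example (hypothetical): run_with_diligence(fetch_feed), where fetch_feed
    # is the agent's real work.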

There is also the question of whether or not I, that-which-writes-this-blog-post (though not necessarily that which does the research, pulls up relevant links, or handles any secondary tasks), am necessarily aware of anything that my Exocortex is up to. Of course, on an academic level I know what the agent networks are doing (there are just over a thousand agents running right now, so I can't fit all of it into my wetware...) and what the constructs on servers around the world are up to, but do I really know what's going on inside them? Do I know what the Ruby interpreter is doing instruction-by-instruction in the memory of those servers? No. I could if I loaded it into a debugger and watched every thread run in realtime, though it would take time after the fact to figure out what the instructions actually accomplish. No mean feat, and certainly not one that I can do in realtime, any more than I can instrument the neurons and synapses inside my own skull to monitor what they're doing. (Contrary to popular conception, an EEG only displays the electrical activity of the brain in a fairly low resolution way.) I also don't know the values of any of the events they're generating and transmitting between each other unless I'm specifically watching in a web browser, with multiple tabs open and refreshing in realtime. Certainly possible, and necessary to debug things, but it also defeats the purpose of outsourcing these tasks (as it were) to software agents if I do it too much. The same goes for the constructs running on other servers: they all emit some output into the shells they run in, more if debugging output is specifically enabled, but there isn't much sense in watching them all the time unless I'm trying to fix something.

It is certain that only agents that interact with each other could be said to be aware of one another; minimizing coupling also minimizes interference, and if there's no reason for two dissimilar things to talk to each other, there is no reason to program them to do so, just like a Swiss army knife doesn't need to be attached to your bootlaces. The reverse is also true: I only interact with the user-facing agents when I need to, not because I have nothing better to do at the moment. It may sound strange, but I don't spend all my time interacting with other parts of myself. When I need some web searches done, or a phone call made, I send a quick message to the construct that does so for me and do something else until I hear back (like type in notes for other stuff in the project I'm working on, or answer an interrupt-driven request or something).
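Here is a small sketch, again in illustrative Python rather than anything actually running in my Exocortex, of the "only watch it when something's broken" idea: events flow between agents over a simple bus, each handler sees only the topics it subscribed to, and a debug tap that prints everything is attached only while troubleshooting.

    # Illustrative only: loosely coupled agents on an event bus, with an
    # optional debug tap for the times I actually need to watch the traffic.
    class EventBus:
        def __init__(self):
            self.subscribers = {}   # topic -> list of handler callables
            self.debug_tap = None   # optional observer, normally off

        def subscribe(self, topic, handler):
            self.subscribers.setdefault(topic, []).append(handler)

        def publish(self, topic, event):
            if self.debug_tap:
                self.debug_tap(topic, event)       # only observed on demand
            for handler in self.subscribers.get(topic, []):
                handler(event)                     # handlers know only their topic

    bus = EventBus()
    bus.subscribe("prices", lambda ev: print("price watcher saw", ev))
    bus.publish("prices", {"item": "RAM", "usd": 42})

    # Attach the tap only while debugging, then take it back off.
    bus.debug_tap = lambda topic, ev: print("DEBUG", topic, ev)
    bus.publish("prices", {"item": "RAM", "usd": 41})
    bus.debug_tap = None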

A rather obvious question is whether or not any of this software and hardware is "really" a part of me. That is a less orchestrated question than it sounds. Popular opinion holds that tools of any kind, from hammers to screwdrivers to anything else you might pick up and put down, are external objects. However, medical science has hypothesized for many years, and recent experiments have shown, that this is probably not the case. The human mind has a body schema, an internal representation of how the body is structured based upon sensory input (which includes proprioception, or the awareness of the positions of the parts of one's body). As it turns out, the body schema is very much mutable, and changes on a scale of minutes if not faster. Recent experiments have shown that tools used to carry out tasks result in changes to one's body schema because the brain starts treating the tools as parts of the body, and when the tools are put away the body schema shifts back to the way it was before. The same has been shown to be true of prosthetic limbs and assistive devices for locomotion (original paper here).

It has been my observation that having an information gathering network has increased my reach and my self-education in many ways; my weekly information diet includes perhaps a dozen whitepapers and maybe a textbook or two during my daily commute (though I've also noticed that my relationship to electronic media versus the printed word has changed in some ways I find disquieting, to say the least). I have certainly found that my information archives (comprising instances of Scrapbook on a number of machines, the archive maintained by Paywall Breaker, my personal search engines, and library directories on any number of servers) constitute personal memory external to that which resides inside my skull. Even if I diligently kept up with speed reading and eidetic memorization exercises every day (the former yes, the latter not so much, for reasons outside of the scope of this article), there remains the problem of knowing what I would need to keep inside my head and accessible at all times, and what I can store externally and recover as needed. Do I really need to remember each and every e-mail I've written in the past twenty-five years? How about the specifics of the Common Criteria? The answer is "No, I don't." I need to know where to find and access that information when required, but storing all of it inside my head is neither feasible nor efficient. What I do need to get through life, however, is the ability to find information on an as-needed basis, evaluate it for correctness and usefulness, and apply basic deductive, inductive, and abductive reasoning to it. I need to keep memorized at all times only a fairly small set of information that I use multiple times per day to be a more-or-less functional adult in the twenty-first century (like navigating the banking system, paying bills, user credentials, how to write a decent letter or e-mail, things like that). Basic education, of course, is assumed here because that's part of "evaluate for correctness and usefulness."
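As a toy illustration of the difference between recall and retrieval - deliberately simplified, and standing in for the real search engines and archives mentioned above - here is a tiny inverted index in Python. The only thing worth keeping in your head is how to query it.

    # A toy stand-in for a personal archive: retrieval replaces recall.
    from collections import defaultdict

    index = defaultdict(set)   # word -> set of document names
    documents = {}

    def archive(name, text):
        documents[name] = text
        for word in text.lower().split():
            index[word].add(name)

    def recall(query):
        """Return the names of archived documents mentioning the query term."""
        return sorted(index.get(query.lower(), set()))

    archive("common-criteria-notes", "Evaluation assurance levels and protection profiles")
    archive("old-email-2003", "Notes on the Common Criteria evaluation process")
    print(recall("evaluation"))   # -> ['common-criteria-notes', 'old-email-2003']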

But that whole "engineers who can't do basic math" thing? That's totally true, and I have to practice it lest I forget how to do it. I think it's because, at a certain point, you stop doing elementary stuff; to put it another way, I do significantly more Boolean logic and Bayesian reasoning than I do adding and subtracting day in and day out. In like fashion, during circumstances when I lack connectivity or happen not to have my laptop with me, I'm kind of hosed if I can't get to one or more of my personal archives. My personal e-mail, not so much, maybe, but not being able to access a particular engineering textbook outside of my expertise or a tutorial on a particularly arcane topic (like easter egg hunting) can be a real kick in the pants.
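For the curious, here is the kind of Bayesian reasoning I mean, as a short worked example in Python with invented numbers: given that an alert fires, how likely is an actual outage?

    # Bayes' theorem with made-up numbers, purely for illustration.
    def posterior(prior, true_positive_rate, false_positive_rate):
        """P(outage | alert) via Bayes' theorem."""
        evidence = (true_positive_rate * prior
                    + false_positive_rate * (1 - prior))
        return (true_positive_rate * prior) / evidence

    # An outage on 1% of days, an alert that catches 95% of outages but also
    # fires spuriously 5% of the time:
    print(round(posterior(0.01, 0.95, 0.05), 3))   # ~0.161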

So, what does this mean for my identity? What am I?

I'm not going to get into identity politics in this post because it's well outside the scope of this series. What is within scope is this: to a certain extent, all of us are augmented by the technologies we use every day, from the wristwatches that assist us in telling time to our smartphones to Google Mail and Calendar. As Donna Haraway stated in A Cyborg Manifesto (local mirror), cyborgs are creatures that synthesize organisms (in this frame of reference, sapients like you and me), machines (computers and interface devices), social relations (how we interact with each other both in and out of the context of those machine augmentations), and fictions-as-stories, in the sense of the stories that we tell in those contexts. To put that last bit another way, social science and psychology have observed that putting things in the context of stories makes people significantly more likely to involve themselves with the world and their tools in a personal manner. Funnily enough, when people hear or read stories, they put themselves into the stories, usually (but not always) by identifying with the protagonist, which changes how they interact with what the story is about. It sounds weird, but this is one of the principles behind user stories in agile software development. Anybody who's paid attention to how the Web has changed in the past five years knows (more or less instinctively) what I'm talking about.

Having made these claims with associated evidence (and sufficient web search fodder for motivated readers), here's where I stand on the issue of identity: due to the software augmentations and hardware interfaces incorporated into who I am and what I do each and every day, my identity is inherently that of a cyborg, leaning heavily toward the software side of things. It is inherently difficult to separate what I am and how I do it from the components of my exocortex. There is also a blurry line between what I-that-writes-this and the components of my exocortex do, because there is a non-zero probability (closer to a 30% probability, really) that anybody contacting me will wind up in contact with at least one software construct doing something on my behalf before they actually get through to me. Not that I have anything against anybody, but between handling evolving situations at work, sitting in meetings, and commuting through zones that have spotty to no connectivity, would you rather wait a day or two for me to get back to you, or have something get back to you in a reasonably fast timeframe and let me know about the contact after the fact? For those of you who are upset at the possibility of holding a conversation with a software construct, what if I told you that you were talking to my executive assistant because I was in meetings all day and unable to check my e-mail because there were no breaks? Why is one inherently unacceptable and the other perfectly reasonable?

Something that the world in general is going to have to get used to is that it has become so inherently complex that individuals are less and less able to navigate it, or keep track of everything happening in their lives, without significant forms of augmentation. Chances are, if you're reading this post you have at least one Gmail account, which means that you make use of one of the slickest web-based user interfaces ever made, plus you're leveraging the capabilities of one of the most sophisticated search engines on the planet to search your e-mail. You also probably use Google Calendar, which is probably the first, and possibly the only, web-based calendaring application on the Net that doesn't suck. If you're uncomfortable with e-mailing back and forth with a software construct, why are you okay with using shared calendars to negotiate meetings, dates, and other personal events, or with chatting with tech support bots? Why are you comfortable with letting a megacorp's gargantuan machine learning system manage your personal e-mail, but it's weird that someone might have a personal database and search engine managing theirs? I will carefully submit that, by using similar tools presented to you as services instead of building your own infrastructure, you as readers are just as much cyborgs as I am, only you're less aware of it because, as far as you're concerned, you're just clicking links in a web browser instead of building out new functionality on your own. There isn't anything wrong with not having your own infrastructure, by the way, but I do think it should be acknowledged that you make heavier use of technologies that extend who you are and what you do than you think you do.

I'll probably come back and revisit this post in the weeks to come, but for now I think that's about all I've got in the way of non-crunchy stuff. My next post in this series will be about how to set up Huginn and build some basic agent networks to experiment with.