Building on top of my first post about software agents, I'd like to talk about the history of the technology in reasonably broad strokes: not so broad that interesting details are lost (or misleading ones added), but not so narrow that we forget the forest while studying a single tree.
Anyway, software agents could be said to have their roots in UNIX daemons, dating back to the creation of UNIX at AT&T in the 1970s. On the big timesharing systems of the time, where multiple people could be logged into the same machine and working simultaneously without stepping on one another, it was observed that it was more efficient to split lower-priority functions out of the UNIX kernel into separate pieces of software. The kernel sits at the top of the hierarchy of system privileges: it manages all of the resources in the system and carries out privileged operations, such as interacting directly with the hardware and managing memory. The overhead of the kernel doing all of that, plus much less important work like monitoring the current system temperature (for example), would be considerable; so much so, in fact, that the system would bog down. Generally speaking, if something can be done without being part of the system core it should be, and the system as a whole becomes more efficient.

For example, rather than having the UNIX kernel constantly poll the machine's serial ports for keystrokes from users' terminals (back then, serial terminals were how you interacted with a timesharing system), it makes more sense to have a daemon called getty ("get TTY") listen on each serial port, grab characters as they arrive and pass them along to the user's shell, send output back through the serial port when appropriate, and only pester the kernel when it really has to. This is also why, instead of the system logger being built into the kernel, you'll find some variant of syslogd running in userspace (and it stays pretty busy, because it has to catch and write output from everything running in the background). Or you'll find a job scheduler called crond that executes commands, or batches thereof, on behalf of users and system administrators at timed intervals.
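To make the crond example a bit more concrete, here's a toy sketch (in Python, which obviously didn't exist back then) of the kind of schedule matching a cron-style daemon performs once a minute. The five-field format is real cron's, but the `matches` function is a simplified invention of mine for illustration; real implementations also support ranges, steps, and comma-separated lists in each field.

```python
# Toy illustration of crond-style scheduling: a five-field schedule
# (minute hour day month weekday) where "*" matches anything.
# Simplified for illustration; real cron supports ranges, steps, and lists.

def matches(schedule: str, minute: int, hour: int,
            day: int, month: int, weekday: int) -> bool:
    """Return True if the given time tuple satisfies the schedule string."""
    fields = schedule.split()
    values = (minute, hour, day, month, weekday)
    for spec, value in zip(fields, values):
        if spec != "*" and int(spec) != value:
            return False
    return True

# "Run at minute 30 of every hour, every day":
print(matches("30 * * * *", 30, 14, 1, 6, 2))  # True
print(matches("30 * * * *", 31, 14, 1, 6, 2))  # False
```

The daemon's job is then just a loop: wake up each minute, compare the current time against every installed schedule, and run whatever matches, all without the kernel having to know or care.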
In the late 1970s and through the 1980s, a great deal of academic research was done in the field of semi-autonomous software agents. This research came from a number of fields: AI, systems engineering, cybernetics, and distributed computation, to name a few. Ultimately, the field of software agents grew out of Carl Hewitt's Actor Model of computation (original paper). Hewitt postulated in his research computers with highly parallel architectures, on the order of hundreds to thousands of CPUs with independent memory, connected by high-bandwidth communication buses. He called this a processing fabric, which doesn't sound too different from today's grid computing architectures. The software that would run in such a massively parallel environment would logically need to be split up into individual modules called actors. Actors can be loosely compared to primitives in programming languages because they would be the atomic unit of computation in the processing fabric. Some actors would handle device I/O, other actors would be tasked with user interaction, and still others would carry out other kinds of tasks. These actors would communicate with each other over some sort of IPC protocol, passing information around in the form of events to carry out meaningful information processing. Actors could also spawn limited numbers of copies of themselves to better carry out some tasks, if they possessed that capability, and later terminate those duplicates. Most importantly, actors could act asynchronously, which is to say they made no assumptions about when events or actions would occur, and so could operate on a more or less independent basis. If it became necessary, and if they were designed to do so, actors could coordinate with each other to carry out tasks and share the information needed to do so. For these reasons, we would call Hewitt's individual software actors software agents.
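The model is easier to see in code than in prose. Here's a minimal sketch of the idea in Python: each actor owns a private mailbox and its own thread of control, receives events asynchronously, and interacts with other actors only by sending messages. The `Actor` class and the logger/doubler example are my own inventions for illustration, not anything from Hewitt's paper, and real actor systems add supervision, addressing, and actor-spawning on top of this core.

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox plus a thread that handles
    one message at a time, in arrival order."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        # Asynchronous: the sender never blocks waiting for the receiver.
        self.mailbox.put(message)

    def stop(self):
        # Sentinel tells the actor to finish its queued work and exit.
        self.mailbox.put(None)
        self.thread.join()

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:
                break
            self.handler(message)

# Two cooperating actors: one transforms events, one records them.
results = []
logger = Actor(lambda msg: results.append(msg))
doubler = Actor(lambda n: logger.send(n * 2))

for n in (1, 2, 3):
    doubler.send(n)

doubler.stop()   # drain the doubler's mailbox, then...
logger.stop()    # ...drain the logger's
print(results)   # [2, 4, 6]
```

Note that the doubler never touches the `results` list directly; everything flows through message passing, which is exactly what makes actors independent enough to scatter across a processing fabric.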
During this period of fairly intensive research in the field, multiple Ph.D. theses were written and defended. I won't list them all because there are so many that they'd probably double the length of this post, if not triple it. Additionally, quite a few of them are old enough that they're simply not online yet, and might not ever be; that's part of the problem with documents from twenty-plus years ago, but that's a post for another time, by an expert. Suffice it to say that when people are earning doctorates in semi-autonomous software agents, the technology's "made it," and there isn't much more to do than work with it.
Jumping forward to the mid- to late 1990s, the early generation of personal software agents appeared in the form of instant messaging and e-mail autoresponders. Essentially, if you had such an agent turned on and watching an IM account (ICQ, let's say) or your inbox, and somebody that you wanted to keep an eye out for (or, more likely, somebody you didn't want to communicate with) pinged you, the autoresponder would detect it and send them a message. In the former case it might send a message to the effect of "I'm sorry, I'm at work and can't talk right now, please leave a message"; in the latter it might reply with an e-mail that read "I'm on vacation until such-and-such date, if this is an emergency please call 212-555-4240" and go back to sleep. During that time there were also some IRC bots with a /MEMO function that let registered users send private messages to one another, and that could be set up to carry out automatic channel operations and actions in response to a user joining, leaving, or doing something in a channel. Writing this, I'm thinking fondly of some of the shenanigans that used to happen in an IRC channel I hung out in around that time... suffice it to say that you can do amazing things with just a little storytelling. Some personal spam filtering solutions of the time (Spamassassin comes to mind) could also be set up to police your inbox for you and raise the signal-to-noise ratio.
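The logic inside those autoresponders was simple enough to fit in a few lines. Here's a hypothetical sketch in Python of the core behavior: reply with a canned message, but only once per sender, so two autoresponders pointed at each other don't chatter back and forth forever. The `make_autoresponder` name and the direct function calls are my own for illustration; a real agent would be hooked into the IM or mail client's event stream.

```python
# Sketch of a vacation-style autoresponder. Names are hypothetical;
# a real one would be wired into an IM or mail client's events.

AWAY_MESSAGE = ("I'm sorry, I'm at work and can't talk right now, "
                "please leave a message.")

def make_autoresponder(away_message):
    already_replied = set()

    def on_message(sender, text):
        # Reply to each correspondent only once per away session,
        # so two autoresponders can't trap each other in a loop.
        if sender in already_replied:
            return None
        already_replied.add(sender)
        return away_message

    return on_message

respond = make_autoresponder(AWAY_MESSAGE)
print(respond("alice", "hey, you there?"))  # the away message
print(respond("alice", "hello?"))           # None (already answered)
```

Primitive as it is, this is agent behavior in miniature: it watches a channel on your behalf, keeps a little state, and acts without you being there.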
Unfortunately, as things were wont to go back then, personal agents were briefly a "hot new thing" in the late 1990s but were promptly forgotten when Y2k became the thing that everybody was afraid of. Additionally, it was discovered that personal agents didn't solve all the world's problems, so people immediately decided they were worthless and moved on. That attitude is, unfortunately, not uncommon on the Net, and it's wrong. Just because something does not salve all of the world's ills does not mean it's worthless; it means it's effective within its problem space.
That's about it for this update. The next one, which should be along in a week or two (because I have a presentation to write for a conference) will delve into how software agents work under the hood, what they seem to be good for, and some of their design issues.