On transhumanism.
I've been wrestling with this post for weeks now because, at its heart, transhumanism isn't a simple set of beliefs, actions, or ideas. It encompasses many disciplines, from cybernetics to engineering to computer science to biology and many things in between. I say that not as a cop-out but because practically every discipline is covered in some way and informs the body of knowledge somehow. It is also a deeply personal philosophy, often attracting adherents who attempt to lead by example as well as participate in the research, development, and deployment of the technologies which originally inspired it (such as neurology, computer science, cybernetics, and medicine).
I've also been concerned about the repercussions of discussing certain sensitive topics which are integral to transhumanism but which, elsewhere, go over very much like a magnesium suppository introduced to a blowtorch. Mostly I've been worried about things like genetic manipulation on a large scale (entire countries, entire populations) and curing certain diseases (or things that some people don't consider an affliction at all - for example, autism). These are hot-button topics for various reasons, and I have no desire to find myself roasted to a crisp across the Web. So, at the risk of sounding arrogant, I've decided to write this article with someone like myself in mind: thoughtful, curious, not prone to making snap decisions, willing to consider the drawbacks and potential hazards, and willing to not only listen to what someone has to say but also give them a chance to retract or clarify any stupid statements that they might make (especially in the event of a simple mistake).
There is also another topic which I've been wrestling with: the accessibility of the advanced technologies postulated by transhumanism. Technology has never been evenly distributed and I strongly doubt that it ever will be. Even today, access to medical care around the world is hit-or-miss, and whether or not you can pay for it is another thing entirely. Access to computers is better than it was when I was a child, but there are many millions of people around the world who don't own a computer or have access to related technologies of any kind. If and when practical nanotechnology is developed, it will not be widespread until someone figures out how to get it down to street level for people to hack around with. The point I'm trying to make is that transhumanism would probably not be the Great Equalizer, where everyone's lives are suddenly made better, all problems are fixed, and everyone has access to everything they need. Chances are, an entirely new set of problems will be created, and it will take many years for equilibrium to be reached. I do think, however, that if the capability to make optimal use of the resources someone has available to them (organic material, feedstock, modifiable parts, data storage, what have you) hits street level, it will go a long way toward leveling the playing field.
At its heart, transhumanism is a philosophy which holds that science and technology can be intelligently used to advance the human condition in a positive direction beyond what it is now. The capabilities of the human body, while amazing, could be extended beyond what even a Navy SEAL is capable of with sufficient re-engineering and prosthetic augmentation. Case in point: Paralympic sprinter Oscar Pistorius, who was initially barred from the 2008 Olympics because his prosthetic racing legs, constructed out of carbon fibre composite and lightweight alloys, were thought to give him an unfair advantage over other runners. It should be noted that Pistorius lost his legs around the age of one; his prosthetics were not the result of elective modification. In fact, they may not have provided much of an advantage at all, as they were passive rather than active prosthetics. The point I'm trying to make here is that if someone like Pistorius can achieve amazing things having never had normal legs, what would a relatively normal human being be capable of after augmentation? What new capabilities could be discovered?

Transhumanism also espouses research into biomedical technologies that would increase quality of life: the identification and correction of inherited diseases and predispositions, such as cystic fibrosis, multiple sclerosis, and propensities toward cancers of all kinds. Less critical genetic defects like nearsightedness, bad teeth, or allergies should also be corrected under this paradigm, on the theory that while they are not life-threatening, they too detract from quality of life for many. Now, before you go all Gattaca on me, stop and think about this for a minute: How much less would your life suck if you never again caught a cold or the flu? How much more would you enjoy life if you didn't need to take a handful of medication to get through the day, or didn't have to count your spoons to decide whether you were going to spend time with your kids or cook dinner but not both? What if you didn't have to decide how you were going to manage your sex life with your partner because the probability that you would have a child with Down's Syndrome was too high (for some value of "too high" personal to the would-be parents)?
Think some of these are uncomfortable questions? Thousands, maybe millions of people answer these and more like them every day, whether by deciding if they can get out of bed in the morning or by minimizing contact with the outside world at certain times of year because their immune systems are compromised. If the technology to qualitatively improve the everyday lives of people by healing (or preventing) chronic illness comes to be, why should it not be used? More to the point, who is to say that, if and when these advances are made available, the human race will not have finally gotten over its habit of persecuting people because someone sees them as different? If I can ask for a pony, I can also assume for the moment that one bright and shiny day the human race will decide that racism, sexism, and looking down on people who aren't as lucky or well off as you are no longer necessary, and dispense with such garbage in favor of other, more positive things. Maybe we're partway there.
On the other hand, this brings up an issue that isn't spoken of often, which happens to be an edge case of morphological freedom. If we postulate a hypothetical future in which personal modification and augmentation are far in advance of what we have now (elective genetic alteration and widespread use of prosthetics), what of people who choose to have no alterations made at all? No fully programmable immune system, no cranial computer, no intelligence augmentation, no removal of genes which make one more likely to develop chronic or terminal diseases... How would we handle people who don't want to be People v2.0?
When talking about H+ and bioengineering, life extension and the possibility of functional immortality are part of the package. Some practitioners in the field of genetic engineering hypothesize that, given sufficient knowledge and the capability to manipulate the genetic material of complex lifeforms, it would be possible to extend their lives incrementally, at some point rendering them functionally incapable of dying of disease or old age. Perhaps it is possible to engineer bodies that age so slowly that correcting entropic damage is a relatively easy operation; maybe one day a process, a set of implants, or a treatment of some kind will allow human bodies to remain in their prime (physiologically speaking) for as long as people want them to. If medical science ever advances to that level, it's a pretty safe bet that few people will be able to afford the procedures for years if not decades. It's not going to be cheap, and you can bet the last microchip on your motherboard that insurance companies won't cover it. Frankly, I have no idea what it would take for such a technology to become widespread, because it would turn how people live their lives upside-down. What I do feel I can safely say is that the rate of human reproduction will probably have to decrease, because a population that doesn't lose members will soon outstrip the land and resources available to it (a rough back-of-the-envelope sketch follows below). Also, wealth will remain in the hands of the people who originally had it, and their descendants won't get their hands on it easily, if at all. Like it or not, money has to be factored into the equation.
For the more materialistically inclined of you: if your parents never die, you won't inherit their money. Think about it.
The existing power structures will also have to adapt to a population with a far lower turnover rate than today's. There are a bunch of other problems that'll have to be solved, and it's a safe bet that problems no one's thought of yet will rear their ugly heads and have to be dealt with along the way.
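To put a rough number on the population claim above, here's a minimal back-of-the-envelope sketch in Python. The starting population and birth rate are illustrative round figures I've picked for the example, not demographic data:

```python
# Compound growth with deaths effectively removed: even a modest birth rate
# adds up quickly. Both numbers below are made-up round figures.
population = 7_000_000_000   # illustrative starting point
birth_rate = 0.01            # 1% births per capita per year, ~0% deaths

for years in (25, 50, 100):
    projected = population * (1 + birth_rate) ** years
    print(f"after {years} years: {projected / 1e9:.1f} billion")
# after 25 years: 9.0 billion
# after 50 years: 11.5 billion
# after 100 years: 18.9 billion
```

Nearly tripling in a century, with nobody leaving, is the kind of arithmetic that forces the reproduction rate downward.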
Going hand-in-hand with the topics of biomedical and genetic engineering is the field of intelligence enhancement: if not using the raw processing power of our brains as they are now to its fullest, then finding ways of augmenting the brain so that we can think more thoughts at once, think more complex thoughts, think much more rapidly, and communicate more complex ideas more effectively. This encompasses everything from mental exercises like those found at the Mentat Wiki, to working logic puzzles in your spare time for practice, to experimenting with nootropic drugs to tweak the operating parameters of the brain. Perhaps one day the human race will find ways to integrate computers with their brains to expand their minds, but I don't feel confident that this will happen within my lifetime. However, it could be argued that the portable information processing devices many of us carry around today, be it a PDA, a smartphone, a laptop computer, an ebook reader, USB storage, or some combination thereof, are the first manifestation of such augmentations. All of these devices allow us to offload data processing or storage to gadgets and thus free up compute cycles within our heads. While we don't yet have the technology to implant these devices inside our bodies, their user interfaces are sufficiently simple to learn quickly yet allow complex operations to be carried out within reasonable periods of time; thus, we haven't yet reached the point where implantation is necessary for their use. In other words, your PDA hasn't gotten so complex that it can only be controlled by thought alone.
I would also like to take this moment to point something out: the rate at which the world is changing is nothing if not blinding. There are few people alive right now who can make sense of more than a tiny fraction of everything going on. More and more information is being generated and made accessible every day, and at some point the technologies we use to manage and access it all (RSS feeds, search engines, microblogging services, link dumps, news tickers, and so forth) will no longer be enough to let us work with it in a reasonable fashion and still live our lives. I'm not talking about dinner dates and movie tickets, either; I'm talking about the knowledge we need to get through the day, to not get stuck in traffic for four hours, to know that something horrible has happened someplace, and to know that there is still a way to get through to people when the phone lines are down (yes, the day I couldn't get a dial tone on any of my POTS lines due to 9/11 still haunts my dreams). Also, the rate at which bodies of knowledge and technologies are advancing is such that at some point unaugmented minds will probably not be able to keep up (ask anyone who needs to assimilate large volumes of data to maintain their certifications), or to grasp the implications behind the use of these technologies, as well as those yet to come.
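As a small illustration of the kind of triage tooling I mean, here's a minimal sketch that skims a handful of RSS feeds and surfaces only the entries matching chosen keywords. The feed URLs and keywords are placeholders, and it assumes the third-party feedparser library (pip install feedparser):

```python
import feedparser

FEEDS = ["https://example.com/news.rss",
         "https://example.com/alerts.rss"]       # hypothetical feeds
KEYWORDS = ("outage", "traffic", "emergency")    # what I actually need today

for url in FEEDS:
    feed = feedparser.parse(url)                 # fetch and parse one feed
    for entry in feed.entries:
        # Match on title and summary text, case-insensitively.
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(word in text for word in KEYWORDS):
            print(entry.get("title"), "->", entry.get("link"))
```

Crude keyword matching like this is exactly the sort of filter that stops scaling once the volume of feeds outgrows your attention.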
This dovetails nicely with a topic that is guaranteed to touch off duels to the death in academic circles: the creation of sapient artificial intelligence. In recent years artificial intelligence has been redefined as an aspect of computer science involving the study and development of software which analyzes its environment and mission parameters and develops strategies that will maximize its chances of success. While this isn't easy by any stretch of the imagination (and anyone who tells you it is easy is either full of it or working with software models too simple to be useful in any practical fashion), early forms of this technology exist now and are implemented in many places. If you've ever heard of data mining, adaptive systems, or information forecasting, these fields can be thought of as the precursors of weak AI: software implementing data analysis and manipulation algorithms designed to solve one particular sort of problem, or to work with one type of data, well. These systems could cautiously be said to emulate certain aspects of intelligent thought. The software that tracks your purchases on Amazon and suggests other things you might be interested in could be said to be a weak AI (maybe); so could the software that looks at everything you buy at the supermarket with your membership card and prints out coupons for things you might be interested in purchasing later (the former, I find, is a hell of a lot more accurate than the latter). If you're talking about autonomous software that is sapient, or capable of exercising judgment and wisdom, self-determination, human-like free will (if you believe in that sort of thing), and creativity à la HAL or Kilburn, then that is referred to as strong AI.
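To make the "weak AI" idea concrete, here's a minimal sketch of item-to-item co-occurrence recommendation, one of the simplest techniques in that family. The purchase data is made up, and real recommenders (Amazon's included) are far more sophisticated:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: one set of items per customer.
purchases = [
    {"soldering iron", "multimeter", "flux"},
    {"multimeter", "flux", "heat gun"},
    {"soldering iron", "flux"},
]

# Count how often each pair of items appears in the same basket.
cooccurrence = defaultdict(lambda: defaultdict(int))
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        cooccurrence[a][b] += 1
        cooccurrence[b][a] += 1

def recommend(item, top_n=3):
    """Suggest the items most often bought alongside the given one."""
    ranked = sorted(cooccurrence[item].items(), key=lambda kv: -kv[1])
    return [name for name, _count in ranked[:top_n]]

print(recommend("soldering iron"))  # ['flux', 'multimeter']
```

Nothing in there exercises judgment; it just counts. That's the gap between weak AI and the strong variety.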
We don't have strong AI yet. Sometime around the mid-1980s researchers stopped saying that we'd have it real soon now, because we don't even understand what general intelligence is, how it's implemented in our gray matter, or how it functions. Without that knowledge we can't really begin to engineer systems which do the same sorts of things, let alone in the same ways. Personally speaking (and this is me speaking as a technologist who doesn't do AI research), I don't think that we can engineer algorithms which will give rise to a human-equivalent mind, though there are some scarily smart people who think they've got a handle on what it would take, and it won't be a Perl script. I fall into the evolutionary/emergence camp of AGI, which means that I think software and data systems will one day become so complex, software implementing massively parallel processing so common, and information analysis algorithms so ubiquitous, that at some point something which will identify itself (or be identified by others) as a sapient mind will spontaneously coalesce. If we reach that point, then perhaps we'll figure out how it works after the fact by reverse engineering it. Or maybe it'll be nice enough to construct new AGIs for us (progenitor AGIs) if we ask it politely.
Seeing as how I've just shown how little I really know about AI, I think I'll try to redeem myself by touching briefly on personal computing. In just twenty years, personal computers have transformed from arcane devices that only the hardest-core nerds had on their desks into commodity gear that you can even find on trash piles. It's hard to imagine life today without a personal computer of some kind, profiles on a couple of websites, a webmail account or two, an instant messenger handle or two (or five or six), and possibly a Flickr or Picasa album or six. Laptops are no longer expensive curiosities but indispensable tools, and failing that, a good smartphone has the processing power and software to edit simple documents, manage a half-dozen conversations (plus a voice call), and sometimes provide a gateway to the Net for nearby devices (depending on how kind your cell provider is or how much you want to hack your kit). With the advent of the netbook, practically everyone has access to all the processing power they need in a form factor comparable to a trade paperback. Shortly before the Web became a household term, online services started giving away free e-mail addresses for the asking; while the user interface was new, the idea of providing message services for people is actually a very old one which dates back to the dialup BBS era. Following the model of "make it free and not suck and they will buy accounts with extra features," other online services like LiveJournal and Dreamwidth started to supply webspace where people can maintain journals and communicate with one another. Not to be outdone, Google Docs means that you don't have to e-mail documents back and forth anymore; in fact, you might not even need word processing software. Now that it's possible to rent virtual machines from companies like Amazon that you can't tell from a machine under your desk, and to set up websites with effectively limitless disk space and bandwidth, you really don't need to run your own servers anymore. You can reach all of those systems from a terminal down at your local library or the nearest airport, if not from your cellular phone. The buzzword that refers to this collection of web applications is cloud computing (and I really should get around to writing an essay about it one of these days).
Heck, if you really wanted to, you could set up service accounts all over the Web and tie them together with a simple HTML page on Google Sites.
An often-spoken-of possible convergence of neurology, biotechnology, psychology, and computer technology (and I say 'possible' because there simply is no way to know for sure whether such a thing can be done) is uploading: the replication of the thoughts, memories, and identity of a living mind within a computer. Perhaps this will take the form of hardware in the fashion of flash memory and FPGAs. Maybe in the end it'll be possible to copy the contents of the brain in such a fashion that it'll run on the bare metal, just like an operating system. Recent conjecture and some research suggest that it's more likely an uploaded mind would execute as an application, in a manner similar to a web browser or mail client. While the prospect of staying online more or less permanently sounds appealing, there would likely be other benefits to existing as software. The mind's memory storage and retrieval mechanisms could operate atop a mechanism akin to the host computer's file system, strongly implying that the mind could effectively have eidetic recall. Probably the biggest advantage would be that the files comprising a mind, like any other files, could be copied and hence backed up; there is also the possibility (or frightening possibility, depending upon your point of view) that they could be patched and edited like any other executable... While it wouldn't be a perfect form of immortality, because if anything happened to the currently running copy of the mind the memories recorded since the last backup would be lost, you have to admit it's not a bad one either.
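As a toy illustration of those backup semantics (and nothing more), here's a minimal sketch in which the "mind" is just a Python object whose state can be serialized; everything recorded after the last snapshot is lost on a restore. Every name here is hypothetical:

```python
import pickle

class UploadedMind:
    """Stand-in for an uploaded mind; its state is just a list of memories."""
    def __init__(self):
        self.memories = []
        self.last_backup = None

    def experience(self, event):
        self.memories.append(event)        # the running copy accumulates state

    def backup(self):
        self.last_backup = pickle.dumps(self.memories)  # copy it like any file

    def restore(self):
        # Rebuild from the last backup; anything recorded since is gone.
        self.memories = pickle.loads(self.last_backup)

mind = UploadedMind()
mind.experience("learned to solder")
mind.backup()
mind.experience("Tuesday's conversation")  # never backed up
mind.restore()                             # the running copy "died"
print(mind.memories)                       # ['learned to solder']
```

The gap between the last backup() and the crash is exactly the imperfection in this form of immortality.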
You can't really discuss transhumanism without getting into the subject of nanotechnology, or the manipulation of matter on the scale of individual atoms to build things, rework the body, implement ubiquitous surveillance...
Nanotech, if it ever becomes a feasible technology, will not be a panacea for everything that ails the human race. Neither will it be the end of the human race. The same things were said about electricity, X-rays, genetic engineering, and atomic energy, and while there are significant risks inherent to those technologies, they have also brought about significant advances in the human condition, and I see no reason why nanotechnology would be any different. In the twenty-first century we are just now taking baby steps toward manipulating individual atoms with tools like focused electron beams, atomic force microscopes, lithographic techniques, and scanning tunneling microscopes. We can make buckyballs and buckytubes pretty easily nowadays. We can fabricate circuitry with features as small as 65 nanometers (65 billionths of a meter), and AMD and IBM are working on fabrication techniques that go down to 45 nanometers. Our control over things at that scale is still pretty crude; we do not yet have robotic manipulators which can pick up atoms of the right type (carbon, silicon, oxygen, what have you) individually and position them. Rather than go out on a limb, I'll be honest and say that I have no idea when, or even if, those will be developed. Various and sundry people with more letters after their names than I have in my full name have, over the years, given various dates for the widespread implementation of nanotechnology, but thus far those dates have either come and gone uneventfully or remain perpetually a few short years away... and given the human race's lack of skill at manipulating individual atoms, relative to everything else we can do, I think there's a pretty good chance that the longer-term estimates aren't going to pan out, either.
What we do have right now are rapid prototyping and fabrication technologies that take feedstock of some kind (aluminum, polylactic acid, ABS plastic, steel, titanium alloy) and construct solid pieces by extruding and bonding the material (deposition), reshaping it, or cutting, lathing, and drilling it (milling). Those pieces are then assembled by another robot or a person. Rapid fabrication actually has its roots in the manufacture of topographic relief maps, though years later the basic principles were applied to the manufacture of industrial and transportation machinery (remember those "Science and Technology!" educational films that the Dravo Corporation used to produce for schoolkids?), and it has since evolved into fully operational CNC setups about the size of a desk that you could easily put in your basement or garage, though the cost of the research models alone starts in the five-digit range. I have no idea what the full-scale industrial fabbers go for, but you can bet that the cost would be equivalent to buying a house in northwestern DC (and about as difficult to pull off). However, like any good technology, rapid prototyping and fabrication hit street level a couple of years ago. Open source fabrication systems like the RepRap and Fab@Home, which use readily available feedstock materials like silicone gel and ABS plastic filament, can be constructed for less than $500 US, but right now the downside is that you need considerable mechanical and electrical engineering skill to build one. I've worked on a RepRap and it was a humbling experience indeed. I am told that the MakerBot, which is based on the RepRap, is easier to construct (thus lowering the bar to entry) and geared more toward hackability (new toolheads, more robustness, more modularity), trading away cost ($750 US) and the ability to replicate some of its own components. While not everyone may eventually have a fabber in their home, I do think it is possible that in the future there will be outlets with a number of fabbers on site and access to databases of blueprints for things that could be made from a combination of milling and fusing feedstock of some kind - why ship hinges, nails, or pipes when you could fabricate them locally at a fraction of the cost, and built to spec, too?
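For a sense of what a deposition fabber actually consumes, here's a minimal sketch that emits G-code (the movement language RepRap-class machines speak) tracing a small square, layer by layer. The coordinates, feed rate, and extrusion factor are made-up illustrative numbers; real slicing software handles vastly more (infill, temperatures, retraction, calibration):

```python
def square_layer(x0, y0, size, z, e, feed=1200):
    """Return G-code lines tracing one square outline at height z, plus the
    updated cumulative extrusion value e (roughly proportional to path length)."""
    corners = [(x0, y0), (x0 + size, y0),
               (x0 + size, y0 + size), (x0, y0 + size), (x0, y0)]
    lines = [f"G1 Z{z:.2f} F{feed}",                   # raise the head to this layer
             f"G0 X{corners[0][0]} Y{corners[0][1]}"]  # travel move, no plastic
    for x, y in corners[1:]:
        e += size * 0.05                               # made-up filament-per-mm factor
        lines.append(f"G1 X{x} Y{y} E{e:.2f}")         # extrude along this edge
    return lines, e

# Stack three 0.3 mm layers to sketch a hollow square tube.
e = 0.0
for layer in range(3):
    lines, e = square_layer(10, 10, 20, z=0.3 * (layer + 1), e=e)
    print("\n".join(lines))
```

Everything a fabber builds reduces to streams of moves like these, which is why shipping a blueprint instead of a crate of hinges starts to make economic sense.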
In transhumanism, there is a sizable contingent that believes something called the Singularity will eventually come to pass. The Singularity is said to be a time when everything we know about the world will change: wars will stop (or begin), people will henceforth live forever (or the human race will be annihilated), an artificial general intelligence with the processing power of a god will spring into existence and solve all of our problems in the blink of an eye (or annihilate us), and/or we'll upload our minds, abandon our bodies, and explore cyberspace until the eventual heat death of the universe unravels everything that exists. Usually some combination of the above is postulated by singularitarians, with a timeframe ranging from the year 2000 (obviously, that didn't happen), to 2010 (I'm seeing no signs of a singularity event from where I'm sitting), to 2012, to 2030, to... you get the point. The news media, since covering the Singularity Summit a few weeks ago, has taken to calling this the Rapture of the Nerds (though the phrase goes back quite a few years), and most of the coverage hasn't been particularly flattering.
I think I should come clean: I don't think there will be a singularity.
The process of evolution, be it natural, directed, or somehow accelerated, isn't an instantaneous one. Even though technology changes rapidly, and the fruits of our advances permit us to reach farther, travel longer, learn more, see in more detail, and accomplish more, faster, these changes take time to spread and take hold. Mutations in genomes, be they for better or for worse, require several generations to propagate through a population before they become ingrained in a species. The invention of a new technology doesn't actually change a whole lot in the grand scheme of things until it gets out of the lab, into the factory, and then onto the street, where people can use and hack around with it. New technologies also have to hit a sufficiently low price for a critical mass of people to get hold of them. While medical science is sufficiently advanced to allow people to reach ages unheard of just fifty years ago (my grandfather's 93 years old and just now starting to slow down), our understanding of life and its processes is still insufficient to reliably alter even single genes in a living creature, let alone halt the process of senescence. Rational drug design is an infant science. Our understanding of how the brain, let alone the mind, works isn't yet enough to bring us any closer to the development of an AGI, much less one capable of reprogramming itself fast enough to become... something else.
While the effects of something can be felt around the world (like the first use of the atomic bomb in World War II), the resulting changes take time to manifest. It isn't like you can throw a switch and suddenly everything in the world has taken a turn for the better, or the worse for that matter. Maybe I'm being a killjoy, but I don't think it's realistic to wait for a questionably defined future event to save us; if the human condition is going to improve, it has to start at street level, with people working toward the same eventual goal: making the world a better place.