A 3D printed laser cutter, aerosol solar cells, and reversing neural networks.

3D printers are great for making things, including more of themselves. The first really accessible 3D printer, the RepRap, was designed to be buildable from locally sourceable components - metal rods, bolts, screws, and wires - with the rest run off on another 3D printer. There is even a variant called the JunkStrap which, as the name implies, involves repurposing electromechanical junk for basic components. There are other useful shop tools which don't necessarily have open source equivalents, though, like laser cutters for precisely cutting, carving, and etching solid materials.

Lasers are finicky beasts - they require lots of power, they need to be cooled so they don't fry themselves, they can produce toxic smoke when firing (because whatever they're burning oxidizes), and if you're not careful the other wavelengths of light produced when they fire can damage your eyes permanently. All of that said, they're extremely handy tools to have around the shop, and can be as easy to use as a printer once you know how (protip: take the training course more than once. I took HacDC's once and I don't feel qualified to operate their cutter yet).

Cutting to the chase (way too late): someone on Thingiverse using the handle Villamany has created an open source, 3D printable laser cutter out of recycled components. Called the 3dpBurner, it's an open frame laser cutter that takes after the RepRap in a lot of ways (namely, it was originally built out of recycled RepRap parts) and is something that a fairly skilled maker could assemble in a weekend or two, provided that all the parts were handy. Villamany has documented the project online to assist in assembly, and makes a point of warning everyone that this is a potentially dangerous project and that proper precautions should be taken when testing and using it.
Not included yet are plans for building a suitable safety enclosure for the unit, so my conscience will not let me advise that anyone try building one just yet; this is way out of my league so it's probably out of yours, too. That said, the 3dpBurner uses fairly easy-to-find high power chip lasers to do the dirty work; if this sounds far-fetched, people have been doing this for a while, and to good effect at that. For a brain, the 3dpBurner uses an Arduino running GRBL, firmware designed as a more-or-less universal CNC controller, to drive the motors.
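GRBL speaks plain G-code over a serial link, so driving the 3dpBurner ultimately comes down to generating and streaming text commands. A minimal sketch of generating a rectangular cut path (using M3/M5 for laser on/off is a common GRBL convention, but the power and feed rate values here are my assumptions, not Villamany's actual settings):

```python
def rectangle_gcode(width_mm, height_mm, feed_rate=200, laser_power=1000):
    """Generate G-code for a rectangular laser cut starting at the origin.

    Assumes a GRBL setup where M3 S<n> switches the laser on at a given
    power level and M5 switches it off -- check your machine's config first.
    """
    corners = [(width_mm, 0), (width_mm, height_mm), (0, height_mm), (0, 0)]
    lines = [
        "G21",                 # millimeter units
        "G90",                 # absolute positioning
        "G0 X0 Y0",            # rapid move to origin, laser off
        f"M3 S{laser_power}",  # laser on
    ]
    for x, y in corners:
        lines.append(f"G1 X{x} Y{y} F{feed_rate}")  # cutting move
    lines.append("M5")         # laser off
    return "\n".join(lines)

print(rectangle_gcode(40, 20))
```

Streaming these lines to the Arduino is then just a matter of writing them one at a time over the serial port (e.g. with pyserial) and waiting for GRBL's `ok` acknowledgment before sending the next.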

If you want to download the greyprints for it you can do so from its Thingiverse page. I also have a mirror of the .stl files here, in case you can't get to Thingiverse from wherever you are for some reason. I've also put mirrors of the latest checkout of the GRBL source code and associated wiki up just in case; they're clones of the Git repositories so the entire project history and documentation are there. You're on your own for assembly (right now) due to the hazardous nature of this project; get in touch with Villamany and get involved in the project. It's for your own good.

Electronic toys are nice - I've got 'em, you've got 'em, they pretty much drive our daily lives - but, as always, power is a problem. Batteries run out at inconvenient times and it's not always possible to find someplace to plug in and recharge. Solar power is one possible solution, but to get any real juice out of solar panels they need to be fairly large, usually larger than the device you want to power. Exploiting peculiar properties of semiconductors on the nanometer scale, however, seems promising. This next bit was first published last summer but it's only recently gotten a little more love in the science news. Research teams collaborating at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto and IBM Canada's R&D Center are steadily breaking new ground on what could eventually wind up being cheap and practical aerosol solar cells for power generation. Yep, aerosol as in "spray on." A little bit of background so this makes sense: Quantum dots are basically crystals of semiconducting compounds that are nanoscopic in scale (their sizes are measured in billionths of a meter), small enough that depending on how you treat them they act like either semiconducting components (like those you can comfortably balance on a fingertip) or individual molecules. Colloidal quantum dots are synthesized in solution, which means they readily lend themselves to being layered on surfaces via aerosol deposition, at which time they self-organize just enough that you can do practical things with them. Like convert a flow of photons into a flow of electrons - generate electrical power, in other words. The research team has figured out how to synthesize lead-sulfide colloidal quantum dots that don't oxidize in air but can still generate power.
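The size dependence comes from quantum confinement: squeeze a crystal down toward the nanometer scale and its bandgap widens, which is how dot size tunes what light a dot absorbs. A crude particle-in-a-box estimate of the effect (the effective-mass ratio below is a rough ballpark I'm assuming for lead sulfide, not a figure from the Toronto team):

```python
import math

H = 6.626e-34      # Planck's constant, J*s
M_E = 9.109e-31    # electron rest mass, kg
EV = 1.602e-19     # joules per electron volt

def confinement_shift_ev(diameter_nm, eff_mass_ratio=0.085):
    """Estimate the bandgap widening (in eV) for a quantum dot of the
    given diameter, treating the electron and hole as particles in a
    box with equal effective masses (a very crude approximation)."""
    d = diameter_nm * 1e-9
    per_carrier = H ** 2 / (8 * eff_mass_ratio * M_E * d ** 2)
    return 2 * per_carrier / EV  # electron + hole contributions

for nm in (2, 5, 10):
    print(f"{nm:2d} nm dot: ~{confinement_shift_ev(nm):.2f} eV shift")
```

The point of the sketch is the trend, not the absolute numbers: halve the dot diameter and the confinement energy quadruples, so smaller dots absorb bluer light.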
Right now they're only around 9% efficient; most solar panels are between 11% and 15% efficient, with the current world record of 44.7% efficiency held by the Fraunhofer Institute for Solar Energy Systems' concentrator photovoltaics. They've got a ways to go before they're comparable to solar panels that you or I are likely to get hold of but, the Fraunhofer Institute aside, 9% and 11% aren't that far apart, and they've improved their techniques somewhat in the intervening seven months. Definitely something to keep an eye on.
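To put those efficiency numbers in perspective, a back-of-the-envelope calculation of the collecting area needed to run a given load (assuming the standard ~1000 W/m² peak-sun irradiance figure):

```python
def area_needed_m2(load_watts, efficiency, irradiance=1000.0):
    """Collecting area (m^2) needed to supply a load at peak sun."""
    return load_watts / (irradiance * efficiency)

# A 10 W load (roughly a phone fast-charging):
for eff in (0.09, 0.15, 0.447):
    print(f"{eff:.1%} efficient: {area_needed_m2(10, eff) * 1e4:.0f} cm^2")
```

At 9% efficiency a 10 W load needs about 0.11 m² of sprayed-on cell, so the real draw isn't density, it's being able to coat arbitrary surfaces cheaply.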

Image recognition is a weird, weird field of software engineering, involving pattern recognition, signal analysis, and a bunch of other stuff that I can't go into because I frankly don't get it. It's not my field so I can't really do it any justice. Suffice it to say that the last few generations of image recognition software are pretty amazing and surprisingly accurate. This is due in no small part to advancements in the field of deep learning, the part of artificial intelligence which attempts to build software systems that work much more like the cognitive processes of living minds. Techniques encompass everything from statistical analysis to artificial neural networks (learning algorithms designed after the fashion of successive layers of simulated neurons) to even more rarefied and esoteric techniques. As for how they actually work when you pop the hood open and go digging around in the engine, that's a very good question. Nobody's really sure how software learning systems work, just like nobody's really sure how the webworks of neurons inside your skull do what they do, but the nice thing is that you can dissect and observe them in ways that you can't organic systems. Recently, research teams at the University of Wyoming and Cornell have been experimenting with image analysis systems to figure out just how they function. They took one such system called AlexNet and did something not many would probably think to do - they asked it what it thought a guitar looked like. Their copy of AlexNet had never been trained on pictures of guitars, so when it dumped its internal state to an image file the result, unsurprisingly, didn't look anything like a guitar. The contents of the file looked more like Jackson Pollock trying his hand at game glitching.

The next phase of the experiment involved taking a copy of AlexNet that had been trained to recognize guitars and feeding it that weird image generated by the first copy. They took the confidence rating from the trained copy of AlexNet (roughly, how much it thought its input resembled what it had been trained on) and fed that metric into the first, untrained copy, which they then asked again what it thought a guitar looked like. They repeated this cycle thousands of times over until the first instance of AlexNet had essentially been trained to generate images that could fool other copies of AlexNet, and the second copy of AlexNet was recognizing the graphical hash as guitars with 99% confidence. What the results of this idiosyncratic experiment suggest is that image recognition systems don't operate like organic minds. They don't look at overall shapes or pick out the strings or the tuning pegs; they look for things like clusters of pixels with related colors, or abstract patterns of color relationships. In short, they do something else entirely. This does and does not make sense when you think about it a little. On one hand, we're talking about software systems that at best only symbolically model the functionality of their corresponding inspirations. Organic neural networks tend not to be fully connected while software neural nets are. There's a lot going on inside of organic neurons that we aren't aware of yet, while the internals of individual software neurons are pretty well understood. The simplest are individual cells in arrays, and the arrays themselves have certain constraints on the values they contain and how they can be arranged. On the other hand, what does that say about organic brains? If software neural nets are to be considered reasonable representations of organic nets, just how much complexity is present in the brain, and what does all of it do? How many discrete nets are there, or is it one big mostly-connected network?
How much complexity is required for consciousness to arise, anyway, let alone sapience?
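The feedback loop the researchers describe can be sketched as a simple hill climb: mutate an image, keep the mutation if the trained model's confidence goes up, repeat. (The actual work used evolutionary algorithms against a real deep net; the "classifier" below is a trivial stand-in so the loop is runnable, not anything resembling AlexNet.)

```python
import random

def mutate(image, rng, step=0.1):
    """Return a copy of the image with one pixel nudged."""
    out = list(image)
    i = rng.randrange(len(out))
    out[i] = min(1.0, max(0.0, out[i] + rng.uniform(-step, step)))
    return out

def evolve_fooling_image(score, n_pixels=64, iterations=5000, seed=42):
    """Hill-climb toward an image the scorer rates highly.

    `score` stands in for a trained classifier's confidence that the
    image is, say, a guitar -- in the real experiment this was a deep
    net, and the images it ended up loving looked like static to us.
    """
    rng = random.Random(seed)
    image = [rng.random() for _ in range(n_pixels)]
    best = score(image)
    for _ in range(iterations):
        candidate = mutate(image, rng)
        s = score(candidate)
        if s > best:            # keep only improvements
            image, best = candidate, s
    return image, best

# Toy "classifier": confidence is just mean brightness.
toy_score = lambda img: sum(img) / len(img)
img, conf = evolve_fooling_image(toy_score)
print(f"final confidence: {conf:.3f}")
```

The unsettling part is that nothing in the loop cares what the image looks like to a human; it only cares what moves the confidence number, which is exactly how you end up with 99%-confidence "guitars" that look like noise.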

The Doctor | 09 January 2015, 09:30 hours | default | One comment

A couple of thoughts on microblogging.

The thing about microblogging - services which allow posts that are very short (around 140 characters) and are disseminated in the fashion of a broadcast medium - is that it lends itself to fire-and-forget posting. See something, post it, maybe attach a photograph or a link, and be done with it. If your goal is to get information out to lots of people at once, leveraging one's social network is critical: Post something, a couple of the users following you repost it so that more people see it, a couple of their followers repost it in turn... like ripples on the surface of a pond, information propagates across the Net like radio waves through the air. Unfortunately, this also lends itself to people taking things at face value. By just looking at the text posted (say, the title of an article) without following the link and reading the article, it's very easy for people to let the title or the text mislead them. News sites call this clickbait, and either quietly rely on it (because the goal is ad impressions, not decent articles) or religiously swear against using it and put forth the effort to write articles that don't suck.
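The ripples-on-a-pond effect is just exponential branching: if some fraction of each audience reposts, reach grows geometrically with each hop. A quick sketch (the follower count and repost rate are made-up illustrative numbers, and the no-audience-overlap assumption is wildly optimistic):

```python
def cumulative_reach(followers, repost_rate, hops):
    """Accounts that have seen a post after `hops` rounds of reposting,
    assuming every reposter has the same average audience size and no
    two audiences overlap (they always do in practice)."""
    reach = followers              # your own followers see it first
    reposts = followers * repost_rate
    for _ in range(hops):
        reach += reposts * followers          # each reposter's audience
        reposts = reposts * followers * repost_rate
    return int(reach)

print(cumulative_reach(200, 0.01, 3))  # 3000 accounts after three hops
```

Even with a 1% repost rate, three hops multiplies the audience fifteenfold over your own followers, which is why a misleading headline doesn't need many credulous reposters to go everywhere.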

There is another thing that is worth noting: Microblogging sites like Twitter also carry out location-based trend analysis of what's being posted and offer each user a list of the terms that are statistically significant near them. It's a little tricky to get just trending terms but sometimes you can make an end run with the mobile version of the site. By default trending terms are tailored to the user's history and perceived geographic location, but this can be turned off. At a glance it's very easy to look at whatever happens to be trending, check out the top ten or twenty tweets, and not bother digging any deeper because that seems to be what's happening. However, that can be misleading in the extreme for several reasons. First of all, as mentioned earlier trending terms are regional first and foremost - just because your neighborhood seems boring and quiet doesn't mean that the next town over isn't on fire and crying for help. Second, it's already known that regional censorship is being practiced to keep certain bits of information completely away from certain parts of the world without resorting to "block the site entirely" censorship tactics used in some countries. Of course, the reverse is also true: It's possible to manipulate trends to make things pop to the surface, either to ensure that something gets seen (in the right way, possibly) or to push other terms off the bottom of the trending terms list.

For some time I've been writing and deploying bots that interface with Twitter's user API, the service they offer which makes it possible to write code that interacts with their back end directly without having to write a scraper that loads a page, parses the HTML, and susses out the stuff I'm interested in. Scraping is ugly, unreliable, and a real pain in the ass, and I'd much rather resort to it as a last-ditch measure, if at all. Anyway, one of the things my bots do is interface with Twitter's trending-terms-by-location API as well as Twitter's keyword search API, download anything that fits their criteria, and then run their own statistical analysis to see if anything interesting shakes out. If their sensor nets do see anything I get paged in various ways depending on how serious the search terms are (ranging from "send an e-mail" to "generate speech and call me"). Sometimes it's the e-mails that wind up being the most interesting.
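The statistical-analysis half of that pipeline doesn't need to be fancy: flagging a term whose hourly frequency jumps several standard deviations above its recent baseline catches most of the interesting stuff. A stripped-down sketch of the approach (an illustration, not my bots' actual code):

```python
from statistics import mean, stdev

def spike_score(history, current):
    """Z-score of the current hourly count against recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (current - mu) / sigma

def interesting_terms(counts, threshold=3.0):
    """counts: {term: (list of past hourly counts, current count)}.
    Returns the terms whose current count is a statistical outlier."""
    return [term for term, (hist, now) in counts.items()
            if spike_score(hist, now) >= threshold]

counts = {
    "weather": ([40, 45, 38, 42, 41], 44),   # business as usual
    "fire":    ([2, 3, 1, 2, 2], 120),       # something's happening
}
print(interesting_terms(counts))  # ['fire']
```

The baseline-relative part is the point: a term that's always popular never trips the alarm, while a normally quiet term that suddenly spikes does, regardless of its absolute volume.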

More under the cut...

The Doctor | 07 January 2015, 09:30 hours | default | No comments

Linux on the Dell XPS 15 (9530)

Midway through December of 2014 Windbringer suffered a catastrophic hardware failure following several months of what I've come to term the Dell Death Spiral (nontrivial CPU overheating even in single user mode, flaky wireless, failing USB3 ports, failing USB2 ports, complete system collapse). Consequently I was in a bit of a scramble to get new hardware, and after researching my options (as much as I love my Inspiron at work, they don't let you finance purchases) I spec'd out a brand new Dell XPS 15.

Behind the cut I'll list Windbringer's new hardware specs and everything I did to get up and running.

More under the cut...

The Doctor | 05 January 2015, 09:00 hours | content | No comments

Speakers' Bureau Contact Page

I now have a contact page for the Brighter Brains Speakers Bureau. If you are interested in having me present on a professional basis, please look over my bio and contact me through that route. We'll work it out from there.

More under the cut...

The Doctor | 02 January 2015, 19:39 hours | default | No comments

Happy 2015, everyone.

Happy New Year, everyone.

I'll have more of a benediction after I wake up some more...

The Doctor | 01 January 2015, 16:42 hours | default | No comments

Merry Christmas and a Joyous Yule, everyone.

May all your toys come with batteries, your books have ample margins for note taking, your clothes be just what you like to wear, and your chance to sleep in be long enough to get a good night's rest.

The Doctor | 25 December 2014, 09:00 hours | default | One comment

Fabbing tools in orbit and with memory materials, and new structural configurations of DNA.

A couple of weeks ago, before Windbringer's untimely hardware failure, I did an article about NASA installing a 3D printer on board the International Space Station and running some test prints on it to see how well additive manufacturing - stacking successive layers of feedstock atop one another to build up a more complex structure - would work in a microgravity environment. The answer is "quite well," incidentally. Well enough, in fact, to solve the problem of not having the right tools on hand. Let me explain.

In low earth orbit, if you don't have the right equipment - a hard drive, replacement parts, or something as simple as a hand tool - it can be months until the next resupply mission arrives and brings with it what you need. That could be merely inconvenient or it could be catastrophic, depending on the situation. Not too long ago Barry Wilmore, one of the astronauts on board the current ISS mission, mentioned that the ISS needed a socket wrench to carry out some tasks on board the station. Word was passed along to Made In Space, the California company which designed and manufactured the 3D printer installed on board the ISS. So, they designed a working socket wrench using CAD software groundside, converted the model into greyprints compatible with the 3D printer's software, and e-mailed them to Wilmore aboard the ISS. Wilmore ran the greyprints through the ISS' 3D printer. End result: A working socket wrench that was used to fix stuff in low earth orbit. One small step for 3D printing, one giant leap for on-demand microfacture.

In other 3D printing news, we now have a new kind of feedstock that can be used to fabricate objects. In addition to the ABS and PLA plastics for home printers, and any number of alloys used for industrial direct metal laser-sintered fabbers, there is something that we could carefully count as the first memory material suitable for additive manufacture. Kai Parthy, who has invented nearly a dozen different kinds of feedstock for 3D printers (and counting), has announced his latest invention, a viscoelastic memory foam. Called Layfoam and derived from his line of PORO-LAY plastics, you can run it through a 'printer per usual, but after it sets you can soak the object in water for a couple of days and it becomes pliable like rubber without losing much of its structural integrity. This widens the field of things that could potentially be fabbed, including devices for relieving mechanical strain (like washers and vibration-damping struts), custom padding and cushioning components, protective cases, and, if bio-neutral analogues are discovered in the future, possibly even soft medical implants of the sort that are manufactured out of silicone now.

In the mid-20th century the helical structure of deoxyribonucleic acid, the massive molecule which encodes genomes, was discovered. While there are other conformations of DNA that have been observed in the wild, only a small number of them are actually encountered in living things. Its data storage and error correction properties aside, one of the most marvelous things about DNA is that it's virtually self-assembling. A couple of weeks ago a research team at MIT published a paper entitled Lattice-Free Prediction of Three Dimensional Structures of Programmed DNA Assemblies in the peer-reviewed journal Nature Communications. The research team developed an algorithm into which they can input a set of arbitrary parameters - molecular weights, atomic substitutions, microstructural configurations - and it'll calculate what shape the DNA will take on under those conditions. Woven disks. Baskets. Convex and concave dishes. Even, judging by some of the images generated by the research team, components of more complex self-assembling geometric objects could be synthesized (or would that be fabricated?) at the nanometer scale. Applications for such unusual DNA structures remain open: I think there is going to be a period of "What's this good for?" experimentation, just as there was for virtual reality and later augmented reality, but it seems safe to say that most of them will be biotech-related. Perhaps custom protein synthesis and in vivo gengineering will be involved, or perhaps some other applications will be devised a little farther down the line.

The best thing? They're going to publish the source code for the algorithm under an open source license so we all get to play with it.

Welcome to the future.

The Doctor | 24 December 2014, 09:30 hours | default | No comments

I don't think it was North Korea that pwned Sony.

EDIT: 2014/12/23: Added reference to, a link to, and a local copy of the United Nations' Committee Against Torture report.

I would have written about this earlier in the week when it was trendy, but not having a working laptop (and my day job keeping me too busy lately to write) prevented it. So, here it is:

Unless you've been completely disconnected from the media for the past month (which is entirely possible, it's the holiday season), you've probably heard about the multinational media corporation Sony getting hacked so badly that you'd think it was the climax of a William Gibson story. As near as anybody can tell the entire Sony corporate network, in every last office and studio around the world, doesn't belong to them anymore. A crew calling itself the GoP - Guardians of Peace - took credit for the compromise. From what we know of the record-breaking incident it probably took years to set up, and may have been an inside job if only because of the astounding amount of data that has been leaked online, possibly in the low terabyte range. From scans of famous actors' passports to executives' e-mail spools, to crypto material already being used to sign malware to make it more difficult to detect, more and more sensitive internal documents are winding up free for the downloading on the public Net.

The US government publicly accused North Korea of the hack and is calling it an act of war. This was immediately parroted by the New York Times and NBC.

I don't think North Korea did it.

I think they're lying, and the public accusation that North Korea did it is jetwash. Bollocks. Bullshit. In the words of one of Eclipse Phase's more notorious world building devices, the MRGCNN, LLLLLIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEESSSSSSSSSSSSSSSSSSSSSSSSSSSS!!!!!

Beneath the cut are my reasons for saying this.

More under the cut...

The Doctor | 22 December 2014, 09:00 hours | default | Three comments

A friendly heads-up from work.

Windbringer experienced an unexpected and catastrophic hardware failure last night after months of limping along in weird ways (the classic Dell Death Spiral). My backups are good and I have a restoration plan, but until new hardware arrives my ability to communicate is extremely limited. Please be patient until I get set up again.

The Doctor | 12 December 2014, 18:26 hours | default | No comments

Repurposing memes for presentations.

I'm all for people reading, listening to, and watching the classics of any form of media. They're the basic cultural memes that so many other cultural communications are built on top of and occasionally riff on, and that we all seem to silently recognize whether or not we know where they're from or the context they originally had. You may not know who the Grateful Dead are or recognize any of their music (I sure don't), but if you're a USian chances are that you've at least seen the new iterations of the hippie movement and recognize the general style affected by adherents thereof, due to the significant overlap between the two. Most of us, at one point or another, recognize scenes from Romeo and Juliet on television even though we may not have read the play or seen a stage production thereof. They're all around us, like the air we breathe or the water fish swim in (whether or not fish are actually aware that they swim in something called water isn't something I intend to touch on in this post).

More under the cut for spoilers, because I'm feeling nice.

More under the cut...

The Doctor | 06 December 2014, 14:11 hours | default | No comments

Robotic martial artists, security guards, and androids.

Quite possibly the holy grail of robotics is the anthroform robot, a robot which is bipedal in configuration, just like a human or other great ape. As it turns out, it's very tricky to build such a robot without it being too heavy or having power requirements that are unreasonable in the extreme (which only exacerbates the former problem). The first real success in this field was Honda's ASIMO in the year 2000.ev, which most recently uses a lithium-ion power cell that permits one hour of continuous runtime. ASIMO is also, if you've ever seen a live demo, somewhat limited in motion; it can't really jump, raise one leg very far, or run as fast as an average human at a gentle trot. That said, recent Google acquisition Boston Dynamics (whom many like to make fun of because of the BigDog demo video) just published a demo video for the 6-foot-2-inch, 330-pound anthroform robot called Atlas that children of the 80's will undoubtedly find just as amusing. To demonstrate Atlas' ability to balance stably under adverse conditions (to wit, atop a narrow stack of cinder blocks) they had Atlas assume the crane stance made famous in the movie The Karate Kid. If it seems like a laboratory "gimme," I challenge you to try it and honestly report in the comments how many times you topple over. Yet to come is the jumping and kicking part of the routine, but it's Boston Dynamics we're talking about; if they can figure out how to program a robot to accurately throw hand grenades they can figure out how to get Atlas to finish the stunt. I must point out, however, that Atlas is a tethered robot thus far - the black umbilical you see in the background appears to be a shielded power line.

In related news, a video shot at an exhibit in Japan popped up on some of the more popular geek news sites earlier this week. The exhibit is of two ABB industrial manipulators inside a lexan arena showing off the abilities of their programmers as well as their precision and delicacy by sparring with swords. The video depicts the industrial robots each drawing a sword and moving in relation to one another with only the points touching, then demonstrating edge-on-edge movements and synchronized movements in relation to one another, followed by replacing the swords in their sheaths. If you watch carefully you can even tell who the victor of the bout was.

A common trope in science fiction is the security droid, a robotic sentry that seemingly exists only to pin down the protagonists with a hail of ranged weaponfire or send a brief image back to the security office before being taken out by said protagonists to advance the plot. Perhaps it's for the best that Real Life(tm) is still trying to catch up in that regard... Early last month, Silicon Valley startup Knightscope did a live demonstration of their first-generation semi-autonomous security drone, the K5, on Microsoft's NorCal campus. The K5 is unarmed but has a fairly complex sensor suite on board designed for site security monitoring, threat analysis and recognition, and an uplink to the company's SOC (Security Operations Center) where human analysts in the response loop can respond if necessary. The sensor suite includes high-def video cameras with in-built near-infrared imaging, facial recognition software, audio microphones, LIDAR, license plate recognition hardware, and even an environmental monitoring system that watches everything from ambient air temperature to carbon dioxide levels. The K5's navigation system incorporates GPS, machine learning, and technician-led training to show a given unit its patrol area, which it then sets out to learn on its own before patrolling its programmed beat. Interestingly, if someone tries to mess with a K5 on the beat, say, by obstructing it, trying to access its chassis, or trying to abscond with it, the K5 will simultaneously sound an audible alarm and send an alert to the SOC. The K5 line is expected to launch commercially in 2015.ev on a Machine-as-a-Service basis, meaning that companies won't actually buy the units; they'll rent them for approximately $4500us/month, which includes 24x7x365 monitoring at the SOC.
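That pricing works out to startlingly little per hour of coverage, which is presumably the sales pitch against human guards. The arithmetic (the comparison wage below is my own assumption, not a Knightscope figure):

```python
def maas_hourly_rate(monthly_fee, hours_per_month=730):
    """Effective hourly cost of a machine rented around the clock.
    730 is the average number of hours in a month (8760 / 12)."""
    return monthly_fee / hours_per_month

k5 = maas_hourly_rate(4500)
print(f"K5: ${k5:.2f}/hour")                     # roughly $6.16/hour
print(f"vs. a $15/hour guard: {15 / k5:.1f}x cheaper")
```

Of course the K5 can't chase anyone, detain anyone, or climb a curb, so the comparison only holds for the "roving camera with a panic button" slice of the job.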

No, they can't go up stairs. Yes, I went there.

More under the cut...

The Doctor | 04 December 2014, 09:30 hours | default | Three comments

The first successful 3D print job took place aboard the ISS!

There's a funny thing about space exploration: If something goes wrong aboard ship the consequences could easily be terminal. Outer space is one of the most inhospitable environments imaginable, and meat bodies are remarkably resilient as long as you don't remove them from their native environment (which is to say dry land, about one atmosphere of pressure, and a remarkably fiddly chemical composition). Space travel inherently removes meat bodies from their usual environment and puts them into a complex, fragile replica made of alloys, plastics, and engineering; as we all know, the more complex something is, the more things can go wrong, and Murphy was, contrary to popular belief, an optimist. Case in point: the Apollo 13 mission, which was saved by one of the most epic hacks of all time. It's worth noting, however, that the Apollo 13 CO2 scrubber hack was just that - a hack. NASA really worked a miracle to make that square peg fit into a round hole but it could easily have gone the other way, and the mission might never have returned to Earth. Sometimes you can make the parts you have work for you with some modification, but sometimes you can't. Even MacGyver failed once in a while.*

So, you're probably wondering where this is going. On the last resupply trip to the International Space Station one of the pieces of cargo taken up was... you know me, so I'll dispense with the ellipsis - a 3D printer that uses ABS plastic filament as its feedstock. It was loaded on board the ISS as part of an experiment to test how feasible it would be to microfacture replacement parts during a space mission rather than carry as many spare components as possible. It is hoped that this run of experiments will provide insight into better ways of manufacturing replacement parts in a microgravity environment during later space missions. The 3D printer was installed on 17 November 2014 inside a glovebox, connected to a laptop computer (knowing NASA, it was probably an IBM Thinkpad), and a test print was executed. Telemetry from the test print was analyzed groundside and some recalibration instructions were drafted and transmitted to the ISS. Following realignment of the 3D printer a second, successful test print was executed three days later. On 24 November 2014 the 'printer was used to fab a replacement component for itself, namely, a faceplate for the feedstock extruder head. Right off the bat they noticed that ABS feedstock adheres to the print bed a little differently in microgravity, which can cause problems at the end of the fabrication cycle when the user tries to extract the printer's output. An iterative print, analyze, and recalibrate cycle was used to get the 'printer set up just right to microfacture that faceplate. 3D printers are pretty fiddly to begin with and the ISS crew is trying to operate one in a whole new environment, namely, in orbit. The experimental schedule for 2015 involves printing the same objects skyside and groundside and comparing them to see what differences there are (if any), figuring out how to fix any problems and incorporating lessons learned and technical advancements into the state of the art.

NASA's official experimental page for the 3D printer effort can be found here. It's definitely worth keeping a sensor net trained on.

More under the cut...

The Doctor | 01 December 2014, 09:30 hours | default | No comments

Controlling genes by thought, DNA sequencing in 90 minutes, and cellular memory.

A couple of years ago the field of optogenetics - genetically engineering light responsiveness into cells to exert control over them - was born. In a nutshell, genes can be inserted into living cells that allow certain functions (such as the production of a certain hormone or protein) to be switched on or off in the presence or absence of a certain color of light. Mostly, this has only been done on an experimental basis in bacteria, to figure out what it might be good for. As it happens to turn out, optogenetics is potentially good for quite a lot of things. At the Swiss Federal Institute of Technology in Zurich a research team has figured out how to use an EEG to control gene expression in cells cultured in vitro, and published their results in a recent issue of Nature Communications. It's a bit of a haul, so sit back and be patient...

First, the research team spliced a gene into cultured kidney cells that made them sensitive to near-infrared light, which is the kind that's easy to emit with a common LED (such as those in remote controls and much consumer night vision gear). The new gene was inserted into the DNA in a location such that it could control the synthesis of SEAP (secreted embryonic alkaline phosphatase; after punching around for an hour or so I have no idea what it does). Shine IR on the cell culture, they produce SEAP. Turn the IR light off, they stop. Pretty straightforward, as such things go. Then, for style points, they rigged an array of IR LEDs to an EEG such that, when the EEG picked up certain kinds of brain activity in the researchers, the LEDs turned on and caused the cultured kidney cells to produce SEAP. This seems like a toy project because they could easily have done the same thing with an SPST toggle switch that cost a fraction of a Euro; however, the implications are deeper than that. What if retroviral gene therapy were used in a patient to add an optogenetic on/off switch to the genes that code for a certain protein, and instead of electrical stimulation (which has its problems) optical fibres could be used to shine (or not shine) light on the treated patches of cells? While invasive, that sounds rather less invasive to me than Lilly-style biphasic electrical stimulation. Definitely a technology to keep a sensor net on.

A common procedure during a police investigation is to have a cheek swab taken to collect a DNA sample. Prevailing opinions differ - personally, I find myself in the "get a warrant" camp but that's neither here nor there. Having a DNA sample is all well and good but the analytic process - actually getting useful information from that DNA sample - is substantially more problematic. Depending on the process required it can take anywhere from hours to weeks; additionally, the accuracy of the process leaves much to be desired because, as it turns out, collision attacks apply to forensic DNA evidence, too. So, it is with some trepidation that I state that IntegenX has developed a revolutionary new DNA sequencer. Given a DNA sample from a cheek swab or an object thought to have a DNA sample on it (like spit on a cigarette butt or a toothbrush) the RapidHIT can automatically sequence, process, and profile the sample using the most commonly known and trusted laboratory techniques available today. The RapidHIT is also capable of searching the FBI's Combined DNA Index System (CODIS) for positive matches. Several agencies of the US government are positioning themselves to integrate this technology into their missions, but IntegenX CEO Robert Schueren claims that the company does not know how their technology is being applied. In areas of the United States widely known to be hostile to anyone who looks as if they "aren't from these parts," the RapidHIT has been just that - a hit - and local LEOs are reportedly quite happy with their new purchases. Time will show what happens, and what the aftershocks of cheap and portable DNA sequencing are.

Most living things on Earth that operate on a level higher than that of tropism seem to possess some form of memory that records environmental encounters and influences the organism's later activities. There are some who postulate that some events may be permanently recorded in one's genome, phenomena variously referred to as genetic memory, racial memory, or ancestral memory, though the evidence supporting these assertions is scant to null. When you get right down to it, it's tricky to edit DNA in a meaningful way that doesn't destroy the cells so altered. On those notes, I find it very interesting that a research team at MIT in Cambridge seems to have figured out a way to go about it, though it's not a straightforward or information-dense process. The process is called SCRIBE (Synthetic Cellular Recorders Integrating Biological Events) and makes it possible for a cell to modify its own DNA in response to certain environmental stimuli. The team's results were published in volume 346, issue number 6211 of Science, but I'll summarize the paper here. In a culture of E. coli bacteria the team installed a retron - retrons are weird little bits of DNA covalently bonded to bits of RNA, not found in chromosomal DNA, which code for reverse transcriptases (enzymes that synthesize DNA using RNA as a template) - that would produce a unique DNA sequence in the presence of a certain environmental stimulus, in this case a certain frequency of light. When the bacteria replicated (and in so doing copied their DNA) the retron would mutate slightly, making another gene that coded for resistance to a particular antibiotic more prominent. At the end of the experiment the antibiotic in question was added to the experimental environments; cells which had built up a memory store of exposure to light were more resistant to the antibiotic. Prevalence of the antibiotic resistance gene was verified by sequencing the genomes of the bacterial cultures.
At this time the total cellular memory provided by this technique isn't much. At best it's enough to gauge in an analog fashion how much of, or for how long, something was present in the environment, but that's about it. After a few years of development, on the other hand, it might be possible to use this as an in vivo monitoring technique for measuring internal trends over time (such as radiation or chemical exposure). Perhaps farther down the line it could be used as part of a syn/bio computing architecture for in vitro or in vivo use. The mind boggles.

The Doctor | 24 November 2014, 09:15 hours | default | No comments

Neuromorphic navigation systems, single droplet diagnosis, and a general purpose neuromorphic computing platform?

The field of artificial intelligence has taken many twists and turns on the journey toward its as-yet unrealized goal of building a human-equivalent machine intelligence. We're not there yet, but we've found lots of interesting things along the way. One of those things is that, if you understand a brain well enough (and there are degrees of approximation, to be sure), it's possible to use what you know to build logic circuits that work the same way - neuromorphic processing. The company AeroVironment recently test-flew a miniature drone whose spatial navigation system was a prototype neuromorphic processor with 576 synthetic neurons, which taught itself how to fly around a sequence of rooms it had never been in before. The drone's navigation system was hooked to a network of positional sensors - ultrasound, infra-red, and optical. This sensor array provided enough information for the chip to teach itself where potential obstacles were, where the drone itself was, and where the exits joining rooms were - enough to explore the spaces on its own, without human intervention. When the drone re-entered a room it had already learned (because it recognized it from its already-learned sensor data) it skipped the learning cycle and went right to the "I recognize everything and know how to get around" part of the show, which took a significantly shorter period of time. Drones are pretty difficult to fly at the best of times, so any additional amount of assistance that the drone itself can give would be a real asset (as well as an aid to civilian uptake). The article is otherwise a little light on details; it seems to assume that the reader is already familiar with a lot of the relevant background material. I'll cut to the chase and say that this is an interesting, practical breakthrough in neuromorphic computing - in other words, they're doing something fairly tricky with it in the real world.

When you get right down to it, medical diagnosis is a tricky thing. The body is an incredibly complex, interlocking galaxy of physical, chemical, and electrical systems, all with unique indicators. Some of those indicators are so minute that unless you knew exactly what you were looking for, and searched for it in just the right way, you might never know something was going on. Earlier I wrote briefly about Theranos, a lab-on-a-chip implementation that can accurately carry out several dozen diagnostic tests on a single drop of blood. Recently, the latest winners of Nokia's Sensing XChallenge prize were announced - the DNA Medical Institute for rHEALTH, a hand-held diagnostic device which can accurately diagnose several hundred medical conditions with blood gathered from a single fingerstick. The rHEALTH hand-held unit also gathers biostatus information from a wireless self-adhesive sensor patch that measures pulse, temperature, and EKG information; the rHEALTH unit is slaved to a smartphone over Bluetooth, where presumably an app does something with the information. The inner workings of the rHEALTH lab-on-a-chip are most interesting: the unit's reaction chamber is covered with special purpose reagent patches and (admittedly very early generation) nanotech strips that separate out what they need, add the necessary test components, shine light emitted by chip lasers and micro-miniature LEDs, and analyze the light reflected and refracted inside the test cell to identify chemical biomarkers indicative of everything from a vitamin D deficiency to HIV. The unit isn't in deployment yet; it's still in the "we won the prize!" stage of practicality, which is something that Theranos has on them at this time.

Let's admit an uncomfortable truth to ourselves: we as people take computers for granted. The laptop I write this on, the tablet or phone you're probably reading this on, the racks and racks of servers in buildings scattered all over the world that run pretty much everything important for life today - we scarcely think of them unless something goes wrong. Breaking things down a little, computers all do pretty much the same thing in the same way: they have something to store programs and data in, something to pull that data out and process it, someplace to put data while it's being processed, and some way to output (and store) the results. We normally think of the combination of a hard drive, a CPU, RAM, and a display as fitting this model, called the von Neumann architecture. Boring, everyday stuff today, but when it was first conceived of by Alan Turing and John von Neumann in their separate fields of study it was revolutionary; nothing like it had ever been built before. As very complex things are wont to be, the CPUs we use today are recreations of that very architecture in miniature: for storage there are registers, for the actual manipulation of data there is an arithmetic/logic unit, and one or more buses output the results to other subsystems. ALUs themselves I can best characterize as Deep Magick; I've been studying them off and on for many years and I'm working my way through some of the seminal texts in the field (most recently Mead and Conway's Introduction to VLSI Systems), and when you get right down to it, that so much is possible with long chains of on/off switches is mind boggling, frustrating, humbling, and inspiring.
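To make the "long chains of on/off switches" point concrete, here's a one-bit full adder - the basic cell of an ALU's adder - built from nothing but NAND gates. This is the minimal textbook construction, not how any particular chip actually lays it out:

```python
# A one-bit full adder built entirely from NAND gates: the classic
# demonstration that an ALU really is chains of on/off switches.

def nand(a, b):
    """The universal gate: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def full_adder(a, b, cin):
    """Add three bits; return (sum_bit, carry_out)."""
    # First half adder: a XOR b via four NANDs
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    axb = nand(t2, t3)          # a XOR b
    # Second half adder: (a XOR b) XOR cin
    t4 = nand(axb, cin)
    t5 = nand(axb, t4)
    t6 = nand(cin, t4)
    s = nand(t5, t6)            # sum bit
    # Carry: (a AND b) OR ((a XOR b) AND cin), again via NANDs
    cout = nand(t1, t4)
    return s, cout

# Exhaustive check against ordinary arithmetic:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
print("all eight input combinations check out")
```

Chain the carry-out of each bit into the carry-in of the next and you have a ripple-carry adder, about the simplest datapath an ALU can have.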

Getting back out of the philosophical weeds, some interesting developments in the field of neuromorphic computing - processing information with brain-like circuitry instead of logic chains - have come to light. Google's DeepMind team has figured out how to marry practical artificial neural networks to the von Neumann architecture, resulting in a neural network with non-static memory that can figure out on its own how to carry out some tasks, such as searching and sorting elements of data, without needing to be explicitly programmed to do so. It may sound counter-intuitive, but researchers working with neural network models have not, as far as anybody knows, married what we usually think of as RAM to a neural net. Usually, once the 'net is trained it's trained, and that's the end of it. Writable memory makes them much more flexible because it gives them the capability to put new information aside as well as potentially swap out old stored models. Additionally, such a model is pretty definitively known to be Turing complete: if something can be computed on a hypothetical universal Turing machine it can be computed on a neural network-with-RAM (more properly referred to as a neural Turing machine, or NTM). To put it another way, there is nothing preventing an NTM from doing the same thing the CPU in your tablet or laptop can do. The progress they've reported strongly suggests that this isn't just a toy; they can do real-world kinds of work with NTMs that don't cause them to break down. They can 'memorize' data sequences of up to 20 entries without errors, between 30 and 50 entries with minimal errors (something that many people might have trouble doing rapidly because that's actually quite a bit of data), and can reliably work on sets of 120 data elements before errors can be expected to start showing up in the output.
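The "RAM" bolted onto an NTM isn't addressed by number the way real RAM is; reads are soft and content-based, a differentiable blend of every memory row weighted by how well each matches a query key. Here's a stripped-down, non-learning sketch of just that addressing step (the memory matrix, sharpness value, and function names are my own toy choices, not anything from DeepMind's paper):

```python
import math

# Content-based addressing: the trick that lets a neural Turing
# machine "look up" memory rows by similarity rather than by index.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def read(memory, key, sharpness=5.0):
    """Softmax over row similarities, then a weighted blend of rows."""
    sims = [cosine(row, key) for row in memory]
    exps = [math.exp(sharpness * s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    cols = len(memory[0])
    return [sum(w * row[j] for w, row in zip(weights, memory))
            for j in range(cols)]

memory = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
# A key close to the first row reads back (mostly) the first row:
print(read(memory, [0.9, 0.1, 0.0]))
```

Because every step is smooth, the whole read operation can be trained by gradient descent along with the rest of the network, which is what makes the memory learnable rather than hard-wired.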

What's it good for, you're probably asking. Good question. From what I can tell this is pretty much a proof-of-concept sort of thing right now. The NTM architecture seems to be able to carry out some basic programming operations, like searching and sorting; nothing that you can't find in commonly available utility libraries or code yourself in a couple of hours (which you really should do once in a while). I don't think Intel or ARM have anything to worry about just yet. As for what the NTM architecture might be good for in a couple of years, I honestly don't know. It's Turing complete, so, hypothetically speaking, anything that could be computed could be computed with one. Applications for sorting and searching data are the first things that come to mind, even on a personal basis. That Google has an interest in this comes as no surprise when taking into account the volume of data their network deals with on a daily basis (certainly in excess of 30 petabytes every day, which is... a lot, and probably much, much more than that). I can't even think that far ahead, so keep an eye on where this is going.
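Since I just nagged you to code search and sort yourself once in a while, here's the classic exercise, binary search - one of those routines that's famously easy to get subtly wrong around the boundary conditions:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # no integer-overflow worry in Python
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1       # target is in the upper half
        else:
            hi = mid - 1       # target is in the lower half
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))   # -> 4
print(binary_search(data, 4))    # -> -1
```

Twenty-odd lines for a human; the point of the NTM work is that a network can learn behavior in this family from examples instead of being handed the algorithm.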

The Doctor | 18 November 2014, 08:00 hours | default | No comments

R.A. Montgomery, of the Choose Your Own Adventure books, dead at age 78.

Children of the 80's will no doubt remember the shelves and shelves of little white paperbacks with red piping from the Choose Your Own Adventure series, where you could play as anything from a deep sea explorer to a shipwrecked mariner, a volunteer time traveler, or anything in between. If you're anything like me, you also spent way too much time looking for mistakes in the sequence of pages to find more interesting twists and no shortage of endings (most of them bad). I can't say they went out of print for a while but they did become harder to find in stores for several years. More recently Chooseco was founded to pick up the torch, reissue some of the older books, and publish new ones. R.A. Montgomery's business practices were unique at the time: every author who wrote books in the series was credited by name for having done so, instead of all of the books being credited to the series' founder (which was the common industry practice then).

I'm sorry to report that R.A. Montgomery, one of the first authors of the Choose Your Own Adventure series and a contributor for nearly its entire history, died on 9 November 2014 at the age of 78 at his home in Vermont. The cause of his death is not known at this time. He is survived by his wife, a son, two grand-daughters, a sister, and a daughter-in-law.

A private memorial service will be held in early 2015.

Mr. Montgomery, thank you for everything you've done and written over the years. You were an inspiration to me when I was younger, and I still have a few dozen of your books in my collection. You will surely be missed.

The Doctor | 15 November 2014, 18:20 hours | default | Two comments

Reversing progressive memory loss, transplantable 3D printed organs, and improvements in resuscitation.

Possibly the most frightening thing about Alzheimer's Disease is the progressive loss of self; many humans measure their lives by the continuity of their memories, and when that starts to fail, it calls into question all sorts of things about yourself... as long as you're able to think about them. I'm not being cruel, I'm not cracking wise; Alzheimer's is a terrifying disease because it eats everything that makes you, you. Thus, it is with no small feeling of hope that I link to these results at the Buck Institute for Research on Aging - in a small trial at UCLA of patients who had suffered several years of progressive, general memory loss, researchers were able to objectively improve memory function and quality of life in 90% of the test subjects between three and six months after beginning the protocol. A late stage Alzheimer's patient in the test group did not improve. The program was carefully tailored to each test subject and makes the assumption that Alzheimer's is not a single disease but a process involving a complex of different phenomena. This, it is hypothesized, is why single-drug treatments have not been successful to date. The treatment protocol tested involved changes of diet, modulation of stress levels, exercise, sleep modulation, a regimen of supplements observed to have some influence over the maintenance and genesis of nerves, and a daily pattern which seemed to serve as a framework to hold everything in balance. The framework is unfortunately fairly complex, and at least at first a caregiver may need to be involved in helping the patient. Looking at how everything fits together, it seems to me that there may also be elements of cognitive behavioral therapy involved, or at least emergent in the process. Interestingly, six of the patients who had to quit their jobs due to encroaching dementia were able to go back to work (it saddens me that there are people who need to work rather than enjoy their lives after a certain age).
I don't know if this is going to catch on, protocols like this tend to slip through the cracks of medical science, but it's definitely something worth keeping an eye on.

Longtime readers are no doubt aware that bioprinting, or using 3D printers to fabricate biological structures, is an interest of mine. Think of it: running off replacement organs specific to the patient, with no chance of rejection and less possibility of opportunistic infection because immunosuppressants don't have to be used. It's already possible to fab fairly complex biological structures thanks to advances in materials science, but now it's time to get ambitious... a company called 3D Bioprinting Solutions just outside of Moscow, Russia announced that by 15 March 2015 they will demonstrate a 3D printed, viable, transplantable organ. They claim that they have the ability to fab a functional thyroid gland using cloned stem cells from a laboratory test animal as a proof of concept. The fabbed organ will mature in a bioreactor (which are apparently now advanced enough to be commodity lab equipment) for a certain period of time before being implanted in the laboratory animal; if all goes according to plan, the lab animal should show no signs of rejection, hormone imbalance, or metabolic imbalance. I realize I might be going out on a limb here (I try not to be too optimistic) but, looking at the progression of bioprinting technology since 2006, I think they've got a good chance of success next year. Additionally, I think they might make good on their hopes of fabbing a living, functioning kidney some time next year. And after that? Who knows.

Television to the contrary, resuscitating someone whose heart isn't functional is far from a sure thing. Bodies only have a certain amount of oxygen and glucose dissolved in the bloodstream, and when you factor in the metabolic load of the brain (roughly one-fifth of the body's resting oxygen utilization alone) there isn't much to work with after very long. Additionally... well, I'd be rewriting this excellent article on resuscitation, which pretty clearly explains why the survival rate of cardiac arrest is between 5% and 6%, depending on whom you talk to. Of course, that factors in luck, where and when the patient entered cardiac arrest, how young and healthy they are or are not, and how strong their will to survive is. Due to hypoxia a certain amount of brain damage is almost a certainty; maybe just a few neurons, maybe a couple of neural networks, but sometimes the damage is extreme. About ten years ago the AMA started to look at the data and switched up a few things in the generally accepted resuscitation protocol; the Journal of Emergency Medical Services published an interesting summary recently, of which I'll quote bits and pieces. Assume a fallen patient in ventricular fibrillation: paramedics gain access to a long bone in the body for intraosseous infusion because it offers better access to the circulatory system for drug administration (yeah, I just cringed, too), induce medical hypothermia to slow metabolism (which is maintained for a period of time following resuscitation), use machine-timed ventilation, and apply a likely scary number of electrical shocks prior to transportation to the hospital approximately forty minutes later... the survival rate in such situations is now somewhere around 83% (even factoring in a statistical outlier case which lasted 73 minutes).
Occurrence of post-cardiac arrest syndrome was minimized by maintenance of medical hypothermia, and patients are routinely showing minimal to no measurable neurological impairment.

I'd call that more than a fighting chance.

The Doctor | 13 November 2014, 09:30 hours | default | No comments

Inducing neuroplasticity and the neurological phenomenon of curiosity.

For many years it was believed by medical science that neuroplasticity, the phenomenon in which the human brain rapidly and readily creates neuronal interconnections, tapered off as people got older. Children are renowned for learning anything and everything that catches their fancy (not always what we'd wish they'd learn) but the learning process seems to slow down the older they get. As adults, it's much harder to learn complex new skills from scratch. In recent years, a number of compounds have been developed that seem to kickstart neuroplasticity again, but they're mostly used for treating Alzheimer's Disease and not so readily as so-called smart drugs. However, occasionally an interesting clinical trial pops up here and there. Enter donepezil: a cholinesterase inhibitor which increases ambient levels of acetylcholine in the central nervous system. At Boston Children's Hospital not too long ago, Professor Takao Hensch of Harvard University administered a regimen of donepezil to a 14 year old girl being treated for lazy eye, or subnormal visual development in one eye. Similar to using valproate to kickstart critical period learning in the auditory cortex, administration of donepezil seems to have caused the patient's visual cortex to enter a critical period of learning and catch up to the neural circuitry driving her dominant eye. The patient's weaker eye was measurably stronger and her vision was measured to be more acute than before the test program began. What is not clear is whether this is a sense-specific improvement (i.e., does donepezil only improve plasticity in the visual cortex, or will it work in a more holistic way on the human brain?). It's too early to tell, and we don't yet have enough data, but the drug's clinical use for treating Alzheimer's seems to imply the latter. Definitely a development to monitor because it may be useful later.

As I mentioned earlier, children are capable of learning incredibly rapidly. This is in part due to neural plasticity, and in part due to a burning curiosity about the world around them which comes from being surrounded by novelty. When one doesn't have a whole lot of life experience, the vistas of the world are bright, shiny, and new. Growing older and building a larger base of knowledge upon which to draw (as well as the public school system punishing curiosity in an attempt to get every student on the same baseline) dims curiosity markedly, and it's hard to hang onto that sense of wonder and inquisitiveness the older one gets. Dr. Matthias Gruber and his research team at UC Davis have been studying the neurological phenomenon of curiosity, and their work seems to shore up something that gifted and talented education teachers have been saying for years. In a nutshell, when someone is curious about the topic of a question they are more likely to retain the information for longer periods of time, because the mesolimbic dopamine system - the reward pathways of the brain - fires more often and consequently increases activity in the hippocampus, which is involved in the creation and retrieval of long term memories. To put it another way, if you're interested in what you're learning, you're going to enjoy learning, and consequently what you're learning will stick better. So, what do we do with this information? It seems to inform some strategies for both pedagogy and autodidacticism, in that it seems possible that it would be easier to learn something less interesting by riding the reward system "high" from studying something more captivating in tandem. Coupled with a strategy of chunking (breaking a body of information into smaller pieces which are studied separately), it might be possible to switch off between more interesting and less interesting subjects in a study session and retain the less interesting stuff more reliably.
This is pretty much one of the strategies I used in undergrad; while I didn't gather any metrics for later review and analysis, I did just this when studying things that I found less interesting or problematic and definitely did better on exams and final grades. One thing I did notice is that the subject matter could not be too wildly different; alternating calculus and Japanese didn't work very well, for example, but calculus and computational linguistics worked well together. Experimenting with such a strategy is left as an exercise for the motivated reader.

The Doctor | 24 October 2014, 09:45 hours | default | One comment

Congratulations, Asher Wolf!

Congratulations to Telecomix alumnus Asher Wolf, who was awarded the 2014 Print/Online Award for Journalism along with Oliver Laughland and Paul Farrell at the Amnesty International Australia Media Awards on 21 October 2014!

More under the cut...

The Doctor | 23 October 2014, 20:51 hours | default | No comments

Genetically modified high school grads, stem cell treatment for diabetes, and deciphering memory engrams.

A couple of years ago I did an article on the disclosure that mitochondrial genetic modifications were carried out on thirty embryos in the year 2001 to treat mitochondrial diseases that would probably have been fatal later in life. I also wrote in the article that this does not constitute full scale genetic modification a la the movie Gattaca. It is true that mitochondria are essential to human life but they do not seem to influence any traits that we usually think about, such as increased intelligence or hair color, as they are primarily involved in metabolism. In other words, mitochondrial manipulation does not seem to fundamentally change a person's morphology. While I cannot speak to the accuracy of the news site inquisitr.com, they recently published an article that got me thinking: those children whose mitochondrial configuration was altered before they were born, in an attempt to give them healthy, relatively normal lives, are probably going to graduate from high school next year. We still don't know who those kids are or where they're living, nor do we really know what health problems they have right now, if they have any, that is. We do know that a followup study is being done at this time, but we're probably not going to find out the results for a while, if at all. We also don't know the implications for the children of those kids years down the line. The mitochondrial transfer process broke new ground when it was carried out and I don't know if it's been done since. My gut says "no, probably not."

I don't actually have a whole lot to say on this particular topic due to privacy concerns. Let's face it, these are kids growing up and trying to figure out their lives and it seems a little creepy to go digging for this kind of information. As far as we know, data's being collected and hopefully some of the results will be published someplace we can read them. Hard data would be nice, too, so we can draw our own conclusions. Definitely food for thought no matter how you cut it.

In other news, Type 1 diabetes is a condition in which the patient's body does not manufacture the hormone insulin (warning: broken JavaScript, some browsers may complain) and thus cannot regulate the use of sugar as fuel. Over time, poorly managed blood sugar levels will wear away the integrity of your body, and your health along with it. I've heard it said that you've got 20 good years at most once the diagnosis comes down the wire. Type 1 diabetes is treated primarily with the administration of insulin, if not through injection then via an implanted pump or biostatus monitor. A research team at Harvard University headed up by professor Doug Melton has made a breakthrough in stem cell technology - they've been able to replicate clinically useful numbers of beta cells in vitro, hundreds of thousands at a time, which appear to be usable for implantation. Beta cells reside within pancreatic structures called the islets of Langerhans and do the actual work of secreting and releasing insulin on demand. Trials of the replicated beta cells in diabetic nonhuman primates are reportedly looking promising; after implantation they're not being attacked by the immune system, they've been observed to be thriving, and they're producing insulin the way they're supposed to when compared to nondiabetic lifeforms. Word on the street has it that they're ready to begin human clinical trials in a year or two. Whether or not this would constitute a permanent cure for Type 1 diabetes in humans remains to be seen, but I think it prudent to remain hopeful.

One of the bugaboos of philosophy and psychology is qualia - what a sentient mind's experience of life is really like. Is the red I see really the red you see? What about the sound the movement of leaves makes? Are smells really the same to different people? The experience of everything that informs us about the outside world is unique from person to person. A related question that neuroscience has been asking since it first began reverse engineering the human brain is whether or not there is a common data format underlying the same sensory stimuli across different people. If everybody's brain is a little different, will similar patterns of electrical activity arise due to the same stimuli? The implications for neuroscience, bioengineering, and artificial intelligence would be profound if there were. A research team based out of Washington University in Saint Louis, Missouri published a paper in the Proceedings of the National Academy of Sciences with evidence that this is exactly the case. The research team used a scene from an Alfred Hitchcock movie in conjunction with functional magnetic resonance imaging to map the cognitive activity of test subjects for analysis. The idea is that the test subjects watched the same movie clip under observation, and the fMRI scan detected the same kinds of cognitive activity across the test subjects in response. This seems to support the hypothesis that similar patterns of quantifiable neurological activity occurred in the brains of all of the test subjects. To test the hypothesis further, the process was repeated with two test subjects who had been in persistent vegetative states for multiple years at a time. Long story short, the PVS patients were observed to show quantifiably similar patterns of neurological activity in response to being subjected to the same Hitchcock scene.
This implies that, on some level, the patients are capable of interpreting sensory input from the outside world and interpreting it - thinking about the content, context, and meta-context using the executive functions of the brain. This also seems to cast doubt upon the actual level of consciousness that patients in persistent vegetative states possess...
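Analyses like this usually boil down to some flavor of inter-subject correlation: line up the subjects' activity time courses and measure how similar they are. A bare-bones sketch using Pearson correlation (the toy data here is mine; the actual paper's pipeline is far more involved than this):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical "subjects" whose responses to the same movie
# clip track each other; a value near 1.0 means similar activity.
subject_a = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8]
subject_b = [0.2, 0.6, 0.8, 0.5, 0.1, 0.9]
print(pearson(subject_a, subject_b))
```

Run the same comparison over every pair of subjects, brain region by brain region, and a consistently high correlation is the quantitative version of "similar patterns of neurological activity."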

The Doctor | 23 October 2014, 09:15 hours | default | Three comments

Cardiac prosthetics and fully implanted artificial limbs.

No matter how you cut it, heart failure is one of those conditions that sends a chill down your spine. When the heart muscle grows weak and inefficient, it compromises blood flow through the body and can cause a host of other conditions, some weird, some additionally dangerous. Depending on how severe the condition is, there are several ways of treating it. For example, my father-in-law has an implanted defibrillator that monitors his cardiac activity, though fairly simple lifestyle changes have worked miracles for his physical condition in the past several years. Left ventricular assist devices, implantable pumps that connect directly to the heart to assist in its operation, are another way of treating heart failure. Recently, a research team at the Wexner Medical Center of Ohio State University reported remarkable results with a new assistive implant called the C-Pulse. The C-Pulse is a flexible cuff that wraps around the aorta and monitors the electrical activity of the heart; when the heart muscle contracts the C-Pulse contracts a fraction of a second later, which helps push blood through the aorta to the rest of the body. A lead passes through the skin of the abdomen and connects to an external power pack that drives the unit. The test group consisted of twenty patients with either class III or class IV heart failure. The patients were assessed six and twelve months after the implantation procedure, and amazingly a full 80% of them showed significant improvement; three had no remaining symptoms of heart failure at all. Average quality of life metrics improved a full thirty points among the test subjects. I'm not sure where they're going next, but I think a full clinical trial is on the horizon for the C-Pulse. One to keep an eye on, to be sure.

A common problem with prosthetics, be it a heart, an arm, or what have you, is running important bits through the skin to the outside world. Whenever you poke a hole through the skin you open a door to the wide, fun world of opportunistic infections. Anything and everything that can possibly sneak through that gap in the perimeter and set up shop in the much more hospitable environment of the human body will try. This is one of the major reasons why permanently attaching prosthetic limbs has been so difficult. To date, various elaborate mechanisms for temporarily attaching prosthetic limbs to the remaining lengths of limbs - straps, fitted cups, and temporary adhesives - have been tried with varying degrees of success. At the Royal National Orthopaedic Hospital in London they've begun clinical trials of ITAP, or Intraosseous Transcutaneous Amputation Prosthesis. In a nutshell, they've figured out how to implant attachment sockets in the remaining bones of limb amputees that can penetrate the skin with minimal risk of infection by emulating how deer antlers pass through and bond with the skin. This means that prosthetic limbs can be locked onto the body and receive just as much physical support as organic limbs do (if not slightly more). Test subject Mark O'Leary of south London received one of the ITAP implants in 2008 (yep, six years ago and only now is it getting any press) and was amazed not only at how well his new prosthetic limb worked, but also at being able to feel the road and ground through the prosthetic and into the organic part of his leg. Discomfort at the end of his organic limb is also minimized because there is no direct hard plastic-on-skin contact causing him pain. Apparently not one to do things by halves, O'Leary put his new prosthetic limb to the test by undertaking a 62 mile walk on the installed limb, and for an encore he climbed Mount Kilimanjaro with it.

Another hurdle toward the goal of fully operational prosthetic limbs has been restoring the sense of touch. Experiments have been done over the years with everything from piezoelectric transducers to optical and capacitive pressure sensors, but mostly they've been of use to robotics research and not prosthetics because the bigger problem of figuring out how to patch into nerves on a permanent basis was impeding progress. At Case Western Reserve University a research team successfully accessed the peripheral sensory nerves of amputees and then figured out which patterns of electrical stimulation on which nerves felt like which parts of the patients' missing hands. The inductive nervelinks were connected to arrays of sensors mounted on artificial arms developed at Case Western and the Louis Stokes Veterans Affairs Medical Center in Cleveland, Ohio. Long story short, the patients can not only sense pressure, they can also tell the difference between cotton, grapes, and other materials. Even more interesting, sensory input from the prosthetic limbs relieved phantom limb pain suffered by some of the test subjects. Additionally, the newly installed sense of touch has given the test subjects heretofore unparalleled dexterity in their prosthetic limbs; one test subject was able to pluck stems from grapes and cherries without crushing the fruit while blindfolded. Elsewhere in the field of limb replacement, a groundbreaking procedure carried out in Sweden in 2013 (I had no idea, one of my net.spiders discovered this by accident) combines the previous two advances. At the Chalmers University of Technology a research team headed up by Max Ortiz Catalan used ITAP techniques in conjunction with transdermal nervelinks to integrate a prosthetic limb into an unnamed patient's body. The patient has been using the limb on the job for over a year now, and can also tie shoelaces and pick up eggs without breaking them.
A true cybernetic feedback loop between the brain and the prosthetic limb appears to have been achieved, leading to intuitive control over the limb. The patient has shown long term ability to maintain control over, and sensory access to, the prosthetic limb outside of a laboratory environment. The direct skeletal connection to the limb provides mechanical stability and ease of connectivity without any need for structural adjustment. The nervelinks mean that less effort is required on the part of the wearer to manipulate the limb, that greater dexterity is possible by exploiting the brain's intuitive proprioceptive sense, and that no recalibration is needed because the nervelinks don't really change position.
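
The core of the Case Western result - mapping each pressure sensor on the prosthetic hand to a stimulation pattern on a specific nerve that the patient reports feeling at the matching spot on the missing hand - can be pictured as a lookup table. Everything below is a made-up illustration of the idea; the sensor names, nerve assignments, and pulse rates are my own placeholder values, not data from the actual study.

```python
# Hypothetical sensor-to-nerve mapping: each prosthetic sensor is paired
# with a nerve site and a stimulation pattern. All values are illustrative.

NERVE_MAP = {
    "index_fingertip":  {"nerve": "median", "pulse_hz": 100},
    "little_fingertip": {"nerve": "ulnar",  "pulse_hz": 90},
    "thumb_pad":        {"nerve": "median", "pulse_hz": 110},
}

def stimulation_for(sensor, pressure):
    """Look up the nerve site for a sensor and scale the stimulation
    amplitude with the sensed pressure (0.0 to 1.0)."""
    site = NERVE_MAP[sensor]
    return {"nerve": site["nerve"],
            "pulse_hz": site["pulse_hz"],
            "amplitude": round(pressure, 2)}

print(stimulation_for("index_fingertip", 0.4))
# {'nerve': 'median', 'pulse_hz': 100, 'amplitude': 0.4}
```

The hard part the researchers actually solved was building this table in the first place: stimulating nerve sites one by one and asking the patient where on the missing hand each one was felt.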

Excited about the future? I am.

The Doctor | 20 October 2014, 08:42 hours | default | No comments
"We, the extraordinary, were conspiring to make the world better."