A friendly heads-up from work.

Windbringer experienced an unexpected and catastrophic hardware failure last night after months of limping along in weird ways (the classic Dell Death Spiral). My backups are good and I have a restoration plan, but until new hardware arrives my ability to communicate is extremely limited. Please be patient until I get set up again.

The Doctor | 12 December 2014, 18:26 hours | default | No comments

Repurposing memes for presentations.

I'm all for people reading, listening to, and watching the classics of any form of media. They're the basic cultural memes that so many other cultural communications are built on top of and occasionally riff on, references we all seem to silently recognize whether or not we know where they're from or the context they originally had. You may not know who the Grateful Dead are or recognize any of their music (I sure don't), but if you're a USian chances are that you've at least seen the new iterations of the hippie movement and recognize the general style affected by adherents thereof, due to the significant overlap between the two. Most of us, at one point or another, recognize scenes from Romeo and Juliet on television even though we may not have read the play or seen a stage production thereof. They're all around us, like the air we breathe or the water fish swim in (whether or not fish are actually aware that they swim in something called water isn't something I intend to touch on in this post).

More under the cut for spoilers, because I'm feeling nice.



The Doctor | 06 December 2014, 14:11 hours | default | No comments

Robotic martial artists, security guards, and androids.

Quite possibly the holy grail of robotics is the anthroform robot, a robot which is bipedal in configuration, just like a human or other great ape. As it turns out, it's very tricky to build such a robot without it being too heavy or having power requirements that are unreasonable in the extreme (which only exacerbates the former problem). The first real success in this field was Honda's ASIMO in the year 2000.ev; the most recent iteration uses a lithium-ion power cell that permits one hour of continuous runtime. ASIMO is also, if you've ever seen a live demo, somewhat limited in motion; it can't really jump, raise one leg very far, or run as fast as an average human at a gentle trot. That said, recent Google acquisition Boston Dynamics (whom many like to make fun of because of the BigDog demo video) published a demo video for the 6-foot-2-inch, 330-pound anthroform robot called Atlas that children of the 80's will undoubtedly find just as amusing. To demonstrate Atlas' ability to balance stably under adverse conditions (namely, atop a narrow stack of cinder blocks) they had Atlas assume the crane stance made famous in the movie The Karate Kid. If it seems like a laboratory "gimme," I challenge you to try it and honestly report in the comments how many times you topple over. Yet to come is the jumping and kicking part of the routine, but it's Boston Dynamics we're talking about; if they can figure out how to program a robot to accurately throw hand grenades they can figure out how to get Atlas to finish the stunt. I must point out, however, that Atlas is a tethered robot thus far - the black umbilical you see in the background appears to be a shielded power line.

In related news, a video shot at an exhibit in Japan popped up on some of the more popular geek news sites earlier this week. The exhibit consists of two ABB industrial manipulators inside a Lexan arena, showing off the abilities of their programmers as well as their own precision and delicacy by sparring with swords. The video depicts the industrial robots each drawing a sword and moving in relation to one another with only the points touching, then demonstrating edge-on-edge movements and synchronized movements in relation to one another, followed by replacing the swords in their sheaths. If you watch carefully you can even tell who the victor of the bout was.

A common trope in science fiction is the security droid, a robotic sentry that seemingly exists only to pin down the protagonists with a hail of ranged weaponfire or send a brief image back to the security office before being taken out by said protagonists to advance the plot. Perhaps it's for the best that Real Life(tm) is still trying to catch up in that regard... Early last month, Silicon Valley startup Knightscope did a live demonstration of their first-generation semi-autonomous security drone, the K5, on Microsoft's NorCal campus. The K5 is unarmed but has a fairly complex sensor suite on board designed for site security monitoring, threat analysis and recognition, and an uplink to the company's SOC (Security Operations Center), where human analysts in the loop can respond if necessary. The sensor suite includes high-def video cameras with built-in near-infrared imaging, facial recognition software, audio microphones, LIDAR, license plate recognition hardware, and even an environmental monitoring system that watches everything from ambient air temperature to carbon dioxide levels. The K5's navigation system incorporates GPS, machine learning, and technician-led training to show a given unit its patrol area, which it then sets out to learn on its own before patrolling its programmed beat. Interestingly, if someone tries to mess with a K5 on the beat, say, by obstructing it, trying to access its chassis, or trying to abscond with it, the K5 will sound an audible alarm while sending an alert to the SOC. The K5 line is expected to launch commercially in 2015.ev on a Machine-as-a-Service basis, meaning that companies won't actually buy the units; they'll rent them for approximately $4500us/month, which includes 24x7x365 monitoring at the SOC.

No, they can't go up stairs. Yes, I went there.



The Doctor | 04 December 2014, 09:30 hours | default | Three comments

The first successful 3D print job took place aboard the ISS!

There's a funny thing about space exploration: If something goes wrong aboard ship the consequences could easily be terminal. Outer space is one of the most inhospitable environments imaginable, and meat bodies are remarkably resilient only as long as you don't remove them from their native environment (which is to say dry land, about one atmosphere of pressure, and a remarkably fiddly chemical composition). Space travel inherently removes meat bodies from their usual environment and puts them into a complex, fragile replica made of alloys, plastics, and engineering; as we all know, the more complex something is, the more things can go wrong, and Murphy was, contrary to popular belief, an optimist. Case in point, the Apollo 13 mission, which was saved by one of the most epic hacks of all time. It's worth noting, however, that the Apollo 13 CO2 scrubber hack was just that - a hack. NASA really worked a miracle to make that square peg fit into a round hole but it could easily have gone the other way, and the mission might never have returned to Earth. Sometimes you can make the parts you have work for you with some modification, but sometimes you can't. Even MacGyver failed once in a while.*

So, you're probably wondering where this is going. On the last resupply trip to the International Space Station one of the pieces of cargo taken up was... you know me, so I'll dispense with the ellipsis - a 3D printer that uses ABS plastic filament as its feedstock. It was loaded on board the ISS as part of an experiment to test how feasible it would be to microfacture replacement parts during a space mission rather than carry as many spare components as possible. It is hoped that this run of experiments will provide insight into better ways of manufacturing replacement parts in a microgravity environment during later space missions. The 3D printer was installed on 17 November 2014 inside a glovebox, connected to a laptop computer (knowing NASA, it was probably an IBM Thinkpad), and a test print was executed. Telemetry from the test print was analyzed groundside, and some recalibration instructions were drafted and transmitted to the ISS. Following realignment of the 3D printer a second, successful test print was executed three days later. On 24 November 2014 the 'printer was used to fab a replacement component for itself, namely, a faceplate for the feedstock extruder head. Right off the bat they noticed that ABS feedstock adheres to the print bed a little differently in microgravity, which can cause problems at the end of the fabrication cycle when the user tries to extract the printer's output. An iterative print, analyze, and recalibrate cycle was used to get the 'printer set up just right to microfacture that faceplate. 3D printers are pretty fiddly to begin with, and the ISS crew is trying to operate one in a whole new environment, namely, in orbit. The experimental schedule for 2015 involves printing the same objects skyside and groundside and comparing them to see what differences there are (if any), figuring out how to fix any problems, and incorporating lessons learned and technical advancements into the state of the art.

NASA's official experimental page for the 3D printer effort can be found here. It's definitely worth keeping a sensor net trained on.



The Doctor | 01 December 2014, 09:30 hours | default | No comments

Controlling genes by thought, DNA sequencing in 90 minutes, and cellular memory.

A couple of years ago the field of optogenetics, or genetically engineering responsiveness to visible light to exert control over cells, was born. In a nutshell, genes can be inserted into living cells that allow certain functions to be switched on or off (such as the production of a certain hormone or protein) in the presence or absence of a certain color of light. Mostly, this has only been done on an experimental basis in bacteria, to figure out what it might be good for. As it turns out, optogenetics is potentially good for quite a lot of things. At the Swiss Federal Institute of Technology in Zurich a research team has figured out how to use an EEG to control gene expression in cells cultured in vitro, and published their results in a recent issue of Nature Communications. It's a bit of a haul, so sit back and be patient...

First, the research team spliced a gene into cultured kidney cells that made them sensitive to near-infrared light, which is the kind that's easy to emit with a common LED (such as those in remote controls and much consumer night vision gear). The new gene was inserted into the DNA in a location such that it could control the synthesis of SEAP (secreted embryonic alkaline phosphatase; after punching around for an hour or so I have no idea what it does). Shine IR on the cell culture, they produce SEAP. Turn the IR light off, they stop. Pretty straightforward as such things go. Then, for style points, they rigged an array of IR LEDs to an EEG such that, when the EEG picked up certain kinds of brain activity in the researchers the LEDs turned on, and caused the cultured kidney cells to produce SEAP. This seems like a toy project because they could easily have done the same thing with an SPST toggle switch that cost a fraction of a Euro; however, the implications are deeper than that. What if retroviral gene therapy was used in a patient to add an optogenetic on/off switch to the genes that code for a certain protein, and instead of electrical stimulation (which has its problems) optical fibres could be used to shine (or not) light on the treated patches of cells? While invasive, that sounds rather less invasive to me than Lilly-style biphasic electrical stimulation. Definitely a technology to keep a sensor net on.
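
Just to make the control loop concrete, here's a toy sketch (in Python, because that's what's handy) of the kind of mind-to-gene switch described above. The read_band_power() and set_ir_led() functions are stand-ins I invented for whatever EEG acquisition and LED driver hardware you'd actually be talking to; only the thresholding logic is the point.

    # Toy sketch of the EEG-driven optogenetic switch described above: when a
    # (simulated) EEG band-power reading crosses a threshold, the IR LED turns
    # on and the treated cells would start producing SEAP. All names and
    # numbers here are made up for illustration; this is not the Zurich
    # team's code.
    import random

    ALPHA_THRESHOLD = 12.0   # arbitrary band-power threshold

    def read_band_power():
        # Stand-in for a real EEG acquisition call; returns a noisy fake reading.
        return random.uniform(0.0, 20.0)

    def set_ir_led(on):
        # Stand-in for toggling an IR LED array over GPIO or a serial line.
        print("IR LED", "ON  -> treated cells produce SEAP" if on else "OFF -> production stops")

    def run(cycles=10):
        led_on = False
        for _ in range(cycles):
            want_on = read_band_power() >= ALPHA_THRESHOLD
            if want_on != led_on:    # only act on state changes
                set_ir_led(want_on)
                led_on = want_on

    if __name__ == "__main__":
        run()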

A common procedure during a police investigation is to have a cheek swab taken to collect a DNA sample. Prevailing opinions differ - personally, I find myself in the "get a warrant" camp but that's neither here nor there. Having a DNA sample is all well and good but the analytic process - actually getting useful information from that DNA sample - is substantially more problematic. Depending on the process required it can take anywhere from hours to weeks; additionally, the accuracy of the process leaves much to be desired because, as it turns out, collision attacks apply to forensic DNA evidence, too. So, it is with some trepidation that I state that IntegenX has developed a revolutionary new DNA sequencer. Given a DNA sample from a cheek swab or an object thought to have a DNA sample on it (like spit on a cigarette butt or a toothbrush) the RapidHIT can automatically sequence, process, and profile the sample using the most commonly known and trusted laboratory techniques in use today. The RapidHIT is also capable of searching the FBI's Combined DNA Index System (CODIS) for positive matches. Several arms of the US government are positioning themselves to integrate this technology into their missions, but IntegenX CEO Robert Schueren claims that the company does not know how their technology is being applied. In areas of the United States widely known to be hostile if one looks as if they "aren't from these parts" the RapidHIT has been just that - a hit - and local LEOs are reportedly quite happy with their new purchases. Time will tell what happens, and what the aftershocks of cheap and portable DNA sequencing are.
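
To put a rough number on why "collision attacks" worry me: even if the odds of two unrelated people sharing a profile are minuscule, the number of pairs in a big database grows quadratically, so coincidental matches become a statistical near-certainty (the birthday problem). The per-pair probability below is a figure I pulled out of the air purely to illustrate the arithmetic - it is not a real CODIS statistic.

    # Back-of-the-envelope illustration of why coincidental matches show up in
    # large DNA databases (the birthday problem). The per-pair match
    # probability used here is an invented illustrative figure, not a real
    # forensic statistic.
    from math import comb

    def expected_coincidental_matches(db_size, per_pair_match_prob):
        # Expected number of unrelated pairs that happen to match:
        # (number of possible pairs) * (probability any given pair matches)
        return comb(db_size, 2) * per_pair_match_prob

    for n in (100_000, 1_000_000, 10_000_000):
        hits = expected_coincidental_matches(n, 1e-9)   # 1e-9 is an invented figure
        print(f"{n:>10,} profiles -> ~{hits:,.0f} expected chance matches")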

Most living things on Earth that operate on a level higher than that of tropism seem to possess some form of memory that records environmental encounters and influences the organism's later activities. There are some who postulate that some events may be permanently recorded in one's genome, phenomena variously referred to as genetic memory, racial memory, or ancestral memory, though the evidence supporting these assertions is scant to nonexistent. When you get right down to it, it's tricky to edit DNA in a meaningful way that doesn't destroy the cells so altered. On that note, I find it very interesting that a research team at MIT in Cambridge seems to have figured out a way to go about it, though it's not a straightforward or information-dense process. The process is called SCRIBE (Synthetic Cellular Recorders Integrating Biological Events) and makes it possible for a cell to modify its own DNA in response to certain environmental stimuli. The team's results were published in volume 346, issue number 6211 of Science, but I'll summarize the paper here. Into a culture of E. coli bacteria a retron (weird little bits of DNA covalently bonded to bits of RNA, which code for reverse transcriptases (enzymes that synthesize DNA using RNA as code templates) and are not found in chromosomal DNA) was installed that would produce a unique DNA sequence in the presence of a certain environmental stimulus, in this case a certain frequency of light. When the bacteria replicated (and in so doing copied their DNA) the retron would mutate slightly to make another gene, one that coded for resistance to a particular antibiotic, more prominent. At the end of the experiment the antibiotic in question was added to the experimental environments; cells which had built up a memory store of exposure to light were more resistant to the antibiotic. Prevalence of the antibiotic resistance gene was verified by sequencing the genomes of the bacterial cultures. At this time the total cellular memory provided by this technique isn't much. At best it's enough to gauge, in an analog fashion, how much of something was present in the environment or for how long, but that's about it. After a few years of development, on the other hand, it might be possible to use this as an in vivo monitoring technique for measuring internal trends over time (such as radiation or chemical exposure). Perhaps farther down the line it could be used as part of a syn/bio computing architecture for in vitro or in vivo use. The mind boggles.
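
The "analog" part is worth dwelling on: what gets read out isn't a bit stored in any single cell, it's the fraction of the population that picked up the marker, which grows the longer the stimulus hangs around. Here's a toy simulation of that population-level readout; the rates and cell counts are invented for illustration and have nothing to do with the actual SCRIBE chemistry or numbers.

    # Toy simulation of population-level "analog memory": while the stimulus
    # (light) is present, each unmarked cell has a small per-generation chance
    # of acquiring the resistance marker. The fraction of marked cells then
    # encodes roughly how long the stimulus lasted. All figures are invented.
    import random

    def grow_culture(generations_with_light, total_generations=40,
                     cells=10_000, flip_rate=0.02):
        marked = 0
        for gen in range(total_generations):
            if gen < generations_with_light:              # stimulus present
                marked += sum(1 for _ in range(cells - marked)
                              if random.random() < flip_rate)
        return marked / cells                             # the analog "memory" readout

    for exposure in (0, 5, 10, 20, 40):
        print(f"{exposure:2d} generations of light -> "
              f"{grow_culture(exposure):.1%} of cells carry the marker")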

The Doctor | 24 November 2014, 09:15 hours | default | No comments

Neuromorphic navigation systems, single droplet diagnosis, and a general purpose neuromorphic computing platform?

The field of artificial intelligence has taken many twists and turns on the journey toward its as-yet unrealized goal of building a human-equivalent machine intelligence. We're not there yet, but we've found lots of interesting things along the way. One of the things that has been discovered is that, if you understand the brain (or a piece of it) well enough (and there are degrees of approximation, to be sure) it's possible to use what you know to build logic circuits that work the same way - neuromorphic processing. The company AeroVironment recently test-flew a miniature drone which had as its spatial navigation system a prototype neuromorphic processor with 576 synthetic neurons, which taught itself how to fly around a sequence of rooms it had never been in before. The drone's navigation system was hooked to a network of positional sensors - ultrasound, infra-red, and optical. This sensor array provided enough information for the chip to teach itself where potential obstacles were, where the drone itself was, and where the exits joining rooms were - enough to explore the spaces on its own, without human intervention. When the drone re-entered a room it had already learned (because it recognized it from its already-learned sensor data) it skipped the learning cycle and went right to the "I recognize everything and know how to get around" part of the show, which took a significantly shorter period of time. Drones are pretty difficult to fly at the best of times, so any additional amount of assistance that the drone itself can give would be a real asset (as well as an aid to civilian uptake). The article is otherwise a little light on details; it seems to assume that the reader is already familiar with a lot of the relevant background material. I think I'll cut to the chase and say that this is an interesting, practical breakthrough in neuromorphic computing - in other words, they're doing something fairly tricky yet practical with it.

When you get right down to it, medical diagnosis is a tricky thing. The body is an incredibly complex, interlocking galaxy of physical, chemical, and electrical systems, all with unique indicators. Some of those indicators are so minute that unless you knew exactly what you were looking for, and searched for it in just the right way, you might never know something was going on. Earlier I wrote briefly about Theranos, a lab-on-a-chip implementation that can accurately carry out several dozen diagnostic tests on a single drop of blood. Recently, the latest winners of Nokia's Sensing XChallenge prize were announced - the DNA Medical Institute for rHEALTH, a hand-held diagnostic device which can accurately diagnose several hundred medical conditions with blood gathered from a single fingerstick. The rHEALTH hand-held unit also gathers biostatus information from a wireless self-adhesive sensor patch that measures pulse, temperature, and EKG information; the rHEALTH unit is slaved to a smartphone over Bluetooth, where presumably an app does something with the information. The inner workings of the rHEALTH lab-on-a-chip are most interesting: The unit's reaction chamber is covered with special purpose reagent patches and (admittedly very early generation) nanotech strips that separate out what they need, add the necessary test components, shine light emitted by chip-lasers and micro-miniature LEDs, and analyze the light reflected and refracted inside the test cell to identify chemical biomarkers indicative of everything from a vitamin-D deficiency to HIV. The unit isn't in deployment yet; it's still in the "we won the prize!" stage of practicality, something that Theranos has on them at this time.

Let's admit an uncomfortable truth to ourselves: We as people take computers for granted. The laptop I write this on, the tablet or phone you're probably reading this on, the racks and racks and racks of servers in buildings scattered all over the world that run pretty much everything important for life today - we scarcely think of them unless something goes wrong. Breaking things down a little bit, computers all do pretty much the same thing in the same way: They have something to store programs and data in, something to pull that data out to process it, someplace to put data while it's being processed, and some way to output (and store) the results. We normally think of the combination of a hard drive, a CPU, RAM, and a display fitting this model, called the von Neumann architecture. Boring, everyday stuff today, but when it was first conceived of by Alan Turing and John von Neumann in their separate fields of study it was revolutionary because it had never been done before. As very complex things are wont to be, the CPUs we use today are themselves recreations of that very architecture in miniature: For storage there are registers, for the actual manipulation of data there is an arithmetic/logic unit, and one or more buses output the results to other subsystems. ALUs themselves I can best characterize as Deep Magick; I've been studying them off and on for many years and I'm working my way through some of the seminal texts in the field (most recently Mead and Conway's Introduction to VLSI Systems) and when you get right down to it, that so much is possible with long chains of on/off switches is mind boggling, frustrating, humbling, and inspiring.
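
To make the "long chains of on/off switches" point concrete, here's the standard textbook construction of a one-bit full adder out of nothing but NAND gates - not anything out of Mead and Conway specifically, just the basic idea in a dozen lines of Python.

    # Everything below is built from a single primitive, NAND, to show how
    # chains of on/off switches compose into arithmetic. Standard textbook
    # full adder, nothing more.
    def NAND(a, b): return 0 if (a and b) else 1

    def NOT(a):    return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))
    def OR(a, b):  return NAND(NOT(a), NOT(b))
    def XOR(a, b): return AND(OR(a, b), NAND(a, b))

    def full_adder(a, b, carry_in):
        partial = XOR(a, b)
        total = XOR(partial, carry_in)
        carry_out = OR(AND(a, b), AND(partial, carry_in))
        return total, carry_out

    # 1 + 1 with no carry in: sum bit 0, carry bit 1 (binary 10)
    print(full_adder(1, 1, 0))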

Getting back out of the philosophical weeds, some interesting developments in the field of neuromorphic computing, or processing information with brain-like circuitry instead of logic chains, have come to light. Google's DeepMind team has figured out how to marry practical artificial neural networks to the von Neumann architecture, resulting in a neural network with non-static memory that can figure out on its own how to carry out some tasks, such as searching and sorting elements of data, without needing to be explicitly programmed to do so. It may sound counter-intuitive, but researchers working with neural network models have not, as far as anybody knows, married what we usually think of as RAM to a neural net. Usually, once the 'net is trained it's trained, and that's the end of it. Writeable memory makes them much more flexible because it gives them the capability to put new information aside as well as potentially swap out old stored models. Additionally, such a model is pretty definitively known to be Turing complete: If something can be computed on a hypothetical universal Turing machine it can be computed on a neural-network-with-RAM (more properly referred to as a neural Turing machine, or NTM). To put it another way, there is nothing preventing an NTM from doing the same thing the CPU in your tablet or laptop can do. The progress they've reported strongly suggests that this isn't just a toy; they can do real-world kinds of work with NTMs that don't cause them to break down. They can 'memorize' data sequences of up to 20 entries without errors, handle between 30 and 50 entries with minimal errors (something that many people might have trouble doing rapidly because that's actually quite a bit of data), and can reliably work on sets of 120 data elements before errors can be expected to start showing up in the output.
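
To give a sense of what "marrying RAM to a neural net" looks like mechanically, here's a bare-bones sketch of the content-based read operation the paper describes: the controller emits a key vector, the key is compared against every row of memory, and the read result is a softmax-weighted blend of the rows, which keeps the whole thing differentiable and therefore trainable. This is my own illustration in NumPy of the addressing step only - it leaves out the write heads, the location-based addressing, and the controller network itself.

    # Minimal sketch of an NTM-style content-based memory read in NumPy.
    # Only the addressing/read step is shown; the controller network, write
    # heads, and location-based addressing are omitted.
    import numpy as np

    def cosine_similarity(key, memory):
        # Compare the controller's key vector against every row of memory.
        key_norm = key / (np.linalg.norm(key) + 1e-8)
        mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
        return mem_norm @ key_norm

    def content_read(memory, key, sharpness=5.0):
        # Softmax over similarities yields differentiable attention weights,
        # so the read can be trained end-to-end by backpropagation.
        scores = sharpness * cosine_similarity(key, memory)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ memory, weights   # blended read vector, plus where it looked

    rng = np.random.default_rng(0)
    memory = rng.normal(size=(8, 4))              # 8 slots, 4 numbers per slot
    key = memory[3] + 0.05 * rng.normal(size=4)   # noisy query for slot 3
    value, weights = content_read(memory, key)
    print(np.argmax(weights))                     # should point back at slot 3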

What's it good for, you're probably asking. Good question. From what I can tell this is pretty much a proof-of-concept sort of thing right now. The NTM architecture seems to be able to carry out some of the basic programming operations, like searching and sorting; nothing that you can't find in commonly available utility libraries or code yourself in a couple of hours (which you really should do once in a while). I don't think Intel or ARM have anything to worry about just yet. As for what the NTM architecture might be good for in a couple of years, I honestly don't know. It's Turing complete so, hypothetically speaking, anything that could be computed could be computed with one. Applications for sorting and searching data are the first things that come to mind, even on a personal basis. That Google has an interest in this comes as no surprise when taking into account the volume of data their network deals with on a daily basis (certainly in excess of 30 petabytes every day, which is... a lot, and probably much, much more than that). I can't even think that far ahead, so keep an eye on where this is going.

The Doctor | 18 November 2014, 08:00 hours | default | No comments

R.A. Montgomery, of the Choose Your Own Adventure books, dead at age 78.

Children of the 80's will no doubt remember the shelves and shelves of little white paperbacks with red piping from the Choose Your Own Adventure series, where you could play as anything from a deep sea explorer to a shipwrecked mariner to a volunteer time traveler, or anything in between. If you're anything like me, you also spent way too much time looking for mistakes in the sequence of pages to find more interesting twists and no shortage of endings (most of them bad). I can't say whether they ever went out of print, but they did become harder to find in stores for several years. More recently Chooseco was founded to pick up the torch, reissue some of the older books, and publish new ones. R.A. Montgomery's business practices were unique at the time, which is to say that every author who wrote books in the series was credited by name for their work, instead of the books being credited to the series' founder (which was common in the industry then).

I'm sorry to report that R.A. Montgomery, one of the first authors of the Choose Your Own Adventure series and a contributor for nearly its entire history, died on 9 November 2014 at the age of 78 at his home in Vermont. The cause of his death is not known at this time. He is survived by his wife, a son, two grand-daughters, a sister, and a daughter-in-law.

A private memorial service will be held in early 2015.

Mr. Montgomery, thank you for everything you've done and written over the years. You were an inspiration to me when I was younger, and I still have a few dozen of your books in my collection. You will surely be missed.

The Doctor | 15 November 2014, 18:20 hours | default | Two comments

Reversing progressive memory loss, transplantable 3D printed organs, and improvements in resuscitation.

Possibly the most frightening thing about Alzheimer's Disease is the progressive loss of self; many humans measure their lives by the continuity of their memories, and when that starts to fail, it calls into question all sorts of things about yourself... as long as you're able to think about them. I'm not being cruel, I'm not cracking wise; Alzheimer's is a terrifying disease because it eats everything that makes you, you. Thus, it is with no small feeling of hope that I link to these results at the Buck Institute for Research On Aging - in a small trial at UCLA of patients who had suffered several years of progressive, general memory loss, researchers were able to objectively improve memory functioning and quality of life in 90% of the test subjects between three and six months after beginning the protocol. A late stage Alzheimer's patient in the test group did not improve. The program was carefully tailored to each test subject and makes the assumption that Alzheimer's is not a single disease but a process involving a complex of different phenomena. This is why, it is hypothesized, single-drug treatments have not been successful to date. The treatment protocol tested involved changes of diet, modulation of stress levels, exercise, sleep modulation, a regimen of supplements observed to have some influence over the maintenance and genesis of nerves, and a daily pattern which seemed to serve as a framework to hold everything in balance. The framework is unfortunately fairly complex, and at least at first a caregiver may need to be involved in helping the patient. Looking at how everything fits together it seems to me that there may also be elements of cognitive behavioral therapy involved, or at least emergent in the process. Interestingly, six of the patients who had to quit their jobs due to encroaching dementia were able to go back to work (it saddens me that there are people who need to work rather than enjoy their lives after a certain age). I don't know if this is going to catch on - protocols like this tend to slip through the cracks of medical science - but it's definitely something worth keeping an eye on.

Longtime readers are no doubt aware that bioprinting, or using 3D printers to fabricate biological structures, is an interest of mine. Think of it: Running off replacement organs, specific to the patient, with no chance of rejection and less possibility of opportunistic infection because immunosuppressants don't have to be used. It's already possible to fab fairly complex biological structures thanks to advances in materials science, but now it's time to get ambitious... a company called 3D Bioprinting Solutions just outside of Moscow, Russia announced that by 15 March 2015 they will demonstrate a 3D printed, viable, transplantable organ. They claim that they have the ability to fab a functional thyroid gland using cloned stem cells from a laboratory test animal for a proof of concept implementation. The fabbed organ will mature in a bioreactor (which are apparently now advanced enough to be commodity lab equipment) for a certain period of time before being implanted in the laboratory animal; if all goes according to plan, the lab animal should show no signs of rejection, hormone imbalance, or metabolic imbalance. I realize I might be going out on a limb here (I try not to be too optimistic) but, looking at the progression of bioprinting technology since 2006, I think they've got a good chance of success next year. Additionally, I think they might make good on their hopes of fabbing a living, functioning kidney some time next year. And after that? Who knows.

Television to the contrary, resuscitating someone whose heart isn't functional is far from a sure thing. Bodies only have a certain amount of oxygen and glucose dissolved in the bloodstream, and when you factor in the metabolic load of the brain (roughly one-fifth of the body's resting oxygen utilization alone) there isn't much to work with after very long. Additionally... well, I'd be rewriting this excellent article on resuscitation, which pretty clearly explains why the survival rate of cardiac arrest is between 5% and 6% of patients, depending on whom you talk to. Of course, that factors in luck, where and when the patient entered cardiac arrest, how young and healthy they are or are not, and how strong their will to survive is. Due to hypoxia a certain amount of brain damage is almost a certainty; maybe just a few neurons, maybe a couple of neural networks, but sometimes the damage is extreme. About ten years ago the AMA started to look at the data and switched up a few things in the generally accepted resuscitation protocol; the Journal of Emergency Medical Services published an interesting summary recently, of which I'll quote bits and pieces. Assuming a fallen patient in ventricular fibrillation: paramedics gaining access to a long bone in the body for intraosseous infusion because it offers better access to the circulatory system (yeah, I just cringed, too) for drug administration, the induction of medical hypothermia to slow metabolism (which was maintained for a period of time following resuscitation), machine-timed ventilation, and the application of a likely scary number of electrical shocks prior to transportation to the hospital approximately forty minutes later... the survival rate in such situations is now somewhere around 83% (even factoring in a statistical outlier case which lasted 73 minutes). Occurrence of post-cardiac arrest syndrome was minimized by maintenance of medical hypothermia, and patients are routinely showing minimal to no measurable neurological impairment.

I'd call that more than a fighting chance.

The Doctor | 13 November 2014, 09:30 hours | default | No comments

Inducing neuroplasticity and the neurological phenomenon of curiosity.

For many years it was believed by medical science that neuroplasticity, the phenomenon in which the human brain rapidly and readily creates neuronal interconnections, tapered off as people got older. Children are renowned for learning anything and everything that catches their fancy (not always what we'd wish they'd learn) but the learning process seems to slow down the older they get. As adults, it's much harder to learn complex new skills from scratch. In recent years, a number of compounds have been developed that seem to kickstart neuroplasticity again, but they're mostly used for treating Alzheimer's Disease and not so readily as so-called smart drugs. However, occasionally an interesting clinical trial pops up here and there. Enter donepezil: A cholinesterase inhibitor which increases ambient levels of acetylcholine in the central nervous system. At Boston Children's Hospital not too long ago, Professor Takao Hensch of Harvard University administered a regimen of donepezil to a 14 year old girl being treated for lazy eye, or subnormal visual development in one eye. Similar to using valproate to kickstart critical period learning in the auditory cortex, administration of donepezil seems to have caused the patient's visual cortex to enter a critical period of learning and catch up to the neural circuitry driving her dominant eye. The patient's weaker eye grew measurably stronger and her vision more acute than before the test program began. What is not clear is whether this is a sense-specific improvement (i.e., does donepezil only improve plasticity in the visual cortex, or will it work in a more holistic way upon the human brain?). It's too early to tell, and we don't yet have enough data, but the drug's clinical use for treating Alzheimer's seems to imply the latter. Definitely a development to monitor because it may be useful later.

As I mentioned earlier, children are capable of learning incredibly rapidly. This is in part due to neural plasticity, and in part due to a burning curiosity about the world around them which comes from being surrounded by novelty. When one doesn't have a whole lot of life experience, the vistas of the world are bright, shiny, and new. Growing older and building a larger base of knowledge upon which to draw (as well as the public school system punishing curiosity in an attempt to get every student on the same baseline) dims curiosity markedly, and it's hard to hang onto that sense of wonder and inquisitiveness the older one gets. Dr. Matthias Gruber and his research team at UC Davis have been studying the neurological phenomenon of curiosity, and their work seems to shore up something that gifted and talented education teachers have been saying for years. In a nutshell, when someone is curious about the topic of a question they are more likely to retain the information for longer periods of time because the mesolimbic dopamine system - the reward pathways of the brain - fires more often, and consequently increases activity in the hippocampus, which is involved in the creation and retrieval of long term memories. To put it another way, if you're interested in what you're learning, you're going to enjoy learning, and consequently what you're learning will stick better. So, what do we do with this information? It seems to inform some strategies for both pedagogy and autodidacticism, in that it might be easier to learn something less interesting by riding the reward-system "high" from studying something more captivating in tandem. Coupled with a strategy of chunking (breaking a body of information into smaller pieces which are studied separately) it might be possible to switch off between more interesting and less interesting subjects in a study session and retain the less interesting stuff more reliably. This is pretty much one of the strategies I used in undergrad; while I didn't gather any metrics for later review and analysis, I did just this when studying things that I found less interesting or problematic, and definitely did better on exams and final grades. One thing I did notice is that the subject matter could not be too wildly different; alternating calculus and Japanese didn't work very well, for example, but calculus and computational linguistics worked well together. Experimenting with such a strategy is left as an exercise for the motivated reader.
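
If you want to experiment with the interleaving strategy yourself, the bookkeeping is trivial to automate. Here's a throwaway sketch that alternates chunks of a captivating subject with chunks of a dry one; the subjects and chunk lengths are placeholders, obviously.

    # Throwaway sketch of an interleaved study plan: alternate chunks of a
    # subject you're curious about with chunks of one you're not, per the
    # strategy above. Subjects and timings are placeholders.
    from itertools import zip_longest

    def interleave(captivating, dry, minutes_per_chunk=25):
        # Alternate chunks of the two subjects into one session plan.
        plan = []
        for fun, boring in zip_longest(captivating, dry):
            for chunk in (fun, boring):
                if chunk is not None:
                    plan.append((chunk, minutes_per_chunk))
        return plan

    linguistics = ["finite-state morphology", "parsing", "n-gram models"]
    calculus = ["limits", "derivatives", "the chain rule"]

    for topic, minutes in interleave(linguistics, calculus):
        print(f"{minutes} min: {topic}")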

The Doctor | 24 October 2014, 09:45 hours | default | One comment

Congratulations, Asher Wolf!

Congratulations to Telecomix alumnus Asher Wolf, who was awarded the 2014 Print/Online Award for Journalism along with Oliver Laughland and Paul Farrell at the Amnesty International Australia Media Awards on 21 October 2014!



The Doctor | 23 October 2014, 20:51 hours | default | No comments

Genetically modified high school grads, stem cell treatment for diabetes, and deciphering memory engrams.

A couple of years ago I did an article on the disclosure that mitochondrial genetic modifications were carried out on thirty embryos in the year 2001 to treat mitochondrial diseases that would probably have been fatal later in life. I also wrote in the article that this does not constitute full scale genetic modification a la the movie Gattaca. It is true that mitochondria are essential to human life but they do not seem to influence any traits that we usually think about, such as increased intelligence or hair color, as they are primarily involved in metabolism. In other words, mitochondrial manipulation does not seem to fundamentally change a person's morphology. While I cannot speak to the accuracy of the news site inquisitr.com, they recently published an article that got me thinking: Those children whose mitochondrial configuration was altered before they were born in an attempt to give them healthy, relatively normal lives are probably going to graduate from high school next year. We still don't know who those kids are or where they're living, nor do we really know what health problems they have right now, if they have any, that is. We do know that a followup is being done at this time but we're probably not going to find out the results for a while, if at all. We also don't know the implications for the children of those kids years down the line. The mitochondrial transfer process broke new ground when it was carried out and I don't know if it's been done since. My gut says "no, probably not."

I don't actually have a whole lot to say on this particular topic due to privacy concerns. Let's face it, these are kids growing up and trying to figure out their lives and it seems a little creepy to go digging for this kind of information. As far as we know, data's being collected and hopefully some of the results will be published someplace we can read them. Hard data would be nice, too, so we can draw our own conclusions. Definitely food for thought no matter how you cut it.

In other news, Type 1 diabetes is a condition in which the patient's body does not manufacture the hormone insulin (warning: Broken JavaScript, some browsers may complain) and thus cannot regulate the use of sugar as fuel. Over time, poorly managed blood sugar levels will wear away the integrity of your body, and your health along with it. I've heard it said that you've got 20 good years at most once the diagnosis comes down the wire. Type 1 diabetes is treated primarily with the administration of insulin, if not through injection then through an implanted pump or biostatus monitor. A research team at Harvard University headed up by professor Doug Melton has made a breakthrough in stem cell technology - they've been able to replicate clinically active numbers of beta cells in vitro, hundreds of thousands at a time, which appear to be usable for implantation. Beta cells reside within pancreatic structures called the islets of Langerhans and do the actual work of secreting and releasing insulin on demand. Trials of the replicated beta cells in diabetic nonhuman primates are reportedly looking promising; after implantation they're not being attacked by the immune system, they've been observed to be thriving, and they're producing insulin the way they're supposed to when compared to nondiabetic lifeforms. Word on the street has it that they're ready to begin human clinical trials in a year or two. Whether or not this would constitute a permanent cure for type 1 diabetes in humans remains to be seen, but I think it prudent to remain hopeful.

One of the bugaboos of philosophy and psychology is qualia - what a sentient mind's experience of life is really like. Is the red I see really the red you see? What about the sound the movement of leaves makes? Are smells really the same to different people? The experience of everything that informs us about the outside world is unique from person to person. A related question that neuroscience has been asking since it first began reverse engineering the human brain is whether or not there is a common data format underlying the same sensory stimuli across different people. If everybody's brain is a little different, will similar patterns of electrical activity arise due to the same stimuli? The implications for neuroscience, bioengineering, and artificial intelligence would be profound if there were. A research team based out of Washington University in Saint Louis, Missouri published a paper in the Proceedings of the National Academy of Sciences with evidence that this is exactly the case. The research team used a scene from an Alfred Hitchcock movie in conjunction with functional magnetic resonance imaging to map the cognitive activity of test subjects for analysis. The idea is that the test subjects watched the same movie clip under observation, and the fMRI scan detected the same kinds of cognitive activity across the test subjects in response. This seems to support the hypothesis that similar patterns of quantifiable neurological activity occurred in the brains of all of the test subjects. To test the hypothesis further, the process was repeated with two test subjects who have been in persistent vegetative states for multiple years. Long story short, the PVS patients were observed to show quantifiably similar patterns of neurological activity in response to being subjected to the same Hitchcock scene. This implies that, on some level, the patients are capable of receiving sensory input from the outside world and interpreting it - thinking about the content, context, and meta-context using the executive functions of the brain. This also seems to cast doubt upon the actual level of consciousness that patients in persistent vegetative states possess...
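
The claim boils down to different subjects' activity time courses lining up when they watch the same clip, and the usual way to quantify that is inter-subject correlation. Here's a toy version of that computation - my own illustration, not the team's actual pipeline - using a shared "stimulus-driven" signal plus per-subject noise as a stand-in for real fMRI data.

    # Toy inter-subject correlation: a shared stimulus-driven signal plus
    # per-subject noise stands in for a brain region's fMRI time course.
    # Subjects "watching the same clip" correlate; shuffled data does not.
    import numpy as np

    rng = np.random.default_rng(42)
    timepoints, subjects = 200, 6
    stimulus = np.sin(np.linspace(0, 12 * np.pi, timepoints))        # shared, clip-driven signal
    data = stimulus + 0.8 * rng.normal(size=(subjects, timepoints))  # plus each subject's own noise

    def mean_pairwise_correlation(timeseries):
        corr = np.corrcoef(timeseries)                    # subjects x subjects
        upper = corr[np.triu_indices_from(corr, k=1)]     # unique off-diagonal pairs
        return upper.mean()

    print("same clip:       ", round(mean_pairwise_correlation(data), 2))
    shuffled = np.array([rng.permutation(row) for row in data])
    print("shuffled control:", round(mean_pairwise_correlation(shuffled), 2))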

The Doctor | 23 October 2014, 09:15 hours | default | Three comments

Cardiac prosthetics and fully implanted artificial limbs.

No matter how you cut it, heart failure is one of those conditions that sends a chill down your spine. When the heart muscle grows weak and inefficient, it compromises blood flow through the body and can cause a host of other conditions, some weird, some additionally dangerous. Depending on how severe the condition is there are several ways of treating it. For example, my father-in-law has an implanted defibrillator that monitors his cardiac activity, though fairly simple lifestyle changes have worked miracles for his physical condition in the past several years. Left ventricular assist devices, implantable pumps that connect directly to the heart to assist in its operation, are another way of treating heart failure. Recently, a research team at the Wexner Medical Center of Ohio State University reported remarkable results with a new assistive implant called the C-Pulse. The C-Pulse is a flexible cuff that wraps around the aorta and monitors the electrical activity of the heart; when the heart muscle contracts the C-Pulse contracts a fraction of a second later, which helps push blood through the aorta to the rest of the body. A lead passes through the skin of the abdomen and connects to an external power pack to drive the unit. The test group consisted of twenty patients with either class III or class IV heart failure. The patients were assessed six and twelve months after the implantation procedure, and, amazingly, a full 80% of them showed significant improvements; three had no remaining symptoms of heart failure. Average quality of life metrics improved a full thirty points among the test subjects. I'm not sure where they're going next, but I think a full clinical trial is on the horizon for the C-Pulse. One to keep an eye on, to be sure.

A common problem with prosthetics, be it a heart, an arm, or what have you, is running important bits through the skin to the outside world. Whenever you poke a hole through the skin you open a door to the wide, fun world of opportunistic infections. Anything and everything that can possibly sneak through that gap in the perimeter and set up shop in the much more hospitable environment of the human body will try. This is one of the major reasons why permanently attaching prosthetic limbs has been so difficult. To date, various elaborate mechanisms which temporarily attach prosthetic limbs to the remaining lengths of limbs - straps, fitted cups, and temporary adhesives - have been tried with varying degrees of success. At the Royal National Orthopaedic Hospital in London they've begun clinical trials of ITAP, or Intraosseous Transcutaneous Amputation Prosthesis. In a nutshell, they've figured out how to implant attachment sockets in the remaining bones of limb amputees that can penetrate the skin with minimal risk of infection by emulating how deer antlers pass through and bond with the skin. This means that prosthetic limbs can be locked onto the body and receive just as much physical support as organic limbs do (if not slightly more). Test subject Mark O'Leary of south London received one of the ITAP implants in 2008 (yep, six years ago and only now is it getting any press) and was amazed not only at how well his new prosthetic limb worked, but at being able to feel the road and ground through the prosthetic and into the organic part of his leg. Discomfort on the end of his organic limb is also minimized because there is no direct hard plastic-on-skin contact causing him pain. Apparently not one to do things by halves, O'Leary put his new prosthetic limb to the test by undertaking a 62-mile walk on it, and for an encore he climbed Mount Kilimanjaro with it.

Another hurdle toward the goal of fully operational prosthetic limbs has been restoring the sense of touch. Experiments have been done over the years with everything from piezoelectric transducers to optical and capacitive pressure sensors, but mostly they've been of use to robotics research and not prosthetics, because the bigger problem of figuring out how to patch into nerves on a permanent basis was impeding progress. At Case Western Reserve University a research team successfully accessed the peripheral sensory nerves of amputees and then figured out which patterns of electrical stimulation on which nerves felt like which parts of the patients' missing hands. The inductive nervelinks were connected to arrays of sensors mounted on artificial arms developed at Case Western and the Louis Stokes Veterans Affairs Medical Center in Cleveland, Ohio. Long story short, the patients can not only sense pressure, they can tell the difference between cotton, grapes, and other materials. Even more interesting, sensory input from the prosthetic limbs relieved phantom limb pain suffered by some of the test subjects. Additionally, the newly installed sense of touch has given the test subjects heretofore unparalleled dexterity in their prosthetic limbs; one test subject was able to pluck stems from grapes and cherries without crushing the fruit while blindfolded. Elsewhere in the field of limb replacement, a groundbreaking procedure carried out in Sweden in 2013 (I had no idea; one of my net.spiders discovered this by accident) combines the previous two advances. At the Chalmers University of Technology a research team headed up by Max Ortiz Catalan used ITAP techniques in conjunction with transdermal nervelinks to integrate a prosthetic limb into an unnamed patient's body. The patient has been using the limb on the job for over a year now, and can also tie shoelaces and pick up eggs without breaking them. A true cybernetic feedback loop between the brain and the prosthetic limb appears to have been achieved, leading to intuitive control over the prosthetic. The patient has shown long term ability to maintain control over and sensory access to the prosthetic limb outside of a laboratory environment. The direct skeletal connection provides mechanical stability and ease of connectivity for the limb without any need for structural adjustment. The nervelinks mean that less effort is required of the wearer to manipulate the limb, that dexterity is greater because the brain's intuitive proprioceptive sense can be exploited, and that there is no need for recalibration because the nervelinks don't really change position.

Excited about the future? I am.

The Doctor | 20 October 2014, 08:42 hours | default | No comments

Synaesthesia and noise-cancelling headphones.

I've never really gone out of my way to publicize the fact that I'm a synesthete - my senses are cross-wired in ways that aren't within the middle of the bell curve. In particular, my sense of hearing is directly linked to my senses of sight, touch, proprioception, and emotional state. As one might expect, this causes a few problems in day to day life - I can't go to concerts without wearing earplugs because I shut down from sensory overload, and too much noise makes it nearly impossible to see (and thus, get anything done). The new office at work poses a particular problem because it has an open floor plan, and lots of hard and polished surfaces. This makes background noise extremely difficult to deal with because everything echoes and rattles. I'm not the only person at work who's been having trouble due to the noise, either. To help with the noise problem while still allowing us to do what we need to do (including teleconferencing) they bought each of us a pair of Bose QC-25 Noise Cancelling Headphones, which are both incredibly expensive and very helpful.

When I first put them on I was shocked that I could hear absolutely nothing at all, as if I'd put custom-made silicone rubber earplugs in. It was as if everything had suddenly gone away - like a switch had been thrown inside my head. My vision cleared, no phantom sensations... is this what it's like to have a baseline sensory cortex?

Flipping the switch on the side of the headphones to activate the noise cancelling features resulted in even deeper silence with nothing playing through them. I can't be sure, but I think the noise cancellation mechanism was filtering out the sound of my breathing, or at least the experimentation I did seems to suggest this. There is quiet, and there is dead silence. These headphones seem to manufacture the latter. As for using them as headphones, I'm extremely impressed with the clarity, range, and depth of the sound they generate. Old favorites sound wonderful and new music sounds crisp, clear, and attention-grabbing. These headphones even seem to do a certain amount of noise cancellation and cleanup of whatever sound you run through them; I listened to a couple of old bootlegs from the mid-1980's and they sounded remarkably clearer. The tape hiss and crackle from age are almost completely gone, which brings out the music and vocals much more sharply. I noted that I was listening to those bootlegs with the volume much lower than usual, and got much more out of the experience at the same time. On the whole, it is significantly easier to concentrate at work now and I find myself much more productive while wearing them because there is significantly less distraction that I have to filter out. They feel great, too - the QC-25's are very light, have earcups large enough to fit over one's ears with room to spare, and do not seem to trap heat and cause uncomfortable sweating. The QC-25 also has a built-in voice-activated microphone and a standard 4-pin, 2.5mm headphone/microphone plug for a smartphone, which works just fine in a regular headphone jack. I know there are some laptops out there which have a smartphone-style combo jack, but Windbringer is not one of them, so I can't attest to how well the microphone works. I haven't tried it on my phone yet, either, so I can't speak to how well it works for teleconferencing.
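
For the curious, the core trick behind active noise cancellation is destructive interference: sample the ambient noise with an outward-facing microphone, invert it, and add it back in so it (mostly) sums to zero at your ear. Here's a toy demonstration of the arithmetic, which has nothing to do with Bose's actual signal processing.

    # Toy demonstration of the destructive-interference idea behind active
    # noise cancellation: add an inverted copy of the measured noise to the
    # signal. Real headphones do this with analog/DSP feedback loops and
    # imperfect microphones; this only shows the arithmetic.
    import numpy as np

    t = np.linspace(0.0, 1.0, 8000)
    music = 0.5 * np.sin(2 * np.pi * 440 * t)         # what you want to hear
    noise = 0.3 * np.sin(2 * np.pi * 120 * t + 0.7)   # low-frequency office rumble
    anti_noise = -0.95 * noise    # microphone + inversion, deliberately 5% imperfect

    def rms(x):
        return float(np.sqrt(np.mean(x ** 2)))

    # The residual rumble after cancellation is a twentieth of what it was;
    # the music term is untouched in this toy model.
    print("signal-to-noise, cancellation off:", round(rms(music) / rms(noise), 1))
    print("signal-to-noise, cancellation on: ", round(rms(music) / rms(noise + anti_noise), 1))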

I think I've fallen in love with this set of headphones, and I'm considering buying a pair for myself.

The Doctor | 16 October 2014, 10:00 hours | default | Three comments

Visiting the Computer History Museum.

A couple of months ago, Amberite and I visited the Computer History Museum in Mountain View, California with his father. I'll admit, I wasn't sure what to expect on the way over there. I've been to the Smithsonian quite a few times but the Computer History Museum is just that: Dedicated to the entire history of computing and nothing but. There are exhibits of the history of robotics, video games, military equipment, and of course one of practically every personal computer ever made, from the Amstrad CPC (which never really had a large community in the States, though it was quite popular in Europe) to my first love and joy, the Commodore 64. I took so many pictures there that the battery in my camera died, and I had to fall back on my cellphone.

Here's my photo album.

The Doctor | 13 October 2014, 10:15 hours | images | One comment

Notes from the Artificial Intelligence and the Singularity conference in September.

As I've mentioned several times before, every couple of weeks the Brighter Brains Institute in California holds a Transhuman Visions symposium, where the topic of presentation and discussion is a little different each time. Last month's theme was Artificial Intelligence and the Singularity, a topic of no small amount of debate in the community. Per usual, I scribbled down a couple of pages of notes that I hope may be interesting and enlightening to the general public. A few of my own insights may be mixed in. Later on, a lot of stuff got mixed together as I only wrote down assorted interesting bits and stopped separating the speakers in my notes. My bad. As always, all the good stuff is below the cut...



The Doctor | 08 October 2014, 10:00 hours | default | No comments

More accumulated wit, wisdom, and profanity.

Once again, I've updated my .plan file. As always, use discretion when reading it at work or in public. Once things slow down at work I'll have more time to write actual posts.

The Doctor | 27 September 2014, 23:31 hours | default | No comments

Registering out of state vehicles in California.

If you're in the process of moving to California you have to get your car registered before your existing license plates and car registration expire. It also would behoove you to get your car registered as fast as possible because the longer you wait, the more you'll have to pay to get it done. It could easily run you $700us if you're not careful, and I advise you to not sell internal organs to get your paperwork through if you can avoid it. The first step of the process is to get your California driver's license. To do this you need your current out of state driver's license, your birth certificate, your Social Security card or passport, and money. You'll need to pass a vision test (I had to take mine twice at two different desks and pass both times), you need to fill out a copy of form DL 44 (which you can only get at the DMV office because each has a unique barcode printed on it), and you need to pass a 36 question written exam. You can only miss three questions on the exam, and the questions are very detailed. Study the handbook and take the practice exams until you ace them; they're online here. You'll also have a thumbprint taken with an optical scanner. You don't need to make an appointment to get your driver's license. If you pass they'll punch a hole in your existing DL and give you a printed out temporary license that's valid until your real one arrives in the mail.

As for registering your car, even if you have a paper temporary license, that's fine. Get your paperwork going and it'll sort out. Seriously. You'll need the following documents:

You don't need to make an appointment to get your car registered, either. Just show up at the DMV early, ideally an hour before the office opens. They'll tell you where to park. Park there and wait for someone to give your car a walk-around. They'll fill out a copy of form REG 31 for you. At a minimum, whoever checks out your car will look under the hood, write down the VIN, and take an odometer reading. Bring two or three books and be prepared to wait multiple hours. I waited nearly four hours before my number was called. When they do call your number, have all of your paperwork filled out and ready. You'll have time to do it, so use it. If your paperwork is in order, they'll tell you how much you need to pay. The online calculator that tells you how much you'll probably pay is accurate to the penny in my experience, assuming that you've entered accurate information. Make sure you have enough in your budget to cover it. How much you pay is, in part, contingent upon when you first drove your car in California as what they consider a resident. You could probably lie, but I couldn't tell how deeply they check (the displays have privacy screens on them), nor do I know what would happen if they caught you lying. Good luck.

Assuming that everything's in order you'll get your permanent California license plates and registration stickers immediately. Put them on your car before you leave the parking lot and put the registration stickers on the rear license plate. Bring tools that you know you can use to add and remove license plates. You don't have to give them your old plates but you do have to put on the paperwork what you plan to do with them (send them back to the state you moved out of, keep them, give them to California). They'll honor whatever you put on it but you do have to tell them.

The Doctor | 25 September 2014, 10:00 hours | default | No comments

Video from the Global Existential Risks and Radical Futures Conference is up.

In June of 2014 the Global Existential Risks and Radical Futures conference was held in Piedmont, California, which I was invited to present at. After a delay of a couple of months, videos of the presentations have been uploaded to YouTube. Among them is the presentation I gave; the audio's a little quiet due to the acoustics of the building and the Q&A has been cut off at the end, but it does have the entire talk (local mirror). The presentation's slides aren't in frame, but I uploaded them here shortly thereafter.

The Doctor | 15 September 2014, 09:00 hours | default | No comments

DefCon 22 presentation notes

Behind the cut are the notes I took during DefCon 22, organized by name of presentation. Where appropriate I've linked to the precis of the talk. I make no guarantee that they make sense to anybody but me.



The Doctor | 20 August 2014, 10:00 hours | default | Two comments

DefCon 22: The writeup.

The reason I've been quiet so much lately and letting my constructs handle posting things for me is because I was getting ready to attend DefCon 22, one of the largest hacker cons in the world. It's been quite a few years since I last attended DefCon (the last one was DefCon 9, back in 2001.ev) because Vegas is, in point of fact, stupidly expensive, and when you get right down to it I need to pay bills more than I need to fly to Las Vegas for most of a week. I'm also in the middle of finishing up moving out of DC, which would tie up most of anybody's energy and money. However, this year $work sent me with two cow-orkers, so once the ink was dry we kicked into lockdown mode to get ready in the days leading up to our flight. I'll post later about what all of that entailed, based upon the hypothesis that transparently documented security protocols executed correctly should stand up to a certain amount of scrutiny; besides, peer review of security protocols isn't a bad thing at all.

Due to the no photography policy at the con I took only a handful of pictures outside of the conference space, and even then only of myself with an eye for keeping as many other people out of the frame as possible. Many of us aren't comfortable being photographed anymore because we as a society are under such tight surveillance in public that it's nice to not be recorded once in a while. So, I've got no pictures of and from DefCon this time around.

Our flight to Vegas wasn't much to write home about. It was pleasant as short flights go and largely inoffensive. Protip: If you're flying Spirit Air and you've got baggage to check, do so at the front desk. Don't check your baggage when you print out your boarding pass, even if you do it at home. If you do, it'll cost you somewhere in the neighborhood of $50us. If you check your baggage at the front desk as an "Oh, by the way," you'll only pay $16us. Save some money, you're flying to Las Vegas. You'll need it. When we stepped out of McCarran Airport to get on the shuttle bus the dry desert air slammed into us like a firm yet fluffy hammer. After a minute or two we were unable to tell the difference between the air and the exhaust from an idling truck.

From the time we flew out of our home airport the three of us were operating in what we called autistic mode, a phrase taken from Ghost In the Shell which refers to the practice of operating while entirely disconnected from the global Net. DefCon's network is renowned as possibly the most hostile network environment on the planet, where no holds are barred, zero fucks are given, and it's aliens-from-Independence Day-nuke-dog-eat-dog. In short, you run at your own risk because there is no telling what's running loose on any of the wireless networks there. There is also no telling which of the wireless access points at any given hotel are legitimate and which might be booby traps. I've heard several people over the years mention that the number of hotel access points triples in the day or two preceding DefCon and drops abruptly the day after the con wraps up. Additionally, it is generally agreed upon by the security community that the security measures on your average smartphone vary between "laughable" and "criminally negligent"; coupled with the state of the art in GSM and CDMA interception techniques, even talking on the phone at DefCon is potentially hazardous. In a later post I'll describe our OPSEC protocol along with what worked, what didn't work, and what the pain points were.



The Doctor | 18 August 2014, 10:00 hours | default | Two comments
"We, the extraordinary, were conspiring to make the world better."