Neuromorphic navigation systems, single droplet diagnosis, and a general purpose neuromorphic computing platform?

Nov 18, 2014

The field of artificial intelligence has taken many twists and turns on the journey toward its as-yet unrealized goal of building a human-equivalent machine intelligence. We're not there yet, but we've found lots of interesting things along the way. One of those discoveries is that, if you understand the brain well enough (and there are degrees of approximation, to be sure), it's possible to use what you know to build logic circuits that work the same way - neuromorphic processing. The company AeroVironment recently test-flew a miniature drone whose spatial navigation system was a prototype neuromorphic processor with 576 synthetic neurons, which taught itself how to fly around a sequence of rooms it had never been in before. The drone's navigation system was hooked to a network of positional sensors - ultrasound, infra-red, and optical. This sensor array provided enough information for the chip to teach itself where potential obstacles were, where the drone itself was, and where the exits joining rooms were - enough to explore the spaces on its own, without human intervention. When the drone re-entered a room it had already learned (because it recognized it from its already-learned sensor data), it skipped the learning cycle and went right to the "I recognize everything and know how to get around" part of the show, which took significantly less time. Drones are pretty difficult to fly at the best of times, so any additional assistance the drone itself can provide would be a real asset (as well as an aid to civilian uptake). The article is otherwise a little light on details; it seems to assume that the reader is already familiar with a lot of the relevant background material. I think I'll cut to the chase and say that this is an interesting, practical breakthrough in neuromorphic computing - in other words, they're doing something fairly tricky with it outside the lab.
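The article doesn't publish any of the chip's internals, but the behaviour it describes boils down to a simple control loop: compare the current sensor readings against rooms already learned, and either reuse the stored layout or drop into the slower learning pass. Here is a rough, entirely hypothetical sketch of that loop in Python - the sensor signature, matching threshold, and helper names are all my own invention, purely to make the idea concrete:

```python
# Hypothetical sketch of the explore-or-recognize loop described in the article.
# Nothing here comes from AeroVironment's actual firmware; the sensor signature,
# threshold, and learning step are placeholders to illustrate the behaviour.

import numpy as np

learned_rooms = {}   # room_id -> stored sensor signature

def room_signature(ultrasound, infrared, optical):
    """Collapse the sensor array into a single comparable feature vector."""
    return np.concatenate([ultrasound, infrared, optical])

def navigate(sensors, match_threshold=0.9):
    sig = room_signature(*sensors)
    # Look for a previously learned room whose signature matches closely enough.
    for room_id, stored in learned_rooms.items():
        similarity = sig @ stored / (np.linalg.norm(sig) * np.linalg.norm(stored) + 1e-8)
        if similarity > match_threshold:
            return f"recognized {room_id}: reuse learned obstacle map"
    # Otherwise spend the time-consuming learning pass on this new room.
    room_id = f"room-{len(learned_rooms)}"
    learned_rooms[room_id] = sig
    return f"new room {room_id}: learning obstacles and exits"

# First visit: learns the room. Second visit with the same readings: recognized.
readings = (np.ones(4), np.zeros(4), np.array([0.2, 0.4, 0.1, 0.3]))
print(navigate(readings))   # new room room-0: learning obstacles and exits
print(navigate(readings))   # recognized room-0: reuse learned obstacle map
```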

When you get right down to it, medical diagnosis is a tricky thing. The body is an incredibly complex, interlocking galaxy of physical, chemical, and electrical systems, all with unique indicators. Some of those indicators are so minute that unless you knew exactly what you were looking for, and searched for it in just the right way, you might never know something was going on. Earlier I wrote briefly about Theranos, whose lab-on-a-chip implementation can accurately carry out several dozen diagnostic tests on a single drop of blood. Recently, the latest winners of Nokia's Sensing XChallenge prize were announced - the DNA Medical Institute, for rHEALTH, a hand-held diagnostic device which can accurately diagnose several hundred medical conditions from blood gathered with a single fingerstick. The rHEALTH hand-held unit also gathers biostatus information from a wireless self-adhesive sensor patch that measures pulse, temperature, and EKG data; the rHEALTH unit is slaved to a smartphone over Bluetooth, where presumably an app does something with the information. The inner workings of the rHEALTH lab-on-a-chip are the most interesting part: The unit's reaction chamber is covered with special-purpose reagent patches and (admittedly very early generation) nanotech strips that separate out what they need, add the necessary test components, shine light from on-chip lasers and micro-miniature LEDs, and analyze the light reflected and refracted inside the test cell to identify chemical biomarkers indicative of everything from a vitamin D deficiency to HIV. The unit isn't in deployment yet; it's still in the "we won the prize!" stage of practicality, which is something Theranos has on them at this time.
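The write-ups don't say how rHEALTH's analysis software actually works, but the general shape of it - optical readings in, biomarker flags out - is easy to caricature. The markers, units, and reference ranges below are invented purely for illustration and have nothing to do with the real device:

```python
# Purely illustrative toy, not rHEALTH's actual analysis. The idea: each
# biomarker has an expected optical signature, and readings that fall outside
# a reference range get flagged for follow-up. All values here are invented.

REFERENCE_RANGES = {
    # marker: (low, high) in arbitrary, made-up units
    "vitamin_d": (30.0, 100.0),
    "hemoglobin": (120.0, 175.0),
}

def flag_biomarkers(readings):
    """Compare readings against reference ranges and flag the outliers."""
    flags = {}
    for marker, value in readings.items():
        low, high = REFERENCE_RANGES[marker]
        if value < low:
            flags[marker] = "below reference range"
        elif value > high:
            flags[marker] = "above reference range"
    return flags

print(flag_biomarkers({"vitamin_d": 18.0, "hemoglobin": 140.0}))
# {'vitamin_d': 'below reference range'}
```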

Let's admit an uncomfortable truth to ourselves: We as people take computers for granted. The laptop I write this on, the tablet or phone you're probably reading this on, the racks and racks and racks of servers in buildings scattered all over the world that run pretty much everything important for life today - we scarcely think of them unless something goes wrong. Breaking things down a little bit, computers all do pretty much the same thing in the same way: They have something to store programs and data in, something to pull that data out and process it, someplace to put data while it's being processed, and some way to output (and store) the results. We normally think of the combination of a hard drive, a CPU, RAM, and a display fitting this model, called the von Neumann architecture. Boring, everyday stuff today, but when it was first conceived of by Alan Turing and John von Neumann in their separate fields of study it was revolutionary - nothing like it had been built before in human history. As very complex things are wont to be, the CPUs we use today are recreations of that very architecture in miniature: For storage there are registers, for the actual manipulation of data there is an arithmetic/logic unit (ALU), and one or more buses carry the results out to other subsystems. ALUs themselves I can best characterize as Deep Magick; I've been studying them off and on for many years, working my way through some of the seminal texts in the field (most recently Mead and Conway's Introduction to VLSI Systems), and when you get right down to it, the fact that so much is possible with long chains of on/off switches is mind-boggling, frustrating, humbling, and inspiring.
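To make the "long chains of on/off switches" idea concrete, here's a toy sketch - my own illustration, not any particular chip's circuitry - of how an ALU-style adder can be built from nothing but single-bit AND, OR, and XOR operations:

```python
# A toy illustration of the idea that an ALU's arithmetic is just long chains
# of on/off switches: a ripple-carry adder built from single-bit logic gates.

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit using only logic gates."""
    partial = a ^ b                          # XOR: sum without the carry
    sum_bit = partial ^ carry_in
    carry_out = (a & b) | (partial & carry_in)
    return sum_bit, carry_out

def add_8bit(x, y):
    """Add two 8-bit numbers one bit at a time, like a ripple-carry ALU stage."""
    result, carry = 0, 0
    for i in range(8):
        bit_x = (x >> i) & 1
        bit_y = (y >> i) & 1
        sum_bit, carry = full_adder(bit_x, bit_y, carry)
        result |= sum_bit << i
    return result & 0xFF                     # wrap around like a fixed-width register

print(add_8bit(100, 55))   # 155
print(add_8bit(200, 100))  # 44 (overflow wraps, as in an 8-bit register)
```

Every arithmetic operation a CPU performs bottoms out in gate-level constructions like this one, just stacked millions of layers deep.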

Getting back out of the philosophical weeds, some interesting developments have come to light in the field of neuromorphic computing - processing information with brain-like circuitry instead of logic chains. Google's DeepMind team has figured out how to marry practical artificial neural networks to the von Neumann architecture, resulting in a neural network with non-static memory that can figure out on its own how to carry out some tasks, such as searching and sorting elements of data, without needing to be explicitly programmed to do so. It may sound counter-intuitive, but researchers working with neural network models have not, as far as anybody knows, married what we usually think of as RAM to a neural net before. Usually, once the 'net is trained it's trained, and that's the end of it. Writeable memory makes them much more flexible, because it gives them the capability to put new information aside as well as potentially swap out old stored models. Additionally, such a model is pretty definitively known to be Turing complete: If something can be computed on a hypothetical universal Turing machine, it can be computed on a neural network with RAM (more properly referred to as a neural Turing machine, or NTM). To put it another way, there is nothing preventing an NTM from doing the same thing the CPU in your tablet or laptop can do. The progress they've reported strongly suggests that this isn't just a toy: They can do real-world kinds of work with NTMs that don't cause them to break down. They can 'memorize' data sequences of up to 20 entries without errors, sequences of between 30 and 50 entries with minimal errors (something that many people might have trouble doing rapidly, because that's actually quite a bit of data), and can reliably work on sets of 120 data elements before errors can be expected to start showing up in the output.
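To give a feel for how the "RAM" half of an NTM gets addressed, here is a stripped-down sketch of content-based addressing, the mechanism DeepMind's paper describes for reading from the memory matrix: the controller emits a key vector, the key is compared against every memory row by cosine similarity, and a softmax turns the similarities into read weights. This is a simplification in plain NumPy, not their implementation; the memory contents, key, and sharpness value are all made up for the example:

```python
# Simplified content-based addressing for an NTM-style external memory.
# Not DeepMind's code; just the read mechanism reduced to its bare bones.

import numpy as np

def cosine_similarity(key, memory):
    """One similarity score per memory row."""
    dot = memory @ key
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return dot / norms

def content_read(memory, key, sharpness=10.0):
    """Read a blended vector from memory, weighted by similarity to the key."""
    scores = sharpness * cosine_similarity(key, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax -> read weights
    return weights @ memory, weights

# Toy memory with 4 slots of 3 values each.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.0]])
value, weights = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
print(weights.round(3))   # most of the weight lands on the first slot
print(value.round(3))
```

The interesting part is that the read is "soft": the network gets back a blend of the closest-matching memory slots, which is what lets the whole thing stay differentiable and therefore trainable.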

What's it good for, you're probably asking. Good question. From what I can tell this is pretty much a proof-of-concept sort of thing right now. The NTM architecture seems to be able to carry out some of the basic programming operations, like searching and sorting; nothing that you can't find in commonly available utility libraries or code yourself in a couple of hours (which you really should do once in a while). I don't think Intel or ARM have anything to worry about just yet. As for what the NTM architecture might be good for in a couple of years, I honestly don't know. It's Turing complete, so, hypothetically speaking, anything that can be computed could be computed with one. Applications for sorting and searching data are the first things that come to mind, even on a personal basis. That Google has an interest in this comes as no surprise when you take into account the volume of data their network deals with on a daily basis (certainly in excess of 30 petabytes every day, which is... a lot, and probably much, much more than that). I can't even think that far ahead, so keep an eye on where this is going.
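And since I just suggested coding the basics yourself once in a while, here is what one of those utility-library chores looks like written by hand - a plain binary search over a sorted list, which is the sort of searching task I mean:

```python
# Hand-rolled binary search: the kind of utility-library staple worth
# re-implementing yourself now and then.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it's absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 3, 5, 8, 13, 21, 34], 13))  # 4
print(binary_search([2, 3, 5, 8, 13, 21, 34], 7))   # -1
```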