Deep learning gone wild, direct neural interface techniques, and hardware acceleration of neural networks.

Jun 05 2016

There is a graphic novel by Warren Ellis that is near and dear to my heart called Planetary, the tagline of which is "It's a strange world. Let's keep it that way." This first article immediately made me go back and reread that graphic novel...

The field of deep learning has been around for just a short period of time insofar as computer science is concerned. To put it in a nutshell, deep learning systems are software systems which attempt to model highly complex datasets in abstract ways using multiple layers of machine learning and nonlinear processing algorithms stacked on top of one another, the output of one feeding the input of the next. People are using them for all sorts of wild stuff these days, from sifting vast databases of unstructured data for novel patterns to finding new and creative ways to game the stock, bond, and currency markets. Or, if you're Terence Broad of London, accidentally attracting DMCA takedown requests.
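To make the "stacked layers" idea a little more concrete, here's a toy sketch in Python. The layer sizes, the tanh nonlinearity, and the random weights are all my own illustrative choices; a real system would learn its weights from data rather than picking them at random:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """One layer: a random linear map followed by a tanh nonlinearity."""
    W = rng.normal(scale=0.1, size=(n_in, n_out))
    b = np.zeros(n_out)
    return lambda x: np.tanh(x @ W + b)

# Stack the layers; the output of each one feeds the input of the next.
stack = [make_layer(64, 32), make_layer(32, 16), make_layer(16, 8)]

def forward(x):
    for layer in stack:
        x = layer(x)
    return x

x = rng.normal(size=64)   # a toy 64-dimensional input vector
print(forward(x).shape)   # (8,) - a smaller, more abstract representation
```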

Broad is working on his master's degree in Creative Computing, and as part of that work developed a deep learning system which he trained on lots of video footage - in essence, letting it teach itself how to watch video to see if it would become a more effective encoder. It's not an obvious thing, but representing video as data ("encoding") is a wild, hairy, scary field... there are dozens of algorithms for doing so and even more container formats for combining audio, video, and other kinds of data into a single file suitable for storage and playback. Broad built his deep learning construct to figure out more efficient ways of representing the same data in files all by itself, without human signal processing experts intervening. He then ran the movie Blade Runner through his construct, dumped its memory, and uploaded the result to the video sharing site Vimeo. Shortly thereafter, one of Warner Brothers' copyright infringement detection bots mistook the reconstructed video for a direct rip of the movie - the construct's output was accurate enough that the bot couldn't tell it from the original - and sent an automatic takedown request to the site. One of the videos in the article is a short side-by-side comparison of the original footage with the construct's memory. There are differences, to be sure - some of the sequences are flickering, rippling blotches of color that are only recognizable if you look back at the original every few seconds - but other sequences are as good a replica as I've ever seen. Some of the details are gone, some of the movement's gone, but a surprising amount of detail remains. If you grew up watching nth-generation bootlegs of the Fox edit of Blade Runner where the color's all messed up, you know what I'm talking about.
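For the curious, the general shape of this kind of system is an autoencoder: squeeze each frame through a narrow bottleneck, reconstruct it on the other side, and train the network until the reconstruction matches the original; the "memory dump" is then just the reconstruction of every frame. Here's a toy sketch in Python using PyTorch - the frame size, layer widths, and training loop are all my own illustrative assumptions, not Broad's actual architecture:

```python
import torch
from torch import nn, optim

FRAME_DIM = 64 * 64    # illustrative tiny grayscale frames, flattened
LATENT_DIM = 200       # the narrow bottleneck the frame is squeezed through

encoder = nn.Sequential(
    nn.Linear(FRAME_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT_DIM),
)
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, FRAME_DIM), nn.Sigmoid(),   # pixel values in [0, 1]
)

opt = optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.rand(512, FRAME_DIM)   # stand-in for real video frames

for epoch in range(10):
    recon = decoder(encoder(frames))
    loss = loss_fn(recon, frames)      # how far off is the reconstruction?
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the "memory dump" is the decoder's best guess at each frame:
reconstructed_film = decoder(encoder(frames)).detach()
```

Broad's real construct worked on full-color, full-resolution footage with a far more sophisticated network, but the training objective - make the reconstruction indistinguishable from the input - is exactly what ended up fooling Warner Brothers' bot.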

Inflatable space station modules, successful gene therapy for aging, and neuromorphic computing.

May 29 2016

Now that I've got some spare time (read: Leandra's grinding up a few score gigabytes of data), I'd like to write up some stuff that's been floating around in my #blogfodder queue for a couple of weeks.

First up, private-sector aerospace engineering and orbital insertion contractor SpaceX announced not too long ago that one of their unmanned Dragon spacecraft had delivered an inflatable habitat module to the International Space Station. Following liftoff from Cape Canaveral the craft executed a rendezvous with the ISS in low earth orbit, where the ISS' manipulator arm grappled the craft. In addition to supplies and freight necessary for crew and station, the Dragon carried one of Bigelow Aerospace's inflatable station modules. For a space station peripheral the deflated BEAM (Bigelow Expandable Activity Module) is remarkably small (1,360 kilograms of mass, 1.7 meters long, 2.4 meters in diameter), but when completely filled with atmosphere it expands to a full size of 4 meters in length by 3.2 meters in diameter.

The current gameplan is to slowly and carefully inflate the module but not use it, to see how it acts in microgravity; remember that this has never been attempted before, so science is being done at the same time that history is being made. While this seems overly cautious there are good (albeit not well advertised) reasons for it: The phenomenon of outgassing (note: SSL cert was issued by NASA's CA, so your browser probably doesn't trust it) - materials one would expect to be stable, because they usually are on Earth, emitting gases that can leave films on surfaces (or are potentially toxic in vivo) - was first observed in early photogrammetry satellites. Thus, the experimental module is instrumented, probably to determine whether or not (and if so, how much) the construction materials will outgas while installed; the results will provide data for Bigelow Aerospace's design of the next iteration of the BEAM.

Outgassing aside (because that's the phenomenon I have the most experience with), NASA and Bigelow are also interested in tracking how the BEAM holds up overall (it's a semiflexible pressurized envelope in a vacuum, so how well the seams and structural members hold up is a major concern), how well it withstands micrometeoroid impacts (impacts with space dust, basically), how much radiation makes it inside the module over time (pretty much the big issue if this style of module will ever be used for habitation, to say nothing of experiments being corrupted), and, of course, whether or not it leaks.
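Out of curiosity, if you approximate the module as a plain cylinder (my own back-of-the-envelope simplification; it ignores the rounded ends, and the usable interior volume is smaller than the outer envelope), the expansion works out to roughly a fourfold increase in volume:

```python
import math

def cylinder_volume(length_m, diameter_m):
    """Rough volume, treating the module as a plain cylinder."""
    return math.pi * (diameter_m / 2) ** 2 * length_m

packed   = cylinder_volume(1.7, 2.4)   # deflated, as launched
expanded = cylinder_volume(4.0, 3.2)   # inflated on orbit

print(f"packed:   {packed:.1f} m^3")          # ~7.7 m^3
print(f"expanded: {expanded:.1f} m^3")        # ~32.2 m^3
print(f"ratio:    {expanded / packed:.1f}x")  # ~4.2x
```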

At the end of the twenty-four-month experiment, the BEAM will be sealed up, detached from the ISS, and jettisoned with the assistance of the MSS (the station's robotic Mobile Servicing System), whereupon its orbit will decay and it will eventually burn up upon re-entry.