Advances in transplantable organ preservation, grinders get night vision, and using genehacking to treat lymphoma.

Organ transplants are a fairly hairy aspect of medical practice and are a crapshoot even with the best medical care money can buy. Tissue matching viable organs seems about as difficult as brute-forcing RSA keys because, at the proteomic level, even the slightest mismatch between donor and recipient (and there will always be some degree of mismatch unless they are identical twins) will provoke an immune response that will eventually destroy the transplanted organ unless it's kept under control. Additionally, unless the organ is perfectly cared for prior to installation the tissues will begin to degrade, which will further provoke the recipient's immune system into active response. All things being equal (more or less), a new advance in biotechnology seems to have at least brought this last detail under better control. A new device called the XVIVO Perfusion System uses a centrifugal pump to move chilled, oxygenated fluid through the circulatory system of the organ to keep the cells alive while donor lungs are carefully assessed for suitability; the process can buy a set of lungs an extra handful of hours of viability prior to implantation. Far from being a laboratory experiment, the XPS was used to save the life of one Kyle Clark in Michigan. Clark was born with cystic fibrosis, a genetic disease which causes progressive organ damage, particularly in the lungs, and lung transplants are a not uncommon course of treatment for CF patients. With a great deal of luck CF patients can live well into their 40s or 50s, but it's far from a sure thing. When suitable donor lungs were located for Clark the XPS was used to preserve them so that they could be evaluated more carefully, including microscopic examination to ensure that carbon dioxide was being exchanged for oxygen properly inside the life-support device. It's too early to tell for certain, but it looks like the transplant has been a success, and it would seem that he has many years of life ahead of him.

Some years ago the writer Warren Ellis postulated a subculture called grinders in one of his works: people who hack their biology in the same way that one might hack on computers or software. This can involve everything from building and installing DIY implants to using quantified self techniques to optimize one's performance (arguably - I'd call it "soft grinding" because it's usually noninvasive, but opinions probably differ). Last week a group of grinders called Science for the Masses published a paper describing the results of a unique experiment: they induced acute night vision in a baseline human through chemical means. SftM dripped a solution of an organic photosensitizing compound called Chlorin e6, saline, and the organic solvent DMSO into the eyes of test subjects under controlled conditions. The hypothesis they were testing was that the Chlorin e6 would permeate the subjects' retinas (potentiated by the DMSO, which accelerates uptake of chemical compounds into the human body) and cause the photosensitive pigments therein to become more responsive to light. The Science for the Masses team observed that positive effects began within one hour of administration, necessitating that the test subjects wear both black scleral lenses and sunglasses to protect their eyes from overexposure to light. This was, after all, an experiment. Adjusting to the retinal alterations took about two hours, after which visual acuity was tested after dark in a grove of trees. Recognition of symbols at distances of 10 meters was consistently better than that of four unaugmented control subjects. In other trials, test subjects were consistently able to recognize other people who moved at will through the same grove of trees after dark at distances of between 25 and 50 meters. Statistically speaking, the control subjects had a 33% success rate, while the test subjects augmented with Ce6 had a 100% success rate. Twenty days after the tests, no ill effects had been reported.

Early last year I wrote a brief post about CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) technology, which exploits a curious property of DNA that makes it easy to precisely target individual genes in the genomes of living things. Just a year later, CRISPR technology is being tested in the field of oncology for treating certain forms of lymphoma. A research team at the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia published a paper in Cell Reports in which they successfully destroyed cultures of Burkitt lymphoma cells in vitro using the CRISPR/Cas9 technique. Medical science has identified a gene called MCL-1 which these cancerous cells depend upon to survive; the CRISPR/Cas9 technique was used to delete that gene in the cells, killing them and causing the cultures to collapse. Hypothetically speaking (and I'm not an oncologist, so it would be irresponsible of me not to caveat this), if noncancerous cells don't depend on MCL-1 the way these lymphoma cells do, it might be possible to unleash such a treatment systemically and have only the intended cancerous cells destroyed (if anyone knows for sure, please leave a comment!). The research team itself called this a proof-of-concept test, so in honesty it can't be called a treatment yet, only a step toward a possible one in the future. Still, it seems like a solid step forward that might have implications in other fields of medicine.

The Doctor | 31 March 2015, 09:30 hours | default | No comments

The Zak McKracken fan movie is up!

In years gone by I was a huge fan of the LucasArts graphical adventure games, including one of their wildest and weirdest (natch), Zak McKracken and the Alien Mindbenders, in which you play a group of four adventurous misfits (a tabloid reporter, an archaeologist, and two college students who converted their Volkswagen microbus into a space shuttle and traveled to Mars). Just this morning Spadoni Productions, who are known for making short fan films that riff on the classics, released a fan movie based on the video game. It was shot in Italian and dubbed into English (fair warning), so if the dialogue seems a little off, that's why. Here it is:



By the way, if you want to play the original game it's been re-released for modern machines, so you can buy it from gog.com for Windows XP and later, Mac OS X 10.7 or later, and Linux. It doesn't cost much, just $5.99us, so it's worth picking up as a fun weekend or travel game.

Just watch out for the two-headed squirrel.

The Doctor | 29 March 2015, 13:13 hours | default | No comments

The world's first rigger, patching around the spinal cord, and a 3D printed violin.

In the tabletop RPG Shadowrun there is a character template that players either love or hate: the rigger, a character who jacks directly into vehicles or drones to pilot them as if they were their own body. As they are described, a rigger feels the engine of a vehicle as if it were their own pulse and respiration, sensors in a plane's aerodynamic surfaces replace the proprioceptive senses of their limbs, and sensor systems take the place of the senses of sight, hearing, and taste. For all intents and purposes the rigger is the vehicle, android (let me tell you, anthroform drones don't get a whole lot of love in Shadowrun), or drone while they're jacked in. At the Future of War conference held in late February, Arati Prabhakar, director of DARPA (the Defense Advanced Research Projects Agency), announced that one Jan Scheuermann, who is quadriplegic, successfully piloted an F-35 Lightning II flight simulator through implants in her motor cortex. DARPA recruited her because she was already a test subject at the University of Pittsburgh Medical Center, where in 2012 she used that cortical interface to successfully manipulate a robotic arm under laboratory conditions. DARPA engineers developed an interface that mediates between the implants inside Scheuermann's skull and the flight simulator; rather than using a joystick and other control surfaces, she successfully piloted the virtual fighter plane as if it were her own body, without the assistance of any other sort of vehicular user interface, or even training as a pilot.

Think about that for a minute.

In other DNI news, clinical tests of Neurobridge, a cortical implant for quadriplegics which routes around permanent spinal cord injuries, are showing great promise. At the Wexner Medical Center of Ohio State University, 23-year-old Ian Burkhart, paralyzed from the neck down due to a diving accident, is the first test subject to use Neurobridge to move his hand in a laboratory setting. The downstream side of Neurobridge was connected to the muscles of his right forearm, and thus he was able to move his hand of his own volition. Neurobridge is somewhat tricky as DNI goes because the signal processor that interprets activity in the motor cortex emits data which then has to be re-processed into a format that muscles can use as control signals. It seems a bit roundabout to me, but there is probably a good engineering reason for the design. The motor interface of Neurobridge is a plastic sleeve wrapped around the limb and does not appear to use an invasive electrode network. The upstream side of Neurobridge is as invasive as it gets: it's patched directly into the brain. The tricky bit is figuring out which signals and which electrodes need to be sequenced to make the right muscles move at the right time. Everybody's brain is wired differently even though brain anatomy is more or less the same from person to person, so this required a certain amount of trial and error. In addition, Burkhart has been paralyzed for several years, so months of work with the electrode sleeve were required to get his forearm muscles to the point where they would be even minimally useful for the experiment. There is a short video of the experiment that doesn't seem to do the work any justice; I highly recommend taking the two or three minutes to sit down and watch it.
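
To make that two-stage design a bit more concrete, here's a toy sketch of the decode-then-re-encode pipeline described above. To be clear, this is not Neurobridge's actual software; the channel counts, weights, and mappings are made-up numbers purely for illustration.

```python
import numpy as np

# Toy two-stage pipeline: decode motor cortex activity, then re-encode it
# as stimulation intensities for an electrode sleeve. All numbers invented.

N_CORTICAL_CHANNELS = 96      # hypothetical electrode grid size
N_SLEEVE_ELECTRODES = 20      # hypothetical sleeve channels on the forearm

# Stage 1: "decode" - reduce raw firing rates to a small set of intended
# movements (here, just a linear readout learned during calibration).
decoder_weights = np.random.randn(3, N_CORTICAL_CHANNELS) * 0.1  # 3 intents

def decode(firing_rates):
    """Map per-channel firing rates to intent scores
    (e.g. open hand, close hand, rotate wrist)."""
    return decoder_weights @ firing_rates

# Stage 2: "re-encode" - turn intent scores into per-electrode stimulation
# levels the sleeve can actually drive. This is the extra translation step
# the article describes.
intent_to_sleeve = np.abs(np.random.randn(N_SLEEVE_ELECTRODES, 3))

def encode_stimulation(intents):
    """Map intent scores to stimulation intensities, clamped to a safe range."""
    levels = intent_to_sleeve @ np.clip(intents, 0, None)
    return np.clip(levels, 0.0, 1.0)

# One tick of the loop: cortical activity in, stimulation pattern out.
firing_rates = np.random.poisson(lam=5.0, size=N_CORTICAL_CHANNELS)
print(encode_stimulation(decode(firing_rates)))
```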

If you've been around me for any length of time, chances are you've heard me wax poetic (and occasionally synaesthetic) about violin music. From the traditional four-string wooden variety to the high-tech electric and MIDI variants, they never cease to bring tears to mine eyes. So, when some of my search agents discovered this beauty and threw it up in a browser window a couple of days ago, it shouldn't take many tries to guess what I binged on for a couple of hours.

Ahem.

The violin in question is a two-meter-long, two-stringed piezoelectric instrument designed by Monad Studio and fabricated on a 3D printer as part of an art exhibit entitled Abyecto, on display in New York City at the 3D Print Design Show. Looking like a hybrid of an organ you might find inside some benthic sea creature and something H.R. Giger might have glimpsed during one of his more peaceful nights beyond the gates of horn and ivory, the violin (which doesn't seem to have a piece name associated with it) is one of six instruments which comprise the Abyecto exhibit. Unfortunately, I wasn't able to find any recordings of what the violin sounds like or footage of how it's actually played (the 'piezoelectric' bit makes me wonder if the instrument's body flexing isn't itself part of the process of playing), else this article would have a lot more instrument squee. If anybody has footage of the violin being played, please leave a comment. I'd very much like to hear it.

The Doctor | 12 March 2015, 09:00 hours | default | No comments

Music written for cats?

If you've been alive for any length of time you've probably been exposed to the wonderful, moving phenomenon that we call music: patterns of sounds pleasing to the human ear and effective upon the mind. Music is a complex enough phenomenon that people spend their entire lives studying it and its effects upon the human condition - the psychology, the neurology, the mathematics, the acoustics, the physics - or, like some, they are called to compose or perform music of their own to enrich the world around them. (Whether or not some styles of music can be said to enrich the world is not a debate I'll be getting into, thank you very much.) Then a research team consisting of a composer and two psychologists got the idea to study music as it applies to other forms of life, specifically cats. What forms of music, they asked themselves, would cats enjoy? The answer, as it turned out, was that cats really don't seem to care a whole lot for human music, specifically human classical music. Beethoven and Bach leave them pretty cold, as it happens. Part of this seems to have to do with how the feline auditory cortex and inner ear are wired; the vocalizations of cats use slightly different frequencies than human speech, with different sorts of complexity. Purring is down around 22 Hz, which is nearly at the bottom of what a healthy human ear is capable of discerning under good conditions. There is a fair amount of overlap between the ranges of human and feline sounds, but cats are also known to generate sounds a good deal higher than the larynxes of most humans can produce.

What they discovered was that it's possible to compose music which is species-specific by using notes that fall within the range of sounds a given species makes. It's also possible to figure out the tempos that best match the sounds a species uses to communicate, which helps arrange the music in a way that should be most pleasing to the creatures in question. They ran a series of experiments (the parameters are described in one of the articles, check 'em out) and discovered that cats do indeed show a preference for music written with them in mind. There are two clusters in the data corresponding to positive responses to the music, one for younger cats and one for older cats, with middle-aged felines less likely to respond compared to the other test subjects.
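
If you want a feel for what "notes in the species' range at a species-appropriate tempo" means in practice, here's a toy sketch that renders a few high-pitched sine-wave notes to a WAV file. The frequencies and tempo are my own guesses for illustration, not the researchers' actual parameters or methods.

```python
import wave, struct, math

# Toy illustration of "species-specific" composition: a few sine-wave notes
# pitched above typical human vocal range, played at a brisk, purr-ish tempo.
# The specific frequencies and tempo here are guesses, not the study's values.

SAMPLE_RATE = 44100
TEMPO_BPM = 132
NOTE_SECONDS = 60.0 / TEMPO_BPM
NOTES_HZ = [880.0, 1174.7, 1318.5, 1760.0]   # A5, D6, E6, A6 - fairly high

def render_note(freq, seconds):
    samples = []
    for i in range(int(SAMPLE_RATE * seconds)):
        t = i / SAMPLE_RATE
        envelope = 1.0 - (i / (SAMPLE_RATE * seconds))   # simple decay
        samples.append(0.4 * envelope * math.sin(2 * math.pi * freq * t))
    return samples

audio = []
for freq in NOTES_HZ * 4:                    # a short, repetitive "ditty"
    audio.extend(render_note(freq, NOTE_SECONDS))

with wave.open("cat_ditty.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in audio))
```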

Which brings me along to the website from which you can purchase some of this music. A couple of other people have written followup articles about this and keep describing the music as 'trippy' for some reason; to be frank, I'm not quite certain where they're coming up with that characterization. The sample clips on the website are arranged into three general categories: Ditties, Ballads, and Airs. Listening to the three samples, it seems pretty clear that they're based upon the definitions of those terms, and if one wasn't familiar with them one could use what one heard as a sort of working definition, so that's out. Spook's Ditty is reminiscent of someone playing the harp at a fairly swift tempo, or arpeggios on a harpsichord. Cozmo's Air has, as one might expect, lots of purring noises, which would be familiar to cat lovers (or anyone who's ever fallen asleep with a cat), and chords on string instruments (cello and viola, I think) that wouldn't be unusual in an orchestral movie soundtrack, though in a rather lower octave than usually encountered. Rusty's Ballad has an unusual rhythm underlying the melody - running eighth or sixteenth notes but largo otherwise - which makes me scratch my head because I can't tell what sorts of notes are used. Some whole notes, to be sure, but otherwise... quarter notes? Half notes? The odd fermata? I'd really need to see the sheet music to make heads or tails of it.

Okay, I'll concede a partial point with respect to Rusty's Ballad. "Trippy"? No. Strange? Yes.

The Doctor | 09 March 2015, 10:00 hours | default | No comments

3D printed jet engines, prosthetic limbs, and car engines.

The state of the art in personal 3D printing is still in flux. Mostly, we're still limited to variants of low-melting-point plastics, and we're still figuring out new and creative ways of making more complex shapes that are self-supporting to some extent. What isn't getting a whole lot of press right now are some industrial applications of this technology, some of which date back a good decade.

For example, a research team consisting of personnel from Monash University in Australia, the Commonwealth Scientific and Industrial Research Organisation, and Deakin University recently unveiled the world's first 3D printed jet engine. They started the project with an older model gas turbine jet engine, which is nothing to sneeze at in its own right, and reverse engineered it. Each major component was scanned, probably with a laser, and the data was used to work up a mesh that was then sliced into layers that the 3D printer could lay down successively. A certain amount of geometric jockeying was most assuredly involved in getting each piece positioned optimally for fabrication. The 3D printer used to build the components was based upon selective laser sintering and used alloy dust as its feedstock; each layer laid down was approximately 0.05 mm thick, or about 1/30 the width of a line drawn with a #2 pencil (remember those?). Two copies of each part were fabricated, a process that, all in all, took about a year to accomplish. As far as I know the jet engines haven't been spun up yet, but they are on display. Frankly, the team isn't sure they'll work as-is, so they're going back to the drawing board and double-checking their work to make sure that the fruits of their labors won't suddenly turn into so much shrapnel if they're fired up.
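
To put that layer thickness in perspective, here's a trivial back-of-the-envelope calculation. The part height and per-layer time are assumptions on my part (the team hasn't published those figures as far as I know), so treat the output as an order-of-magnitude illustration only.

```python
# Back-of-the-envelope: how many 0.05 mm layers does one component take?
LAYER_MM = 0.05
part_height_mm = 150          # assumed height of one turbine component
layers = part_height_mm / LAYER_MM
# At, say, 30 seconds of sintering per layer (another guess), that's:
hours = layers * 30 / 3600
print(f"{layers:.0f} layers, roughly {hours:.0f} hours of printing")
```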

In 2002 one Nicolas Huchet lost his right hand at the wrist in an accident at work. Sounds like a pretty simple way to set up a story that's about to take a hairpin turn into unexpected territory, doesn't it? He eventually acquired a myoelectric prosthesis but soon ran into its functional limits, particularly when using and teaching DAW software. In October of 2012 Huchet stepped into a fablab and began a project of epic proportions: designing and building his own prosthetic hand. From the moment he saw his first 3D printer the spark was lit. Add to the volatile mix an Arduino or two and what appears to be a few components from the InMoov project to interface with the servomotors, and by February of 2013 Huchet and a few hackers from the fablab had finished a prototype prosthetic hand. The superstructure, joints, and phalanges of the hand were run off on a 3D printer and appear to have been assembled using off-the-shelf hardware, like screws and bolts. High-test fishing line was used in lieu of tendons for actuating the digits; I've no idea what kind of motors are doing the heavy lifting, but their power requirements interest me. Costing something like $250us to construct in total, the open source unit is actuated by picking up and interpreting electrical impulses from the muscles in Huchet's forearm and is nearly (if not just) as functional as a commercial prosthetic limb costing over 300 times as much. Rather than trying to achieve an "opera hand," or an appearance as close to normal as possible, Huchet and company seem to have gone for the cyberpunk "high-tech wires and chrome" look. A bunch of talented hackers built that hand, and there's no two ways about it.
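
For the curious, the basic control idea behind a myoelectric hand like this is pretty approachable. Here's a toy sketch of it; Huchet's actual firmware runs on an Arduino and I haven't read it, so the thresholds and structure below are purely illustrative.

```python
# Toy sketch of myoelectric control: sample a muscle signal, smooth it into
# an "envelope," and close the grip when the envelope crosses a threshold.
# Numbers are invented; in hardware this loop would drive the tendon servos.
import random

SMOOTHING = 0.9
THRESHOLD = 0.6     # normalized muscle activity needed to trigger the grip
envelope = 0.0
grip_closed = False

def read_emg():
    """Placeholder for sampling the forearm electrodes."""
    return random.random()

for _ in range(1000):
    sample = read_emg()
    envelope = SMOOTHING * envelope + (1.0 - SMOOTHING) * sample
    if envelope > THRESHOLD and not grip_closed:
        grip_closed = True      # in hardware: drive the tendon servos closed
        print("close grip")
    elif envelope < THRESHOLD * 0.8 and grip_closed:
        grip_closed = False     # hysteresis so the hand doesn't chatter
        print("open grip")
```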

Incidentally, if anyone out there is interested in getting involved in the open source prosthetics movement I strongly recommend getting in touch with the E-Nabling the Future community.

For quite a few years the automotive industry has been using very sophisticated 3D printers to prototype and manufacture engine components, because they're more efficient to produce that way and tend to be somewhat more sturdy. Having been a Toyota owner for a decade, I can vouch for their sturdiness: they run very quietly, almost silently, right up until they're about to die, and then they go out with a bang that the entire neighborhood hears. However, getting back to the story at hand, 3D hacker and mechanical engineer Eric Harrel decided to see if he could reverse engineer a four-cylinder Toyota 22RE engine and make printable meshes from it to build his own copy. The 22RE has 80 distinct components that fit together with very tight tolerances (as one would expect of a modern engine), so the design phase alone required around 60 hours from start to finish, including scaling the components to 35% of normal size so they could be run off in his 'printer (modulo a handful of springs, fasteners, and bearings that had to be purchased or fashioned some other way). Fabbing each of those components took another 72 hours in total. I don't know how long it took to finish and assemble the scaled-down engine, but my wild guess would put it at around another 72 hours. At the end, though, the pistons move, the valves work, and the crankshaft turns. The greyprints are available on Thingiverse for download if you've got a mind to try it yourself.

If 3D printers are toys, they're fabulously capable toys.

The Doctor | 06 March 2015, 09:00 hours | default | No comments

Where have I been lately?

That's an interesting question.

The short answer is, I've been busy. Very much so.

The longer and more accurate answer is that work has been running me ragged lately and I've been trying to conserve my spoons as best I can, lest I run myself into the ground (again). I've been routinely putting in 60- and 70-hour weeks, often over six or seven days, so I haven't really been getting a whole lot of downtime. So some hard choices had to be made. Go out for my birthday or keep it low key? Low key, because I'm on call. Get a couple of blog posts written and post-dated? Oops, on my one day off I slept sixteen hours. Wouldn't have done so if my body hadn't needed it, so I'm not going to cry about it. Come home from work and do some writing? Came home from work at 2200 local time, had something that I think was dinner, and faceplanted. Get up early and socialize, or get up early and shake a few bugs out of one of my bots (the output of which happens to be keeping me sane (more or less))? Nobody else is up that early, so code a little bit and then back to the grind, by way of the gates of horn and ivory.

For those of you who've been worried (and this goes for your bots, too), I'm not dead, though I have been dead to the world not a few times in the past two months.

When you get right down to it, self-care is what keeps most of us held together. Sometimes the wise thing to do is to recharge as best one can in order to keep going. Often that involves kicking everything else off and sleeping for a day or two, or skipping some fun things because the toll exacted on one's body and spirit would be too much, and cratering as a result would only make things worse.

If there's one thing I've learned, it's that sometimes it's the right thing to do. Life is full of interesting and fun things to do and see, and getting benched for a while, even though it can feel frustrating or irritating, doesn't particularly diminish life as a whole. Certainly not if you don't let it. There is lots more out there, and when things calm down (and they will - it doesn't feel like it right now, but I think they will) it'll be time to go for them again.

Just don't forget what it felt like to have those good times. They'll be what remind you to go back to them.

The Doctor | 03 March 2015, 10:00 hours | default | No comments

Now that I've got some time, what happened this year?

I've already done my obligatory post of some version of the song Birthday by The Cruxshadows; what happened this last year that I can look back upon?

It's funny. I was sitting there earlier tonight at dinner (yes, I post-dated this entry so it would match up with the other one) and I came up with a bunch of stuff that I'm kicking myself for not having written down. I guess that's the way it goes - thoughts go in, thoughts go out, but unless you trap them somehow they're probably not going to come back. But I'll take a stab at it anyway.

I've learned that the most subtle of accidents, those that you don't even realize happened in slow motion until well after the fact, can teach the most profound lessons. And you'll sometimes laugh yourself silly over them later.

I've come to recognize that if one surrounds oneself with too much of something - anything, really - it'll cause one's life to change so that it dominates everything one does, and eventually everything one is. Choose wisely. You can't always choose again.

I've learned that one's daily practice, in whatever form it may take, is the one stone upon which everything else can be built. When you feel like you can do it the least is when you need it the most. I've also come to accept that sometimes, at the end of the day when you drag yourself home and fall asleep on the couch, your daily practice just isn't going to happen. Absconding for a while and coming back can serve best under such circumstances.

I've learned that if you're going to be larger than life you've got to go the extra distance to not only get there but stay there. No matter how close to the Edge you are, no matter how good you just now were, no matter how many augmentations of any kind you've racked up, if you don't keep up with the basic "this is how this works" you'll slip behind. I've also learned the importance of having one or two demonstrations of the Edge up my sleeve that I can bust out at a moment's notice. Know-how and skill are nice, practice is great, but shock value is still a useful tool. Being a little theatrical can't hurt either (but practice first!)

I've learned never to give away the whole game. Never tell anyone everything you are capable of.

I have learned and am slowly coming to accept that, when life throws you on your head and wrecks your plans, fall back on your backup plan (if you don't have at least two backup plans, drop everything and lay them out right bloody now) and start executing. Your backup plans need to be able to throttle back so you don't wreck yourself. Your primary plan needs to be able to be suspended (not abandoned) and you need to build reasons for doing so into it. Sometimes you need to be gentle with yourself to make it through. Listen to the omens. But never, ever stop.

I've learned that people will dump their bad publicity on you if they fuck up badly. Always cultivate a loyal and observant community around your projects with the closest to unfailing honesty you can manage (secrecy doesn't always allow for this - life sucks like that). You won't have to defend yourself overmuch; your community will compare, contrast, and use their brains when you hope they will the most. During this time never, ever stop making progress. Keep it tight.

Sometimes the code you spent all day writing doesn't even work, and is completely terrible to boot. Blow it away and start over. Don't try to salvage it.

I've learned that the older I get, the less I want to break in a new pair of boots. I'm still working on the Doc Martens that I got for Yule and I can just now wear them for longer than four hours at a stretch. It's well worth getting really nice ones up front, even if they cost quite a bit more just so they'll last longer. I'd prefer to have a pair that last ten or fifteen years so I don't have to go through this every three or four years.

I've learned to always keep a little in reserve just so I can really cut loose if I have to.

I've learned again that while one may be recognized as an expert or a teacher in some respect by someone, one must always remain a student. Everybody has their betters out there; learn well from them. This includes making the mistakes of a student and learning from them.

I've learned that sometimes you just need to get out and dance. Take the next day off to recover if you need to. It's good for you.

I am trying to learn that sometimes shutting up is the right thing to do.

The Doctor | 15 February 2015, 15:00 hours | default | Two comments

837 and still kickin'



The year in review... when I finally have a chance to sit and write.

The Doctor | 15 February 2015, 14:10 hours | default | No comments

3D printing circuit boards, photography-resistant clothing, and wireless DNI.

Now that I've had a couple of days to sleep and get most of my brain operational again, how about some stuff that other parts of me have stumbled across?

Building your own electronics is pretty difficult. The actual electrical engineering aside, you still have to cut, etch, and drill your own printed circuit boards, which is a lengthy and sometimes frustrating task. Doubly so when multi-layer circuit boards are involved, because they're so fiddly and easy to get wrong. There is one open source project that I know of, called the Rabbit Pronto, which is a RepRap print head for fabbing circuit boards, but it might be a little too experimental for the tastes of some. This constitutes a serious holdup to people being able to fabricate their own computers, but that's a separate issue. Enter the Voltera, a rapid prototyping machine for circuitry. Currently clocking in at $237,061us on Kickstarter and still going, the Voltera isn't quite a 3D printer in that it doesn't seem possible to fabricate circuit boards completely from scratch with it; you still need a blank baseplate to start from. However, what the Voltera does do is lay down successive layers of conductive and insulating inks on top of the fibreglass board until your entire circuit has been printed out. If surface mount technology is how you roll (and that's increasingly the only game in town) you won't have to worry about drilling holes for components' leads, but there is nothing preventing through-hole designs. The firmware is designed to accept industry-standard Gerber files, so users aren't necessarily tied down to any one CAD package. Even more interesting is that the Voltera includes a solder paste head, so after the board's done it'll lay that out for you as well so that components can be positioned appropriately. Additionally, the bed of the Voltera implements reflow soldering, which means that after the components are positioned the temperature can be slowly raised until the solder paste cooks down and solid electrical connections are made - no more toaster oven. All but one of the Batch-2 runs of the Voltera are spoken for already, so if you really want one you'd best jump on it, else you're going to have to wait for them to go into general manufacture.

Privacy runs fast through our fingers in the twenty-first century. If it's not security cameras on the street recording everything and everyone walking by, it's drones (public and private sector both) on surveillance runs. If it's not drones, sometimes it's people with cameras and smartphones photographing people who really don't want pictures taken (cases in point: the photography policies of many hacker cons). In other words, paparazzi are no longer a problem exclusive to the rich and famous. Enter Steve Wheeler of Betabrand, a company whose think tank strategy is to crowdsource clothing designs and let people vote on them; projects with good prospects enter a crowdfunding phase so early adopters can gain access to them, and if something does really well it goes into mass production. Their latest project (which is doing surprisingly well) is called Flashback - anti-photography clothing that reflects so much light into the lens that only the clothes can be seen. Flashback clothing works the same way as the high-visibility vests and strips that urban bicyclists wear, by using glass nanospheres bonded to the fabric itself to form what amounts to a flexible, highly reflective surface that bounces as much light as possible back toward the camera. Currently there are only four pieces - a hooded jacket, a scarf, a blazer, and trousers - but depending on how things go the clothing line might grow. The Wired article I've linked to has a couple of "during the photograph" pictures, but their crowdfunding page has excellent before-flash/after-flash pictures. There is some skepticism about how well they actually work (especially from professional photographers), but after reading a bit about the theory it seems sound to me, and I'm considering rounding up all of the reflective strips my cow-orkers wear to do a couple of "Will it or won't it?" pictures over lunch as an experiment. If exotic clothing is your thing you might want to keep an eye on this brand, though you'll pay close to designer prices for their wares.

The slow and steady march toward direct neural interface - creating a bi-directional link between the brain and computer hardware - proceeds apace. In 2011 Dr. Eberhard Fetz was given a $1mus, three-year grant to advance his work on implantable neuroprosthetics. Now we have the CerePlex-W, an implantable neural activity receiver which wirelessly transmits its data to nearby computer systems that can act upon those commands. Currently it's on sale only on the research market for use with simian test subjects, but the BrainGate consortium is in talks with the US FDA to begin clinical human trials some time in the near future. The CerePlex-W broadcasts at 30 milliwatts of power, so it can only be picked up a meter or two away, yet it's able to transmit data at a speed of 48 megabits per second - princely bandwidth for broadcasting the activity of the cerebral cortex indeed. Whatever is connected to the receiver can use the command signal however it wishes, from manipulating a cursor on a screen all the way to... that's a good question. Entering characters? Driving a wheelchair around? Using a robotic arm to move stuff around? The mind boggles, especially when you take into account the possibility of setting up a tech chain: if you can type, you can both program and send e-mail to vendors and have stuff hooked up for you, and then write the software to control it, and then use the hardware to do other things, and then still other things, and build better prostheses... The device is described as being about the size of an automobile gas cap and is not fully contained, which is to say that it still has to have a persistent opening through the skin and skull to connect to an electrode grid placed atop the subject's brain. Major surgery is, of course, still required to position the electrode grid on one of the motor cortices. Still, output bandwidth aside, the device represents a remarkable breakthrough in that it's so small: after ten years of hard work all of the signal processing is done on board, without needing to be plugged into racks of computers to do the number crunching. There isn't any word yet on when FDA trials will begin, but you can bet once they do all hell's going to break loose. Time to start saving our pennies...
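
For a rough sense of why 48 megabits per second is the right order of magnitude, here's a back-of-the-envelope calculation. The channel count, sample rate, and sample depth are my own assumptions about a typical research-grade electrode grid, not published CerePlex-W specifications.

```python
# Rough sanity check on why cortical recording needs tens of megabits per
# second. All three figures below are assumptions for illustration.
channels = 96          # assumed electrode count
sample_rate = 30_000   # assumed samples per second per channel
bits_per_sample = 16   # assumed resolution

bits_per_second = channels * sample_rate * bits_per_sample
print(f"{bits_per_second / 1e6:.1f} megabits per second")   # ~46 Mbps
```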

The Doctor | 13 February 2015, 09:00 hours | default | Two comments

Ubuntu Syndrome.

Warning: Bitter BOFH ahead.

There is a phenomenon I've come to call Ubuntu Syndrome, after the distribution of Linux which has become the darling of nearly every hosting provider out there (and no, I won't call them bloody cloud providers). All things considered, it seems to have a good balance of stable software, ease of use, availability, and diversity of packages. It also lends itself readily to the following workflow: stand up a VM, skip the patching and hardening because it's disposable anyway, get pwned, tear it down, and start over.
Look. I get that virtual machines are, for all intents and purposes, disposable. They're cheap to stand up, relatively cheap to operate (up to a point), and trivial to tear down so you can start over. They're certainly more convenient than having to rebuild and reinstall an entire physical server from scratch. On the other hand, there is a lot to be said for doing things right up front so that you can skip over (or at least postpone) the whole "get pwned" part of the show. A little bit of extra work up front (like running the command apt-get update && apt-get upgrade -y) can save a great deal of time and effort later by installing the latest and greatest security patches. It takes a little while, sure, but why work extra late nights if you don't have to? In addition, there is something to be said for hardening your VMs at the same time you patch them, to make them that much harder to compromise. It doesn't take long; in fact, it can be as simple as copying a handful of files and rebooting the VM. Here's my private stash of hardened configs for Ubuntu v12.04 and v14.04 LTS that I deploy on all of my servers (virtual and otherwise, when I have to use Ubuntu). There are other resources out there, sure, but these are mine and you're welcome to use them.
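
To give you an idea of what "patch and harden at stand-up time" can look like, here's a minimal sketch of the sort of bootstrap script I mean, written in Python for readability. The config file names and destinations are placeholders; point them at your own stash (or mine, above) and run it as root on a freshly provisioned VM.

```python
#!/usr/bin/env python3
# Toy bootstrap sketch: patch a fresh Ubuntu VM and drop hardened configs
# into place before it does anything else. Config paths are placeholders.
import shutil
import subprocess

HARDENED_CONFIGS = {
    "configs/sshd_config": "/etc/ssh/sshd_config",
    "configs/sysctl.conf": "/etc/sysctl.conf",
}

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Pull down and install the latest security patches first.
run("apt-get", "update")
run("apt-get", "upgrade", "-y")

# Copy the hardened configuration files over the stock ones.
for src, dst in HARDENED_CONFIGS.items():
    shutil.copy(src, dst)

# Restart the services whose configs changed; a reboot works too.
run("service", "ssh", "restart")
run("sysctl", "-p")
```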

Put a little thought into it. Just because something is disposable doesn't mean it's not worth a little care up front; skipping that care just means extra trouble and hassle later. Save your energy for more interesting things.


More under the cut...

The Doctor | 11 February 2015, 09:30 hours | default | No comments

Photographs from the Monterey Bay Aquarium, December 2014.

I know I haven't posted much (at all, really) for most of a month. I'd love to say that I've been out having wacky adventures and gallivanting about Time and Space, but I haven't. Work has been, well, work, and eating me alive to boot. This is the first evening in quite a while (because I'm writing this as a timed post) that I haven't gone straight to bed after getting home. So, no interesting news articles, no attempts at humor, no witty insights. However, last December I took the opportunity to pay the Monterey Bay Aquarium a visit. I don't have a whole lot else to say because I frankly don't have it in me. I will say, however, that there were two octopodes at the aquarium that were seriously out of social energy and seemed to want nothing more than to be left alone for a couple of precious hours.

Anyway, here are the pictures. Some of them aren't of the greatest quality because parts of the aquarium were pretty dark but I kept the best ones. Enjoy.

The Doctor | 09 February 2015, 09:00 hours | images | No comments

A 3D printed laser cutter, aerosol solar cells, and reversing neural networks.

3D printers are great for making things, including more of themselves. The first really accessible 3D printer, the RepRap, was designed to be buildable from locally sourceable components - metal rods, bolts, screws, and wires - with the rest run off on another 3D printer. There is even a variant called the JunkStrap which, as the name implies, involves repurposing electromechanical junk for basic components. There are other useful shop tools which don't necessarily have open source equivalents, though, like laser cutters for precisely cutting, carving, and etching solid materials. Lasers are finicky beasts - they require lots of power, they need to be cooled so they don't fry themselves, they can produce toxic smoke when firing (because whatever they're burning oxidizes), and if you're not careful the light they emit (and its reflections) can damage your eyes permanently. All of that said, they're extremely handy tools to have around the shop, and can be as easy to use as a printer once you know how (protip: take the training course more than once. I took HacDC's once and I don't feel qualified to operate their cutter yet). Cutting to the chase (way too late), someone on Thingiverse using the handle Villamany has created an open source, 3D printable laser cutter out of recycled components. Called the 3dpBurner, it's an open-frame laser cutter that takes after the RepRap in a lot of ways (namely, it was originally built out of recycled RepRap parts) and is something that a fairly skilled maker could assemble in a weekend or two, provided that all the parts were handy. Villamany has documented the project online to assist in the assembly of this device and makes a point of warning everyone that this is a potentially dangerous project and that proper precautions should be taken when testing and using it. Not included yet are plans for building a suitable safety enclosure for the unit, so my conscience will not let me advise that anyone try building one just yet; this is way out of my league, so it's probably out of yours, too. That said, the 3dpBurner uses fairly easy-to-find, high-power chip lasers to do the dirty work; if this sounds far-fetched, people have been doing this for a while, to good effect at that. The 3dpBurner uses an Arduino running GRBL, firmware designed as a more-or-less universal CNC controller, to drive the motors.
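
For the curious, driving a GRBL-based machine from a PC is surprisingly mundane: you stream G-code over the Arduino's USB serial port one line at a time and wait for GRBL to answer "ok" before sending the next line. Here's a minimal sketch of that loop using the pyserial library; the port name, baud rate, and G-code are placeholders, and obviously don't point this at a live laser until you know what you're doing.

```python
# Stream a few lines of G-code to a GRBL controller and print its replies.
# Port name and baud rate are assumptions; check your board and GRBL version.
import serial   # pyserial
import time

GCODE = [
    "G21",             # millimeters
    "G90",             # absolute positioning
    "G0 X0 Y0",        # rapid move to origin
    "G1 X10 Y10 F600"  # slow move; on a cutter, laser control happens here
]

port = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
time.sleep(2)           # give GRBL a moment to reset and print its banner
port.flushInput()

for line in GCODE:
    port.write((line + "\n").encode("ascii"))
    response = port.readline().decode("ascii", errors="replace").strip()
    print(line, "->", response)    # expect "ok" (or "error:...")

port.close()
```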

If you want to download the greyprints for it you can do so from its Thingiverse page. I also have a mirror of the .stl files here, in case you can't get to Thingiverse from wherever you are for some reason. I've also put mirrors of the latest checkout of the GRBL source code and associated wiki up just in case; they're clones of the Git repositories so the entire project history and documentation are there. You're on your own for assembly (right now) due to the hazardous nature of this project; get in touch with Villamany and get involved in the project. It's for your own good.

Electronic toys are nice - I've got 'em, you've got 'em, they pretty much drive our daily lives - but, as always, power is a problem. Batteries run out at inconvenient times and it's not always possible to find someplace to plug in and recharge. Solar power is one possible solution, but to get any real juice out of solar cells they need to be fairly large, usually larger than the device you want to power. Exploiting peculiar properties of semiconductors on the nanometer scale, however, seems promising. This next bit was first written about last summer, but it's only recently gotten a little more love in the science news. Research teams collaborating at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto and IBM Canada's R&D Center are steadily breaking new ground on what could eventually wind up being cheap and practical aerosol solar cells for power generation. Yep, aerosol as in "spray on." A little bit of background so this makes sense: quantum dots are basically crystals of semiconducting compounds that are nanoscopic in scale (their sizes are measured in billionths of a meter), small enough that, depending on how you treat them, they act like either semiconducting components (like those you can comfortably balance on a fingertip) or individual molecules. Colloidal quantum dots are synthesized in solution, which means they readily lend themselves to being layered on surfaces via aerosol deposition, at which time they self-organize just enough that you can do practical things with them - like convert a flow of photons into a flow of electrons; in other words, generate electrical power. The research team has figured out how to synthesize lead-sulfide colloidal quantum dots that don't oxidize in air but can still generate power. Right now they're only around 9% efficient; most solar panels are between 11% and 15% efficient, with the current world record of 44.7% efficiency held by the Fraunhofer Institute for Solar Energy Systems' concentrator photovoltaics. They've got a ways to go before they're comparable to solar panels that you or I are likely to get hold of but, the Fraunhofer Institute aside, 9% and 11% aren't that far apart, and they've improved their techniques somewhat in the intervening seven months. Definitely something to keep an eye on.
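
To get a feel for what those efficiency numbers mean in everyday terms, here's a quick back-of-the-envelope comparison. The irradiance figure and the charging target are round-number assumptions of mine, not numbers from the research.

```python
# Quick feel for what 9% vs. 15% efficiency means in practice.
IRRADIANCE_W_PER_M2 = 1000      # assumed bright, direct sunlight
CHARGE_TARGET_W = 5             # assumed: roughly a USB phone charger

for efficiency in (0.09, 0.15):
    area_m2 = CHARGE_TARGET_W / (IRRADIANCE_W_PER_M2 * efficiency)
    print(f"{efficiency:.0%} efficient: {area_m2 * 1e4:.0f} cm^2 of cell")
# Around 560 cm^2 at 9% vs. about 330 cm^2 at 15%.
```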

Image recognition is a weird, weird field of software engineering, involving pattern recognition, signal analysis, and a bunch of other stuff that I can't go into because I frankly don't get it. It's not my field, so I can't really do it any justice. Suffice it to say that the last few generations of image recognition software are pretty amazing and surprisingly accurate. This is due in no small part to advancements in the field of deep learning, the part of artificial intelligence which attempts to build software systems that work much more like the cognitive processes of living minds. Techniques encompass everything from statistical analysis to artificial neural networks (learning algorithms designed after the fashion of successive layers of simulated neurons) to even more rarefied and esoteric techniques. As for how they actually work when you pop the hood open and go digging around in the engine, that's a very good question. Nobody's really sure how software learning systems do what they do, just like nobody's really sure how the webworks of neurons inside your skull do what they do, but the nice thing is that you can dissect and observe them in ways that you can't organic systems. Recently, research teams at the University of Wyoming and Cornell have been experimenting with image analysis systems to figure out just how they function. They took one such system, called AlexNet, and did something not many would probably think to do: they asked it what it thought a guitar looked like. Their copy of AlexNet had never been trained on pictures of guitars, so what it dumped to a file was its internal state, which unsurprisingly didn't look anything like a guitar. The contents of the file looked more like Jackson Pollock trying his hand at game glitching.

The next phase of the experiment involved taking a copy of AlexNet that had been trained to recognize guitars and feeding it that weird image generated by the first copy. They took the confidence rating from the trained copy of AlexNet (roughly, how much it thought its input resembled what it had been trained on) and fed that metric back into the first, untrained copy, which they then asked again what it thought a guitar looked like. They repeated this cycle thousands of times over, until the first instance of AlexNet had essentially been trained to generate images that could fool other copies of AlexNet, and the second copy of AlexNet was recognizing the graphical hash as guitars with 99% confidence. What the results of this idiosyncratic experiment suggest is that image recognition systems don't operate like organic minds. They don't look at overall shapes or pick out the strings or the tuning pegs; they look for things like clusters of pixels with related colors, or abstract patterns and color relationships. In short, they do something else entirely, unlike organic minds. This does and does not make sense when you think about it a little. On one hand we're talking about software systems that at best only symbolically model the functionality of their corresponding inspirations. Organic neural networks tend not to be fully connected, while many software neural nets are. There's a lot going on inside of organic neurons that we aren't aware of yet, while the internals of individual software neurons are pretty well understood; the simplest are individual cells in arrays, and the arrays themselves have certain constraints on the values they contain and how they can be arranged. On the other hand, what does that say about organic brains? If software neural nets are to be considered reasonable representations of organic nets, just how much complexity is present in the brain, and what does all of it do? How many discrete nets are there, or is it one big mostly-connected network? How much complexity is required for consciousness to arise, anyway, let alone sapience?
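
If you want to play with the shape of that feedback loop yourself, here's a toy sketch of it. The "classifier" below is just random weights run through a squashing function, standing in for a trained network, and the generator is simple hill-climbing on noise; the real experiments used actual AlexNet and far more sophisticated image generators, so this only shows the loop's structure.

```python
import numpy as np

# Toy version of the feedback loop: start from noise and repeatedly keep
# whatever random tweak makes a fixed "classifier" more confident.
rng = np.random.default_rng(0)
IMAGE_PIXELS = 32 * 32

weights = rng.normal(size=IMAGE_PIXELS)          # stand-in for a trained net

def confidence(image):
    return 1.0 / (1.0 + np.exp(-image @ weights / 50.0))

image = rng.normal(size=IMAGE_PIXELS)            # start from noise
best = confidence(image)

for step in range(20000):
    candidate = image + rng.normal(scale=0.05, size=IMAGE_PIXELS)
    score = confidence(candidate)
    if score > best:                             # keep tweaks that fool it more
        image, best = candidate, score

print(f"classifier confidence after hill-climbing: {best:.3f}")
# The resulting "image" scores as a near-certain match while still looking
# like noise to a human - the same qualitative result the paper reports.
```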

The Doctor | 09 January 2015, 09:30 hours | default | One comment

A couple of thoughts on microblogging.

The thing about microblogging, or services which allow posts that are very short (around 140 characters) and are disseminated in the fashion of a broadcast medium, is that it lends itself to fire-and-forget posting. See something, post it, maybe attach a photograph or a link, and be done with it. If your goal is to get information out to lots of people at once, leveraging your social network is critical: post something, a couple of the users following you repost it so that more people see it, a couple of their followers repost it in turn... like ripples on the surface of a pond or radio waves through the air, information propagates across the Net. Unfortunately, this also lends itself to people taking things at face value. By just looking at the text posted (say, the title of an article) without following the link and reading the article, it's very easy for people to let the title or the text mislead them. News sites call this clickbait, and either use it quietly (because the goal is to get people to click in and see the ads, not to actually write decent articles) or they religiously swear against using it and put forth the effort to write articles that don't suck.

There is another thing worth noting: microblogging sites like Twitter also carry out location-based trend analysis of what's being posted and offer each user a list of the terms that are statistically significant near them. It's a little tricky to get just the trending terms, but sometimes you can make an end run with the mobile version of the site. By default trending terms are tailored to the user's history and perceived geographic location, but this can be turned off. At a glance it's very easy to look at whatever happens to be trending, check out the top ten or twenty tweets, and not bother digging any deeper, because that seems to be what's happening. However, that can be misleading in the extreme for several reasons. First of all, as mentioned earlier, trending terms are regional first and foremost - just because your neighborhood seems boring and quiet doesn't mean that the next town over isn't on fire and crying for help. Second, it's already known that regional censorship is being practiced to keep certain bits of information away from certain parts of the world without resorting to the "block the site entirely" tactics used in some countries. Of course, the reverse is also true: it's possible to manipulate trends to make things pop to the surface, either to ensure that something gets seen (in the right way, possibly) or to push other terms off the bottom of the trending terms list.

For some time I've been writing and deploying bots that interface with Twitter's user API, the service they offer which makes it possible to write code that interacts with their back end directly, rather than code that loads a page, parses the HTML, and susses out the stuff I'm interested in. Scraping is ugly, unreliable, and a real pain in the ass, and I'd much rather do it only as a last resort, if at all. Anyway, one of the things my bots do is interface with Twitter's trending-terms-by-location API as well as Twitter's keyword search API, download anything that fits their criteria, and then run their own statistical analysis to see if anything interesting shakes out. If their sensor nets do see anything, I get paged in various ways depending on how serious the search terms are (ranging from "send an e-mail" to "generate speech and call me"). Sometimes it's the e-mails that wind up being the most interesting.
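
If you're curious what the skeleton of such a bot looks like, here's a minimal sketch of the general idea (not my actual bot code) using the tweepy library. The API keys, WOEID, and alert threshold are placeholders, and the "statistical analysis" here is nothing more than crude word counting.

```python
import collections
import tweepy   # assumes the tweepy library and a set of Twitter API keys

# Minimal sketch: pull the trending terms for one location, search each one,
# and do a crude frequency analysis of the words in what comes back.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

WOEID = 2487956        # placeholder: Where-On-Earth ID for a city
ALERT_THRESHOLD = 50   # placeholder: how many hits counts as "interesting"

trends = api.trends_place(WOEID)[0]["trends"]
for trend in trends[:10]:
    word_counts = collections.Counter()
    for status in api.search(q=trend["name"], count=100):
        word_counts.update(status.text.lower().split())
    interesting = [(w, n) for w, n in word_counts.most_common(20)
                   if n >= ALERT_THRESHOLD and len(w) > 4]
    if interesting:
        # A real bot would send e-mail or page someone here.
        print(trend["name"], interesting)
```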


More under the cut...

The Doctor | 07 January 2015, 09:30 hours | default | No comments

Linux on the Dell XPS 15 (9530)

Midway through December of 2014 Windbringer suffered a catastrophic hardware failure following several months of what I've come to term the Dell Death Spiral (nontrivial CPU overheating even while in single-user mode, flaky wireless, USB3 ports failing, USB2 ports failing, complete system collapse). Consequently I was in a bit of a scramble to get new hardware, and after researching my options (as much as I love my Inspiron at work, they don't let you finance purchases) I spec'd out a brand new Dell XPS 15.

Behind the cut I'll list Windbringer's new hardware specs and everything I did to get up and running.


More under the cut...

The Doctor | 05 January 2015, 09:00 hours | content | No comments

Speakers' Bureau Contact Page

I now have a contact page for the Brighter Brains Speakers Bureau. If you are interested in having me present on a professional basis, please look over my bio and contact me through that route. We'll work it out from there.


More under the cut...

The Doctor | 02 January 2015, 19:39 hours | default | No comments

Happy 2015, everyone.

Happy New Year, everyone.

I'll have more of a benediction after I wake up some more...

The Doctor | 01 January 2015, 16:42 hours | default | No comments

Merry Christmas and a Joyous Yule, everyone.

May all your toys come with batteries, your books have ample margins for note taking, your clothes be just what you like to wear, and your chance to sleep in be long enough to get a good night's rest.

The Doctor | 25 December 2014, 09:00 hours | default | One comment

Fabbing tools in orbit and with memory materials, and new structural configurations of DNA.

A couple of weeks ago, before Windbringer's untimely hardware failure, I wrote an article about NASA installing a 3D printer on board the International Space Station and running some test prints on it to see how well additive manufacturing (stacking successive layers of feedstock atop one another to build up a more complex structure) would work in a microgravity environment. The answer is "quite well," incidentally. Well enough, in fact, to solve the problem of not having the right tools on hand. Let me explain.

In low earth orbit, if you don't have the right equipment - a hard drive, replacement parts, or something as simple as a hand tool - it can be months until the next resupply mission arrives and brings with it what you need. That could be merely inconvenient or it could be catastrophic, depending on the situation. Not too long ago Barry Wilmore, one of the astronauts on board the current ISS mission, mentioned that the ISS needed a socket wrench to carry out some tasks on board the station. Word was passed along to Made In Space, the California company which designed and manufactured the 3D printer installed on board the ISS. They designed a working socket wrench using CAD software groundside, converted the model into greyprints compatible with the 3D printer's software, and e-mailed them to Wilmore aboard the ISS. Wilmore ran the greyprints through the ISS' 3D printer. End result: a working socket wrench that was used to fix stuff in low earth orbit. One small step for 3D printing, one giant leap for on-demand microfacture.

In other 3D printing news, we now have a new kind of feedstock that can be used to fabricate objects. In addition to the ABS and PLA plastics used by home printers, and any number of alloys used by industrial direct metal laser-sintering fabbers, there is now something that we could carefully count as the first memory material suitable for additive manufacture. Kai Parthy, who has invented nearly a dozen (and counting) different kinds of feedstock for 3D printers, has announced his latest invention, a viscoelastic memory foam. Called Layfoam and derived from his PORO-LAY line of plastics, it runs through a 'printer as usual, but after it sets you can soak the printed object in water for a couple of days and it becomes pliable like rubber without losing much of its structural integrity. This widens the field of things that could potentially be fabbed, including devices for relieving mechanical strain (like washers and vibration-dampening struts), custom padding and cushioning components, protective cases, and, if bio-neutral analogues are discovered in the future, possibly even soft medical implants of the sort that are manufactured out of silicone now.

In the middle of the 20th century the helical structure of deoxyribonucleic acid, the massive molecule which encodes genomes, was discovered. While there are other conformations of DNA that have been observed in the wild, only a small number of them are actually encountered in any forms of life. Its data storage and error correction properties aside, one of the most marvelous things about DNA is that it's virtually self-assembling. A couple of weeks ago a research team at MIT published a paper entitled Lattice-Free Prediction of Three-Dimensional Structures of Programmed DNA Assemblies in the peer-reviewed journal Nature Communications. The research team developed an algorithm into which they can input a set of arbitrary parameters - molecular weights, atomic substitutions, microstructural configurations - and it'll calculate what shape the DNA will take on under those conditions. Woven disks. Baskets. Convex and concave dishes. Even, judging by some of the images generated by the research team, components of more complex self-assembling geometric objects could be synthesized (or would that be fabricated?) at the nanometer scale. Applications for such unusual DNA structures remain open: I think there is going to be a period of "What's this good for?" experimentation, just as there was for virtual reality and later augmented reality, but it seems safe to say that most of them will be biotech-related. Perhaps custom protein synthesis and in vivo gengineering will be involved, or perhaps some other applications will be devised a little farther down the line.

The best thing? They're going to publish the source code for the algorithm under an open source license so we all get to play with it.

Welcome to the future.

The Doctor | 24 December 2014, 09:30 hours | default | No comments

I don't think it was North Korea that pwned Sony.

EDIT: 2014/12/23: Added reference to, a link to, and a local copy of the United Nations' Committee Against Torture report.

I would have written about this earlier in the week when it was trendy, but not having a working laptop (and my day job keeping me too busy lately to write) prevented it. So, here it is:

Unless you've been completely disconnected from the media for the past month (which is entirely possible, it's the holiday season), you've probably heard about the multinational media corporation Sony getting hacked so badly that you'd think it was the climax of a William Gibson story. As near as anybody can tell, the entire Sony corporate network, in every last office and studio around the world, doesn't belong to them anymore. A crew calling itself the GoP - Guardians of Peace - took credit for the compromise. From what we know of the record-breaking incident, it probably took years to set up and may have been an inside job, simply because an astounding amount of data has been leaked online, possibly in the low terabyte range. From scans of famous actors' passports to executives' e-mail spools to crypto material already being used to sign malware to make it more difficult to detect, more and more sensitive internal documents are winding up free for the downloading on the public Net.

The US government publicly accused North Korea of the hack and is calling it an act of war. This was immediately parroted by the New York Times and NBC.

I don't think North Korea did it.

I think they're lying, and the public accusation that North Korea did it is jetwash. Bollocks. Bullshit. In the words of one of Eclipse Phase's more notorious world building devices, the MRGCNN, LLLLLIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEESSSSSSSSSSSSSSSSSSSSSSSSSSSS!!!!!

Beneath the cut are my reasons for saying this.


More under the cut...

The Doctor | 22 December 2014, 09:00 hours | default | Four comments

A friendly heads-up from work.

Windbringer experienced an unexpected and catastrophic hardware failure last night after months of limping along in weird ways (the classic Dell Death Spiral). My backups are good and I have a restoration plan, but until new hardware arrives my ability to communicate is extremely limited. Please be patient until I get set up again.

The Doctor | 12 December 2014, 18:26 hours | default | No comments
"We, the extraordinary, were conspiring to make the world better."