Where have I been lately?

That's an interesting question.

The short answer is, I've been busy. Very much so.

The longer and more accurate answer is that work has been running me ragged lately and I've been trying to conserve my spoons as best I can, lest I run myself into the ground (again). I've been routinely putting in 60- and 70-hour weeks, often over six or seven days, so I haven't really been getting a whole lot of downtime. So some hard choices had to be made. Go out for my birthday or keep it low key? Low key, because I'm on call. Get a couple of blog posts written and post-dated? Oops, on my one day off I slept sixteen hours. Wouldn't have done so if my body hadn't needed it, so I'm not going to cry about it. Come home from work and do some writing? Came home from work at 2200 local time, had something that I think was dinner, and faceplanted. Get up early and socialize, or get up early and shake a few bugs out of one of my bots (the output of which happens to be keeping me sane (more or less))? Nobody else is up that early, so code a little bit and then back to the grind, by way of the gates of horn and ivory.

For those of you who've been worried (and this goes for your bots, too), I'm not dead, though I have been dead to the world not a few times in the past two months.

When you get right down to it, self-care is what keeps most of us held together. Sometimes the wise thing to do is to recharge as best one can to keep going. Often that involves kicking everything else off and sleeping for a day or two. Or skipping some fun things, because the toll exacted on one's body and spirit would be too much, and cratering as a result can make things worse.

If there's one thing I've learned, it's that sometimes it's the right thing to do. Life is full of interesting and fun things to do and see, and getting benched for a while, even though it can feel frustrating or irritating, doesn't particularly diminish life as a whole. Certainly not if you don't let it. There is lots more out there, and when things calm down (and they will - it doesn't feel like it right now but I think they will) it'll be time to go for them again.

Just don't forget what it felt like to have those good times. They'll be what remind you to go back to them.

The Doctor | 03 March 2015, 10:00 hours | default | No comments

Now that I've got some time, what happened this year?

I've already done my obligatory post of some version of the song Birthday by The Cruxshadows, so what happened this last year that I can look back upon?

It's funny. I was sitting there earlier tonight at dinner (yes, I post-dated this entry so it would match up with the other one) and I came up with a bunch of stuff that I'm kicking myself for not having written down. I guess that's the way it goes - thoughts go in, thoughts go out, but unless you trap them somehow they're probably not going to come back. But I'll take a stab at it anyway.

I've learned that the most subtle of accidents, the ones you don't even realize happened in slow motion until well after the fact, can teach the most profound lessons. And you'll sometimes laugh yourself silly over them later.

I've come to recognize that if one surrounds oneself with too much of something (anything, really) it'll cause one's life to change so that it dominates everything one does, and eventually everything one is. Choose wisely. You can't always choose again.

I've learned that one's daily practice, in whatever form it may take, is the one stone upon which everything else can be built. When you feel like you can do it the least is when you need it the most. I've also come to accept that sometimes, at the end of the day when you drag yourself home and fall asleep on the couch, your daily practice just isn't going to happen. Absconding for a while and coming back can serve best under such circumstances.

I've learned that if you're going to be larger than life you've got to go the extra distance to not only get there but stay there. No matter how close to the Edge you are, no matter how good you just now were, no matter how many augmentations of any kind you've racked up, if you don't keep up with the basic "this is how this works" you'll slip behind. I've also learned the importance of having one or two demonstrations of the Edge up my sleeve that I can bust out at a moment's notice. Know-how and skill are nice, practice is great, but shock value is still a useful tool. Being a little theatrical can't hurt either (but practice first!).

I've learned never to give away the whole game. Never tell anyone everything you are capable of.

I have learned and am slowly coming to accept that, when life throws you on your head and wrecks your plans, fall back on your backup plan (if you don't have at least two backup plans, drop everything and lay them out right bloody now) and start executing. Your backup plans need to be able to throttle back so you don't wreck yourself. Your primary plan needs to be able to be suspended (not abandoned) and you need to build reasons for doing so into it. Sometimes you need to be gentle with yourself to make it through. Listen to the omens. But never, ever stop.

I've learned that people will dump their bad publicity on you if they fuck up badly. Always cultivate a loyal and observant community around your projects with the closest to unfailing honesty you can manage (secrecy doesn't always allow for this - life sucks like that). You won't have to defend yourself overmuch, your community will compare, contrast, and use their brains when you hope they will the most. During this time never, ever stop making progress. Keep it tight.

Sometimes the code you spent all day writing doesn't even work, and is completely terrible to boot. Blow it away and start over. Don't try to salvage it.

I've learned that the older I get, the less I want to break in a new pair of boots. I'm still working on the Doc Martens that I got for Yule, and I can just now wear them for longer than four hours at a stretch. It's well worth getting really nice ones up front, even if they cost quite a bit more, just so they'll last longer. I'd prefer to have a pair that lasts ten or fifteen years so I don't have to go through this every three or four years.

I've learned to always keep a little in reserve just so I can really cut loose if I have to.

I've learned again that while one may be recognized as an expert or a teacher in some respect by someone, one must always remain a student. Everybody has their betters out there; learn well from them. This includes making the mistakes of a student and learning from them.

I've learned that sometimes you just need to get out and dance. Take the next day off to recover if you need to. It's good for you.

I am trying to learn that sometimes shutting up is the right thing to do.

The Doctor | 15 February 2015, 15:00 hours | default | Two comments

837 and still kickin'

The year in review... when I finally have a chance to sit and write.

The Doctor | 15 February 2015, 14:10 hours | default | No comments

3D printing circuit boards, photography-resistant clothing, and wireless DNI.

Now that I've had a couple of days to sleep and get most of my brain operational again, how about some stuff that other parts of me have stumbled across?

Building your own electronics is pretty difficult. The actual electrical engineering aside, you still have to cut, etch, and drill your own printed circuit boards, which is a lengthy and sometimes frustrating task. Doubly so when multi-layer circuit boards are involved, because they're so fiddly and easy to get wrong. There is one open source project that I know of, called the Rabbit Pronto, which is a RepRap print head for fabbing circuit boards, but it might be a little too experimental for the tastes of some. This constitutes a serious holdup to people being able to fabricate their own computers, but that's a separate issue. Enter the Voltera, a rapid prototyping machine for circuitry. Currently clocking in at $237,061us on Kickstarter and still going, the Voltera isn't quite a 3D printer in that it doesn't seem possible to fabricate circuit boards completely from scratch with it; you still need a static baseplate. However, what the Voltera does do is lay down successive layers of conductive and insulating inks on top of the fibreglass board until your entire circuit has been printed out. If surface mount technology is how you roll (and that's increasingly the only game in town) you won't have to worry about drilling holes for components' leads, but there is nothing preventing through-hole designs. The firmware is designed to accept industry standard Gerber files, so users aren't necessarily tied down to any one CAD package. Even more interesting is that the Voltera includes a solder paste head, so after the board's done it'll lay that out for you as well so that components can be positioned appropriately. Additionally, the bed of the Voltera implements reflow soldering, which means that after the components are positioned the temperature can be slowly raised until the solder paste cooks down and solid electrical connections are made - no more toaster oven.
All but one of the Batch-2 runs of the Voltera are spoken for already, so if you really want one you'd best jump on it; otherwise you're going to have to wait for them to go into general manufacture.

Privacy runs fast through our fingers in the twenty-first century. If it's not security cameras on the street recording everything and everyone walking by, it's drones (public and private sector both) on surveillance runs. If it's not drones, sometimes it's people with cameras and smartphones photographing people who really don't want their pictures taken (cases in point: the photography policies of many hacker cons). In other words, paparazzi are no longer a problem exclusive to the rich and famous. Enter Steve Wheeler of Betabrand, a company whose think tank strategy is to crowdsource clothing designs and let people vote on them; projects with good prospects enter a crowdfunding phase so early adopters can gain access to them. If something does really well, it goes into mass production. Their latest project (which is doing surprisingly well) is called Flashback - anti-photography clothing that reflects so much light into the lens that only the clothes can be seen. Flashback clothing works the same way as the high-visibility vests and strips that urban bicyclists wear, using glass nanospheres bonded to the fabric itself to form what amounts to a flexible, highly reflective surface that refracts as much light as possible. Currently there are only four pieces - a hooded jacket, a scarf, a blazer, and trousers - but depending on how things go the clothing line might grow. The Wired article I've linked to has a couple of "during the photograph" pictures, but their crowdfunding page has excellent before-flash/after-flash pictures. There is some skepticism about how well they actually work (especially from professional photographers) but after reading a bit about the theory it seems sound to me, and I'm considering rounding up all of the reflective strips my cow-orkers wear to do a couple of "Will it or won't it?" pictures over lunch as an experiment.
If exotic clothing is your thing you might want to keep an eye on this brand, though you'll pay close to designer prices for their wares.

The slow and steady march toward direct neural interface - creating a bi-directional link between the brain and computer hardware - proceeds apace. In 2011 Dr. Eberhard Fetz was given a $1mus, three-year grant to advance his work on implantable neuroprosthetics. Now we have the CerePlex-W, an implantable neural activity receiver which wirelessly transmits its data to nearby computer systems, which can act upon those commands. Currently it's on sale only on the research market for use with simian test subjects, but the Braingate Consortium is in talks with the US FDA to begin clinical human trials some time in the near future. The CerePlex-W is a wireless device broadcasting at 30 milliwatts of power, so it can be picked up just a meter or two away, yet it's able to transmit data at a speed of 48 megabits per second - princely bandwidth for broadcasting the activity of the cerebral cortex indeed. Whatever is connected to the receiver can use the command signal however it wishes, from manipulating a cursor on a screen all the way to... that's a good question. Entering characters? Driving a wheelchair around? Using a robotic arm to move stuff around? The mind boggles, especially when you take into account the possibility of setting up a tech chain: if you can type, you can both program and send e-mail to vendors to have stuff hooked up for you, then write the software to control it, then use the hardware to do other things, and then still other things, and build better prostheses... The device is described as being about the size of an automobile gascap and is not fully contained, which is to say that it still has to have a persistent opening through the skin and skull to connect to an electrode grid placed atop the subject's brain. Major surgery is, of course, still required to position the electrode grid on one of the motor cortices. Still, the output bandwidth of this device aside, it represents a remarkable breakthrough in that it's so small.
After ten years of hard work all of the signal processing is done on board, without needing to be plugged into racks of computers to do the number crunching. There isn't any word yet on when FDA trials will begin, but you can bet that once they do all hell's going to break loose. Time to start saving our pennies...

The Doctor | 13 February 2015, 09:00 hours | default | Two comments

Ubuntu Syndrome.

Warning: Bitter BOFH ahead.

There is a phenomenon I've come to call Ubuntu Syndrome, after the distribution of Linux which has become the darling of nearly every hosting provider out there (and no, I won't call them bloody cloud providers). All things considered, it seems to have a good balance of stable software, ease of use, availability, and diversity of available software. It also lends itself readily to a particular workflow: stand up a VM, deploy whatever's needed on top of it, never patch it, get pwned, tear it down, and start over.
Look. I get that virtual machines are, for all intents and purposes, disposable. They're cheap to stand up, relatively cheap to operate (up to a point), and trivial to tear down so you can start over. They're certainly more convenient than having to rebuild and reinstall an entire physical server from scratch. On the other hand, there is a lot to be said for doing things right up front so that you can skip over (or at least hopefully postpone) the whole "get pwned" part of the show. A little bit of extra work up front (like running the command apt-get update && apt-get upgrade -y) can save a great deal of time and effort later by installing the latest and greatest security patches. It takes a little while, sure, but why work extra late nights if you don't have to? In addition, there is something to be said for hardening your VMs when you stand them up, at the same time you patch them, to make it that much harder for the VM to be compromised. It doesn't take long; in fact it can be as simple as copying a handful of files and rebooting the VM. Here's my private stash of hardened configs for Ubuntu v12.04 and v14.04 LTS that I deploy on all of my servers (virtual and otherwise, when I have to use Ubuntu). There are other resources out there, sure, but these are mine and you're welcome to use them.
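That patch-then-harden routine is short enough to sketch in shell. This is a toy version (the file names and contents below are illustrative stand-ins, not the contents of my stash); it stages everything into a scratch directory so it's safe to run anywhere:

```shell
# Stage hardened configs into a scratch directory. On a real VM, DEST
# would be / and the commented-out commands at the bottom would run.
DEST="$(mktemp -d)"
mkdir -p "$DEST/etc/ssh" "$DEST/etc/sysctl.d"

# Stand-ins for a directory of pre-built hardened config files.
mkdir -p hardened-configs
printf 'PermitRootLogin no\nPasswordAuthentication no\n' > hardened-configs/sshd_config
printf 'kernel.kptr_restrict = 2\n' > hardened-configs/sysctl.conf

# Copy the handful of files into place.
cp hardened-configs/sshd_config "$DEST/etc/ssh/sshd_config"
cp hardened-configs/sysctl.conf "$DEST/etc/sysctl.d/99-hardening.conf"

# On the real system, patching and activation would follow:
#   apt-get update && apt-get upgrade -y
#   sysctl --system
#   service ssh restart
echo "staged hardened configs under $DEST"
```

On an actual VM you'd point DEST at the root filesystem, run the apt-get line for real, and restart (or reboot) so the hardened services pick up their new configs.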

Put a little thought into it. Just because something is disposable doesn't mean it isn't worth a little care up front to avoid extra trouble and hassle later. Save yourselves the energy for more interesting things later.


More under the cut...

The Doctor | 11 February 2015, 09:30 hours | default | No comments

Photographs from the Monterey Bay Aquarium, December 2014.

I know I haven't posted much (at all, really) for most of a month. I'd love to say that I've been out having wacky adventures and gallivanting about Time and Space, but I haven't. Work has been, well, work, and eating me alive to boot. This is the first evening in quite a while (because I'm writing this as a timed post) that I haven't gone straight to bed after getting home. So, no interesting news articles, no attempts at humor, no witty insights. However, last December I took the opportunity to pay the Monterey Bay Aquarium a visit. I don't have a whole lot else to say because I frankly don't have it in me. I will say, however, that there were two octopodes at the aquarium that were seriously out of social and seemed to want nothing more than to be left alone for a couple of precious hours.

Anyway, here are the pictures. Some of them aren't of the greatest quality because parts of the aquarium were pretty dark but I kept the best ones. Enjoy.

The Doctor | 09 February 2015, 09:00 hours | images | No comments

A 3D printed laser cutter, aerosol solar cells, and reversing neural networks.

3D printers are great for making things, including more of themselves. The first really accessible 3D printer, the RepRap, was designed to be buildable from locally sourceable components - metal rods, bolts, screws, and wires - with the rest run off on another 3D printer. There is even a variant called the JunkStrap which, as the name implies, involves repurposing electromechanical junk for basic components. There are other useful shop tools which don't necessarily have open source equivalents, though, like laser cutters for precisely cutting, carving, and etching solid materials. Lasers are finicky beasts - they require lots of power, they need to be cooled so they don't fry themselves, they can produce toxic smoke when firing (because whatever they're burning oxidizes), and if you're not careful the other wavelengths of light produced when they fire can damage your eyes permanently. All of that said, they're extremely handy tools to have around the shop, and can be as easy to use as a printer once you know how (protip: take the training course more than once. I took HacDC's once and I don't feel qualified to operate their cutter yet). Cutting to the chase (way too late), someone on Thingiverse using the handle Villamany has created an open source, 3D printable laser cutter out of recycled components. Called the 3dpBurner, it's an open frame laser cutter that takes after the RepRap in a lot of ways (namely, it was originally built out of recycled RepRap parts) and is something that a fairly skilled maker could assemble in a weekend or two, provided that all the parts were handy. Villamany has documented the project online to assist in the assembly of this device, and makes a point of warning everyone that this is a potentially dangerous project and that proper precautions should be taken when testing and using it.
Not included yet are plans for building a suitable safety enclosure for the unit, so my conscience will not let me advise that anyone try building one just yet; this is way out of my league so it's probably out of yours, too. That said, the 3dpBurner uses fairly easy to find high-power chip lasers to do the dirty work; if this sounds far-fetched, people have been doing this for a while, to good effect at that. The 3dpBurner uses an Arduino as its CPU, running the GRBL firmware that was designed as a more-or-less universal CNC firmware implementation to drive the motors.
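GRBL ultimately consumes plain G-code, so driving a machine like this comes down to feeding it text. Here's a quick sketch of my own (not part of the 3dpBurner project) that generates the G-code for a square cut, using the M3/M5 spindle commands that GRBL-style firmware uses to gate the laser on and off:

```python
def square_gcode(size_mm, feed_rate=200, laser_power=255):
    """Emit G-code for a square cut: laser on, trace the perimeter, laser off.
    GRBL treats the spindle commands (M3 on, M5 off) as the laser gate."""
    lines = [
        "G21",                  # millimeter units
        "G90",                  # absolute positioning
        f"M3 S{laser_power}",   # laser on at the given power level
        f"G1 X{size_mm} Y0 F{feed_rate}",  # trace the four sides
        f"G1 X{size_mm} Y{size_mm}",
        f"G1 X0 Y{size_mm}",
        "G1 X0 Y0",
        "M5",                   # laser off
    ]
    return "\n".join(lines)

print(square_gcode(40))
```

A real job file would come out of a CAM tool rather than being written by hand, but the output is the same sort of thing: motion commands plus power control, streamed to the Arduino over a serial port.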

If you want to download the greyprints for it you can do so from its Thingiverse page. I also have a mirror of the .stl files here, in case you can't get to Thingiverse from wherever you are for some reason. I've also put mirrors of the latest checkout of the GRBL source code and associated wiki up just in case; they're clones of the Git repositories so the entire project history and documentation are there. You're on your own for assembly (right now) due to the hazardous nature of this project; get in touch with Villamany and get involved in the project. It's for your own good.

Electronic toys are nice - I've got 'em, you've got 'em, they pretty much drive our daily lives - but, as always, power is a problem. Batteries run out at inconvenient times and it's not always possible to find someplace to plug in and recharge. Solar power is one possible solution, but to get any real juice out of solar cells they need to be fairly large, usually larger than the device you want to power. Exploiting peculiar properties of semiconductors on the nanometer scale, however, seems promising. This next bit was first published last summer but it's only recently gotten a little more love in the science news. Research teams collaborating at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto and IBM Canada's R&D Center are steadily breaking new ground on what could eventually wind up being cheap and practical aerosol solar cells for power generation. Yep, aerosol as in "spray on." A little bit of background so this makes sense: quantum dots are basically crystals of semiconducting compounds that are nanoscopic in scale (their sizes are measured in billionths of a meter), small enough that depending on how you treat them they act like either semiconducting components (like those you can comfortably balance on a fingertip) or individual molecules. Colloidal quantum dots are synthesized in solution, which means they readily lend themselves to being layered on surfaces via aerosol deposition, at which time they self-organize just enough that you can do practical things with them. Like convert a flow of photons into a flow of electrons - generate electrical power, in other words. The research team has figured out how to synthesize colloidal lead-sulfide quantum dots that don't oxidize in air but can still generate power.
Right now they're only around 9% efficient; most solar panels are between 11% and 15% efficient, with the current world record of 44.7% efficiency held by the Fraunhofer Institute for Solar Energy Systems' concentrator photovoltaics. They've got a ways to go before they're comparable to solar panels that you or I are likely to get hold of but, the Fraunhofer Institute aside, 9% and 11% aren't that far apart, and they've improved their techniques somewhat in the intervening seven months. Definitely something to keep an eye on.

Image recognition is a weird, weird field of software engineering, involving pattern recognition, signal analysis, and a bunch of other stuff that I can't go into because I frankly don't get it. It's not my field so I can't really do it any justice. Suffice it to say that the last few generations of image recognition software are pretty amazing and surprisingly accurate. This is due in no small part to advancements in the field of deep learning, the part of artificial intelligence which attempts to build software systems that work much more like the cognitive processes of living minds. Techniques encompass everything from statistical analysis to artificial neural networks (learning algorithms designed after the fashion of successive layers of simulated neurons) to even more rarefied and esoteric techniques. As for how they actually work when you pop the hood open and go digging around in the engine, that's a very good question. Nobody's really sure how software learning systems work, just like nobody's really sure how the webworks of neurons inside your skull do what they do, but the nice thing is that you can dissect and observe them in ways that you can't organic systems. Recently, research teams at the University of Wyoming and Cornell have been experimenting with image analysis systems to figure out just how they function. They took one such system called AlexNet and did something not many would probably think to do - they asked it what it thought a guitar looked like. Their copy of AlexNet had never been trained on pictures of guitars, so they had it dump its internal state to a file, which unsurprisingly didn't look anything like a guitar. The contents of the file looked more like Jackson Pollock trying his hand at game glitching.

The next phase of the experiment involved taking a copy of AlexNet that had been trained to recognize guitars and feeding it that weird image generated by the first copy. They took the confidence rating from the trained copy of AlexNet (roughly, how much it thought its input resembled what it had been trained on) and fed that metric into the first, untrained copy, which they then asked again what it thought a guitar looked like. They repeated this cycle thousands of times over, until the first instance of AlexNet had essentially been trained to generate images that could fool other copies of AlexNet, and the second copy of AlexNet was recognizing the graphical hash as guitars with 99% confidence. What the results of this idiosyncratic experiment suggest is that image recognition systems don't operate like organic minds. They don't look at overall shapes or pick out the strings or the tuning pegs; instead they look for things like clusters of pixels with related colors, or patterns of abstract patterns, or color relationships. In short, they do something else entirely, unlike organic minds. This does and does not make sense when you think about it a little. On one hand, we're talking about software systems that at best only symbolically model the functionality of their corresponding inspirations. Organic neural networks tend not to be fully connected, while software neural nets are. There's a lot going on inside of organic neurons that we aren't aware of yet, while the internals of individual software neurons are pretty well understood. The simplest are individual cells in arrays, and the arrays themselves have certain constraints on the values they contain and how they can be arranged. On the other hand, what does that say about organic brains? If software neural nets are to be considered reasonable representations of organic nets, just how much complexity is present in the brain, and what does all of it do? How many discrete nets are there, or is it one big mostly-connected network?
How much complexity is required for consciousness to arise, anyway, let alone sapience?
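The feedback loop in that experiment is easier to see in miniature. Below is a toy sketch of my own construction (nothing to do with the researchers' actual code): the "recognizer" is just a fixed linear scorer over a tiny sixteen-pixel image, and a hill climber mutates random noise until the recognizer is highly confident, even though the result still looks like static to us.

```python
import math
import random

random.seed(42)

N = 16  # a tiny 16-"pixel" image instead of a real bitmap

# Stand-in for the trained recognizer: a fixed weight vector plus a
# logistic squash that turns the raw score into a 0..1 "confidence".
weights = [random.uniform(-1, 1) for _ in range(N)]

def confidence(image):
    score = sum(w * p for w, p in zip(weights, image))
    return 1.0 / (1.0 + math.exp(-score))

# Start from random noise and hill-climb: keep any single-pixel mutation
# that makes the recognizer more confident. This is the fooling loop in
# miniature - the result satisfies the recognizer but looks like static.
image = [random.uniform(-1, 1) for _ in range(N)]
for _ in range(5000):
    candidate = list(image)
    i = random.randrange(N)
    candidate[i] = max(-1.0, min(1.0, candidate[i] + random.uniform(-0.5, 0.5)))
    if confidence(candidate) > confidence(image):
        image = candidate

print(round(confidence(image), 3))
```

The point of the toy: the optimizer never needs to know what a "guitar" is, only which direction moves the confidence number up, which is why the winning image can be meaningless to a human eye.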

The Doctor | 09 January 2015, 09:30 hours | default | One comment

A couple of thoughts on microblogging.

The thing about microblogging - services which allow posts that are very short (around 140 characters) and are disseminated in the fashion of a broadcast medium - is that it lends itself to fire-and-forget posting. See something, post it, maybe attach a photograph or a link, and be done with it. If your goal is to get information out to lots of people at once, leveraging one's social network is critical: post something, a couple of the users following you repost it so that more people see it, a couple of their followers repost it in turn... like ripples on the surface of a pond, information propagates across the Net like radio waves through the air. Unfortunately, this also lends itself to people taking things at face value. By just looking at the text posted (say, the title of an article) without following the link and reading the article, it's very easy for people to let the title or the text mislead them. News sites call this clickbait, and either use it quietly (because the goal is to get people to click in and see the ads, not actually to publish decent articles) or they religiously swear against using it and put forth the effort to write articles that don't suck.

There is another thing that is worth noting: microblogging sites like Twitter also carry out location-based trend analysis of what's being posted and offer each user a list of the terms that are statistically significant near them. It's a little tricky to get just the trending terms, but sometimes you can make an end run with the mobile version of the site. By default trending terms are tailored to the user's history and perceived geographic location, but this can be turned off. At a glance it's very easy to look at whatever happens to be trending, check out the top ten or twenty tweets, and not bother digging any deeper, because that seems to be what's happening. However, that can be misleading in the extreme for several reasons. First of all, as mentioned earlier, trending terms are regional first and foremost - just because your neighborhood seems boring and quiet doesn't mean that the next town over isn't on fire and crying for help. Second, it's already known that regional censorship is being practiced to keep certain bits of information completely away from certain parts of the world without resorting to the "block the site entirely" censorship tactics used in some countries. Of course, the reverse is also true: it's possible to manipulate trends to make things pop to the surface, either to ensure that something gets seen (in the right way, possibly) or to push other terms off the bottom of the trending terms list.

For some time I've been writing and deploying bots that interface with Twitter's user API, the service they offer which makes it possible to write code that interacts with their back end directly, without having to write a scraper that loads a page, parses the HTML, and susses out the stuff I'm interested in. Scraping is ugly, unreliable, and a real pain in the ass, and I'd much rather do it only as a last resort, if at all. Anyway, one of the things my bots do is interface with Twitter's API for trending terms in various places, as well as Twitter's keyword search API, download anything that fits their criteria, and then run their own statistical analysis to see if anything interesting shakes out. If their sensor nets do see anything I get paged in various ways, depending on how serious the search terms are (ranging from "send an e-mail" to "generate speech and call me"). Sometimes it's the e-mails that wind up being the most interesting.
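The fetching side depends on API credentials and libraries, but the "see if anything interesting shakes out" step can be sketched with nothing but the standard library. This is a toy version with made-up counts (the terms, threshold, and data here are illustrative, not my bots' actual code): flag any term whose count this interval sits several standard deviations above its historical mean.

```python
from statistics import mean, stdev

def spiking_terms(history, current, threshold=3.0):
    """Return (term, z-score) pairs for terms whose current count is
    at least `threshold` standard deviations above their historical
    hourly mean - the bot's 'page me' condition, highest spike first."""
    alerts = []
    for term, counts in history.items():
        mu = mean(counts)
        sigma = stdev(counts) or 1.0  # guard against a perfectly flat history
        z = (current.get(term, 0) - mu) / sigma
        if z >= threshold:
            alerts.append((term, round(z, 1)))
    return sorted(alerts, key=lambda pair: -pair[1])

# Made-up hourly mention counts standing in for keyword search results.
history = {
    "earthquake": [2, 3, 1, 2, 4, 3],
    "coffee":     [40, 38, 45, 41, 39, 44],
}
current = {"earthquake": 55, "coffee": 43}

print(spiking_terms(history, current))
```

A real bot would hand the alert list to its paging logic, escalating from "send an e-mail" toward "generate speech and call me" as the z-scores climb.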


More under the cut...

The Doctor | 07 January 2015, 09:30 hours | default | No comments

Linux on the Dell XPS 15 (9530)

Midway through December of 2014, Windbringer suffered a catastrophic hardware failure following several months of what I've come to term the Dell Death Spiral (nontrivial CPU overheating even while in single user mode, flaky wireless, USB3 ports failing, USB2 ports failing, complete system collapse). Consequently I was in a bit of a scramble to get new hardware, and after researching my options (as much as I love my Inspiron at work, they don't let you finance purchases) I spec'd out a brand new Dell XPS 15.

Behind the cut I'll list Windbringer's new hardware specs and everything I did to get it up and running.


More under the cut...

The Doctor | 05 January 2015, 09:00 hours | content | No comments

Speakers' Bureau Contact Page

I now have a contact page for the Brighter Brains Speakers Bureau. If you are interested in having me present on a professional basis, please look over my bio and contact me through that route. We'll work it out from there.


More under the cut...

The Doctor | 02 January 2015, 19:39 hours | default | No comments

Happy 2015, everyone.

Happy New Year, everyone.

I'll have more of a benediction after I wake up some more...

The Doctor | 01 January 2015, 16:42 hours | default | No comments

Merry Christmas and a Joyous Yule, everyone.

May all your toys come with batteries, your books have ample margins for note taking, your clothes be just what you like to wear, and your chance to sleep in be long enough to get a good night's rest.

The Doctor | 25 December 2014, 09:00 hours | default | One comment

Fabbing tools in orbit and with memory materials, and new structural configurations of DNA.

A couple of weeks ago, before Windbringer's untimely hardware failure, I did an article about NASA installing a 3D printer on board the International Space Station and running some test prints on it to see how well additive manufacturing - stacking successive layers of feedstock atop one another to build up a more complex structure - would work in a microgravity environment. The answer is "quite well," incidentally. Well enough, in fact, to solve the problem of not having the right tools on hand. Let me explain.

In low earth orbit, if you don't have the right equipment - a hard drive, replacement parts, or something as simple as a hand tool - it can be months until the next resupply mission arrives and brings with it what you need. Depending on the situation, that could be merely inconvenient or it could be catastrophic. Not too long ago Barry Wilmore, one of the astronauts on board the current ISS mission, mentioned that the ISS needed a socket wrench to carry out some tasks on board the station. Word was passed along to Made In Space, the California company which designed and manufactured the 3D printer installed on board the ISS. So, they designed a working socket wrench using CAD software groundside, converted the model into greyprints compatible with the 3D printer's software, and e-mailed them to Wilmore aboard the ISS. Wilmore ran the greyprints through the ISS' 3D printer. End result: a working socket wrench that was used to fix stuff in low earth orbit. One small step for 3D printing, one giant leap for on-demand microfacture.

In other 3D printing news, we now have a new kind of feedstock that can be used to fabricate objects. In addition to the ABS and PLA plastics for home printers, and any number of alloys used in industrial direct metal laser-sintering fabbers, there is now something we could carefully count as the first memory material suitable for additive manufacture. Kai Parthy, who has invented nearly a dozen (and counting) different kinds of feedstock for 3D printers, has announced his latest invention, a viscoelastic memory foam. Called Layfoam and derived from his line of PORO-LAY plastics, it can be run through a 'printer per usual, but after it sets you can soak the object in water for a couple of days and it becomes pliable like rubber without losing much of its structural integrity. This widens the field of things that could potentially be fabbed, including devices for relieving mechanical strain (like washers and vibration-damping struts), custom padding and cushioning components, protective cases, and, if bio-neutral analogues are discovered in the future, possibly even soft medical implants of the sort that are manufactured out of silicone now.

Midway through the 20th century the helical structure of deoxyribonucleic acid, the massive molecule which encodes genomes, was discovered. While other conformations of DNA have been observed in the wild, only a small number of them are actually encountered in any forms of life. Its data storage and error correction properties aside, one of the most marvelous things about DNA is that it's virtually self-assembling. A couple of weeks ago a research team at MIT published a paper entitled Lattice-Free Prediction of Three-Dimensional Structures of Programmed DNA Assemblies in the peer-reviewed journal Nature Communications. The research team developed an algorithm into which they can input a set of arbitrary parameters - molecular weights, atomic substitutions, microstructural configurations - and it'll calculate what shape the DNA will take on under those conditions. Woven disks. Baskets. Convex and concave dishes. Even, judging by some of the images generated by the research team, components of more complex self-assembling geometric objects could be synthesized (or would that be fabricated?) at the nanometer scale. Applications for such unusual DNA structures remain open: I think there is going to be a period of "What's this good for?" experimentation, just as there was for virtual reality and later augmented reality, but it seems safe to say that most of them will be biotech-related. Perhaps custom protein synthesis and in vivo gengineering will be involved, or perhaps some other applications will be devised a little farther down the line.

The best thing? They're going to publish the source code for the algorithm under an open source license so we all get to play with it.

Welcome to the future.

The Doctor | 24 December 2014, 09:30 hours | default | No comments

I don't think it was North Korea that pwned Sony.

EDIT: 2014/12/23: Added reference to, a link to, and a local copy of the United Nations' Committee Against Torture report.

I would have written about this earlier in the week when it was trendy, but not having a working laptop (and my day job keeping me too busy lately to write) prevented it. So, here it is:

Unless you've been completely disconnected from the media for the past month (which is entirely possible, it's the holiday season), you've probably heard about the multinational media corporation Sony getting hacked so badly that you'd think it was the climax of a William Gibson story. As near as anybody can tell the entire Sony corporate network, in every last office and studio around the world, doesn't belong to them anymore. A crew calling itself the GoP - Guardians of Peace - took credit for the compromise. From what we know of the record-breaking incident, it probably took years to set up and may have been an inside job, simply due to the fact that an astounding amount of data has been leaked online, possibly in the low terabyte range. From scans of famous actors' passports to executives' e-mail spools, to crypto material already being used to sign malware to make it more difficult to detect, more and more sensitive internal documents are winding up free for the downloading on the public Net.

The US government publicly accused North Korea of the hack and is calling it an act of war. This was immediately parroted by the New York Times and NBC.

I don't think North Korea did it.

I think they're lying, and the public accusation that North Korea did it is jetwash. Bollocks. Bullshit. In the words of one of Eclipse Phase's more notorious world building devices, the MRGCNN, LLLLLIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEESSSSSSSSSSSSSSSSSSSSSSSSSSSS!!!!!

Beneath the cut are my reasons for saying this.


More under the cut...

The Doctor | 22 December 2014, 09:00 hours | default | Four comments

A friendly heads-up from work.

Windbringer experienced an unexpected and catastrophic hardware failure last night after months of limping along in weird ways (the classic Dell Death Spiral). My backups are good and I have a restoration plan, but until new hardware arrives my ability to communicate is extremely limited. Please be patient until I get set up again.

The Doctor | 12 December 2014, 18:26 hours | default | No comments

Repurposing memes for presentations.

I'm all for people reading, listening to, and watching the classics of any form of media. They're the basic cultural memes that so much other cultural communication is built on top of, and occasionally riffs on - memes that we all seem to silently recognize, whether or not we know where they're from or the context they originally had. You may not know who the Grateful Dead are or recognize any of their music (I sure don't), but if you're a USian, chances are that you've at least seen the newer iterations of the hippie movement and recognize the general style affected by adherents thereof, due to the significant overlap between the two. Most of us, at one point or another, recognize scenes from Romeo and Juliet on television even though we may not have read the play or seen a stage production thereof. They're all around us, like the air we breathe or the water fish swim in (whether or not fish are actually aware that they swim in something called water isn't something I intend to touch on in this post).

More under the cut for spoilers, because I'm feeling nice.


More under the cut...

The Doctor | 06 December 2014, 14:11 hours | default | No comments

Robotic martial artists, security guards, and androids.

Quite possibly the holy grail of robotics is the anthroform robot, a robot which is bipedal in configuration, just like a human or other great ape. As it turns out, it's very tricky to build such a robot without it being too heavy or having power requirements that are unreasonable in the extreme (which only exacerbates the former problem). The first real success in this field was Honda's ASIMO in the year 2000.ev, which most recently uses a lithium-ion power cell that permits one hour of continuous runtime. ASIMO is also, if you've ever seen a live demo, somewhat limited in motion; it can't really jump, raise one leg very far, or run as fast as an average human at a gentle trot. That said, recent Google acquisition Boston Dynamics (whom many like to make fun of because of the BigDog demo video) recently published a demo video for Atlas, a 6-foot-2-inch, 330-pound anthroform robot that children of the '80s will undoubtedly find just as amusing. To demonstrate Atlas' ability to balance stably under adverse conditions (to wit, atop a narrow stack of cinder blocks) they had Atlas assume the crane stance made famous in the movie The Karate Kid. If it seems like a laboratory "gimme," I challenge you to try it and honestly report in the comments how many times you topple over. Yet to come is the jumping and kicking part of the routine, but it's Boston Dynamics we're talking about; if they can figure out how to program a robot to accurately throw hand grenades they can figure out how to get Atlas to finish the stunt. I must point out, however, that Atlas is a tethered robot thus far - the black umbilical you see in the background appears to be a shielded power line.

In related news, a video shot at an exhibit in Japan popped up on some of the more popular geek news sites earlier this week. The exhibit is of two ABB industrial manipulators inside a Lexan arena, showing off the abilities of their programmers as well as their precision and delicacy by sparring with swords. The video depicts the industrial robots each drawing a sword and moving in relation to one another with only the points touching, then demonstrating edge-on-edge movements and synchronized movements in relation to one another, followed by replacing the swords in their sheaths. If you watch carefully you can even tell who the victor of the bout was.

A common trope in science fiction is the security droid, a robotic sentry that seemingly exists only to pin down the protagonists with a hail of ranged weapons fire or send a brief image back to the security office before being taken out by said protagonists to advance the plot. Perhaps it's for the best that Real Life(tm) is still trying to catch up in that regard... Early last month, Silicon Valley startup Knightscope did a live demonstration of their first-generation semi-autonomous security drone, the K5, on Microsoft's NorCal campus. The K5 is unarmed but has a fairly complex sensor suite on board designed for site security monitoring, threat analysis and recognition, and an uplink to the company's SOC (Security Operations Center), where human analysts in the response loop can respond if necessary. The sensor suite includes high-def video cameras with built-in near-infrared imaging, facial recognition software, audio microphones, LIDAR, license plate recognition hardware, and even an environmental monitoring system that watches everything from ambient air temperature to carbon dioxide levels. The K5's navigation system incorporates GPS, machine learning, and technician-led training to show a given unit its patrol area, which it then sets out to learn on its own before patrolling its programmed beat. Interestingly, if someone tries to mess with a K5 on the beat - say, by obstructing it, trying to access its chassis, or trying to abscond with it - the K5 will sound an audible alarm while sending an alert to the SOC. The K5 line is expected to launch commercially in 2015.ev on a Machine-As-A-Service basis, meaning that companies won't actually buy the units; they'll rent them for approximately $4500us/month, which includes 24x7x365 monitoring at the SOC.

No, they can't go up stairs. Yes, I went there.


More under the cut...

The Doctor | 04 December 2014, 09:30 hours | default | Three comments

The first successful 3D print job took place aboard the ISS!

There's a funny thing about space exploration: If something goes wrong aboard ship the consequences could easily be terminal. Outer space is one of the most inhospitable environments imaginable, and meat bodies are remarkably resilient as long as you don't remove them from their native environment (which is to say dry land, about one atmosphere of pressure, and a remarkably fiddly chemical composition). Space travel inherently removes meat bodies from their usual environment and puts them into a complex, fragile replica made of alloys, plastics, and engineering; as we all know, the more complex something is, the more things can go wrong, and Murphy was, contrary to popular belief, an optimist. Case in point, the Apollo 13 mission, which was saved by one of the most epic hacks of all time. It's worth noting, however, that the Apollo 13 CO2 scrubber hack was just that - a hack. NASA really worked a miracle to make that square peg fit into a round hole, but it could easily have gone the other way, and the mission might never have returned to Earth. Sometimes you can make the parts you have work for you with some modification, but sometimes you can't. Even MacGyver failed once in a while.*

So, you're probably wondering where this is going. On the last resupply trip to the International Space Station one of the pieces of cargo taken up was... you know me, so I'll dispense with the ellipsis - a 3D printer that uses ABS plastic filament as its feedstock. It was loaded on board the ISS as part of an experiment to test how feasible it would be to microfacture replacement parts during a space mission rather than carry as many spare components as possible. It is hoped that this run of experiments will provide insight into better ways of manufacturing replacement parts in a microgravity environment during later space missions. The 3D printer was installed on 17 November 2014 inside a glovebox, connected to a laptop computer (knowing NASA, it was probably an IBM Thinkpad), and a test print was executed. Telemetry from the test print was analyzed groundside and some recalibration instructions were drafted and transmitted to the ISS. Following realignment of the 3D printer a second, successful test print was executed three days later. On 24 November 2014 the 'printer was used to fab a replacement component for itself, namely, a faceplate for the feedstock extruder head. Right off the bat they noticed that ABS feedstock adheres to the print bed a little differently in microgravity, which can cause problems at the end of the fabrication cycle when the user tries to extract the printer's output. An iterative print, analyze, and recalibrate cycle was used to get the 'printer set up just right to microfacture that faceplate. 3D printers are pretty fiddly to begin with and the ISS crew is trying to operate one in a whole new environment, namely, in orbit. The experimental schedule for 2015 involves printing the same objects skyside and groundside and comparing them to see what differences there are (if any), figuring out how to fix any problems and incorporating lessons learned and technical advancements into the state of the art.
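That iterative print, analyze, and recalibrate cycle is, at heart, a plain feedback loop. Here's a minimal sketch of the idea in Python - the function names, tolerance, and toy 'printer' are my own inventions for illustration, not anything from NASA's actual procedure:

```python
def calibrate_printer(measure_deviation, apply_offset, tolerance=0.05, max_rounds=10):
    """Print-analyze-recalibrate loop: measure how far the latest test
    print deviates from spec and feed a correction back until the
    deviation falls within tolerance."""
    for round_number in range(max_rounds):
        deviation = measure_deviation()      # "analyze the telemetry groundside"
        if abs(deviation) <= tolerance:      # calibration has converged
            return round_number
        apply_offset(-deviation)             # "transmit recalibration instructions"
    raise RuntimeError("printer never converged; time to check the extruder")

# Toy stand-in for a 'printer whose print bed is misaligned by 0.4 mm.
state = {"offset": 0.4}
rounds_needed = calibrate_printer(
    measure_deviation=lambda: state["offset"],
    apply_offset=lambda correction: state.update(offset=state["offset"] + correction),
)
```

The real process closed the loop with human analysts on the ground rather than a callback, but the shape of it is the same.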

NASA's official experimental page for the 3D printer effort can be found here. It's definitely worth keeping a sensor net trained on.


More under the cut...

The Doctor | 01 December 2014, 09:30 hours | default | No comments

Controlling genes by thought, DNA sequencing in 90 minutes, and cellular memory.

A couple of years ago the field of optogenetics - genetically engineering responsiveness to visible light to exert control over cells - was born. In a nutshell, genes can be inserted into living cells that allow certain functions (such as the production of a certain hormone or protein) to be switched on or off in the presence or absence of a certain color of light. Mostly, this has only been done on an experimental basis in bacteria, to figure out what it might be good for. As it turns out, optogenetics is potentially good for quite a lot of things. At the Swiss Federal Institute of Technology in Zurich a research team has figured out how to use an EEG to control gene expression in cells cultured in vitro, and published their results in a recent issue of Nature Communications. It's a bit of a haul, so sit back and be patient...

First, the research team spliced a gene into cultured kidney cells that made them sensitive to near-infrared light, which is the kind that's easy to emit with a common LED (such as those in remote controls and much consumer night vision gear). The new gene was inserted into the DNA in a location such that it could control the synthesis of SEAP (secreted embryonic alkaline phosphatase; after poking around for an hour or so I have no idea what it does). Shine IR on the cell culture and they produce SEAP. Turn the IR light off and they stop. Pretty straightforward as such things go. Then, for style points, they rigged an array of IR LEDs to an EEG such that, when the EEG picked up certain kinds of brain activity in the researchers, the LEDs turned on and caused the cultured kidney cells to produce SEAP. This seems like a toy project because they could easily have done the same thing with an SPST toggle switch that cost a fraction of a Euro; however, the implications are deeper than that. What if retroviral gene therapy were used in a patient to add an optogenetic on/off switch to the genes that code for a certain protein, and instead of electrical stimulation (which has its problems) optical fibres could be used to shine (or not shine) light on the treated patches of cells? While invasive, that sounds rather less invasive to me than Lilly-style biphasic electrical stimulation. Definitely a technology to keep a sensor net on.
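The EEG-to-LED rig boils down to thresholding a brain activity signal and gating a light source with it. A toy sketch in Python - the windowing, the power measure, and the threshold here are made-up stand-ins for whatever the research team actually used:

```python
def concentration_detected(eeg_window, threshold=0.5):
    """Reduce one window of EEG samples to a mean-power figure and
    compare it against a (wholly invented) 'concentrating' threshold."""
    power = sum(sample * sample for sample in eeg_window) / len(eeg_window)
    return power >= threshold

def drive_culture(eeg_windows, set_led):
    """Gate the IR LED array (and therefore SEAP synthesis in the
    culture) on the thresholded EEG signal, window by window."""
    for window in eeg_windows:
        set_led(concentration_detected(window))

led_states = []
drive_culture(
    [[0.1, 0.2],    # relaxed: low power, LED stays off
     [1.0, 0.9],    # concentrating: high power, LED on, cells make SEAP
     [0.0, 0.1]],   # relaxed again: LED off
    set_led=led_states.append,
)
```

Which is also why the SPST-toggle-switch comparison is fair: the control logic really is that thin, and the interesting part is entirely in the cells.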

A common procedure during a police investigation is to have a cheek swab taken to collect a DNA sample. Prevailing opinions differ - personally, I find myself in the "get a warrant" camp but that's neither here nor there. Having a DNA sample is all well and good but the analytic process - actually getting useful information from that DNA sample - is substantially more problematic. Depending on the process required it can take anywhere from hours to weeks; additionally, the accuracy of the process leaves much to be desired because, as it turns out, collision attacks apply to forensic DNA evidence, too. So, it is with some trepidation that I state that IntegenX has developed a revolutionary new DNA sequencer. Given a DNA sample from a cheek swab or an object thought to have a DNA sample on it (like spit on a cigarette butt or a toothbrush), the RapidHIT can automatically sequence, process, and profile the sample using the most commonly known and trusted laboratory techniques available today. The RapidHIT is also capable of searching the FBI's COmbined DNA Indexing System (CODIS) for positive matches. Several arms of the US government are positioning themselves to integrate this technology into their missions, but IntegenX CEO Robert Schueren claims that the company does not know how its technology is being applied. In areas of the United States widely known to be hostile to anyone who looks as if they "aren't from these parts," the RapidHIT has been just that, and local LEOs are reportedly quite happy with their new purchases. Time will show what happens, and what the aftershocks of cheap and portable DNA sequencing are.

Most living things on Earth that operate on a level higher than that of tropism seem to possess some form of memory that records environmental encounters and influences the organism's later activities. There are some who postulate that some events may be permanently recorded in one's genome - phenomena variously referred to as genetic memory, racial memory, or ancestral memory - though the evidence supporting these assertions is scant to null. When you get right down to it, it's tricky to edit DNA in a meaningful way that doesn't destroy the cells so altered. On those notes, I find it very interesting that a research team at MIT in Cambridge seems to have figured out a way to go about it, though it's not a straightforward or information-dense process. The process is called SCRIBE (Synthetic Cellular Recorders Integrating Biological Events) and makes it possible for a cell to modify its own DNA in response to certain environmental stimuli. The team's results were published in volume 346, issue number 6211 of Science, but I'll summarize the paper here. Into a culture of E. coli bacteria they installed a retron - weird little bits of DNA covalently bonded to bits of RNA, coding for reverse transcriptases (enzymes that synthesize DNA using RNA as code templates), that are not found in chromosomal DNA - which would produce a unique DNA sequence in the presence of a certain environmental stimulus, in this case a certain frequency of light. When the bacteria replicated (and in so doing copied their DNA) the retron would mutate slightly to make another gene, one coding for resistance to a particular antibiotic, more prominent. At the end of the experiment the antibiotic in question was added to the experimental environments; cells which had built up a memory store of exposure to light were more resistant to the antibiotic. Prevalence of the antibiotic resistance gene was verified by sequencing the genomes of the bacterial cultures.
At this time the total cellular memory provided by this technique isn't much. At best it's enough to gauge, in an analog fashion, how much of something was present in the environment or for how long, but that's about it. After a few years of development, on the other hand, it might be possible to use this as an in vivo monitoring technique for measuring internal trends over time (such as radiation or chemical exposure). Perhaps farther down the line it could be used as part of a syn/bio computing architecture for in vitro or in vivo use. The mind boggles.

The Doctor | 24 November 2014, 09:15 hours | default | No comments

Neuromorphic navigation systems, single droplet diagnosis, and a general purpose neuromorphic computing platform?

The field of artificial intelligence has taken many twists and turns on the journey toward its as-yet unrealized goal of building a human-equivalent machine intelligence. We're not there yet, but we've found lots of interesting things along the way. One of those things is that, if you understand a brain well enough (and there are degrees of approximation, to be sure), it's possible to use what you know to build logic circuits that work the same way - neuromorphic processing. The company AeroVironment recently test-flew a miniature drone which had as its spatial navigation system a prototype neuromorphic processor with 576 synthetic neurons, which taught itself how to fly around a sequence of rooms it had never been in before. The drone's navigation system was hooked to a network of positional sensors - ultrasound, infra-red, and optical. This sensor array provided enough information for the chip to teach itself where potential obstacles were, where the drone itself was, and where the exits joining rooms were - enough to explore the spaces on its own, without human intervention. When the drone re-entered a room it had already learned (because it recognized it from its already-learned sensor data) it skipped the learning cycle and went right to the "I recognize everything and know how to get around" part of the show, which took a significantly shorter period of time. Drones are pretty difficult to fly at the best of times, so any additional amount of assistance that the drone itself can give would be a real asset (as well as an aid to civilian uptake). The article is otherwise a little light on details, and seems to assume that the reader is already familiar with a lot of the relevant background material, so I'll cut to the chase: this is a practical breakthrough in neuromorphic computing - they're doing something fairly tricky yet useful with it.
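The skip-the-learning-cycle behavior can be thought of as nearest-neighbor matching against previously stored sensor signatures. A toy sketch in Python - the signature format, distance metric, and tolerance are my guesses for illustration, not anything from AeroVironment's actual chip:

```python
def recognize_or_learn(signature, known_rooms, tolerance=1.0):
    """Match the current sensor signature against rooms already learned.
    A close-enough match means 'skip the learning cycle'; otherwise
    store the new room. Returns (room_index, was_already_known)."""
    for index, stored in enumerate(known_rooms):
        # Euclidean distance between the live reading and a stored signature.
        distance = sum((a - b) ** 2 for a, b in zip(signature, stored)) ** 0.5
        if distance <= tolerance:
            return index, True            # recognized: navigate immediately
    known_rooms.append(signature)         # novel room: run the slow learning cycle
    return len(known_rooms) - 1, False

rooms = []
recognize_or_learn([0.0, 5.0, 2.0], rooms)            # first room, learned from scratch
recognize_or_learn([9.0, 1.0, 4.0], rooms)            # second room, also new
revisit = recognize_or_learn([0.1, 5.1, 2.1], rooms)  # noisy re-reading of room one
```

The real processor does this with learned synaptic weights rather than a stored table, but the payoff is the same: a familiar reading short-circuits straight to navigation.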

When you get right down to it, medical diagnosis is a tricky thing. The body is an incredibly complex, interlocking galaxy of physical, chemical, and electrical systems, all with unique indicators. Some of those indicators are so minute that unless you knew exactly what you were looking for, and searched for it in just the right way you might never know something was going on. Earlier I wrote briefly about Theranos, a lab-on-a-chip implementation that can accurately carry out several dozen diagnostic tests on a single drop of blood. Recently, the latest winners of Nokia's Sensing XChallenge prize were announced - the DNA Medical Institute for rHEALTH, a hand-held diagnostic device which can accurately diagnose several hundred medical conditions with blood gathered from a single fingerstick. The rHEALTH hand-held unit also gathers biostatus information from a wireless self-adhesive sensor patch that measures pulse, temperature, and EKG information; the rHEALTH unit is slaved to a smartphone over Bluetooth where presumably an app does something with the information. The inner workings of the rHEALTH lab-on-a-chip are most interesting: The unit's reaction chamber is covered with special purpose reagent patches and (admittedly very early generation) nanotech strips that separate out what they need, add the necessary test components, shine light emitted by chip-lasers and micro-miniature LEDs, and analyze the light reflected and refracted inside the test cell to identify chemical biomarkers indicative of everything from a vitamin-D deficiency to HIV. The unit isn't in deployment yet, it's still in the "we won the prize!" stage of practicality, something that Theranos has on them at this time.

Let's admit an uncomfortable truth to ourselves: We as people take computers for granted. The laptop I write this on, the tablet or phone you're probably reading this on, the racks and racks and racks of servers in buildings scattered all over the world that run pretty much everything important for life today - we scarcely think of them unless something goes wrong. Breaking things down a little bit, computers all do pretty much the same thing in the same way: They have something to store programs and data in, something to pull that data out to process it, someplace to put data while it's being processed, and some way to output (and store) the results. We normally think of the combination of a hard drive, a CPU, RAM, and a display fitting this model, called the von Neumann architecture. Boring, everyday stuff today, but when it was first conceived of by Alan Turing and John von Neumann in their separate fields of study it was revolutionary, because nothing like it had been done before in human history. As very complex things are wont to be, the CPUs we use today are recreations of that very architecture in miniature: For storage there are registers, for the actual manipulation of data there is an arithmetic/logic unit, and one or more buses output the results to other subsystems. ALUs themselves I can best characterize as Deep Magick; I've been studying them off and on for many years and I'm working my way through some of the seminal texts in the field (most recently Mead and Conway's Introduction to VLSI Systems) and, when you get right down to it, that so much is possible with long chains of on/off switches is mind boggling, frustrating, humbling, and inspiring.
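For the curious, the whole storage-fetch-process-output cycle fits in a few lines. Here's a toy von Neumann machine in Python with one register, one shared memory, and a fetch-decode-execute loop; the three-instruction set is invented purely for illustration:

```python
def run(program, memory):
    """A minimal von Neumann machine: one accumulator register, one
    shared memory for data, and a fetch-decode-execute loop."""
    acc = 0                              # the machine's lone register
    pc = 0                               # program counter
    while pc < len(program):
        op, arg = program[pc]            # fetch the instruction and decode it
        if op == "LOAD":
            acc = memory[arg]            # pull a word out of storage
        elif op == "ADD":
            acc += memory[arg]           # the ALU doing its one trick here
        elif op == "STORE":
            memory[arg] = acc            # write the result back to memory
        pc += 1                          # advance to the next instruction
    return memory

# Add the words at addresses 0 and 1, store the sum at address 2.
result = run(
    [("LOAD", 0), ("ADD", 1), ("STORE", 2)],
    [3, 4, 0],
)
```

Everything from here to a real CPU is elaboration on that loop: wider instruction sets, caches, pipelines, but the same fetch, decode, execute heartbeat.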

Getting back out of the philosophical weeds, some interesting developments in the field of neuromorphic computing - processing information with brain-like circuitry instead of logic chains - have come to light. Google's DeepMind team has figured out how to marry practical artificial neural networks to the von Neumann architecture, resulting in a neural network with non-static memory that can figure out on its own how to carry out some tasks, such as searching and sorting elements of data, without needing to be explicitly programmed to do so. It may sound counter-intuitive, but researchers working with neural network models have not, as far as anybody knows, married what we usually think of as RAM to a neural net. Usually, once the 'net is trained it's trained, and that's the end of it. Writeable memory makes them much more flexible, because it gives them the capability to put new information aside as well as potentially swap out old stored models. Additionally, such a model is pretty definitively known to be Turing complete: If something can be computed on a hypothetical universal Turing machine, it can be computed on a neural network-with-RAM (more properly referred to as a neural Turing machine, or NTM). To put it another way, there is nothing preventing an NTM from doing the same thing the CPU in your tablet or laptop can do. The progress they've reported strongly suggests that this isn't just a toy; they can do real-world kinds of work with NTMs that don't cause them to break down. They can 'memorize' data sequences of up to 20 entries without errors, between 30 and 50 entries with minimal errors (something that many people might have trouble doing rapidly, because that's actually quite a bit of data), and can reliably work on sets of 120 data elements before errors can be expected to start showing up in the output.
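The "RAM" half of an NTM isn't addressed like a hard drive; reads are content-based. Score every memory row against a key, softmax the scores into attention weights, and blend. A pure-Python toy to make that concrete - there's no training here, and the memory contents and sharpness constant are arbitrary choices of mine:

```python
import math

def content_read(memory, key, sharpness=5.0):
    """Content-based read from an NTM-style memory matrix: score every
    row by cosine similarity to the key, softmax the scores into
    attention weights, and return the weighted blend of all rows."""
    def cosine(row):
        dot = sum(a * b for a, b in zip(row, key))
        norms = (math.sqrt(sum(a * a for a in row))
                 * math.sqrt(sum(b * b for b in key)))
        return dot / norms if norms else 0.0

    scores = [sharpness * cosine(row) for row in memory]
    peak = max(scores)                             # subtract for numerical stability
    weights = [math.exp(s - peak) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # The read is a soft, differentiable blend of every row - never a hard lookup.
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

# Two memory rows; the key matches the first, so the read leans heavily toward it.
read_vector = content_read([[1.0, 0.0], [0.0, 1.0]], key=[1.0, 0.0])
```

The soft blend is the whole trick: because every row contributes a little, the operation is differentiable, and the network can learn what to store and recall by gradient descent.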

What's it good for, you're probably asking. Good question. From what I can tell this is pretty much a proof-of-concept sort of thing right now. The NTM architecture seems to be able to carry out some of the basic programming operations, like searching and sorting; nothing that you can't find in commonly available utility libraries or code yourself in a couple of hours (which you really should do once in a while). I don't think Intel or ARM have anything to worry about just yet. As for what the NTM architecture might be good for in a couple of years, I honestly don't know. It's Turing complete, so, hypothetically speaking, anything that could be computed could be computed with one. Applications for sorting and searching data are the first things that come to mind, even on a personal basis. That Google has an interest in this comes as no surprise when taking into account the volume of data their network deals with on a daily basis (certainly in excess of 30 petabytes every day, which is... a lot, and probably much, much more than that). I can't even think that far ahead, so keep an eye on where this is going.

The Doctor | 18 November 2014, 08:00 hours | default | No comments
"We, the extraordinary, were conspiring to make the world better."