One of my bots just received the following message from Google, verified in Google Webmaster Tools:
Notice of removal from Google Search
April 3, 2015
Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.
These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.
Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.
If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can't guarantee responses to submissions to that form.
The following URLs have been affected by this action:
For reasons I'll go into in a bit, this post didn't start off auspiciously. Just as I was about to put fingers to keyboard, extenuating circumstances prevented the composition of text...
Long-time readers of this blog are no doubt aware of two things: that I haven't posted much here in recent weeks, and my long and sordid history of dental problems. As it turns out, the two are more related than they might otherwise seem.
I haven't had it in me for the past few weeks to sit down and write anything substantial, the queue of notes on Windbringer to the contrary. A note here, a note there, occasionally some news articles, but nothing really creative or in-depth because I've been too scatterbrained to manage it. After getting home from work I'd usually just pass out and get up the next morning to do it all over again. Not good for creative output, or for getting much of anything accomplished, to be sure.
Last week I came down with the first cold of the year, a spring cold which started off masquerading as an allergy attack but succeeded in kicking the legs out from under me. Colds bring with them a certain amount of discomfort, i.e. sinus problems that I'd discounted as nothing serious, including pain in other parts of the head that isn't uncommon with these sorts of things. Given the way the facial nerves are networked, the phenomenon of referred pain, or pain arising from something in one area being felt in a different location, is to be expected. So I figured that the headaches (and scalp aches... and migraine-like eye aches) were just the nerves in my sinuses acting up, took Advil, and went about my business for a few days. Or tried to.
On Friday morning, after two nights of practically no REM sleep, I couldn't take the pain anymore at work. Even eating yogurt for breakfast I was nearly unable to swallow, and certainly unable to close my jaws. I rang up Dr. Ken Freeman in the Financial District of San Francisco and got a lucky time slot for a consultation.
3D printing anywhere but in heavy industry comes with a whole host of common complaints that have given it something of a negative reputation. Fabbed objects require additional detailing to get rid of ridges and imperfections (true), you can't really print entirely hollow objects because internal structure has to be in place to support the upper surfaces (also true), a lot of hacks have to be made to the printer to keep it reliable (true... heated beds come to mind)... there are others, but I'll spare the electrons. In fact, I think I'll cut to the chase and talk a little about a new fabrication technique from a startup that's just come out of stealth mode called Carbon 3D. Their technique, called CLIP (Continuous Liquid Interface Production), involves drawing solid objects upward out of a pool of liquid feedstock. They've developed a resin which is sensitive to both atmospheric oxygen and ultraviolet light; UV causes the resin to solidify while oxygen prevents it from doing so. The bottom of the tank is a membrane with many of the properties of a contact lens, and UV light (probably from a chip laser) is shone upward through it into the underside of the resin. The newly solidified layer of plastic, just a few times the diameter of a red blood cell in thickness, clings to the underside of a metal piston and is carefully pulled upward. The process repeats thousands upon thousands of times until the finished object has been pulled free of the tank and is ready to be cleaned off and dried. The technique is significantly faster than other plastic deposition methods, between 10 and 25 times faster in fact, which makes it suitable for industrial applications. Additionally, CLIP can make truly hollow objects, from Platonic solids to very complex three-dimensional structures like models of the Eiffel Tower. I think Carbon 3D is really on to something here, and they're a company to keep a close eye on.
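For a sense of scale, here's a quick back-of-the-envelope calculation; the layer thickness and object height are my own assumptions for illustration, not figures from Carbon 3D:

```python
# Rough numbers for the CLIP process described above. The red blood cell
# diameter (~8 micrometers) is a textbook value; "a few times that" for
# the layer thickness and the 50 mm object height are assumptions.
RBC_DIAMETER_UM = 8
LAYER_THICKNESS_UM = 3 * RBC_DIAMETER_UM   # ~24 micrometers per pull
OBJECT_HEIGHT_MM = 50                      # a 5 cm tall print

layers = OBJECT_HEIGHT_MM * 1000 / LAYER_THICKNESS_UM
print(f"Pulls needed for a {OBJECT_HEIGHT_MM} mm object: {layers:.0f}")

# Applying the claimed 10-25x speedup to a hypothetical 8 hour FDM job:
fdm_hours = 8
print(f"CLIP estimate: {fdm_hours / 25:.2f} to {fdm_hours / 10:.2f} hours")
```

So even a palm-sized object really does come out to "thousands upon thousands" of pulls.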
All but invisible to many today due to its ubiquity is the field of chemistry known as chemical synthesis: constructing more complex compounds out of simpler ones. Nature does this quite handily; practically every living thing does it day in and day out at the cellular level, but in the lab it tends to be a much more difficult process. Most labs buy stockpiles of those simpler compounds, but the simpler compounds have to come from someplace, which usually means synthesizing them from scratch. As one might imagine this tends to be significantly trickier, not just due to the vagaries of synthetic chemistry in general but because it has to be done in industrial quantities, and such processes tend not to scale well. So the question becomes how to make creation of the basic compounds easier, or at least make them more widely available. Researchers at the Howard Hughes Medical Institute have announced, in a paper in Science Magazine (paywalled), that they've figured out a way to rapidly and cheaply synthesize 14 different classes of precursor molecules. Information is kind of scant so a little digging is required, but basically, after analyzing the molecular structures of several thousand relatively simple compounds, they found exploitable patterns that were automatable and parallelizable. The synthesis machine they constructed can crank out as much of those precursor compounds as they have raw materials for, which can then be used in biomedical and pharmaceutical research.
Since 3D printing first made it big a couple of years ago everybody and their backup seems to have gotten into the game, from scrappy open source startups to the big players in industrial manufacturing, including the CAD/CAM company Autodesk. Autodesk recently released a limited number of 3D printers called the Ember Explorer that use liquid resin for feedstock. Pretty straightforward, very polished, very expensive, with a special feedstock that you can't just jander down to the store and buy... which, as one might imagine, hurts uptake a little. Lock-in is good for profits, but on the bleeding edge where rapid experimentation is the norm, if options are limited many will look to other solutions that are more readily available. So it was with no shortage of interest that Autodesk opened up the formula for PR48, the liquid feedstock for their Ember series of fabbers. That's right: the formula for Polar Resin number 48 now carries a Creative Commons Attribution-ShareAlike v4.0 license, meaning that you can make it yourself (assuming you have the know-how and access to the chemical precursors), you can share it with whomever you want, however you want, and you can tinker with the formulation, but whatever is derived from or built on top of their work must be published under the same license.
And now, the formula for Polar Resin number 48. All percentages are weight to weight:
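I'll leave the ingredient list itself to the official release, but here's what "weight to weight" works out to in practice: each component's mass is its percentage of the total batch mass. A minimal sketch, using made-up component names and percentages rather than the actual PR48 recipe:

```python
# Convert a w/w percentage formula into gram amounts for a batch.
# The example components below are placeholders, NOT the PR48 recipe.
def batch_masses(formula_w_w, batch_grams):
    """Return the mass of each component for a batch of the given size."""
    total = sum(formula_w_w.values())
    assert abs(total - 100.0) < 0.01, "percentages should sum to 100"
    return {name: batch_grams * pct / 100.0 for name, pct in formula_w_w.items()}

# A hypothetical UV resin, for illustration only:
example = {
    "oligomer A": 40.0,
    "monomer B": 39.6,
    "reactive diluent C": 20.0,
    "photoinitiator D": 0.4,
}
print(batch_masses(example, 500))   # gram amounts for a 500 g batch
```

The tiny photoinitiator fraction is why a decent scale matters if you mix resin yourself: 0.4% of a 500 g batch is only 2 grams.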
Organ transplants are a fairly hairy aspect of medical practice and are a crapshoot even with the best medical care money can buy. Tissue matching viable organs seems about as difficult as brute-forcing RSA keys because, at the proteomic level, even the slightest mismatch between donor and recipient (and there will always be some degree of mismatch unless they are identical twins) will provoke an immune response that will eventually destroy the transplanted organ unless it's kept under control. Additionally, unless the organ is perfectly cared for prior to installation the tissues will begin to degrade, which will further provoke the recipient's immune system into active response. All things being equal (more or less), a new advance in biotechnology seems to have at least brought this last detail under better control. A new device called the XVIVO Perfusion System uses a centrifugal pump to move chilled, oxygenated fluid through the circulatory system of the organ to keep the cells alive while donor lungs are carefully assessed for suitability. The process bought those lungs an extra handful of hours of viability prior to implantation. Far from being a laboratory experiment, the XPS was used to save the life of one Kyle Clark in Michigan. Clark was born with cystic fibrosis, a genetic disease which causes progressive organ damage, particularly in the lungs. Lung transplants are a not uncommon course of treatment for CF patients. With a great deal of luck CF patients can live well into their 40's or 50's, but it's far from a sure thing. When suitable donor lungs were located for Clark, the XPS was used to preserve them so that they could be evaluated more carefully, including microscopic examination to ensure that carbon dioxide was being exchanged for oxygen properly inside the life-support device.
It's too early to tell but it looks like the transplant has been a success, and it would seem that he has many years of life ahead of him.
Some years ago the novelist Warren Ellis postulated a subculture called grinders in one of his works: people who hack their biology in the same way that one might hack computers or software. This can involve everything from building and installing DIY implants to using quantified self techniques to optimize one's performance (arguably; I'd call the latter "soft grinding" because it's usually noninvasive, but opinions probably differ). Last week a group of grinders called Science for the Masses published a paper describing the results of a unique experiment: they induced acute night vision in a baseline human through chemical means. SftM dripped a solution of an organic photosensitizing compound called Chlorin e6, saline, and the organic solvent DMSO into the eyes of test subjects under controlled conditions. The hypothesis they were testing was that the Chlorin e6 would permeate the subjects' retinas (potentiated by the DMSO, which accelerates uptake of chemical compounds into the human body) and cause the photosensitive pigments therein to become more responsive to light. The Science for the Masses team observed that positive effects began within one hour of administration, necessitating that the test subjects wear both black scleral lenses and sunglasses to protect their eyes from overexposure to light. This was, after all, an experiment... adjusting to the retinal alterations took about two hours, after which visual acuity was tested after dark in a grove of trees. Recognition of symbols at distances of 10 meters was consistently better than that of four unaugmented control subjects. In other trials, augmented subjects were consistently able to recognize people who moved at whim through the same grove of trees after dark at distances of between 25 and 50 meters. Statistically speaking, the control subjects had a 33% success rate, while the subjects augmented with Ce6 had a 100% success rate. Twenty days after the tests, no ill effects had been reported.
Early last year I wrote a brief post about CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) technology, which exploits a curious property of DNA that makes it easy to precisely target individual genes in the genomes of living things. Just a year later, CRISPR technology is being tested in the field of oncology for treating certain forms of lymphoma. At the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia, a research team published a paper in Cell Reports in which they successfully destroyed cultures of Burkitt lymphoma cells in vitro using the CRISPR/Cas9 technique. Medical science has isolated a gene called MCL-1 which is essential to the metabolism of cancerous cells in humans; the CRISPR/Cas9 technique was used to delete that gene in those cells, killing them and causing the cultures to collapse. Hypothetically speaking (and I'm not an oncologist, so it would be irresponsible of me not to caveat this), noncancerous cells shouldn't depend on the MCL-1 gene nearly as heavily, so it might be possible to unleash such a treatment systemically and have only the intended cancerous cells destroyed (if anyone knows for sure, please leave a comment!). The research team itself called this a proof-of-concept test, so in honesty it can't be called a treatment yet, only a step toward a possible one in the future. Still, it seems like a solid step forward that might have implications in other fields of medicine.
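For the curious, the "precisely target individual genes" part comes down to sequence matching: Cas9 cuts where a roughly 20-base guide RNA matches the DNA immediately upstream of an "NGG" motif called the PAM. A toy sketch of that search, using a made-up sequence and an unrealistically short guide so the output stays readable:

```python
# Toy illustration of CRISPR/Cas9 target-site selection: find every "NGG"
# PAM motif and report the guide-length stretch of DNA just upstream of it.
# Real guide design also weighs off-target matches, GC content, and more.
import re

def pam_sites(dna, guide_length=20):
    """Return (guide_sequence, pam_position) pairs for every NGG PAM."""
    sites = []
    for m in re.finditer(r"(?=[ACGT]GG)", dna):   # zero-width, so overlaps count
        start = m.start() - guide_length
        if start >= 0:                            # need a full guide upstream
            sites.append((dna[start:m.start()], m.start()))
    return sites

demo = "ATGCATTTTGCAGGTACCGGATTTCGGATCCATGGCTAAGG"   # invented sequence
for guide, pos in pam_sites(demo, guide_length=8):  # short guide for the demo
    print(guide, "-> PAM at index", pos)
```

Deleting a gene like MCL-1 amounts to picking guides like these inside the gene and letting the cell's sloppy repair of the cut break the coding sequence.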
In years gone by I was a huge fan of the LucasArts graphical adventure games, including one of their wildest and weirdest (natch), Zak McKracken and the Alien Mindbenders, in which you play a group of four adventurous misfits (a tabloid reporter, an archaeologist, and two college students who converted their Volkswagen microbus into a space shuttle and traveled to Mars). Just this morning Spadoni Productions, who are known for making short fan films that riff on the classics, released a fan movie based on the video game. It was shot in Italian and overdubbed in English (fair warning), so if the dialogue seems a little off, that's why. Here it is:
By the way, if you want to play the original game, it's been remade for modern machines and you can buy it from gog.com for Windows XP and later, Mac OS X v10.7.0 or later, and Linux. It doesn't cost much, just $5.99us, so it's worth picking up as a fun weekend or travel game.
In other DNI news, clinical tests of Neurobridge, a cortical implant for quadriplegics which routes around permanent spinal cord injuries, are showing great promise. At the Wexner Medical Center of Ohio State University, 23 year old Ian Burkhart, paralyzed from the neck down due to a diving accident, is the first test subject to use Neurobridge to move his hand in a laboratory setting. The downstream side of Neurobridge was connected to the muscles of his right forearm, and thus he was able to move his hand of his own volition. Neurobridge is somewhat tricky as DNI goes because the signal processor that interprets activity in the motor cortex emits data which then has to be re-processed into a format that muscles can use as control signals. It seems a bit roundabout to me, but it's certainly worth taking as given that there is probably a good engineering reason for this design. The motor interface of Neurobridge is a plastic sleeve wrapped around the limb and does not appear to use an invasive electrode network. The upstream side of Neurobridge, however, is as invasive as it gets: it's patched directly into the brain. The tricky bit is figuring out which signals and which electrodes need to be sequenced to make the right muscles move at the right time. Everybody's brain is wired differently even though brain anatomy is more or less the same from person to person, so this required a certain amount of trial and error. In addition, Burkhart has been paralyzed for several years, so months of work with the electrode sleeve were required to get his forearm muscles to the point where they would be even minimally useful for the experiment. There is a short video of the experiment that doesn't quite do the work justice; I highly recommend taking the two or three minutes to sit down and watch it anyway.
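To make that re-processing step a little more concrete, here's a deliberately toy sketch of the idea: a decoded motor intent gets expanded into a per-channel stimulation pattern for the sleeve. The intents, channel count, and amplitudes are all invented for illustration; the real Neurobridge pipeline is certainly far more involved:

```python
# Toy model of the decode-then-stimulate handoff described above. A decoded
# intent (already extracted from motor cortex activity by the implant side)
# is mapped onto the electrode channels of a stimulation sleeve. All values
# here are made up for illustration.
STIM_PATTERNS = {
    # intent -> list of (electrode channel, pulse amplitude, arbitrary units)
    "close_hand": [(0, 0.8), (1, 0.6), (2, 0.7)],
    "open_hand":  [(3, 0.9), (4, 0.5)],
    "rest":       [],
}

def stimulation_frame(intent, n_channels=8):
    """Expand a decoded intent into one frame of per-channel amplitudes."""
    frame = [0.0] * n_channels
    for channel, amplitude in STIM_PATTERNS.get(intent, []):
        frame[channel] = amplitude
    return frame

print(stimulation_frame("close_hand"))
```

The trial and error the article mentions would live in a table like STIM_PATTERNS: which channels, at which amplitudes, actually make a given patient's muscles produce the intended movement.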
If you've been around me for any length of time, chances are you've heard me wax poetic (and occasionally synaesthetic) about violin music. From the traditional four-string wooden variety to the high-tech electric and MIDI variants, they never cease to bring tears to mine eyes. So, when some of my search agents discovered this beauty and threw it up in a browser window a couple of days ago, it shouldn't take many tries to guess what I binged on for a couple of hours.
The violin in question is a two meter long, two-stringed piezoelectric instrument designed by Monad Studio and fabricated in a 3D printer as part of an art exhibit entitled Abyecto, on display in New York City at the 3D Print Design Show. Looking like a hybrid of an organ you might find inside some benthic sea creature and something H.R. Giger might have glimpsed during one of his more peaceful nights beyond the gates of horn and ivory, the violin (which doesn't seem to have a piece name associated with it) is one of six instruments which comprise the Abyecto exhibit. Unfortunately, I wasn't able to find any recordings of what the violin sounds like or footage of how it's actually played (the 'piezoelectric' bit makes me wonder if the instrument's body flexing isn't itself part of the process of playing), else this article would have a lot more instrument squee. If anybody has footage of the violin being played, please leave a comment. I'd very much like to hear it.
If you've been alive for any length of time you've probably been exposed to the wonderful, moving phenomenon that we call music: patterns of sound pleasing to the human ear and effective upon the mind. Music is a complex enough phenomenon that people spend their entire lives studying it and its effects upon the human condition: the psychology, the neurology, the mathematics, the acoustics, the physics... or, like some, they are called to compose or perform music of their own to enrich the world around them. (Whether or not some styles of music can be said to enrich the world is not a debate I'll be getting into, thank you very much.) Then a research team consisting of a composer and two psychologists got the idea to study music as it applies to other forms of life, specifically cats. What forms of music, they asked themselves, would cats enjoy? The answer, as it turned out, was that cats really don't seem to care a whole lot for human music, specifically human classical music; Beethoven and Bach leave them pretty cold. Part of this seems to have to do with how the feline auditory cortex and inner ear are wired; the vocalizations of cats use slightly different frequencies than human speech, with different sorts of complexity. Purring is down around 22 Hz, which is nearly at the bottom of what a healthy human ear is capable of discerning under good conditions. There is a fair amount of overlap between the ranges of human and feline sounds, but cats are also known to generate sounds a good deal higher than the larynxes of most humans are capable of.
What they discovered was that it's possible to compose species-specific music using notes that fall within the range of sounds a given species makes. It's also possible to figure out the tempo that best matches the sounds a species uses to communicate, to help arrange the music in a way that should be most pleasing to the creatures in question. They ran a series of experiments (the parameters are described in one of the articles, check 'em out) and discovered that cats do indeed show a preference for music written with them in mind. There are two clusters in the data correlating to positive responses to the music, one for younger cats and one for older cats, with middle-aged felines less likely to respond than the other test subjects.
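The "notes within a species' range" idea is easy to play with in code. Here's a minimal sketch that filters equal-temperament pitches down to a band a cat might favor; the band itself is my own rough guess for illustration, not a figure from the paper:

```python
# Filter the 128 MIDI pitches down to those inside a species' preferred
# frequency band. The 500-2000 Hz "feline band" is an assumption chosen
# to sit well above purring (~22 Hz) and near feline vocalizations.
def pitch_hz(midi_note):
    """Frequency of a MIDI note number in 12-tone equal temperament."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)   # A4 = MIDI 69 = 440 Hz

def notes_in_range(low_hz, high_hz):
    return [n for n in range(128) if low_hz <= pitch_hz(n) <= high_hz]

feline_band = notes_in_range(500.0, 2000.0)
print(len(feline_band), "usable pitches spanning",
      f"{pitch_hz(feline_band[0]):.0f}-{pitch_hz(feline_band[-1]):.0f} Hz")
```

A composer working this way would then pick melodies from that reduced palette and set the tempo from the rhythm of the species' own calls.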
Which brings me to the website from which you can purchase some of this music. A couple of other people have written followup articles about this and keep describing the music as 'trippy' for some reason. I'm not quite certain where they're coming up with this characterization, to be frank. The sample clips on the website are arranged into three general categories: Ditties, Ballads, and Airs. Listening to the three samples, it seems pretty clear that they're based on the definitions of those terms; if one weren't familiar with them, one could use what one heard as a sort of working definition, so that's out. Spook's Ditty is reminiscent of someone playing the harp at a fairly swift tempo, or arpeggios on a harpsichord. Cozmo's Air has, as one might expect, lots of purring noises, which will be familiar to cat lovers (or anyone who's ever fallen asleep with a cat), plus chords on string instruments (cello and viola, I think) that wouldn't be unusual in an orchestral movie soundtrack, though in a rather lower octave than usually encountered. Rusty's Ballad has an unusual rhythm underlying the melody (running eighth or sixteenth notes, but largo otherwise), which makes me scratch my head because I can't tell what sorts of notes are used. Some whole notes, to be sure, but otherwise... quarter notes? Half notes? The odd fermata? I'd really need to see the sheet music to make heads or tails of it.
Okay, I'll concede a partial point with respect to Rusty's Ballad. "Trippy"? No. Strange? Yes.
The state of the art in personal 3D printing is still in flux. Mostly we're limited to variants of low melting point plastics, and we're still figuring out new and creative ways of making more complex shapes that are self-supporting to some extent. What isn't getting a whole lot of press right now are some industrial applications of this technology, some of which date back a good decade.
For example, a research team consisting of personnel from Monash University in Australia, the Commonwealth Scientific and Industrial Research Organisation, and Deakin University recently unveiled the world's first 3D printed jet engine. They started the project with an older model gas turbine jet engine, which is nothing to sneeze at anyway, and reverse engineered it. Each major component was scanned, probably with a laser, and the data was used to work up a mesh that was then sliced into layers the 3D printer could lay down successively. A certain amount of geometric jockeying was most assuredly involved in positioning each piece optimally for fabrication. The 3D printer used to build the components was based on selective laser sintering and used alloy dust as its feedstock; each layer laid down was approximately 0.05 mm thick, or about 1/30 the width of a line drawn with a #2 pencil (remember those?). Two copies of each part were fabricated, a process that, all in all, took about a year to accomplish. As far as I know the jet engines haven't been spun up yet, but they are on display. Frankly, the team isn't sure they'll work as-is, so they're going back to the drawing board and double checking their work to make sure that the fruits of their labors won't suddenly turn into so much shrapnel when they're fired up.
In 2002 one Nicolas Huchet lost his right hand at the wrist in an accident at work. Sounds like a pretty simple way to set up a story that's about to take a hairpin turn into unexpected territory, doesn't it? He eventually acquired a myoelectric prosthesis but soon ran into its functional limits when using and teaching DAW software. In October of 2012 Huchet stepped into a fablab and began a project of epic proportions: designing and building his own prosthetic hand. From the moment he saw his first 3D printer the spark was lit. Add to the volatile mix an Arduino or two and what appears to be a few components from the InMoov project to interface with the servomotors, and by February of 2013 Huchet and a few hackers from the fablab had finished a prototype prosthetic hand. The superstructure, joints, and phalanges of the hand were run off on a 3D printer and appear to have been assembled using off-the-shelf hardware like screws and bolts. High-test fishing line was used in lieu of tendons for actuating the digits; I've no idea what kind of motors are doing the heavy lifting, but their power requirements interest me. Costing something like $250us to construct in total, the open source unit is actuated by picking up and interpreting electrical impulses from the muscles in Huchet's forearm, and is nearly (if not just as) functional as a commercial prosthetic limb costing over 300 times as much. Rather than trying to achieve an "opera hand," or an as-close-to-normal-as-possible appearance, Huchet and company seem to have gone for the cyberpunk "high-tech wires and chrome" look. A bunch of talented hackers built that arm, and there's no two ways about it.
Incidentally, if anyone out there is interested in getting involved in the open source prosthetics movement I strongly recommend getting in touch with the E-Nabling the Future community.
For quite a few years the automotive industry has been using very sophisticated 3D printers to prototype engine components, because they're more efficient to produce that way and tend to be somewhat sturdier. Having been a Toyota owner for a decade, I can vouch for their sturdiness: they run very quietly, almost silently, right up until they're about to die, and then they go out with a bang that the entire neighborhood hears. However, getting back to the story at hand, 3D hacker and mechanical engineer Eric Harrel decided to see if he could reverse engineer a Toyota four cylinder 22RE engine and make printable meshes from it to build his own. The 22RE has 80 distinct components that fit together with very tight tolerances (as one would expect of a production engine), so the design phase required around 60 hours from start to finish to engineer each component, including scaling them to 35% of normal size so they could be run off in his 'printer (modulo a handful of springs, fasteners, and bearings that had to be purchased or fashioned some other way). Fabbing the components took another 72 hours in total. I don't know how long it took to finish and assemble the scaled down engine, but my wild guess would put it at around another 72 hours. At the end, though, the pistons drive, the valves work, and the crankshaft turns. The greyprints are available on Thingiverse for download if you've got a mind to try it yourself.
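A couple of quick numbers fall out of that description; note that the per-component averages below are naive arithmetic on the totals given, since the real work surely wasn't spread evenly:

```python
# Quick arithmetic on the 22RE replica project described above.
SCALE = 0.35          # 35% linear scale
COMPONENTS = 80       # distinct parts in the engine
DESIGN_HOURS = 60     # total design/engineering time
PRINT_HOURS = 72      # total fabrication time

# Linear scale shrinks volume (and so material use) by the cube:
volume_fraction = SCALE ** 3
print(f"Material per part vs. full size: {volume_fraction:.1%}")

print(f"Naive average design time per component: "
      f"{DESIGN_HOURS / COMPONENTS * 60:.0f} minutes")
print(f"Naive average print time per component: "
      f"{PRINT_HOURS / COMPONENTS * 60:.0f} minutes")
```

That cube law is why scaling down to 35% makes a project like this tractable on a desktop printer: each part needs only about 4% of the full-size material.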
If 3D printers are toys, they're fabulously capable toys.
The short answer is, I've been busy. Very much so.
The longer and more accurate answer is that work has been running me ragged lately, and I've been trying to conserve my spoons as best I can lest I run myself into the ground (again). I've been routinely putting in 60 and 70 hour weeks, often over six or seven days, so I haven't really been getting a whole lot of downtime. So some hard choices had to be made. Go out for my birthday or keep it low key? Low key, because I'm on call. Get a couple of blog posts written and post-dated? Oops, on my one day off I slept sixteen hours. I wouldn't have done so if my body hadn't needed it, so I'm not going to cry about it. Come home from work and do some writing? I came home from work at 2200 local time, had something that I think was dinner, and faceplanted. Get up early and socialize, or get up early and shake a few bugs out of one of my bots (the output of which happens to be keeping me sane (more or less))? Nobody else is up that early, so code a little bit and then back to the grind, by way of the gates of horn and ivory.
For those of you who've been worried (and this goes for your bots, too), I'm not dead, though I have been dead to the world not a few times in the past two months.
When you get right down to it, self-care is what keeps most of us held together. Sometimes the wise thing to do is to recharge as best one can in order to keep going. Often that involves kicking everything else to the curb and sleeping for a day or two, or skipping some fun things because the toll exacted on one's body and spirit would otherwise be too much, and cratering as a result would make things worse.
If there's one thing I've learned, it's that sometimes that's the right thing to do. Life is full of interesting and fun things to do and see, and getting benched for a while, even though it can feel frustrating or irritating, doesn't particularly diminish life as a whole. Certainly not if you don't let it. There is lots more out there, and when things calm down (and they will; it doesn't feel like it right now, but I think they will) it'll be time to go for them again.
Just don't forget what it felt like to have those good times. They'll be what remind you to go back to them.
It's funny. I was sitting at dinner earlier tonight (yes, I post-dated this entry so it would match up with the other one) and came up with a bunch of stuff that I'm kicking myself for not having written down. I guess that's the way it goes: thoughts go in, thoughts go out, but unless you trap them somehow they're probably not going to come back. I'll take a stab at it anyway.
I've learned that the most subtle of accidents, the ones you don't even realize happened in slow motion until well after the fact, can teach the most profound lessons. And you'll sometimes laugh yourself silly over them later.
I've come to recognize that if one surrounds oneself with too much of something, anything really, it'll cause one's life to change so that it dominates everything one does, and eventually everything one is. Choose wisely. You can't always choose again.
I've learned that one's daily practice, in whatever form it may take is the one stone upon which everything else can be built. When you feel like you can do it the least is when you need it the most. I've also come to accept that sometimes, at the end of the day when you drag yourself home and fall asleep on the couch your daily practice just isn't going to happen. Absconding for a while and coming back can serve best under such circumstances.
I've learned that if you're going to be larger than life you've got to go the extra distance to not only get there but stay there. No matter how close to the Edge you are, no matter how good you just now were, no matter how many augmentations of any kind you've racked up, if you don't keep up with the basic "this is how this works" you'll slip behind. I've also learned the importance of having one or two demonstrations of the Edge up my sleeve that I can bust out at a moment's notice. Know-how and skill are nice, practice is great, but shock value is still a useful tool. Being a little theatrical can't hurt either (but practice first!).
I've learned never to give away the whole game. Never tell anyone everything you are capable of.
I have learned and am slowly coming to accept that, when life throws you on your head and wrecks your plans, fall back on your backup plan (if you don't have at least two backup plans, drop everything and lay them out right bloody now) and start executing. Your backup plans need to be able to throttle back so you don't wreck yourself. Your primary plan needs to be able to be suspended (not abandoned) and you need to build reasons for doing so into it. Sometimes you need to be gentle with yourself to make it through. Listen to the omens. But never, ever stop.
I've learned that people will dump their bad publicity on you if they fuck up badly. Always cultivate a loyal and observant community around your projects with the closest to unfailing honesty you can manage (secrecy doesn't always allow for this; life sucks like that). You won't have to defend yourself overmuch; your community will compare, contrast, and use their brains when you hope they will the most. During this time never, ever stop making progress. Keep it tight.
Sometimes the code you spent all day writing doesn't even work, and is completely terrible to boot. Blow it away and start over. Don't try to salvage it.
I've learned that the older I get, the less I want to break in a new pair of boots. I'm still working on the Doc Martens that I got for Yule and I can just now wear them for longer than four hours at a stretch. It's well worth getting really nice ones up front, even if they cost quite a bit more just so they'll last longer. I'd prefer to have a pair that last ten or fifteen years so I don't have to go through this every three or four years.
I've learned to always keep a little in reserve just so I can really cut loose if I have to.
I've learned again that while one may be recognized as an expert or a teacher in some respect by someone, one must always remain a student. Everybody has their betters out there; learn well from them. This includes making the mistakes of a student and learning from them.
I've learned that sometimes you just need to get out and dance. Take the next day off to recover if you need to. It's good for you.
I am trying to learn that sometimes shutting up is the right thing to do.
Now that I've had a couple of days to sleep and get most of my brain operational again, how about some stuff that other parts of me have stumbled across?
Building your own electronics is pretty difficult. The actual electrical engineering aside, you still have to cut, etch, and drill your own printed circuit boards, which is a lengthy and sometimes frustrating task. Doubly so when multi-layer circuit boards are involved, because they're so fiddly and easy to get wrong. There is one open source project that I know of called the Rabbit Pronto, which is a RepRap print head for fabbing circuit boards, but it might be a little too experimental for the tastes of some. This constitutes a serious holdup to people being able to fabricate their own computers, but that's a separate issue. Enter the Voltera, a rapid prototyping machine for circuitry. Currently clocking in at $237,061 US on Kickstarter and still going, the Voltera isn't quite a 3D printer in that it doesn't seem possible to fabricate circuit boards completely from scratch with it; you still need a static baseplate. However, what the Voltera does do is lay down successive layers of conductive and insulating inks on top of the fibreglass board until your entire circuit has been printed out. If surface mount technology is how you roll (and that's increasingly the only game in town) you won't have to worry about drilling holes for components' leads, but there is nothing preventing through-hole designs. The firmware is designed to accept industry standard Gerber files, so users aren't necessarily tied down to any one CAD package. Even more interesting is that the Voltera includes a solder paste head, so after the board's done it'll lay that out for you as well so that components can be positioned appropriately. Additionally, the bed of the Voltera implements reflow soldering, which means that after the components are positioned the temperature can be slowly raised until the solder paste cooks down and solid electrical connections are made - no more toaster oven.
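For the curious, Gerber is a plain-text format, which is part of why so many CAD packages can emit it. A minimal, entirely hypothetical file describing a single 10 mm trace might look something like this (aperture size and coordinates invented for illustration):

```
G04 Minimal example: one straight 0.2 mm-wide trace*
%FSLAX24Y24*%
%MOMM*%
%ADD10C,0.200*%
D10*
X0Y0D02*
X100000Y0D01*
M02*
```

The `%FSLAX24Y24*%` line declares a 2.4 coordinate format, so `X100000` works out to 10.0000 mm; `D02` moves with the "pen" up, `D01` draws with it down.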
All but one of the Batch-2 runs of the Voltera are spoken for already so if you really want one you'd best jump on it, else you're going to have to wait for them to go into general manufacture.
Privacy slips quickly through our fingers in the twenty-first century. If it's not security cameras on the street recording everything and everyone walking by, it's drones (public and private sector both) on surveillance runs. If it's not drones, sometimes it's people with cameras and smartphones photographing people who really don't want pictures taken (cases in point: the photography policies of many hacker cons). In other words, paparazzi are no longer a problem exclusive to the rich and famous. Enter Steve Wheeler of Betabrand, a company that crowdsources clothing designs and lets people vote on them as its think tank strategy; projects with good prospects enter a crowdfunding phase so early adopters can gain access to them. If something does really well, it goes into mass production. Their latest project (which is doing surprisingly well) is called Flashback - anti-photography clothing that reflects so much light into the lens that only the clothes can be seen. Flashback clothing works the same way as the high-visibility vests and strips that urban bicyclists wear, by using glass nanospheres bonded to the fabric itself to form what amounts to a flexible, highly reflective surface that refracts as much light as possible. Currently there are only four pieces - a hooded jacket, a scarf, a blazer, and trousers - but depending on how things go the clothing line might grow. The Wired article I've linked to has a couple of "during the photograph" pictures, but their crowdfunding page has excellent before-flash/after-flash pictures. There is some skepticism about how well they actually work (especially from professional photographers) but after reading a bit about the theory it seems sound to me, and I'm considering rounding up all of the reflective strips my cow-orkers wear to do a couple of "Will it or won't it?" pictures over lunch as an experiment.
If exotic clothing is your thing you might want to keep an eye on this brand, though you'll pay close to designer prices for their wares.
The slow and steady march toward direct neural interface - creating a bi-directional link between the brain and computer hardware - proceeds apace. In 2011 Dr. Eberhard Fetz was given a $1m US, three year grant to advance his work on implantable neuroprosthetics. Now we have the CerePlex-W, an implantable neural activity receiver which wirelessly transmits its data to nearby computer systems which can act upon those commands. Currently it's on sale only on the research market for use with simian test subjects, but the Braingate Consortium is in talks with the US FDA to begin clinical human trials some time in the near future. The CerePlex-W is a wireless device broadcasting at 30 milliwatts of power, so it can only be picked up a meter or two away, yet it's able to transmit data at a speed of 48 megabits per second - princely bandwidth for broadcasting the activity of the cerebral cortex indeed. Whatever is connected to the receiver can use the command signal however it wishes, from manipulating a cursor on a screen all the way to... that's a good question. Entering characters? Driving a wheelchair around? Using a robotic arm to move stuff around? The mind boggles, especially when you take into account the possibility of setting up a tech chain: if you can type, you can both program and send e-mail to vendors to have stuff hooked up for you, then write the software to control it, then use the hardware to do other things, and then still other things, and build better prostheses... The device is described as being about the size of an automobile gas cap and is not fully contained, which is to say that it still has to have a persistent opening through the skin and skull to connect to an electrode grid placed atop the subject's brain. Major surgery is, of course, still required to position the electrode grid on one of the motor cortices. Still, output bandwidth aside, this device represents a remarkable breakthrough in that it's so small.
After ten years of hard work all of the signal processing is done on board, without needing to be plugged into racks of computers to do the number crunching. There isn't any word yet on when FDA trials will begin but you can bet that once they do all hell's going to break loose. Time to start saving our pennies...
There is a phenomenon I've come to call Ubuntu Syndrome, after the distribution of Linux which has become the darling of nearly every hosting provider out there (and no, I won't call them bloody cloud providers). All things considered, it seems to have a good balance of stable software, ease of use, availability, and diversity of available software. It also lends itself readily to the following workflow:
Use a tool like packer.io to automagically instantiate a copy of Ubuntu at the hosting provider of choice.
Check the latest commit of the application in question out of the project's Github repository and run whatever build process is necessary (because yes, today we have to compile freaking scripting languages) to set up the application.
Start the application (thankfully, no longer as the superuser by default).
Don't set up any system level monitoring of any kind. Only make sure the application stays up.
Find out your production VM has been pwned weeks or months later.
Terminate the VM. Archiving the disk image to perform a forensic analysis shortly before the heat death of baryonic matter in the universe is entirely optional.
Start over from step zero.
Look. I get that virtual machines are, for all intents and purposes, disposable. They're cheap to stand up, relatively cheap to operate (up to a point), and trivial to tear down so you can start over. They're certainly more convenient than having to rebuild and reinstall an entire physical server from scratch. On the other hand, there is a lot to be said for doing things right up front so that you can skip over (or at least hopefully postpone) the whole "get pwned" part of the show. A little bit of extra work up front (like running the command apt-get update && apt-get upgrade -y) can save a great deal of time and effort later by installing the latest and greatest security patches. It takes a little while, sure, but why work extra late nights if you don't have to? In addition, there is something to be said for hardening your VMs when you stand them up, at the same time you patch them, to make it that much harder for the VM to be compromised. It doesn't take long; in fact it can be as simple as copying a handful of files and rebooting the VM. Here's my private stash of hardened configs for Ubuntu v12.04 and v14.04 LTS that I deploy on all of my servers (virtual and otherwise, when I have to use Ubuntu). There are other resources out there, sure, but these are mine and you're welcome to use them.
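If it helps, here's a minimal sketch of what that first-boot work looks like, assuming an Ubuntu LTS image and a local stash of known-good config files (the paths, firewall rules, and file names below are invented placeholders - substitute your own):

```shell
#!/bin/sh
# First-boot hardening sketch for a fresh Ubuntu LTS VM.  Run as root.
# Illustrative only: paths and rules are placeholders, not a checklist.
set -e

# Install the latest security patches before the box does anything else.
apt-get update && apt-get upgrade -y

# Drop in hardened configs from a local stash of known-good files.
cp hardened-configs/sshd_config /etc/ssh/sshd_config
cp hardened-configs/sysctl.conf /etc/sysctl.conf

# Basic firewall posture: deny inbound by default, allow SSH only.
ufw default deny incoming
ufw allow 22/tcp
ufw --force enable

# Reboot so the patched kernel and new configs take effect.
reboot
```

Five minutes of this when the VM is stood up is a lot cheaper than a forensic analysis later.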
Put a little thought into it. Just because something is disposable doesn't mean it isn't worth doing right; a little care up front spares you trouble and hassle later. Save yourself the energy for more interesting things.
I know I haven't posted much (at all, really) for most of a month. I'd love to say that I've been out having wacky adventures and gallivanting about Time and Space, but I haven't. Work has been, well, work, and eating me alive to boot. This is the first evening in quite a while (because I'm writing this as a timed post) that I haven't gone straight to bed after getting home. So, no interesting news articles, no attempts at humor, no witty insights. However, last December I took the opportunity to pay the Monterey Bay Aquarium a visit. I don't have a whole lot else to say because I frankly don't have it in me. I will say, however, that there were two octopodes at the aquarium that were seriously out of social energy and seemed to want nothing more than to be left alone for a couple of precious hours.
Anyway, here are the pictures. Some of them aren't of the greatest quality because parts of the aquarium were pretty dark but I kept the best ones. Enjoy.
3D printers are great for making things, including more of themselves. The first really accessible 3D printer, the RepRap, was designed to be buildable from locally sourceable components - metal rods, bolts, screws, and wires - and the rest can be run off on another 3D printer. There is even a variant called the JunkStrap which, as the name implies, involves repurposing electromechanical junk for basic components. There are other useful shop tools which don't necessarily have open source equivalents, though, like laser cutters for precisely cutting, carving, and etching solid materials. Lasers are finicky beasts - they require lots of power, they need to be cooled so they don't fry themselves, they can produce toxic smoke when firing (because whatever they're burning oxidizes), and if you're not careful the other wavelengths of light produced when they fire can damage your eyes permanently. All of that said, they're extremely handy tools to have around the shop, and can be as easy to use as a printer once you know how (protip: take the training course more than once. I took HacDC's once and I don't feel qualified to operate their cutter yet). Cutting to the chase (way too late), someone on Thingiverse using the handle Villamany has created an open source, 3D printable laser cutter out of recycled components. Called the 3dpBurner, it's an open frame laser cutter that takes after the RepRap in a lot of ways (namely, it was originally built out of recycled RepRap parts) and is something that a fairly skilled maker could assemble in a weekend or two, provided that all the parts were handy. Villamany has documented the project online to assist in the assembly of this device and makes a point of warning everyone that this is a potentially dangerous project and that proper precautions should be taken when testing and using it.
Not included yet are plans for building a suitable safety enclosure for the unit, so my conscience will not let me advise that anyone try building one just yet; this is way out of my league so it's probably out of yours, too. That said, the 3dpBurner uses fairly easy to find high-power chip lasers to do the dirty work; if this sounds far-fetched, people have been doing this for a while, to good effect at that. The 3dpBurner uses an Arduino as its CPU running the GRBL firmware, which was designed as a more-or-less universal CNC firmware implementation, to drive the motors.
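What GRBL consumes is ordinary G-code streamed over the Arduino's serial port. A hypothetical job that traces a 40 mm square might look like this (feed rate and power value invented; GRBL repurposes the spindle-on/off commands to enable and disable the laser):

```
G21             ; units in millimeters
G90             ; absolute positioning
G0 X0 Y0        ; rapid move to the origin
M3 S1000        ; spindle/laser on at the given power level
G1 X40 Y0 F300  ; cut the four sides of a 40 mm square
G1 X40 Y40
G1 X0 Y40
G1 X0 Y0
M5              ; spindle/laser off
```

The same file would drive a spindle-equipped CNC router, which is exactly the "universal firmware" property that makes GRBL attractive for projects like this.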
If you want to download the greyprints for it you can do so from its Thingiverse page. I also have a mirror of the .stl files here, in case you can't get to Thingiverse from wherever you are for some reason. I've also put mirrors of the latest checkout of the GRBL source code and associated wiki up just in case; they're clones of the Git repositories so the entire project history and documentation are there. You're on your own for assembly (right now) due to the hazardous nature of this project; get in touch with Villamany and get involved in the project. It's for your own good.
Electronic toys are nice - I've got 'em, you've got 'em, they pretty much drive our daily lives - but, as always, power is a problem. Batteries run out at inconvenient times and it's not always possible to find someplace to plug in and recharge. Solar power is one possible solution, but to get any real juice out of solar cells they need to be fairly large, usually larger than the device you want to power. Exploiting peculiar properties of semiconductors on the nanometer scale, however, seems promising. This next bit was first written about last summer but it's only recently gotten a little more love in the science news. Research teams collaborating at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto and IBM Canada's R&D Center are steadily breaking new ground on what could eventually wind up being cheap and practical aerosol solar cells for power generation. Yep, aerosol as in "spray on." A little bit of background so this makes sense: quantum dots are basically crystals of semiconducting compounds that are nanoscopic in scale (their sizes are measured in billionths of a meter), small enough that depending on how you treat them they act like either semiconducting components (like those you can comfortably balance on a fingertip) or individual molecules. Colloidal quantum dots are synthesized in solution, which means they readily lend themselves to being layered on surfaces via aerosol deposition, at which time they self-organize just enough that you can do practical things with them. Like convert a flow of photons into a flow of electrons - generate electrical power, in other words. The research team has figured out how to synthesize colloidal lead-sulfide quantum dots that don't oxidize in air but can still generate power.
Right now they're only around 9% efficient; most solar panels are between 11% and 15% efficient, with the current world record of 44.7% efficiency held by the Fraunhofer Institute for Solar Energy Systems' concentrator photovoltaics. They've got a ways to go before they're comparable to solar panels that you or I are likely to get hold of but, the Fraunhofer Institute aside, 9% and 11% aren't that far off, and they've improved their techniques somewhat in the intervening seven months. Definitely something to keep an eye on.
Image recognition is a weird, weird field of software engineering, involving pattern recognition, signal analysis, and a bunch of other stuff that I can't go into because I frankly don't get it. It's not my field so I can't really do it any justice. Suffice it to say that the last few generations of image recognition software are pretty amazing and surprisingly accurate. This is due in no small part to advancements in the field of deep learning, part of the field of artificial intelligence which attempts to build software systems that work much more like the cognitive processes of living minds. Techniques encompass everything from statistical analysis to artificial neural networks (learning algorithms designed after the fashion of successive layers of simulated neurons) to even more rarefied and esoteric techniques. As for how they actually work when you pop the hood open and go digging around in the engine, that's a very good question. Nobody's really sure how software learning systems work, just like nobody's really sure how the webworks of neurons inside your skull do what they do, but the nice thing is that you can dissect and observe them in ways that you can't organic systems. Recently, research teams at the University of Wyoming and Cornell have been experimenting with image analysis systems to figure out just how they function. They took one such system called AlexNet and did something not many would probably think to do - they asked it what it thought a guitar looked like. Their copy of AlexNet had never been trained on pictures of guitars, so it dumped its internal state to a file, which unsurprisingly didn't look anything like a guitar. The contents of the file looked more like Jackson Pollock trying his hand at game glitching.
The next phase of the experiment involved taking a copy of AlexNet that had been trained to recognize guitars and feeding it that weird image generated by the first copy. They took the confidence rating from the trained copy of AlexNet (roughly, how much it thought its input resembled what it had been trained on) and fed that metric into the first, untrained copy, which they then asked again what it thought a guitar looked like. They repeated this cycle thousands of times over until the first instance of AlexNet had essentially been trained to generate images that could fool other copies of AlexNet, and the second copy of AlexNet was recognizing the graphical hash as guitars with 99% confidence. What the results of this idiosyncratic experiment suggest is that image recognition systems don't operate like organic minds. They don't look at overall shapes or pick out the strings or the tuning pegs; they look for things like clusters of pixels with related colors, or patterns of abstract patterns or color relationships. In short, they do something else entirely. This does and does not make sense when you think about it a little. On one hand we're talking about software systems that at best only symbolically model the functionality of their corresponding inspirations. Organic neural networks tend not to be fully connected while software neural nets are. There's a lot going on inside of organic neurons that we aren't aware of yet, while the internals of individual software neurons are pretty well understood. The simplest are individual cells in arrays, and the arrays themselves have certain constraints on the values they contain and how they can be arranged. On the other hand, what does that say about organic brains? If software neural nets are to be considered reasonable representations of organic nets, just how much complexity is present in the brain, and what do all of them do? How many discrete nets are there, or is it one big mostly-connected network?
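The fooling loop is easy to sketch in miniature. This toy stand-in is not the researchers' actual setup (the real work used evolutionary algorithms against a deep network; every name and number here is invented): it hill-climbs random noise against a fixed "classifier" until the classifier is nearly certain it is looking at a guitar.

```python
import math
import random

random.seed(42)

N = 64  # a tiny 8x8 "image" flattened into 64 pixels

# Stand-in for a trained classifier: a fixed logistic model whose
# weights supposedly encode "guitar-ness".  Purely illustrative.
weights = [random.uniform(-2.0, 2.0) for _ in range(N)]

def confidence(image):
    """Classifier confidence (0..1) that `image` is a guitar."""
    logit = sum(w * p for w, p in zip(weights, image))
    return 1.0 / (1.0 + math.exp(-logit))

# Start from random noise and hill-climb: mutate one pixel at a time,
# keeping a mutation only if the classifier's confidence goes up.
image = [random.random() for _ in range(N)]
score = confidence(image)
for _ in range(5000):
    i = random.randrange(N)
    old = image[i]
    image[i] = random.random()
    new_score = confidence(image)
    if new_score > score:
        score = new_score
    else:
        image[i] = old  # revert the unhelpful mutation

print(round(score, 4))
```

The final image is still noise to a human eye, which is the point: the search optimizes whatever internal criteria the classifier actually uses, not any human-recognizable shape.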
How much complexity is required for consciousness to arise, anyway, let alone sapience?
The thing about microblogging - services which allow posts that are very short (around 140 characters) and are disseminated in the fashion of a broadcast medium - is that it lends itself to fire-and-forget posting. See something, post it, maybe attach a photograph or a link, and be done with it. If your goal is to get information out to lots of people at once, leveraging one's social network is critical: post something, a couple of the users following you repost it so that more people see it, a couple of their followers repost it in turn... like ripples on the surface of a pond, information propagates across the Net like radio waves through the air. Unfortunately, this also lends itself to people taking things at face value. By just looking at the text posted (say, the title of an article) without following the link and reading the article, it's very easy for people to let the title or the text mislead them. News sites call this clickbait, and either quietly use it (because the goal is to get people to click in and see the ads, not to actually have decent articles) or religiously swear against it and put forth the effort to write articles that don't suck.
There is another thing that is worth noting: Microblogging sites like Twitter also carry out location-based trend analysis of what's being posted and offer each user a list of the terms that are statistically significant near them. It's a little tricky to get just trending terms but sometimes you can make an end run with the mobile version of the site. By default trending terms are tailored to the user's history and perceived geographic location, but this can be turned off. At a glance it's very easy to look at whatever happens to be trending, check out the top ten or twenty tweets, and not bother digging any deeper because that seems to be what's happening. However, that can be misleading in the extreme for several reasons. First of all, as mentioned earlier trending terms are regional first and foremost - just because your neighborhood seems boring and quiet doesn't mean that the next town over isn't on fire and crying for help. Second, it's already known that regional censorship is being practiced to keep certain bits of information completely away from certain parts of the world without resorting to "block the site entirely" censorship tactics used in some countries. Of course, the reverse is also true: It's possible to manipulate trends to make things pop to the surface, either to ensure that something gets seen (in the right way, possibly) or to push other terms off the bottom of the trending terms list.
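As a toy illustration of the sort of analysis involved - the corpora and thresholds below are entirely invented, and real trend detection also weighs velocity, geography, and personalization - here is frequency-spike detection in a few lines of Python:

```python
from collections import Counter

# Compare how often each term appears in the current window against a
# historical baseline, and surface terms whose frequency has spiked.
# Both corpora here are made up for the sake of the example.
baseline = Counter({"coffee": 120, "rain": 80, "traffic": 200, "fire": 5})
current  = Counter({"coffee": 130, "rain": 85, "traffic": 190, "fire": 95})

def trending(current, baseline, min_ratio=3.0, smoothing=1.0):
    """Return terms whose current frequency is at least min_ratio times
    their baseline frequency (smoothed to avoid division by zero),
    sorted from biggest spike to smallest."""
    scores = {}
    for term, count in current.items():
        ratio = (count + smoothing) / (baseline.get(term, 0) + smoothing)
        if ratio >= min_ratio:
            scores[term] = ratio
    return sorted(scores, key=scores.get, reverse=True)

print(trending(current, baseline))  # "fire" spikes; steady terms don't
```

Note that the same math is what makes regional tailoring (and regional censorship) easy: swap in a different baseline or filter the current window by location and you get a completely different trending list.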
Midway through December of 2014 Windbringer suffered a catastrophic hardware failure following several months of what I've come to term the Dell Death Spiral (nontrivial CPU overheating even while in single user mode, flaky wireless, failing USB3 ports, failing USB2 ports, complete system collapse). Consequently I was in a bit of a scramble to get new hardware, and after researching my options (as much as I love my Inspiron at work, they don't let you finance purchases) I spec'd out a brand new Dell XPS 15.
Behind the cut I'll list Windbringer's new hardware specs and everything I did to get up and running.