A 3D printed laser cutter, aerosol solar cells, and reversing neural networks.

09 January 2015

3D printers are great for making things, including more of themselves. The first really accessible 3D printer, the RepRap, was designed to be buildable from locally sourceable components - metal rods, bolts, screws, and wires - with the rest run off on another 3D printer. There is even a variant called the JunkStrap which, as the name implies, involves repurposing electromechanical junk for basic components. There are other useful shop tools which don't necessarily have open source equivalents, though, like laser cutters for precisely cutting, carving, and etching solid materials. Lasers are finicky beasts - they require lots of power, they need to be cooled so they don't fry themselves, they can produce toxic smoke when firing (because whatever they're burning oxidizes), and if you're not careful the other wavelengths of light produced when they fire can damage your eyes permanently. All of that said, they're extremely handy tools to have around the shop, and can be as easy to use as a printer once you know how. (Protip: Take the training course more than once. I took HacDC's once and I don't feel qualified to operate their cutter yet.)

Cutting to the chase (way too late), someone on Thingiverse using the handle Villamany has created an open source, 3D printable laser cutter out of recycled components. Called the 3dpBurner, it's an open frame laser cutter that takes after the RepRap in a lot of ways (namely, it was originally built out of recycled RepRap parts) and is something that a fairly skilled maker could assemble in a weekend or two, provided that all the parts were handy. Villamany has documented the project online to assist in the assembly of this device and makes a point of warning everyone that this is a potentially dangerous project and that proper precautions should be taken when testing and using it. Plans for building a suitable safety enclosure for the unit aren't included yet, so my conscience will not let me advise that anyone try building one just yet; this is way out of my league, so it's probably out of yours, too. That said, the 3dpBurner uses fairly easy to find high power chip lasers to do the dirty work; if this sounds far-fetched, people have been doing this for a while, and to good effect at that. The 3dpBurner uses an Arduino as its CPU, running the GRBL firmware - designed as a more-or-less universal CNC firmware implementation - to drive the motors.
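For the curious, driving a GRBL machine is refreshingly simple: the firmware listens on the Arduino's serial port for plain-text G-code and answers each line with an "ok" (or an error) before it will accept the next one. Here's a minimal sketch of that send-and-wait loop in Python; the serial port name, the baud rate (115200 on recent GRBL builds, 9600 on very old ones), and the little square of G-code are my own illustrative assumptions, not anything from the 3dpBurner project:

```python
# A minimal sketch of streaming G-code to a GRBL controller over a
# serial link. Assumes the pyserial library is installed and that the
# Arduino enumerates as /dev/ttyUSB0; both the port name and the baud
# rate are assumptions that will vary from machine to machine.
import time
import serial

GRBL_PORT = "/dev/ttyUSB0"  # e.g. COM3 on Windows
BAUD_RATE = 115200          # recent GRBL builds; very old ones used 9600

def stream_gcode(lines):
    with serial.Serial(GRBL_PORT, BAUD_RATE, timeout=5) as port:
        port.write(b"\r\n\r\n")    # wake GRBL up
        time.sleep(2)              # let it finish initializing
        port.reset_input_buffer()  # discard the startup banner
        for line in lines:
            line = line.split(";")[0].strip()  # drop comments/whitespace
            if not line:
                continue
            port.write(line.encode("ascii") + b"\n")
            # GRBL acknowledges every line with "ok" (or "error:...")
            # before it's safe to send the next one.
            response = port.readline().decode("ascii").strip()
            print(f"{line} -> {response}")

# Trace a 10mm square at a slow feed rate, with the laser off (M5).
stream_gcode([
    "G21 G90",      # millimeters, absolute coordinates
    "G0 X0 Y0",     # rapid move to the origin
    "G1 X10 F200",  # cutting-speed moves around the square
    "G1 Y10",
    "G1 X0",
    "G1 Y0",
    "M5",           # make sure the laser is off when done
])
```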

If you want to download the greyprints for it, you can do so from its Thingiverse page. I also have a mirror of the .stl files here, in case you can't get to Thingiverse from wherever you are for some reason. I've also put mirrors of the latest checkout of the GRBL source code and associated wiki up just in case; they're clones of the Git repositories, so the entire project history and documentation are there. You're on your own for assembly (right now) due to the hazardous nature of this project; get in touch with Villamany and get involved instead. It's for your own good.

Electronic toys are nice - I've got 'em, you've got 'em, they pretty much drive our daily lives - but, as always, power is a problem. Batteries run out at inconvenient times and it's not always possible to find someplace to plug in and recharge. Solar power is one possible solution, but to get any real juice out of solar cells they need to be fairly large, usually larger than the device you want to power. Exploiting peculiar properties of semiconductors on the nanometer scale, however, seems promising. This next bit was first published last summer but it's only recently gotten a little more love in the science news. Research teams collaborating at the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto and IBM Canada's R&D Center are steadily breaking new ground on what could eventually wind up being cheap and practical aerosol solar cells for power generation. Yep, aerosol as in "spray on."

A little bit of background so this makes sense: Quantum dots are basically crystals of semiconducting compounds that are nanoscopic in scale (their sizes are measured in billionths of a meter) - small enough that, depending on how you treat them, they act like either semiconducting components (like those you can comfortably balance on a fingertip) or individual molecules. Colloidal quantum dots are synthesized in solution, which means they readily lend themselves to being layered onto surfaces via aerosol deposition, at which point they self-organize just enough that you can do practical things with them. Like converting a flow of photons into a flow of electrons - in other words, generating electrical power. The research team has figured out how to synthesize colloidal lead-sulfide quantum dots that don't oxidize in air but can still generate power. Right now they're only around 9% efficient; most solar panels are between 11% and 15% efficient, with the current world record of 44.7% efficiency held by the Fraunhofer Institute for Solar Energy Systems' concentrator photovoltaics. They've got a ways to go before they're comparable to solar panels that you or I are likely to get hold of but, the Fraunhofer Institute aside, 9% and 11% efficiency aren't that far apart, and they've improved their techniques somewhat in the intervening seven months. Definitely something to keep an eye on.
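To put those percentages into perspective, a quick back-of-envelope calculation helps. The 1000 W/m² figure is the usual standard test insolation; the patch size is an assumption of mine, just to make the numbers concrete:

```python
# Back-of-envelope power output at different cell efficiencies. The
# 1000 W/m^2 figure is the usual standard test insolation; the patch
# area is an arbitrary assumption to make the numbers concrete.
INSOLATION_W_PER_M2 = 1000.0
AREA_M2 = 0.05  # roughly the lid of a small laptop

for name, efficiency in [
    ("quantum dot cells", 0.09),
    ("typical panel, low end", 0.11),
    ("typical panel, high end", 0.15),
    ("Fraunhofer concentrator record", 0.447),
]:
    watts = INSOLATION_W_PER_M2 * AREA_M2 * efficiency
    print(f"{name}: ~{watts:.1f} W from {AREA_M2} m^2")
```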

Image recognition is a weird, weird field of software engineering, involving pattern recognition, signal analysis, and a bunch of other stuff that I can't go into because I frankly don't get it. It's not my field, so I can't really do it any justice. Suffice it to say that the last few generations of image recognition software are pretty amazing and surprisingly accurate. This is due in no small part to advancements in the field of deep learning, part of the field of artificial intelligence which attempts to build software systems that work much more like the cognitive processes of living minds. Techniques encompass everything from statistical analysis to artificial neural networks (learning algorithms designed after the fashion of successive layers of simulated neurons) to even more rarefied and esoteric techniques. As for how they actually work when you pop the hood open and go digging around in the engine, that's a very good question. Nobody's really sure how software learning systems work, just like nobody's really sure how the webworks of neurons inside your skull do what they do, but the nice thing is that you can dissect and observe them in ways that you can't organic systems. Recently, research teams at the University of Wyoming and Cornell have been experimenting with image analysis systems to figure out just how they function. They took one such system, called AlexNet, and did something not many would probably think to do - they asked it what it thought a guitar looked like. Their copy of AlexNet had never been trained on pictures of guitars; when asked, it dumped its internal state to a file, which unsurprisingly didn't look anything like a guitar. The contents of the file looked more like Jackson Pollock trying his hand at game glitching.
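As an aside, the "successive layers of simulated neurons" bit is less exotic than it sounds. Here's a minimal sketch of the idea in Python with numpy; the layer sizes and random weights are arbitrary stand-ins, nothing remotely like AlexNet's actual architecture:

```python
# A minimal sketch of "successive layers of simulated neurons": each
# layer is just a weight matrix, and a neuron's output is a nonlinear
# squashing of the weighted sum of the previous layer's outputs. The
# sizes and random weights here are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Weighted sum of the inputs, then a nonlinearity (here, ReLU).
    return np.maximum(0.0, weights @ inputs + biases)

# Three layers: 16 "pixels" in, two hidden layers, 3 class scores out.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
w2, b2 = rng.normal(size=(4, 8)), np.zeros(4)
w3, b3 = rng.normal(size=(3, 4)), np.zeros(3)

image = rng.random(16)         # a stand-in for image pixels
hidden1 = layer(image, w1, b1)
hidden2 = layer(hidden1, w2, b2)
scores = w3 @ hidden2 + b3     # raw class scores

# Softmax turns the scores into confidence values that sum to one.
confidences = np.exp(scores) / np.exp(scores).sum()
print(confidences)
```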

The next phase of the experiment involved taking a copy of AlexNet that had been trained to recognize guitars and feeding it that weird image generated by the first copy. They took the confidence rating from the trained copy of AlexNet (roughly, how much it thought its input resembled what it had been trained on) and fed that metric back into the first, untrained copy, which they then asked again what it thought a guitar looked like. They repeated this cycle thousands of times until the first instance of AlexNet had essentially been trained to generate images that could fool other copies of AlexNet, and the second copy of AlexNet was recognizing the graphical hash as guitars with 99% confidence (a toy version of this loop is sketched below). What the results of this idiosyncratic experiment suggest is that image recognition systems don't operate like organic minds. They don't look at overall shapes or pick out the strings or the tuning pegs; instead, they look for things like clusters of pixels with related colors, or abstract patterns of smaller patterns, or relationships between colors. In short, they do something else entirely.

This does and does not make sense when you think about it a little. On one hand, we're talking about software systems that at best only symbolically model the functionality of their biological inspirations. Organic neural networks tend not to be fully connected, while software neural nets usually are. There's a lot going on inside of organic neurons that we aren't aware of yet, while the internals of individual software neurons are pretty well understood: the simplest are individual cells in arrays, and the arrays themselves have certain constraints on the values they contain and how they can be arranged. On the other hand, what does that say about organic brains? If software neural nets are to be considered reasonable representations of organic ones, just how much complexity is present in the brain, and what do all of those neurons do? How many discrete networks are there, or is it one big mostly-connected network? How much complexity is required for consciousness to arise, anyway, let alone sapience?
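Stripped to its essentials, the feedback loop described above is a hill-climbing search: mutate an image, keep the mutation if the trained network's confidence goes up, and repeat. Here's a toy sketch of that dynamic in Python; the confidence() function is a stand-in stub, not AlexNet, and the real experiments used far larger networks and more sophisticated search:

```python
# A toy sketch of the feedback loop: mutate an image, keep the change
# if the "trained network's" confidence rises, repeat until it is
# nearly certain it sees a guitar. The classifier here is a stub that
# rewards a fixed pattern - just enough to show the dynamic.
import numpy as np

rng = np.random.default_rng(42)

def confidence(image):
    # Stand-in for "how guitar-like does the network think this is?"
    # It peaks at 1.0 as the image approaches a fixed target pattern.
    target = np.sin(np.arange(image.size)).reshape(image.shape)
    return 1.0 / (1.0 + np.mean((image - target) ** 2))

image = rng.random((8, 8))  # start from pure noise
score = confidence(image)

for step in range(100_000):
    mutant = image + rng.normal(scale=0.05, size=image.shape)
    mutant_score = confidence(mutant)
    if mutant_score > score:  # keep mutations that fool it more
        image, score = mutant, mutant_score
    if score > 0.99:
        break

print(f"stopped at step {step} with confidence {score:.3f}")
```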