It is, in theory, possible to configure any network service to be reachable over the Tor darknet. This includes instant messaging servers, like the XMPP server EjabberD. Conversely, it must be possible to configure your instant messaging client to connect over the Tor network. I used Pidgin as my client, and here's how I did it:
I then created a new XMPP account in my Pidgin client which connects to the XMPP domain the server was configured for (let's say it's 'xmpp-domain', though it could probably also be set to the .onion hostname; I haven't tried yet). On the Advanced tab I set the value of the "Connect server" field to the .onion hostname of the XMPP server (let's say it's '0123456789abcdef.onion'). The kicker is on the Proxy tab: set Proxy Type to 'HTTP', Host to 'localhost', and Port to '8118' (in other words, point Pidgin at the copy of Polipo running on your workstation). If you configure Pidgin to use "Tor/Privacy" and port 9050 as the proxy, it won't work: libpurple (the library Pidgin is built on) tries to do a local DNS lookup on the .onion hostname when connecting, but the "Tor/Privacy" proxy setting explicitly disallows DNS lookups, so the connection gets cut off before it starts. I then enabled my new account, and about thirty seconds later I'd successfully logged into the hidden XMPP server.
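The underlying rule is easy to state in code. This is a hypothetical helper of my own for illustration - it's not anything from Pidgin or libpurple - but it captures why the proxy settings matter:

```python
# My Pidgin settings expressed as data, mirroring the configuration
# described above (Polipo listening on the workstation).
PIDGIN_PROXY = {"type": "HTTP", "host": "localhost", "port": 8118}

def must_resolve_remotely(hostname: str) -> bool:
    """Return True when name resolution has to happen on the proxy side.

    .onion names have no records in public DNS; only Tor can resolve
    them. A proxy mode that forces the local resolver to do the lookup
    will therefore always fail for a hidden service hostname.
    """
    return hostname.lower().endswith(".onion")
```

In other words, any client that insists on resolving the hostname itself before handing the connection to the proxy can never reach a hidden service, no matter how the proxy is configured.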
We also released two .iso images: a regular image that you can burn to disc and boot from, and a hybrid image that you can write to a flash drive and boot from as well as burn to a disc. We recommend that Mac users try the hybrid .iso image first because not every Mac has an optical drive these days.
Regular readers of my blog (when I post... I'll write about what's been going on soon, promise) know that I keep a sensor net focused on the field of microfacture - personal manufacturing and rapid prototyping. Most of the time I natter on about 3D printing, but depositing layers of material to make something isn't the end-all-be-all of small scale manufacture. The other end of the spectrum - milling, or carving feedstock - is just as useful, and for many applications it's a preferable technique for making things. The thing about automills is that they're not yet as common as 3D printers. People have made their own, sure, but there really hasn't been a killer app that made everyone want to jump on the bandwagon. Not even manufacturing your own printed circuit boards, which is traditionally a tricky, nasty process that can cause one hell of a mess.
Say it with me: "Until now."
Presenting the Othermill: an affordable, personal automill for carving and drilling your own circuit boards, metal bits and bobs, jewelry, and reusable molds. The Othermill is a person-carryable (fifteen pounds, ten inches on a side) three-axis CNC mill designed to plug into any computer with a USB cable, where it can then be controlled with software readily available for Windows, Mac OS X, and Linux. Three-axis means that the carving bit can move forward and backward, left and right, and up and down while operating. It was designed to carve printed circuit boards, but by switching in different 1/8" cutting bits (which are readily available from local hardware stores) it can also carve plastic, casting wax, wood, and metal. The design is as simple as possible for such a complex apparatus, to encourage people to modify and hack on it. While the Othermill is designed to work with existing CAD/CAM software, the team behind it is developing their own easier-to-use CAD/CAM software to lower the barrier to entry for new users. Designs in just about any vector graphics format can be imported into their software to lower that barrier further.
Oh, and it's already 200% funded - the project's raised over twice the money they needed to achieve their goals. This means that they've had a working prototype for a while and are getting ready to go into mass production. They plan on shipping the first batch in August of 2013, so if you want to get in on the ground floor you'd best get a move on.
ANNOUNCING BYZANTIUM LINUX V0.3a (Beach Cat)
Approved for: GENERAL RELEASE, DISTRIBUTION UNLIMITED
NOTE: This is Byzantium Linux for x86-compatible laptops and desktops. This release is not compatible with the Raspberry Pi. We just started work on that port.
Project Byzantium, a working group of HacDC, is proud to announce the release of v0.3 alpha of Byzantium Linux, a live distribution of Linux which makes it fast and easy to construct an ad-hoc wireless mesh network. Such a mesh can augment or replace the current telecommunications infrastructure in the event that it is knocked offline (for example, due to a natural disaster) or rendered untrustworthy (through widespread surveillance or disconnection by hostile entities). Byzantium Linux is designed to run on any x86 computer with at least one 802.11 a/b/g/n wireless interface. Byzantium can be burned to a CD- or DVD-ROM (the .iso image is around 372 megabytes in size), booted from an external hard drive, or even installed in parallel with an existing operating system without risk to the user's data and software. Byzantium Linux will act as a node of the mesh, automatically connecting to other mesh nodes and acting as an access point for wifi-enabled mobile devices.
This release is unique because it is based upon the work we did in New York City in the weeks following Hurricane Sandy in late 2012. We were asked by FEMA (Federal Emergency Management Agency) to help restore the telecommunications network in the neighborhood of Red Hook, and the design requirements were dictated by the needs of the community as described by leaders and elders. v0.3a constitutes a formalization of those requirements rather than the ad-hoc build we deployed in Red Hook.
THIS IS AN ALPHA RELEASE! Do NOT expect Byzantium to be perfect. Some features are not ready yet, others need work. Things are going to break in weird ways and we need to know what those ways are so we can fix them. Please, for the love of LOLcats, do not deploy Byzantium in situations where lives are at stake.
Binary compatible with Slackware-CURRENT. Existing Slackware packages can be converted with a single command.
Automatically configures itself on boot. There is no longer a need for a control panel.
Can act as a gateway to the Internet if a link is available (via Ethernet or tethered smartphone).
Linux kernel v3.4.4
Drivers for dozens of wireless chipsets
KDE Trinity r14.0.0 (Development)
LXDE (2011 release of all components)
SYSTEM REQUIREMENTS (to use)
Minimum of 1GB of RAM (512MB without copy2ram boot option)
i586 CPU or better
CD- or DVD-ROM drive
BIOS must boot removable media
At least one (1) 802.11 a/b/g/n interface
SYSTEM REQUIREMENTS (for persistent changes)
The above requirements to use Byzantium
2+GB of free space on thumbdrive or harddrive
WHAT WE NEED:
No more Steve Ballmer impersonations.
People running Byzantium to find bugs.
People reporting bugs on our Github page (https://github.com/Byzantium/Byzantium/issues). We can't fix what we don't know about!
People booting Byzantium and setting up small meshes (2-5 clients) to tell us how well it works for you with your hardware. We have a hardware compatibility list on our wiki that needs to be expanded.
Help translating the user interface. We especially need people fluent in dialects of Chinese, Arabic, Farsi, and Urdu.
Develop a method for interfacing Byzantium Linux with existing amateur radio mesh networking projects.
Parties interested in joining the development effort are encouraged to join our mailing list. If you do not have or do not wish to set up a Google Account, simply send an e-mail to the address byzantium plus subscribe at hacdc dot org (spamblocked, but also documented here in step 4).
A while back I wrote an article about web applications that can live wherever you can store a file, not necessarily on a web server out of your control. I probably should have posted a link to the Google Group dedicated to unhosted applications, but that's neither here nor there. To recap briefly, what I discussed in the previous article are called unhosted communications applications, like social networking or instant messaging software. This raises a crucial question: assuming that you're running an unhosted application in your web browser, how do you tell other people how to connect to you with their own copies? If an application is running on a server someplace, then everybody who visits that server can potentially communicate through it with everyone else connected to it. This is how garden variety web applications operate. But what if running a server won't work for your use cases? I talk a lot about decentralized applications because they're harder to shut down. Central points of failure are too easy to find and take out, but peer to peer services are harder to contain by their very nature.
(Disclaimer: I hope I got this bit right. If I didn't, expect future edits.)
Let's take everyone's favorite example, BitTorrent. Applications like BitTorrent have two modes of operation. They can connect to one or more centrally located servers called trackers to download .torrent files and advertise their participation in one or more torrents, so that other clients partaking of the same torrents can contact them. The other mode of operation involves so-called trackerless torrents, in which a distributed hash table takes the place of a tracker. In the BitTorrent DHT every node pseudorandomly picks a hash, computed from its addressing information, to identify itself. By searching the contents of the table, a node can find other nodes on the public Net which are either running the same torrent or possess addressing information for nodes that are. Nodes which are aware of one another can also ask each other for addressing information, which is then locally cached and possibly replicated to other nodes later.
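For the curious, the core of the lookup logic sketched above fits in a few lines of Python. This is an illustration, not the real protocol: actual BitTorrent DHT node IDs are 160-bit values, and deriving one from an address string with SHA-1 here is purely my own simplification.

```python
import hashlib

def node_id(address: str) -> int:
    # Derive a 160-bit identifier from a node's addressing info.
    # (Simplification for illustration; real clients pick IDs per the
    # DHT protocol rather than hashing their address like this.)
    return int.from_bytes(hashlib.sha1(address.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia-style "distance" between two IDs is their bitwise XOR.
    return a ^ b

def closest_nodes(target: int, known: list, k: int = 3) -> list:
    # A lookup step: of the nodes we know about, which k are
    # "closest" to the target ID? Those are the ones we'd query next.
    return sorted(known, key=lambda n: xor_distance(n, target))[:k]
```

The interesting property is that XOR distance gives every node a consistent, total ordering over the ID space, so independent nodes converge on the same answers without any central coordinator - which is exactly what lets a DHT replace a tracker.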
More under the cut...
It's almost taken for granted these days that your data lives Out There Somewhere on the Internet. If you set up a webmail account at a service like Gmail or Hushmail, your e-mail will ultimately be stored on a bunch of servers racked in a data center someplace you will probably never see. Users of social networks implicitly accept that whatever they post - updates, notes, images, videos, comments, what have you - will probably never touch any piece of hardware they own ever again. Everything stays in someone else's server farm whether or not you want it to, and while there are sometimes options for extracting it, rarely has anyone written software which can actually do anything with it (like re-importing it someplace else) because the formats are never identical. It'd be a lot of work. Additionally, if you lose access to your account somehow - for example, if someone manages to successfully guess your password, social engineer their way in, or force a password reset - it's exceedingly difficult to get your access back. (Why would someone want to do that? Since when have people ever needed a reason to be malicious?)
But what if you didn't have to trust someone else to hold your data? What if you didn't have to worry about logging into a service somewhere and managing yet another password? Granted, we're not completely there yet, but there are definitely options waiting to be assembled into something more...
More under the cut...
Long time readers are no doubt familiar with my fascination with the subject of biological computing, using organic structures to process and represent information rather than silicon substrates. When you get right down to it, DNA is an information storage and representation system, just like the tape upon which a notional Turing machine reads and writes symbols. Using this metaphor (which isn't nearly as tortured as it sounds), the ribosomes of eukaryotic cells would be the Turing machines that read the tape and carry out the operations (protein synthesis) encoded in the nucleotides.
Not too long ago the field advanced another crucial step. Rather than cutting-and-splicing long strings of DNA to hardcode programs into them, a research team at MIT figured out a way to represent basic operations with much smaller segments of DNA. Bacteria exchange plasmids, short loops of DNA that encode single genes, which are used to mix up the gene pool of a species of bacteria in close quarters. Timothy Lu, who led the research team, designed a set of synthetic plasmids for Escherichia coli, one of the bacteria most commonly encountered (and studied) by humans. Each plasmid represents a basic logic or arithmetic function, from addition to exclusive-or. In addition to promoter and terminator nucleotide sequences (which mark the beginning and end of the gene in the plasmid), the payload of each plasmid codes for a protein which glows green. Recombinase enzymes that cut and splice DNA at specific loci are used to encode inputs by splicing and arranging plasmids into longer sequences of DNA, turning them into programs composed of smaller operations. The DNA is assembled within and expressed by the bacterial culture, and ultimately the bacteria either do or do not produce the fluorescent protein (effectively outputting a 1 or a 0).
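To make the logic concrete, here's a toy model in Python of what these plasmid-encoded gates compute. Everything here - the gate set, the names, the two-input shape - is my own illustrative abstraction rather than the actual MIT constructs; in the real system the inputs are recombinase enzymes and the output is fluorescence.

```python
# Toy abstraction: a "plasmid" computes one boolean function of two
# recombinase inputs; the culture fluoresces (True) or doesn't (False).
# Gate names and selection are illustrative, not the published designs.
PLASMID_GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
}

def expresses_gfp(gate: str, recombinase_a: bool, recombinase_b: bool) -> bool:
    """Would a culture carrying this gate's plasmid glow under UV?"""
    return bool(PLASMID_GATES[gate](recombinase_a, recombinase_b))
```

The point of the table-of-gates framing is composability: once each basic operation has a physical encoding, longer programs are just arrangements of plasmids, the same way longer boolean expressions are just compositions of gates.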
So, what does this ultimately mean? What good is either producing or not producing a protein that glows under UV light? Seems kind of pointless.
Looking at it from the 30,000 foot view, what we have here is a way to feed new and potentially novel gene sequences into one of the most common bacteria out there in a more reliable way. It is also now possible to assemble them in a more symbolic fashion, with boolean logic and arithmetic, rather than by reverse engineering existing proteins and figuring out which patterns of nucleotides correspond to which amino acids in an organism. It is also now possible to turn those gene sequences on and off at a later time, after they've been assimilated by the bacterial culture. This means that synthesis of compounds could be started and stopped without killing the culture - it is plausible that a cell culture could halt synthesis of whatever it's been programmed for when it reaches a certain concentration or internal threshold.
Late last year I wrote an article about SPAUN, a massive-scale simulation of parts of the human brain implemented using software called Nengo. The basic concept behind SPAUN, as you may recall, is that it is a functional model of some aspects of the human brain which duplicates some of its neural networks as well as the myriad connections between them. What isn't obvious is that this connection model was developed in part through the microscopic examination of many human brains post mortem, plus many different kinds of scans carried out on living people over many, many years. A great deal of knowledge has also come from projects which have analyzed partial connectomes of other species, such as the Open Connectome Project. In certain structural and functional respects, mammalian and reptilian brains are comparable to human brains, and the knowledge gained from them has been applied to human neuroanatomy.
This body of knowledge is growing by leaps and bounds thanks to data from an ambitious effort called the Human Connectome Project, a $40 million US undertaking which aims to build a much more comprehensive map of the human central nervous system at the cellular level than any previous effort. It is hoped that the data gathered by the HCP can be used to better understand certain neurological phenomena, such as dyslexia or autism. It is also hoped that certain cognitive proclivities will be better understood, such as having a talent for mathematics, music, or languages. In addition to the data collected (which you can apply to gain access to, though I have a gut feeling that not just anyone is going to be accepted), funds have been allocated to advance the state of the art in neuro-imaging technology, specifically toward increasing the resolution of scanners.
The techniques used to map the human connectome include EEG, functional MRI to measure oxygen utilization by neurons, and diffusion MRI to track flows of various molecules through neurons to record the patterns they make on the macroscale. The quantities of data collected by each scan are nontrivial to say the least, and require vast amounts of disk space and processing power to store and manage. For example, the Open Connectome Project built a partial connection map of the retina and visual cortex of a lab mouse, and it's about twelve terabytes in size. Human brains are orders of magnitude larger and more complex than a lab mouse's, and the scans are going to be at a much higher resolution, so I can't even begin to guess how big a full human connectome is going to be (though people much smarter than I have no doubt made estimates and back-of-the-envelope computations). The fascinating thing about these data sets is that they can be visualized so that you can see what's happening inside the mapped brains at a moment in time. Discrete functional groups stand out like lightning strikes, and by correlating those bundles of neuronal activity you can start to see how different parts of the brain cooperate (or not) to carry out certain tasks or kinds of thoughts.
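Here's the kind of back-of-the-envelope computation I mean, with every number a rough, clearly-labeled guess. The neuron counts are commonly cited approximations, and the linear scaling is my own and almost certainly too low, since the mouse map covered only part of one brain and human scans will be higher resolution:

```python
def human_connectome_estimate_tb(mouse_map_tb: float = 12.0,
                                 mouse_neurons: float = 7.1e7,
                                 human_neurons: float = 8.6e10) -> float:
    """Scale the mouse data set linearly by neuron count.

    All inputs are order-of-magnitude guesses. The 12 TB mouse figure
    is the partial retina/visual-cortex map mentioned above; ~71 million
    and ~86 billion are commonly cited whole-brain neuron counts.
    Treat the result as a floor, not a prediction.
    """
    return mouse_map_tb * (human_neurons / mouse_neurons)
```

Even this naive scaling lands in the middle five figures of terabytes - north of ten petabytes - before accounting for resolution, which is why the storage and processing requirements are such a large part of the project.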
So, what can this tell us? Of what use is the Human Connectome Project?
First, the HCP will help the related field of psychology better understand human cognition and behavior. While the chemoelectrical aspects of human thought under certain conditions are known, we don't yet have a great deal of knowledge about what the other mechanisms of the brain are up to at the same time. Electrical activity is not the be-all and end-all of thought; in fact, it seems to be a shadow of what is actually happening. Second, the complexity of the interconnections of neurons with one another, and of the interconnections between all of those networks of neurons, is staggering. The human cerebral cortex - the outermost part of the brain - is between 1mm and 4mm thick and contains tens of billions of neurons (the brain as a whole holds on the order of 100 billion). The rest of the brain, the matter below the surface that you can't see, seems to be mostly composed of the connections between that thin layer of neurons on the surface. I've no doubt that there is an amazing amount of emergent activity taking place in such a dense network, and the HCP will allow us to get a better picture of what those emergent patterns look like. Third, by characterizing and correlating what (relatively) baseline people's cortical activity and structures look like, we will gain better insight into how to treat (or at least manage) brain injuries and neurological abnormalities. At the very least, surgical intervention will probably become that much more precise.
There seem to be a couple of problems inherent in the field of prosthetic design. First and foremost is that comparatively few people need artificial limbs, so not enough of them are manufactured at once to bring the cost down. A second problem is that because so few people need them, designs don't seem to improve very rapidly. When not enough of anything gets built, there isn't enough pressure for bugs to be ironed out quickly, nor for designs to evolve in positive directions, so relatively simple advances may not appear anytime soon. Business and industry meet science and technology; what can I say?
Which brings me right along to the phenomenon of curious hackers tinkering because they enjoy it. Easton LaChappelle has been building robotic manipulators since the age of 14 for the sheer fun of it. Lacking a background in software development or engineering, he taught himself what he needed to know and won third place in the Colorado State Science Fair in 2011 with the first version of his design. A chance meeting with someone at last year's science fair caused him to blaze the trail still farther: top of the line prosthetic limbs cost tens of thousands of dollars, but they don't grow as the body does, so they need to be replaced at full (or higher) cost. So, Easton designed a third iteration of his robotic arm, printed as many parts as was feasible on a 3D printer, and used off the shelf components for the rest. Total cost of the prosthetic he's designed? $250us. Easton's v3 is said to be amazingly capable for an artificial limb, and to facilitate that he built a novel user interface system that seems daunting at first, until one considers the learning curve of a pre-paid smartphone from the grocery store. A consumer EEG is used to actuate functions and control movement of the arm (hey, don't look at the screen that way, they're out there and a lot of fun to hack around with...), and flexing muscles in other parts of the body selects gross movements (such as flexing the elbow and turning the wrist). NASA, impressed by his work, has tapped him to intern on the Robonaut project headquartered at the Johnson Space Center.
More under the cut...
Rather than stay home for my birthday (which I've done for the past few years) I decided to make things interesting this time 'round the sun. Sitwon and Haxwithaxe had secured a hotel room and passes for Shmoocon in downtown DC last weekend, so I threw my hat into the ring more or less at the last minute. Shmoocon is an excellent hacker conference, don't get me wrong, but I don't ordinarily get much out of it. It is, as they say around here, above my pay grade. That said, I decided to go solely to see what I could make of the weekend and went with few preconceived notions and no idea of what was on the schedule for this year. I didn't sign up for any of the competitions, either, and limited myself to only the cash I had in my wallet at the time.
For various reasons I wasn't able to take Friday off, so I missed the entire first day of the conference, but I figure that video recordings of the presentations given that day will appear presently. After work it was a short Metro ride to get downtown, and somewhat to my surprise my phone got me to the hotel without much trouble. I pinged Sitwon on my way in, acquired a hotel room key, conference badge, and ski resort-themed con swag, dropped my gear off in our room, and then we met up with Haxwithaxe (who is on the con security staff) to get dinner. A few Metro stops away is Tono Sushi, where everyone treated me to a sushi dinner for my birthday. After taking the Metro back to the hotel I spent a few minutes hanging out at the hotel bar with the rest of the con-goers, but after a long day at work and a couple of subway trips I opted to head back to our room upstairs to read a bit and go to bed early to recuperate.
More under the cut...
In the abstract, it's always been easy to figure out what to post on my birthday. I can think about it in the car on my way in to work and have some ideas of where to go and what to say, but when it comes time to put fingers to keys to actually write something, words scatter like dust in a sunbeam. Funny how that happens, usually with really personal things. So, time for a little nonlinear text editing, where I scribble random ideas and go back later to rearrange and flesh them out.
Long-time members of the open source community no doubt remember iBiblio.org, which is one of the first and largest online archives of open source software. It doesn't see as much love as it used to due to how many open source project hosting sites there are out there (including the venerable Sourceforge, Github, and Google Code). Also, because cheap to free personal web hosting is so common, it's trivial to upload your projects these days. In recent years, however, the iBiblio team set up Terasaur, a BitTorrent tracker which makes it much easier to distribute large projects (such as distributions of Linux or application suites) using BitTorrent. They do the work of seeding for you so you don't have to tie up your home net.connection.
I just set up a Terasaur page for Byzantium Linux v0.2a, and here's how I did it. Specifically, here's how I did it after figuring out what I was doing wrong...
I'll start by saying that even after an account has been provisioned for you on Terasaur, it can take up to a week before you'll actually be able to upload anything. Check back periodically; if you don't see the "Editor Options" menu on the right-hand side, you can't add anything to the torrent archive yet. Sit tight until you receive an e-mail from the admins saying that your account has been enabled.
When your account has been enabled, start by making a .torrent file for what you want to upload. I used the Terasaur tutorial for using Transmission to make a .torrent file.
Upload that .torrent file to Terasaur. Now, here's where it gets less clear: go to the page for the .torrent you just uploaded and download a new copy of the file from terasaur.org. Put it into a different directory, or give the file a different name - just keep track of it. On Terasaur, click the "Start upload" button on the page. Now load the .torrent file you just re-downloaded into your BitTorrent client and start seeding. You will upload the files to the Terasaur archive, where their seedboxes will cache a copy locally and make them available to whomever wants them. When your upload hits 100% you can shut down your BitTorrent client, because the seedboxes at Terasaur have taken over.
Now that I have a little time to breathe, I've updated my .plan file. Per usual, the contents range from the funny to the politically incorrect to the vulgar. Exercise discretion if you're at work or in public.
So, it's been slightly over a week since 2013.ev began (and Happy New Year to everyone, by the bye), and I haven't posted so much as an opening evocation for the new year. Where, one or two of you may be asking, has the Doctor been? Did he dive into the time vortex on 21 December 2012 and get lost (again)?
The answer is no, I didn't go traipsing around time and space, as much as I'd like to have done so. I took the last two weeks of the year off and tried my hardest to take a vacation. Nothing fancy, mind you - I wish I could have gone to visit Antarctica but I had neither the time nor the money to make such an (awesome) journey. I was shooting for simple time at home: Sleeping in, reading, and not drinking too much coffee. Partially, I wanted a vacation because I've been feeling very burned out for the past month or two and wanted some downtime to recharge and clear my head, though not a whole lot of head clearing happened due to malaise and the odd pounding headache (which are just now starting to abate). It was for this reason that my vacation wasn't terribly restful or recuperative.
A couple of months ago after a particularly bad pizza burn, I noticed a little lump on the roof of my mouth. I figured that it was probably a little scar tissue, but mentioned it to my dentist when I went in for a checkup near the end of December. He poked and prodded it for a few minutes and said, "I have no idea what this is. I'm writing you a referral to an endodontic surgeon. Get this checked out as soon as you can."
For those of you who haven't been paying any attention to the news lately (and why should you? it's the holidays), Wayne LaPierre, executive vice-president of the National Rifle Association, gave a press conference yesterday about what he thought of the recent shootings at Sandy Hook. Predictably, half the Internet blew its buffers, and the petitions and sarcastic remarks are flying like paper airplanes when the teacher's back is turned. Once, common sense was the first casualty of tragedy; in recent years common sense ran out of regenerations and was given a viking funeral (video contains spoilers for series six of Doctor Who; just take my word for it). LaPierre remarked that a comprehensive national database of the mentally ill was necessary to prevent unstable people from acquiring guns - as if nobody ever bought a stolen gun on the black market, or stole a gun themselves. On this matter there is a quote which speaks more eloquently than I can: "Gun control is the theory that people who are willing to ignore laws prohibiting rape, torture, kidnapping, theft and murder will obey a law which prohibits them from owning a firearm."
What really caught my attention was LaPierre's assertion that armed guards were necessary in schools to protect children. That seems to be what really lit the fuse on this controversy.
For those of you who watch the tech field, you've no doubt heard of Ray Kurzweil, the inventor, technologist, and futurist who's been promulgating the "The Singularity is near!" meme since the 1980s. Love him or hate him, he's a brilliant man who's invented some fantastic, practical things. One of the things he talks about a great deal is how strong AI, which many now refer to as Artificial General Intelligence (i.e., human-like intelligence and sapience), is just a few years away, and he cites Moore's Law as evidence. Of course, a lot of people think he's pushing jetwash and get on with their lives. So, when he blogs about things like machine learning and AI, a lot of people are prone to ignore his observations about what is being done right now, mistaking them for hypotheses (which, make no mistake, he is also prone to making).
So, it came with a groan followed by some surprised Google searches when a post hit one of the Zero State mailing lists about a large scale software simulation of part of a human brain.
This is the weblog of the Doctor, who is (in no particular order), a geek, a writer, a musician, a mystech, a coder, a traveler, an adventurer, an engineer, a magickian, a system administrator, a consultant, a transhumanist, and is interested in just about everything to some extent.
The Doctor's life is quite busy (his career doubly so) so he posts whenever the opportunity arises. It isn't as often as he would like.