Machine learning: going from merely unnerving to scary.

13 October 2015

It seems like you can't go a day with any exposure to the media without hearing about machine learning: the practice of developing software which isn't designed to do anything in particular but is capable of teaching itself to carry out tasks and make educated predictions based upon its training and the data already available to it. If you've ever had to deal with a speech recognition system, bought something off of Amazon that you didn't know existed (but seemed really interesting at the time), or used a search engine, you've interacted with a machine learning system of some kind. That said, here's a roundup of some fascinating stuff being done with machine learning systems at this time.

First, let's talk about chess. As board games go it's a tricky one to write software for due to the sheer number of potential moves on each turn. Pretty much every chess engine out there, from IBM's Deep Blue to Colossus Chess back in 1984, uses more or less the same general technique: brute-forcing the set of all possible moves for the current board configuration, discarding the moves that obviously won't work (i.e., illegal moves) with varying degrees of cleverness, and winnowing down the remaining positions to extract the best possible move for that moment. Well-engineered systems can evaluate several hundred million possible moves in a few seconds before settling on one; conversely, human chess players have been observed using the fusiform face areas of their brains to evaluate five or six moves per second before picking one, which is obviously much slower, but history has borne out just how efficient a means of playing chess wetware is.

Enter Giraffe, by Matthew Lai at Imperial College London. Giraffe is implemented as a very sophisticated machine learning system which makes use of multiple layers of neural networks, each of which analyzes chess boards in a different way. One layer looks at the state of the game board as a whole, another analyzes the location of each piece relative to the others on the board, and another considers the squares each piece can move to as well as the game effects of each possible move. Giraffe started out knowing nothing about the game of chess because it was an unformatted, unprogrammed neural network construct. Lai then began feeding Giraffe carefully selected parts of databases of chess games, where each game is documented move-by-move and annotated every step of the way. This is, incidentally, the important bit about training AI software.
Whatever data sets you train them with have to be annotated in some natively machine-readable way so that the software has a "native language" to attach ideas to, just as you or I would think in our native languages and mentally translate into a second language learned later in life. All told, it took Giraffe about 72 hours of continuous training to assimilate the information needed to play chess. At the end of the training process Giraffe was benchmarked against human chess players, and it was discovered that Giraffe ranks as a FIDE International Chess Master. If you're curious, here's the paper Lai wrote about building and training Giraffe. (Disclaimer: I'm not a parent.)
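To make the "brute force and winnow" technique concrete, here's a minimal sketch of the classic search that engines like Deep Blue are built around: minimax with alpha-beta pruning. The toy game tree stands in for a real chess position generator (which this obviously is not), but the search skeleton is the genuine article.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot change the final decision."""
    if depth == 0 or not node.get("children"):
        return node["value"]
    if maximizing:
        best = float("-inf")
        for child in node["children"]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent will never allow this line; stop searching it
                break
        return best
    else:
        best = float("inf")
        for child in node["children"]:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy two-ply game tree: we pick a branch, then the opponent picks
# the reply that is worst for us.
tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},  # opponent would answer with 3
    {"children": [{"value": 2}, {"value": 9}]},  # opponent would answer with 2
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # → 3
```

The "varying degrees of cleverness" mentioned above mostly live in the pruning step: the better an engine is at proving a branch can't matter, the deeper it can search in the same few seconds.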

If you've ever watched an infant explore its environment, what we would ordinarily consider play is actually a concerted effort, driven by pure curiosity, to learn how to interact with objects. Babies don't yet know how to use their limbs, what things are, or how they can be moved around, because these are all novel experiences, so they're training themselves. AI software is the same way: when it's first started up it has no inherent knowledge of what it's supposed to do, unless it has a saved state file to read in from storage. For many years it's been difficult at best to train robots to carry out tasks: either a human operator had to grasp a waldo's end effector and walk it through the task multiple times until the robot's software could carry it out unassisted, or a teach pendant was used to do the same thing. Lerrel Pinto and Abhinav Gupta at Carnegie Mellon decided to see what would happen if a robot was given a generalized learning program and left to walk itself through the task. They took a two-armed industrial manipulator robot named Baxter, hooked a hacked Microsoft Kinect up to it to provide vision (because let's face it: who actually bought a Kinect to play games with?), and linked a pre-trained object recognition neural network into the control software. Baxter started out able to move objects away from one another to make them easier to grasp, rotate its grippers 10 degrees at a time, and pick random points on an object at which to try to pick it up. Baxter would try to pick up the object in question, determine whether it was successful, and try again and again and again, 188 times in total. At the end of each sequence of attempts Baxter's software evaluated the most and least successful attempts, and then moved on to another object. Gupta and Pinto put a table covered with toys (because they tend to be sized for learning basic manual dexterity) in front of Baxter and let it try ten hours a day for seventy days.
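The trial-and-error loop described above is simple enough to sketch. Everything here is a hypothetical illustration, not the actual Baxter control code: `attempt_grasp` stands in for commanding the arm and reading the gripper's sensors, and the "good angle" rule is invented purely so the simulation has something to converge on.

```python
import random

def attempt_grasp(point, angle_deg):
    # Stand-in for physically trying the grasp and checking whether
    # the object was lifted; here a grasp "succeeds" when the gripper
    # angle roughly matches a fixed good orientation for the object.
    return abs(angle_deg - 40) <= 10

def learn_object(trials=188, seed=0):
    """Try `trials` random grasps on one object and record the outcomes."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        point = (rng.random(), rng.random())  # random spot on the object
        angle = rng.randrange(0, 180, 10)     # gripper rotated 10 degrees at a time
        results.append((attempt_grasp(point, angle), point, angle))
    successes = [r for r in results if r[0]]
    return len(successes) / trials, successes

rate, good_grasps = learn_object()
print(f"success rate after 188 attempts: {rate:.0%}")
```

The real system's extra trick is what happens after the loop: the most and least successful attempts become labeled training data for the neural network, so the next object starts from a better guess than pure randomness.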

At the end of the test run, Baxter had a grasping accuracy rate of somewhere around 80%. The researchers then started the tests over using more pedestrian heuristics to carry out the same set of tasks; those runs were only about 62% as efficient as the "do it yourself" training sessions. Again, if you want to read their research paper you can download it from arXiv.

When we think of AI, what we really mean is more correctly called AGI (artificial general intelligence): software which reasons in a general fashion about the world instead of being really effective in one narrow domain of expertise (like currency markets or organic chemistry). In other words, software that thinks more like humans and less like computers.

Research teams at the University of Illinois at Chicago and in Hungary have been experimenting with an open source semantic reasoning software package called ConceptNet (source code here), which uses natural language words to stand for concepts. It is interesting to note that ConceptNet does not disambiguate the senses of words; instead it collapses all of a word's possible definitions into a single concept, which increases the amount of material available for reasoning about a given term. This can potentially result in some questionable logic chains, but also in useful output where other systems may not be able to respond at all. After they brought a copy of ConceptNet online and trained it with appropriately annotated corpora (ConceptNet comes with its own knowledge bases, but you can undoubtedly add your own information to the system), they gave it the same IQ test preschool kids are given, the WPPSI-III, which tests knowledge and reasoning capabilities. When they analyzed the results they discovered that it did about as well as an average four-year-old child: it did well when determining similarities between things and when using its vocabulary (as one would expect of software), less well on raw information (which is highly dependent upon cultural context and interpretation), and not terribly well on textual reasoning or comprehension. This may be an artefact of ConceptNet collapsing the senses of words together, because that would confuse the context in which words are interpreted; the evidence backing that up is that it did better on questions involving words with only a single concept attached to them (i.e., one definition only). IQ testing also appears to have poked some major holes in either the training corpora or the complexity of the semantic networks inside the engine. To be fair, they used ConceptNet4 in their experiments, while ConceptNet5 is the latest release series (v5.0 came out on 28 October 2011, v5.4.1 in September of 2015).
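To see why collapsing word senses cuts both ways, here's a minimal illustration (not ConceptNet's actual API or data format) of a semantic network where every assertion about any sense of "bank" lands on the same node:

```python
from collections import defaultdict

# Hypothetical subject-relation-object assertions, in the spirit of a
# semantic network; the specific triples are made up for illustration.
assertions = [
    ("bank", "IsA", "financial institution"),  # the money sense
    ("bank", "PartOf", "river"),               # the river sense
    ("bank", "UsedFor", "storing money"),
    ("financial institution", "AtLocation", "city"),
]

graph = defaultdict(list)
for subject, relation, obj in assertions:
    graph[subject].append((relation, obj))  # all senses merge onto one node

print(graph["bank"])
# The single "bank" node now mixes both senses.
```

The merged node gives the reasoner more to work with, so it can answer where a stricter system would have nothing to say, but it also permits chains like "a bank is part of a river, and is used for storing money," which is exactly the kind of context confusion that would hurt on comprehension questions.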
Of course, here's their paper at arXiv.