Notes from the Artificial Intelligence and the Singularity conference in September.

08 October 2014

As I've mentioned several times before, every couple of weeks the Brighter Brains Institute in California holds a Transhuman Visions symposium, and each time the topic of presentation and discussion is a little different. Last month's theme was Artificial Intelligence and the Singularity, a topic of no small amount of debate in the community. Per usual, I scribbled down a couple of pages of notes that I hope may be interesting and enlightening to the general public. A few of my own insights may be mixed in. Later on, a lot of the material got mixed together because I only wrote down assorted interesting bits and stopped separating the speakers in my notes. My bad. As always, all the good stuff is below the cut...

Monica Anderson of Sensai and Syntience, Inc. - Doing AI Wrong: We Can Always Pull the Plug

  • Dual process theory - consequences for AI - theme questions
  • Daniel Kahneman - Thinking, Fast and Slow
  • Two modes of thought: Intuitive understanding (fast), logical reasoning (slow)
  • Understanding is a parallel process, very fast, subconscious, involuntary
  • Once on, understanding can't be switched off
  • Bandwidth of the human eye is about 10 megabits per second
  • Bandwidth of conscious reasoning is far smaller, on the order of tens of bits per second
  • Data reduction, epistemic reduction
  • Libet delay is about 500 milliseconds (one-half second)
  • Reductionism is exactly the use of models. Simplification of a rich reality.
  • Reasoning requires models. So, AI tried to model the world.
  • Comprehensive world models are intractable. Frame problem. McCarthy and Hayes. Consequently, this limited AI to toy problems.
  • Holistic, context-exploiting AI to attack understanding problems.
  • Doing what we do without reasoning
  • All intelligent agents are fallible. The world changes. Make mistakes. Limited by world complexity, not technology. More AGI means a more complex world.
  • Recursive self-improvement is inherently limited. Understanding is a requirement.
  • Consciousness is not required. A red herring? Writable long term memory is not required. Knowledgebase freeze after training. No need for multiple modes of sensory input. Text is fine. No mobility, embodiment, or enactment. Even agency is not required. They do what they're told. No personhood.
  • The kind of AGI is more important than its IQ.
  • The same algorithm works for other domains, it's just trainable.
  • Human equivalence will eventually arise as technology progresses.
  • Recognize what you've encountered before. Track failures and successes. Discard old patterns that aren't useful anymore. (See the toy sketch after this list.)
  • "You can only learn that which you almost know." --Patrick Winston
  • The AGI software has to make its own models.
  • Machines capable of autonomous reduction. Understanding.
  • Understanding must be implemented without using models. Exploits context.
  • Can operate on scant evidence, resistant to ambiguity and misinformation.
  • Cognitive biases are an emergent property of understanding.
  • Salience - Knowing what's important.
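
A purely illustrative aside on the "track failures and successes, discard stale patterns" bullet above: here's a toy Python sketch of that kind of bookkeeping. The class, names, and thresholds are hypothetical on my part, not anything from Syntience's actual (explicitly model-free) implementation.

```python
from collections import defaultdict

class PatternTracker:
    """Toy bookkeeping for learned patterns: score them by outcome, prune the useless ones."""

    def __init__(self, min_uses=5, min_success_rate=0.2):
        self.stats = defaultdict(lambda: {"successes": 0, "failures": 0})
        self.min_uses = min_uses
        self.min_success_rate = min_success_rate

    def seen_before(self, pattern):
        # "Recognize what you've encountered before."
        return pattern in self.stats

    def record(self, pattern, success):
        # "Track failures and successes."
        self.stats[pattern]["successes" if success else "failures"] += 1

    def prune(self):
        # "Discard old patterns that aren't useful anymore."
        for pattern, s in list(self.stats.items()):
            uses = s["successes"] + s["failures"]
            if uses >= self.min_uses and s["successes"] / uses < self.min_success_rate:
                del self.stats[pattern]

tracker = PatternTracker()
for _ in range(6):
    tracker.record("all-caps-means-anger", success=False)   # consistently fails
tracker.record("greeting-then-question", success=True)
tracker.prune()
print(sorted(tracker.stats))   # the consistently failing pattern has been discarded
```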

Peter Voss - Approaches to AGI

  • There are some model-based approaches that are useful.
  • Understanding is not subconscious. Parsing is automatic, and understanding requires conscious thought. (Maybe it's precomputed?)
  • Artificial General Intelligence == human equivalent intelligence, learning systems.
  • Learning, reasoning, general problem solving capability.
  • Capable of acquiring knowledge and learning how to do new things. Ongoing, cumulative, contextual, adaptive. Autonomous. Experience-based improvement and adaptation.
  • Potentially, many separate techniques can be applied in agent-based architectures that aid one another. Interdisciplinary.
  • Top down versus bottom up approaches
  • Optimize for certain cognitive biases temporarily?
  • http://adaptiveai.com/faq/
  • Human level cognition, not ability. "Helen Hawking."
  • Tool use is a necessity. Goal directed. Over a dozen learning modes.
  • Opaqueness of architecture and implementation makes self improvement of any kind correspondingly difficult and unlikely. (If you can't figure out how it works, the software may not be able to figure out how it works, either. Either way, modifying the AGI software to improve it is significantly more difficult.)
  • "Abundance with a tiny footprint."
  • http://www.agi-3.com/
  • It's not what the software is doing, it's how it's doing it.
  • "peter voss rational ethics"

Nicole Sallak Anderson - Be the AGI You Wish To See In the World.

  • Nicole Sallak Anderson - eHuman Dawn
  • Humans are a mix of emotions, neurosis, and intelligence.
  • How we treat one another matters in AGI development.
  • Software is only as good as the engineer.

Gary Marcus - Smart Machines and What They Can Learn From People

  • Bayesian reasoning - P(a|b) = P(b|a) * P(a) / P(b). (Worked example after this list.)
  • Why?
  • People have a lust for silver bullets. Simple ontologies sell.
  • There is no reason to think that the brain is simple. There is complexity at all levels.
  • Cognitive Consilience
  • We don't even know how many kinds of neurons there are.
  • An excessive love of empiricism.
  • Too many researchers try to learn everything from scratch.
  • Neurophilia and physics envy.
  • Brain inspired stuff is limited in domain.
  • Most AI uses anything and everything other than neuromorphic structures.
  • Watson and Deep Blue are like this.
  • Hawkins' Numenta is arguably the best.
  • Deep learning uses the Hubel-Wiesel paradigm. (David Hubel. Torsten Wiesel.)
  • Hierarchies of feature detectors. Categorization. (This is how natural organic visual cortices work. See the small sketch after this list.)
  • Google's cat detector.
  • Tight limits on scale, positioning, and out-of-plane variance.
  • Deep learning is not good at natural language understanding. "Sentiment analysis."
  • Generalization functions interpolate, they don't extrapolate.
  • Classifier models are square pegs in round holes.
  • AGI != human replica
  • Humans are very good at inference.
  • Machines are very good at finding facts.
  • Humans handle contradictions well. Machines don't.
  • Humans are very good at determining which inferential principles apply to a given situation.
  • Humans care about causal principles. Explanations. Like children. AI puts probabilities on curves.
  • Humans are comfortable thinking about things generically.
  • (Could Korzybski's general semantics help?)
  • Humans are good at approximation and taking shortcuts.
  • Humans are comfortable reasoning with incomplete information.
  • We use many different learning mechanisms simultaneously. Software uses just one - whichever one it's designed with.
  • Common sense!
  • Brains are not blank slates.
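
To make the Bayes bullet above concrete, here's a minimal Python sketch of the formula applied to a made-up diagnostic-test example. The numbers (1% base rate, 90% hit rate, 5% false positive rate) are purely illustrative, not anything from the talk.

```python
# Bayes' theorem: P(a|b) = P(b|a) * P(a) / P(b)
# Toy example (all numbers made up): a = "has the condition", b = "test came back positive".

p_a = 0.01              # prior: 1% base rate of the condition
p_b_given_a = 0.90      # likelihood: positive test 90% of the time when the condition is present
p_b_given_not_a = 0.05  # false positive rate: positive 5% of the time when it is absent

# Total probability of a positive test, marginalizing over a and not-a.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior: probability of the condition given a positive test.
p_a_given_b = p_b_given_a * p_a / p_b

print(f"P(a|b) = {p_a_given_b:.3f}")   # ~0.154, despite the 90% hit rate
```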
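
And a rough illustration of the "hierarchy of feature detectors" idea from the deep learning bullets: a tiny, hand-rolled two-stage detector in NumPy. This is only my sketch of Hubel-Wiesel-style layering (local feature maps feeding a higher-level score), not an actual deep learning system or anything shown at the conference.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1: simple, local feature detectors (edge orientations), loosely like V1 simple cells.
vertical_edge = np.array([[1.0, 0.0, -1.0]] * 3)
horizontal_edge = vertical_edge.T

# A toy 8x8 "image" containing a single vertical bar.
image = np.zeros((8, 8))
image[:, 3] = 1.0

# Layer 2: rectify and pool each feature map, then combine the pooled features
# with hand-picked weights standing in for a learned classifier layer.
v_map = np.maximum(convolve2d(image, vertical_edge), 0.0)    # ReLU-style rectification
h_map = np.maximum(convolve2d(image, horizontal_edge), 0.0)
features = np.array([v_map.max(), h_map.max()])              # max pooling over positions

category_weights = np.array([1.0, -1.0])                     # "vertical-ish" vs. "horizontal-ish"
print("pooled features:", features, "-> vertical-ish score:", features @ category_weights)
```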

Anya Petrova - The AI Spectrum

  • "If you study one neuron, you're in neuroscience. If you study two neurons, you're in psychology."
  • Consciousness and the Brain
  • The orbitofrontal cortex of the human brain. Behind the nose, near the amygdala, size of a silver dollar.
  • Where thought as we usually think of it happens.
  • The brain is a system of organs, all of which evolved separately.
  • Neurogenesis is an ongoing process.
  • Neurons are born in the hippocampus and migrate through the brain.
  • It takes about six weeks for neurons to fully mature.
  • The orbitofrontal cortex carries out directed, purposeful question asking.
  • Self awareness comes from the frontal cortex, behind the forehead.
  • Those two things together might make a good model for AGI.
  • Get clubs going with the grad students!
  • One reason we are aware is because the corpus callosum allows the hemispheres of the brain to communicate.