
HTM Meetup

November 3, 2017

Jeff Hawkins ([email protected])

Numenta Brain Theory Discoveries of 2016/2017

1) Reverse Engineer the Neocortex
- Biologically accurate theory
- Test empirically and via simulation

2) Enable technology based on cortical theory
- Active open source community
- Basis for Artificial General Intelligence
- IP licensing

We have made significant advances on the cortical theory.

Cortical Column Anatomy (Thomson & Bannister, 2003; Constantinople and Bruno, 2013)

Papers published March 30, 2016 and October 25, 2017; see all papers at Numenta.com/papers

1) Neuron model

2) How layers of neurons in cortex model sequences

3) How columns in cortex model objects through movement

4) Missing Ingredient!

Point Neuron Model (artificial neurons)

- Integrate-and-fire neuron: Lapicque, 1907
- Perceptron: Rosenblatt, 1962
- Deep learning: Rumelhart et al., 1986; LeCun et al., 2015

Real neurons are not like this!
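For contrast with the HTM neuron that follows, here is a minimal sketch of a point neuron, the unit these artificial-neuron models share: a weighted sum of inputs passed through a single nonlinearity, with no dendritic structure. The weights, bias, and ReLU choice are illustrative, not from the talk.

```python
import numpy as np

def point_neuron(inputs, weights, bias=0.0):
    """Classic point neuron: a weighted sum of inputs passed through a
    nonlinearity. Every synapse is treated identically; there is no
    dendritic structure and no notion of prediction."""
    return max(0.0, float(np.dot(inputs, weights) + bias))  # ReLU output

# Illustrative values only
x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.2, 0.8])
print(point_neuron(x, w))  # -> 1.3
```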

Real and HTM neurons recognize hundreds of unique patterns. Most recognized patterns act as predictions.

5K to 30K excitatory synapses
- 10% proximal, can cause a spike
- 90% distal, cannot cause a spike

Dendrites are pattern detectors: 15 co-active, co-located synapses have a big effect.

Figure: Real Neuron vs. HTM Neuron Model
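A minimal sketch of the HTM neuron idea above, assuming set-based inputs: distal dendritic segments act as independent pattern detectors, and a segment with roughly 15 co-active synapses puts the cell into a predictive (depolarized) state rather than making it fire; only proximal input causes a spike. The threshold of 15 comes from the slide; the class name and the proximal threshold of 10 are illustrative assumptions.

```python
class HTMNeuronSketch:
    """Simplified HTM neuron: proximal input drives firing,
    distal segments only predict (depolarize) the cell."""

    SEGMENT_THRESHOLD = 15  # ~15 co-active, co-located synapses (per the slide)

    def __init__(self, proximal_synapses, distal_segments):
        # proximal_synapses: set of input indices that can cause a spike (~10%)
        # distal_segments: list of sets of input indices; cannot cause a spike (~90%)
        self.proximal_synapses = proximal_synapses
        self.distal_segments = distal_segments

    def is_predictive(self, active_inputs):
        """A single matching distal segment depolarizes the cell (a prediction)."""
        return any(len(segment & active_inputs) >= self.SEGMENT_THRESHOLD
                   for segment in self.distal_segments)

    def fires(self, active_inputs, proximal_threshold=10):
        """Only sufficient proximal input makes the cell actually spike.
        The proximal threshold of 10 is an illustrative assumption."""
        return len(self.proximal_synapses & active_inputs) >= proximal_threshold
```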

Learning is by Rewiring, Forming New Synapses, Not by Changing Synaptic Weights (figure: Biology vs. HTM)
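A hedged sketch of learning by rewiring: a dendritic segment strengthens synapses to cells that were active, weakens and eventually prunes the rest, and grows new synapses to a sample of active cells it was not yet connected to. Permanence-style bookkeeping appears in published HTM work, but the specific constants and function names here are illustrative.

```python
import random

CONNECTED_PERM = 0.5   # permanence above this means the synapse is "connected"
PERM_INCREMENT = 0.1
PERM_DECREMENT = 0.05

def learn_on_segment(segment, active_cells, max_new_synapses=10):
    """segment: dict mapping presynaptic cell id -> permanence."""
    # Reinforce synapses to cells that were active, weaken the rest.
    for cell in list(segment):
        if cell in active_cells:
            segment[cell] = min(1.0, segment[cell] + PERM_INCREMENT)
        else:
            segment[cell] = max(0.0, segment[cell] - PERM_DECREMENT)
            if segment[cell] == 0.0:
                del segment[cell]          # synapse is pruned away
    # Grow new synapses to a sample of active cells not yet on this segment.
    candidates = [c for c in active_cells if c not in segment]
    for cell in random.sample(candidates, min(max_new_synapses, len(candidates))):
        segment[cell] = CONNECTED_PERM     # new synapse starts connected (illustrative)
```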

Modeling a Cellular Layer

HTM Sequence Memory

Figure: cell activity before learning vs. after learning for the sequences A-B-C-D and X-B-C-Y. After learning, the shared inputs are represented by context-specific cells (A, B', C', D' vs. X, B'', C'', Y'').

Same columns, but only one cell active per column.

Sequences A-B-C-D vs. X-B-C-Y
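A toy sketch of what the figure shows, under the assumption that context can be reduced to the single previous input: each column holds several cells, and after learning the cell that becomes active for an input depends on the preceding context, so "B after A" (B') and "B after X" (B'') are different cells in the same column. This is not the published temporal memory algorithm, just an illustration of the one-cell-per-column idea.

```python
class SequenceMemorySketch:
    """Toy high-order sequence memory: same input, different cell per context."""

    def __init__(self, cells_per_column=4):
        self.cells_per_column = cells_per_column
        self.context_to_cell = {}   # (previous_input, current_input) -> cell index
        self.next_free = {}         # current_input -> next unused cell index

    def activate(self, previous_input, current_input):
        """Return which cell in the column for `current_input` becomes active."""
        key = (previous_input, current_input)
        if key not in self.context_to_cell:
            idx = self.next_free.get(current_input, 0)
            self.context_to_cell[key] = idx % self.cells_per_column
            self.next_free[current_input] = idx + 1
        return self.context_to_cell[key]

memory = SequenceMemorySketch()
# "B" after "A" and "B" after "X" land on different cells of the same column:
print(memory.activate("A", "B"))  # cell 0 (this toy's B')
print(memory.activate("X", "B"))  # cell 1 (this toy's B'')
```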


1) Neuron model

2) How layers of neurons in cortex model sequences

3) How columns in cortex model objects through movement

4) Missing Ingredient!

Figure: cortical column layers (L2/3 output layer, L4 input layer, L5, L6a, L6b), with output signals and an allocentric "location on object" signal.

HTM Sensorimotor Inference Theory (single column)

1) Every column determines allocentric location of input

2) As sensor moves, column is exposed to different feature/locations on object

3) Output layer “pools” feature/locations. Stable over movement.

4) Columns learn models of complete objects

Figure: a single column modeling an object. The input layer represents "Feature/Location" (the sensed feature at a location on the object); the output layer represents the "Object".
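A hedged sketch of the single-column loop described above: the input layer represents (location, feature) pairs, and the object representation is whatever remains consistent with every pair sensed as the sensor moves. The class, object names, and sensations are illustrative, not Numenta's implementation.

```python
class SingleColumnSketch:
    """Toy single-column sensorimotor model: learns objects as sets of
    (location, feature) pairs and narrows candidates as the sensor moves."""

    def __init__(self):
        self.objects = {}          # object name -> set of (location, feature)

    def learn(self, name, sensations):
        self.objects[name] = set(sensations)

    def infer(self, sensations):
        """Each sensed (location, feature) prunes inconsistent objects;
        the stable 'output layer' is the surviving candidate set."""
        candidates = set(self.objects)
        for sensation in sensations:
            candidates = {o for o in candidates if sensation in self.objects[o]}
        return candidates

column = SingleColumnSketch()
column.learn("mug",  {("top", "rim"), ("side", "handle"), ("bottom", "flat")})
column.learn("bowl", {("top", "rim"), ("side", "curved"), ("bottom", "flat")})
print(column.infer([("top", "rim")]))                      # still ambiguous (both objects)
print(column.infer([("top", "rim"), ("side", "handle")]))  # converges: {'mug'}
```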


HTM Sensorimotor Inference Theory (multiple columns)

Each column has partial knowledge of object.

Long range connections in output layer allow columns to vote.

Inference is much faster with multiple columns.

Figure: three columns (Column 1, Column 2, Column 3), each receiving its own feature and location input. With a single column, objects are recognized by integrating inputs over time; recognition is faster with multiple columns.
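The voting step can be sketched as an intersection of candidate sets: each column contributes the objects consistent with its own partial sensations, and the long-range connections keep only the objects all columns agree on, which is why one touch per column can suffice. The example sets below are made up; the same idea could reuse the SingleColumnSketch above.

```python
def vote(candidate_sets):
    """Intersect the candidate-object sets contributed by each column.
    Lateral connections in the output layer keep only the objects that
    every column's partial evidence agrees on."""
    candidate_sets = list(candidate_sets)
    return set.intersection(*candidate_sets) if candidate_sets else set()

# One simultaneous touch per column: each column alone is ambiguous,
# but the three votes together identify the object.
votes = [{"mug", "bowl"}, {"mug", "can"}, {"mug", "bowl", "can"}]
print(vote(votes))  # {'mug'}
```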

Simulations: YCB Object Benchmark

• Yale-CMU-Berkeley (YCB) Object Benchmark (Calli et al., 2017)
  – Diverse set of objects designed for robotic grasping tasks
  – 80 common physical objects
  – Includes 78 complete high-resolution 3D CAD files

• Virtual hand using the Unity game engine
• Inputs
  – Curvature-based sensor on each fingertip
  – Both inputs are highly sparse binary vectors (see the sketch after this list)
• Network with 4096 neurons per layer per column
• Results
  – 98.7% recall accuracy (77/78 objects uniquely classified)
  – Convergence time depends on the object and the sequence of sensations
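The sketch referenced in the inputs list above shows what a highly sparse binary vector and its basic comparison look like. The 4096 size matches the layer size on the slide; the roughly 2% sparsity and the overlap measure are assumptions drawn from common HTM practice, not from the talk.

```python
import numpy as np

N_CELLS = 4096        # layer size from the slide
N_ACTIVE = 80         # ~2% active bits (assumed sparsity, not from the slide)

def random_sdr(rng):
    """A sparse binary vector with a small fixed number of active bits."""
    sdr = np.zeros(N_CELLS, dtype=bool)
    sdr[rng.choice(N_CELLS, size=N_ACTIVE, replace=False)] = True
    return sdr

def overlap(a, b):
    """Number of shared active bits; the basic similarity measure for SDRs."""
    return int(np.count_nonzero(a & b))

rng = np.random.default_rng(0)
a, b = random_sdr(rng), random_sdr(rng)
print(overlap(a, a), overlap(a, b))  # identical vectors overlap fully; random ones barely
```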

Simulations: YCB Benchmark

Figures: pairwise confusion between objects after 1, 2, 3, 4, 6, and 10 touches; convergence over time.

Convergence with Multiple Columns

Evidence for Allocentric Location in Cortex

• Our model predicts that sensory regions will contain cells tuned to the location of features in an object's reference frame
• Movement dynamically modulates cell responses even in primary sensory regions (Trotter and Celebrini, 1999; Werner-Reiss et al., 2003)
• Grid cells solve a similar problem: the location of the body in the environment
• "Border ownership cells" (Williford & von der Heydt, 2015)

Summary

1) HTM Neuron Model

- Biologically more realistic
- Functionally more powerful
- Recognizes hundreds of unique patterns
- Most patterns put the neuron into a "predictive" state
- Learning is via growth of new synapses

2) HTM Cellular Layer Model

- Learns predictive models of sensory input
- Applied to temporal sequences
- Applied to sensorimotor sequences

3) Deduced Allocentric Location is Determined in Every Column

- Every column learns complete models of objects
- Multiple columns infer quickly

4) Allocentric Location Changes “everything”

- Columns and regions are far more powerful than previously thought
- Changes how we think about hierarchy
- Progress on understanding the rest of cortex will accelerate
- Implications for robotics and machine intelligence