Numenta Brings Brain Theory to Machine Learning in New Paper

Numerous proposals have been offered for how intelligent machines might learn sequences of patterns, a capability believed to be essential to any intelligent system. Researchers at Numenta Inc. have published a new study, “Continuous Online Sequence Learning with an Unsupervised Neural Network Model,” which compares their biologically derived hierarchical temporal memory (HTM) sequence memory to traditional machine learning algorithms.

The paper has been published in the MIT Press journal Neural Computation 28, 2474–2504 (2016). You can read and download the paper here.

Authored by Numenta researchers Yuwei Cui, Subutai Ahmad, and Jeff Hawkins, the new paper serves as a companion piece to Numenta’s breakthrough research presented in “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex,” which appeared in Frontiers in Neural Circuits in March 2016.

The earlier paper described a biological theory of how networks of neurons in the neocortex learn sequences. In the new paper, the authors demonstrate how this theory, HTM sequence memory, can be applied to sequence learning and prediction on streaming data.
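
To make that setting concrete, here is a minimal sketch of the continuous streaming protocol the paper studies: predict the next element before seeing it, score the prediction, then update the model once and move on. The FirstOrderModel below is a deliberately simple, hypothetical stand-in used only to illustrate the protocol; it is not Numenta’s HTM algorithm.

```python
from collections import Counter, defaultdict

class FirstOrderModel:
    """Toy stand-in for a sequence learner: it just counts transitions."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def predict(self, current):
        """Most likely next symbol given the current one, or None."""
        counts = self.transitions[current]
        return counts.most_common(1)[0][0] if counts else None

    def learn(self, current, nxt):
        """Single-pass update: no stored history, no batch retraining."""
        self.transitions[current][nxt] += 1

# Streaming protocol: at each step, predict the next element *before*
# observing it, score the prediction, then learn from the observation.
stream = list("abcabcabdabcabc")
model = FirstOrderModel()
correct = total = 0
for prev, cur in zip(stream, stream[1:]):
    if model.predict(prev) == cur:
        correct += 1
    total += 1
    model.learn(prev, cur)
print("online accuracy: %.2f" % (correct / total))
```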

“Our primary goal at Numenta is to understand, in detail, how the neocortex works. We believe the principles we learn from the brain will be essential for creating intelligent machines, so a second part of our mission is to bridge the two worlds of neuroscience and AI. This new work demonstrates progress towards that goal,” Hawkins commented.

In the new paper, HTM sequence memory is compared with four popular statistical and machine learning techniques: ARIMA, a statistical method for time-series forecasting (Durbin & Koopman, 2012); extreme learning machine (ELM), a feedforward network with sequential online learning (Huang, Zhu, & Siew, 2006); and two recurrent networks, long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and echo state networks (ESN) (Jaeger & Haas, 2004).
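
A comparison like this hinges on scoring every model identically as the stream unfolds. One common convention is a sliding-window error that is updated per sample; the sketch below is a generic illustration of that idea (the window length and the metric are arbitrary choices here, not necessarily those used in the paper).

```python
from collections import deque

def rolling_mae(errors, window=100):
    """Mean absolute error over a sliding window, updated per sample."""
    buf = deque(maxlen=window)
    out = []
    for e in errors:
        buf.append(abs(e))
        out.append(sum(buf) / len(buf))
    return out

# e.g. errors[t] = prediction_t - observation_t, for any of the models
print(rolling_mae([1.0, -2.0, 0.5, 0.0], window=2))  # [1.0, 1.5, 1.25, 0.25]
```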

The results in this paper show that HTM sequence memory achieves prediction accuracy comparable to these other techniques. However, the HTM model also exhibits several properties that are critical for streaming data applications, including:

  • Continuous online learning
  • Ability to make multiple simultaneous predictions (see the sketch after this list)
  • Robustness to sensor noise and fault tolerance
  • Good performance without task-specific tuning
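
The second property, multiple simultaneous predictions, means the model can report every plausible continuation of an ambiguous sequence at once rather than committing to a single point forecast. A hypothetical illustration follows; the transition counts and the threshold are made-up values, and the counting model again stands in for HTM.

```python
from collections import Counter

# Hypothetical counts for an ambiguous context: "AB" was followed by
# "C" six times and by "D" four times in the stream so far.
counts_after_AB = Counter({"C": 6, "D": 4})

def simultaneous_predictions(counts, threshold=0.2):
    """Return every next symbol whose estimated probability clears the
    threshold, instead of collapsing to one most-likely answer."""
    total = sum(counts.values())
    return {sym: n / total for sym, n in counts.items() if n / total >= threshold}

print(simultaneous_predictions(counts_after_AB))  # {'C': 0.6, 'D': 0.4}
```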

“Many existing machine learning techniques demonstrate some of these properties,” Cui noted, “but a truly powerful system for streaming analytics should have all of them.”

The HTM sequence memory algorithm is something machine learning experts can test and incorporate into a broad range of applications. In keeping with Numenta’s open research philosophy, the Python source code for replicating the graphs in the paper can be found here. Numenta also welcomes questions and discussion about the paper on the HTM Forum, or by contacting the authors directly.

* Cui, Y., Ahmad, S., and Hawkins, J. (2016). Continuous Online Sequence Learning with an Unsupervised Neural Network Model. Neural Computation 28(11), 2474–2504. doi:10.1162/NECO_a_00893

* Hawkins, J., and Ahmad, S. (2016). Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex. Frontiers in Neural Circuits 10. doi:10.3389/fncir.2016.00023

 
