Things You Can Do with a Recurrent Neural Network

In this video presentation from Linux.conf.au 2015 in Auckland, New Zealand, Douglas Bagnall examines a particularly hot topic in deep learning: recurrent neural networks and the many things you can do with them.

Large Scale Deep Learning with TensorFlow

In this video presentation from the Spark Summit 2016 conference in San Francisco, Google’s Jeff Dean examines large scale deep learning with the TensorFlow framework. Jeff joined Google in 1999 and is currently a Google Senior Fellow.

Visualizing and Understanding Deep Neural Networks

In the video presentation below, Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc., speaks about large convolutional neural networks. These networks have recently demonstrated impressive object recognition performance, making real-world applications possible. However, there has been no clear understanding of why they perform so well or how they might be improved.
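As a much simpler cousin of the visualization techniques discussed in the talk, the sketch below just plots the first-layer convolution filters of a pretrained network. The choice of VGG16 and the Keras layer name block1_conv1 are illustrative assumptions, not something taken from the presentation; Zeiler's own approach (deconvolutional networks) goes well beyond this.

```python
import matplotlib.pyplot as plt
import tensorflow as tf

# Load a pretrained CNN; VGG16 is an arbitrary, illustrative choice.
model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)

# First conv layer kernel has shape (3, 3, 3, 64): height, width, RGB channels, filters.
filters, _ = model.get_layer("block1_conv1").get_weights()
filters = (filters - filters.min()) / (filters.max() - filters.min())  # rescale to [0, 1]

# Show the first 32 filters as tiny RGB images.
fig, axes = plt.subplots(4, 8, figsize=(8, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(filters[:, :, :, i])
    ax.axis("off")
plt.show()
```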

Deep Learning – Theory and Applications

The video presentation below, “Deep Learning – Theory and Applications,” is from the July 23rd SF Machine Learning Meetup at the Workday Inc. San Francisco office. The featured speaker is Ilya Sutskever, who received his Ph.D. in 2012 from the University of Toronto, where he worked with deep learning luminary Geoffrey Hinton.

“Deep Learning” Book Chapter Walk-Throughs by Ian Goodfellow

Here’s a tremendous learning resource for deep learning practitioners – a complete set of video walk-throughs, one for each chapter of the recent book “Deep Learning” by Goodfellow, Bengio, and Courville. The book is considered one of the finest texts on the subject, and the video series is an excellent way to work through all of its material.

TensorFlow Tutorial – Simple Linear Model

In the excellent tutorial video below, Magnus Erik Hvass Pedersen demonstrates the basic workflow of using TensorFlow with a simple linear model. You should be familiar with basic linear algebra, Python, and the Jupyter Notebook editor. It also helps if you have a basic understanding of machine learning and classification.
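To make that workflow concrete, here is a minimal sketch of the kind of model the tutorial covers: a linear (softmax) classifier trained by gradient descent. This is not the tutorial's own code; it assumes TensorFlow 2.x and substitutes random toy data for the tutorial's dataset.

```python
import tensorflow as tf

# Toy data: 100 samples, 5 features, 3 classes (stand-ins for a real dataset).
x = tf.random.normal([100, 5])
labels = tf.random.uniform([100], maxval=3, dtype=tf.int32)
y = tf.one_hot(labels, depth=3)

# Linear model: logits = x @ W + b
W = tf.Variable(tf.zeros([5, 3]))
b = tf.Variable(tf.zeros([3]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(100):
    with tf.GradientTape() as tape:
        logits = tf.matmul(x, W) + b
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))

# Classification accuracy on the training data.
accuracy = tf.reduce_mean(
    tf.cast(tf.argmax(logits, axis=1) == tf.cast(labels, tf.int64), tf.float32))
print(float(loss), float(accuracy))
```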

How Can We Trust Machine Learning?

In this talk, Carlos Guestrin, CEO of Dato, Inc. and Amazon Professor of Machine Learning at the University of Washington, describes recent research and new tools that help companies gain trust and confidence in the models and predictions behind their core business applications.

New Theory Unveils the Black Box of Deep Learning

In the video presentation below (courtesy of Yandex) – “Deep Learning: Theory, Algorithms, and Applications” – Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, provides evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck.”
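For readers who want the core quantity in one line: the information bottleneck treats a hidden representation T of the input X as the result of a trade-off between compressing X and preserving information about the label Y. The standard formulation from the information-bottleneck literature (quoted here as background, not transcribed from the talk) is:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I(·;·) denotes mutual information and β > 0 controls how much predictive information about Y is retained per bit of compression of X.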

Making Computers Smarter with Google’s AI chief John Giannandrea

In the video below from the recent TechCrunch Disrupt SF 2017 event, Google’s John Giannandrea sits down with Frederic Lardinois to discuss the AI hype/worry cycle and the importance, limitations, and acceleration of machine learning. Giannandrea addresses the huge amount of hype surrounding AI right now, specifically the fear-mongering by some of Silicon Valley’s elite.

RMSprop Optimization Algorithm for Gradient Descent with Neural Networks

The video lecture below on the RMSprop optimization method is from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. For AI practitioners, this technique is a very useful addition to the toolbox.
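As a quick reference, here is a minimal NumPy sketch of the RMSprop update the lecture describes: keep an exponentially decaying average of squared gradients and divide each gradient by the square root of that average. The function name, hyperparameter values, and toy objective below are illustrative choices, not taken from the lecture.

```python
import numpy as np

def rmsprop_update(params, grads, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSprop step: maintain a decaying average of squared gradients
    and scale each gradient by its root-mean-square."""
    cache = decay * cache + (1.0 - decay) * grads ** 2
    params = params - lr * grads / (np.sqrt(cache) + eps)
    return params, cache

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
cache = np.zeros_like(w)
for _ in range(500):
    grad = 2.0 * w
    w, cache = rmsprop_update(w, grad, cache, lr=0.05)
print(w)  # approaches [0, 0]
```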