
Visualizing and Understanding Deep Neural Networks

In the video presentation below, Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc., speaks about large convolutional neural networks. These networks have recently demonstrated impressive object recognition performance, making real-world applications possible. However, there is still no clear understanding of why they perform so well or how they might be improved.

Deep Learning – Theory and Applications

The video presentation below, “Deep Learning – Theory and Applications,” is from the July 23rd SF Machine Learning Meetup at the Workday Inc. San Francisco office. The featured speaker is Ilya Sutskever, who received his Ph.D. in 2012 from the University of Toronto, where he worked with deep learning luminary Geoffrey Hinton.

“Deep Learning” Book Chapter Walk-Throughs by Ian Goodfellow

Here’s a tremendous learning resource for Deep Learning practitioners – a complete set of video walk-through presentations, one for each chapter of the book “Deep Learning” by Goodfellow, Bengio, and Courville. This book is considered one of the finest texts on the subject, and the video series is an excellent way to advance through all of its material.

TensorFlow Tutorial – Simple Linear Model

In the excellent tutorial video below, Magnus Erik Hvass Pedersen demonstrates the basic workflow of using TensorFlow with a simple linear model. You should be familiar with basic linear algebra, Python, and the Jupyter Notebook editor. It also helps to have a basic understanding of machine learning and classification.
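As a companion to the video, here is a minimal sketch of the kind of linear classifier the tutorial builds. One hedge: the video uses the older graph-style TensorFlow 1 API, so this eager TensorFlow 2 adaptation is illustrative rather than the tutorial's exact code.

import tensorflow as tf

# Load MNIST and flatten the 28x28 images into 784-dim vectors in [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x = tf.reshape(tf.cast(x_train, tf.float32) / 255.0, (-1, 784))
y = tf.one_hot(y_train, 10)

# The linear model itself: logits = x @ W + b.
W = tf.Variable(tf.zeros((784, 10)))
b = tf.Variable(tf.zeros(10))

opt = tf.keras.optimizers.SGD(learning_rate=0.5)
for step in range(200):
    with tf.GradientTape() as tape:
        logits = x @ W + b
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    grads = tape.gradient(loss, [W, b])
    opt.apply_gradients(zip(grads, [W, b]))

pred = tf.argmax(x @ W + b, axis=1)
acc = tf.reduce_mean(tf.cast(pred == tf.cast(y_train, tf.int64), tf.float32))
print(float(acc))  # around 0.9 – typical for a purely linear model on MNIST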

How Can We Trust Machine Learning?

In this talk, Carlos Guestrin, CEO of Dato, Inc. and Amazon Professor of Machine Learning at the University of Washington, describes recent research and new tools that give companies the means to gain trust and confidence in the models and predictions behind their core business applications.

New Theory Unveils the Black Box of Deep Learning

In the video presentation below (courtesy of Yandex) – “Deep Learning: Theory, Algorithms, and Applications” – Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, provides evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck.”
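The post does not spell the theory out, but the underlying objective is compact enough to state. In the information bottleneck formulation (introduced by Tishby, Pereira, and Bialek), a learned representation T of the input X should be as compressed as possible while staying predictive of the label Y:

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

Here I(·;·) denotes mutual information and β sets the trade-off between compressing X and preserving information about Y. Tishby's argument is that the layers of a deep network move progressively closer to the optimal points of this trade-off as training proceeds.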

Making Computers Smarter with Google’s AI chief John Giannandrea

In the video below from the recent TechCrunch Disrupt SF 2017 event, Google’s John Giannandrea sits down with Frederic Lardinois to discuss the AI hype/worry cycle and the importance, limitations, and acceleration of machine learning. Giannandrea addresses the huge amount of hype surrounding AI right now, specifically the fear-mongering by some of Silicon Valley’s elite.

RMSprop Optimization Algorithm for Gradient Descent with Neural Networks

The video lecture below on the RMSprop optimization method is from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. For all you AI practitioners out there, this technique is a very useful addition to your toolbox.
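The essence of the technique fits in a few lines: keep a running average of squared gradients and divide each new gradient by its root, so each parameter gets its own adaptive step size. Here is a minimal NumPy sketch (the function name and hyperparameter values are illustrative, not taken from the lecture):

import numpy as np

def rmsprop_update(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # Running average of squared gradients (per parameter).
    cache = decay * cache + (1 - decay) * grad ** 2
    # Scale the step by the root of that average: persistently large
    # gradients shrink the effective learning rate, small ones enlarge it.
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, cache = 0.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_update(w, 2 * (w - 3), cache)
print(w)  # converges to roughly 3.0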

Intel AI Lounge – Bryce Olson, Global Marketing Director, Health and Life Sciences at Intel

In this SiliconANGLE video interview, Bryce Olson, Intel Global Marketing Director for Health and Life Sciences and a stage 4 cancer survivor, describes how he set out to find better treatment for his illness through technology. By using AI to sift through his DNA data, he found a path to treatment, and he is now in remission.

Machine Learning Interpretability with Driverless AI

In this presentation, our friends Andy Steinbach, Head of AI in Financial Services at NVIDIA, and Patrick Hall, Senior Director of Product at H2O.ai, discuss Machine Learning Interpretability with Driverless AI. Interpretability is a hugely popular topic in machine learning. Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.
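The talk centers on H2O's own tooling, but for a generic taste of the genre, here is a sketch of permutation feature importance, one of the simplest interpretability techniques. The scikit-learn model and dataset are illustrative stand-ins, not taken from the presentation.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an opaque model; a random forest stands in for the "black box".
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure how much held-out accuracy
# drops; a large drop means the model relied heavily on that feature.
rng = np.random.default_rng(0)
baseline = model.score(X_te, y_te)
drops = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drops.append(baseline - model.score(X_perm, y_te))

# Report the five most influential features.
for j in np.argsort(drops)[::-1][:5]:
    print(f"{data.feature_names[j]}: accuracy drop {drops[j]:.4f}")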