TensorFlow: Second Generation Deep Learning System


At the recent BayLearn 2015 (Bay Area Machine Learning Symposium), Jeff Dean of Google delivered one of the keynotes, “Large-Scale Deep Learning for Intelligent Computer Systems.” The talk’s abstract, reproduced below, focuses on Google’s work on the newly announced TensorFlow:

Over the past few years, we have built two generations of computer systems for training and deploying neural networks, and then applied these systems to a wide variety of problems that have traditionally been very difficult for computers.  We have made significant improvements in the state-of-the-art in many of these areas, and our software systems and algorithms have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks.  In this talk, I’ll highlight some of the lessons we have learned in using our first-generation distributed training system and discuss some of the design choices in our second-generation system.  I’ll then discuss ways in which we have applied this work to a variety of problems in Google’s products, usually in close collaboration with other teams.

TensorFlow has been released as open source software and can be obtained HERE.
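
For readers curious what a TensorFlow program looks like, below is a minimal sketch assuming the original 2015-era graph API (tf.constant, tf.add, tf.Session); the values and names are purely illustrative, not taken from the talk.

# Minimal sketch of a TensorFlow program (2015-era graph API); values are illustrative.
import tensorflow as tf

# Build a small computation graph of constant nodes and an add op.
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
total = tf.add(a, b, name="total")

# Launch the graph in a session and evaluate the result.
with tf.Session() as sess:
    print(sess.run(total))  # prints 5.0

The graph is defined first and only executed when run inside a session, which is what lets TensorFlow distribute and deploy the same computation across different hardware.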
