RMSprop Optimization Algorithm for Gradient Descent with Neural Networks


A big part of AI and deep learning today is tuning and optimizing algorithms for speed and accuracy. Many of today’s deep learning algorithms rely on the gradient descent optimization method, and one of the most popular and widely used ways to enhance gradient descent is a technique called RMSprop, or root mean square propagation. Interestingly, unlike related methods such as exponentially weighted averages, bias correction, momentum, Adam, and learning rate decay, RMSprop did not grow out of academic publication. Rather, it was first described in a Coursera class on neural networks taught by Geoffrey Hinton.
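To give a feel for how RMSprop enhances plain gradient descent, here is a minimal sketch of its standard update rule: maintain an exponentially decaying average of squared gradients, then divide the learning rate by the square root of that average. The function name, the hyperparameter defaults, and the toy objective below are illustrative choices, not anything specific to Hinton's lecture.

```python
import numpy as np

def rmsprop_update(w, grad, cache, lr=0.01, beta=0.9, eps=1e-8):
    """One RMSprop step (illustrative sketch).

    cache holds a running average of squared gradients; dividing by
    its square root scales the step size per parameter, damping
    dimensions with large, noisy gradients.
    """
    cache = beta * cache + (1 - beta) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w = np.array([0.0])
cache = np.array([0.0])
for _ in range(500):
    grad = 2 * (w - 3.0)
    w, cache = rmsprop_update(w, grad, cache)
print(w)  # converges toward 3.0
```

Because each parameter is scaled by its own gradient history, RMSprop tends to make faster progress than vanilla gradient descent when gradient magnitudes vary widely across dimensions.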

The video lecture below on the RMSprop optimization method is from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. For all you AI practitioners out there, this technique should supplement your toolbox in a very useful way. The slides for the presentation are available HERE.



