NVIDIA “Drop-In” Software Simplifies Acceleration of Deep Learning Applications

NVIDIA has introduced easily deployable software that helps developers harness the power of GPU acceleration for groundbreaking deep learning applications in areas such as image classification, video analytics, speech recognition and natural language processing.

A programming library built on the CUDA® parallel programming model, NVIDIA® cuDNN uses GPUs to accelerate deep learning training by up to 10x compared with CPU-only methods. With its easy-to-deploy, drop-in design, cuDNN lets developers rapidly develop and optimize new training models and build more accurate applications on GPU accelerators.

Deep learning is a fast-growing segment of machine learning that involves the creation of sophisticated, multi-level or “deep” neural networks. These networks enable powerful computer systems to learn to recognize patterns, objects and other items by analyzing massive amounts of training data. The use of GPUs to accelerate deep learning applications is growing dramatically, as researchers and programmers increasingly recognize the significant benefits GPUs offer in accelerating the massively data-intensive training process.

Researchers at the University of California, Berkeley have integrated cuDNN into Caffe, one of the world’s most widely used frameworks for developing deep learning applications.

In addition, over 90 percent of the participating teams, and three of the four winners, in the prestigious 2014 ImageNet Large Scale Visual Recognition Challenge used GPUs to enable their groundbreaking deep learning work.
