Slidecast: Announcing the Nvidia Deep Learning SDK


Marc Hamilton, Nvidia

In this slidecast, Marc Hamilton from Nvidia describes the latest updates to the company’s Deep Learning Platform.

“Great hardware needs great software. To help data scientists and developers make the most of the vast opportunities in deep learning, we’re announcing today at the International Supercomputing show, ISC16, a trio of new capabilities for our deep learning software platform. The three — NVIDIA DIGITS 4, CUDA Deep Neural Network Library (cuDNN) 5.1 and the new GPU Inference Engine (GIE) — are powerful tools that make it even easier to create solutions on our platform.”

  • DIGITS 4 introduces a new object detection workflow, enabling data scientists to train deep neural networks to find faces, pedestrians, traffic signs, vehicles and other objects in a sea of images. This workflow enables advanced deep learning solutions such as tracking objects from satellite imagery, security and surveillance, advanced driver assistance systems and medical diagnostic screening. When training a deep neural network, researchers must repeatedly tune various parameters to get high accuracy out of a trained model. DIGITS 4 can automatically train neural networks across a range of tuning parameters, significantly reducing the time required to arrive at the most accurate solution. The DIGITS 4 release candidate will be available this week as a free download for members of the NVIDIA developer program. Learn more at the DIGITS website.
  • cuDNN provides high-performance building blocks for deep learning and is used by all leading deep learning frameworks. Version 5.1 delivers accelerated training of deep neural networks, like the University of Oxford’s VGG and Microsoft’s ResNet, which won the 2015 ImageNet challenge. Each new version of cuDNN has delivered performance improvements over the previous one, accelerating the latest advances in deep neural networks and machine learning algorithms. The cuDNN 5.1 release candidate is available today as a free download for members of the NVIDIA developer program. Learn more and download the software at the cuDNN website.
  • GIE. The GPU Inference Engine is a high-performance deep learning inference solution for production environments. GIE optimizes trained deep neural networks for efficient runtime performance, delivering up to 16x better performance per watt on an NVIDIA Tesla M4 GPU versus the CPU-only systems commonly used for inference today. The time and power required to complete inference tasks are two of the most important considerations for deployed deep learning applications: they determine both the quality of the user experience and the cost of deploying the application. Using GIE, cloud service providers can more efficiently process images, video and other data in their hyperscale data center production environments with high throughput, and automotive manufacturers and embedded solutions providers can deploy powerful neural network models with high performance on their low-power platforms.
  • NVIDIA SDK. These software libraries, APIs and tools are used by the most popular game engines and hundreds of game titles; by GPU-accelerated applications and services running on cloud platforms like Amazon AWS, IBM Softlayer and Microsoft Azure; and by the most powerful supercomputers in the U.S. and around the world. The tools and libraries within the NVIDIA SDK are organized by application domain, making it easy for developers to quickly find what they need.
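The automatic parameter sweep described for DIGITS 4 can be pictured as a grid search: train the same model under each combination of tuning parameters and keep the configuration with the best validation accuracy. Below is a minimal Python sketch of that idea; the `train` function is a hypothetical stand-in, not the DIGITS API.

```python
# Sketch of a hyperparameter grid search, the idea behind DIGITS 4's
# automatic sweep. train() is a hypothetical stand-in for a real
# training run; it returns a mock validation accuracy.
from itertools import product

def train(learning_rate, batch_size):
    # Toy score that happens to peak at lr=0.01, batch_size=64.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 1000

def sweep(learning_rates, batch_sizes):
    """Train at every grid point and return (accuracy, lr, batch_size) of the best."""
    best = None
    for lr, bs in product(learning_rates, batch_sizes):
        acc = train(lr, bs)
        if best is None or acc > best[0]:
            best = (acc, lr, bs)
    return best

best_acc, best_lr, best_bs = sweep([0.1, 0.01, 0.001], [32, 64, 128])
print(best_lr, best_bs)  # the grid point with the highest mock accuracy
```

In practice DIGITS runs the real training jobs in parallel on GPUs; the selection logic is the same.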
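The "building blocks" cuDNN accelerates are primitives such as convolution. As a plain-Python illustration of what such a primitive computes (this is not the cuDNN API, which is a C library), here is a direct 2D cross-correlation over a single-channel image:

```python
# Naive 2D cross-correlation: the core operation cuDNN accelerates
# on the GPU, shown here in plain Python for clarity only.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1    # output height (valid padding)
    ow = len(image[0]) - kw + 1  # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A horizontal-gradient filter applied to a 4x4 ramp image.
img = [[c for c in range(4)] for _ in range(4)]
k = [[0, 0, 0], [-1, 0, 1], [0, 0, 0]]
print(conv2d(img, k))  # -> [[2, 2], [2, 2]]
```

Frameworks call the equivalent cuDNN routine instead of code like this; the library's value is that each new version makes this same computation faster on NVIDIA GPUs.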
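To see why performance per watt matters for deployed inference, consider a back-of-envelope energy-cost comparison. All numbers below are hypothetical illustrations chosen to reflect the claimed 16x advantage, not NVIDIA-published benchmarks:

```python
# Hypothetical energy-cost sketch: a 16x performance-per-watt advantage
# translates directly into a 16x lower energy bill for the same workload.
def energy_cost_per_billion_images(images_per_sec, watts, dollars_per_kwh=0.10):
    seconds = 1e9 / images_per_sec
    kwh = watts * seconds / 3600 / 1000
    return kwh * dollars_per_kwh

# Illustrative numbers only: CPU at 100 img/s @ 200 W vs GPU at 800 img/s @ 100 W,
# i.e. 0.5 vs 8 images per second per watt (16x).
cpu = energy_cost_per_billion_images(images_per_sec=100, watts=200)
gpu = energy_cost_per_billion_images(images_per_sec=800, watts=100)
print(round(cpu / gpu, 1))  # -> 16.0
```

The same ratio governs how many servers a hyperscale data center needs for a fixed throughput, which is the cost argument the GIE announcement is making.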
