NVIDIA GPU Cloud Now Available to Hundreds of Thousands of AI Researchers Using NVIDIA Desktop GPUs


NVIDIA announced that hundreds of thousands of AI researchers using desktop GPUs can now tap into the power of NVIDIA GPU Cloud (NGC) as the company has extended NGC support to NVIDIA TITAN.

NVIDIA also announced expanded NGC capabilities — adding new software and other key updates to the NGC container registry — to provide researchers a broader, more powerful set of tools to advance their AI and high performance computing research and development efforts.

Customers using NVIDIA® Pascal™ architecture-powered TITAN GPUs can sign up immediately for a no-charge NGC account and gain full access to a comprehensive catalog of GPU-optimized deep learning and HPC software and tools. Other supported computing platforms include NVIDIA DGX-1™, DGX Station and NVIDIA Volta-enabled instances on Amazon EC2.

Software available through NGC’s rapidly expanding container registry includes NVIDIA-optimized deep learning frameworks such as TensorFlow and PyTorch, third-party managed HPC applications, NVIDIA HPC visualization tools, and NVIDIA’s programmable inference accelerator, NVIDIA TensorRT™ 3.0.
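To illustrate how images like these are typically retrieved from the registry, here is a brief, hypothetical sketch using the Docker SDK for Python. The registry host (nvcr.io) is NGC’s container registry, but the image tag shown is a placeholder and the API-key handling is an assumption; consult the NGC documentation for current image names and authentication details.

```python
# Hypothetical sketch: pulling an NGC container image with the Docker SDK for
# Python. The image tag below is a placeholder -- check the NGC catalog for
# the tags that are actually published.
import docker

client = docker.from_env()

# Authenticate against the NGC registry with an API key generated from an
# NGC account ("$oauthtoken" is the fixed username NGC uses for API keys).
client.login(username="$oauthtoken",
             password="<your NGC API key>",
             registry="nvcr.io")

# Pull the NVIDIA-optimized TensorFlow container from the registry.
image = client.images.pull("nvcr.io/nvidia/tensorflow", tag="17.12")
print(image.tags)
```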

“We built NVIDIA GPU Cloud to give AI developers easy access to the software they need to do groundbreaking work,” said Jim McHugh, vice president and general manager of enterprise systems at NVIDIA. “With GPU-optimized software now available to hundreds of thousands of researchers using NVIDIA desktop GPUs, NGC will be a catalyst for AI breakthroughs and a go-to resource for developers worldwide.”

An early adopter of NGC is GE Healthcare. The first medical device maker to use NGC, the company is tapping the deep learning software in NGC’s container registry to accelerate bringing the most sophisticated AI to its 500,000 imaging devices globally, with the goal of improving patient care.

New NGC Containers, Updates and Features

In addition to making NVIDIA TensorRT available on NGC’s container registry, NVIDIA announced the following NGC updates:

  • Open Neural Network Exchange (ONNX) support for TensorRT
  • Immediate support and availability for the first release of MXNet 1.0
  • Availability of Baidu’s PaddlePaddle AI framework

ONNX is an open format originally created by Facebook and Microsoft through which developers can exchange models across different frameworks. In the TensorRT development container, NVIDIA created a converter to deploy ONNX models to the TensorRT inference engine. This makes it easier for application developers to deploy low-latency, high-throughput models to TensorRT.
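For illustration, the following is a minimal, hypothetical sketch of that workflow: a model is exported to the ONNX format from one framework (PyTorch in this example) and then parsed into a TensorRT engine. The model choice (an untrained torchvision ResNet-18), the file name, and the TensorRT Python calls are assumptions that reflect recent TensorRT releases; the exact converter interface shipped in the TensorRT development container may differ.

```python
# Hypothetical sketch: export a PyTorch model to ONNX, then build a TensorRT
# engine from the ONNX file. API details follow recent TensorRT Python
# bindings and may differ from the TensorRT 3.0 container described above.
import torch
import torchvision
import tensorrt as trt

# 1. Export a framework model (untrained ResNet-18, for illustration) to ONNX.
model = torchvision.models.resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)            # example input shape
torch.onnx.export(model, dummy_input, "resnet18.onnx")

# 2. Parse the ONNX file with TensorRT's ONNX parser and build an engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("resnet18.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
engine = builder.build_serialized_network(network, config)  # serialized engine
```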

Together, these additions give developers a one-stop shop for software that supports a full spectrum of AI computing needs — from research and application development to training and deployment.

Launched in October, NGC is also available free of charge to users of NVIDIA Volta GPUs on Amazon Web Services and all NVIDIA DGX-1 and DGX Station customers. NVIDIA will continue to expand the reach of NGC over time.
