NVIDIA announced that its deep learning platform is now available as part of Baidu Cloud’s deep learning service, giving enterprise customers instant access to the world’s most adopted AI tools. The new Baidu Cloud offers the latest GPU computing technology, including Pascal™ architecture-based NVIDIA® Tesla® P40 GPUs and NVIDIA deep learning software. It provides both training and inference acceleration for open-source deep learning frameworks, such as TensorFlow and PaddlePaddle.
IBM (NYSE: IBM) today announced that it is the first major cloud provider to make the NVIDIA Tesla® P100 GPU available globally on the cloud. By combining NVIDIA’s acceleration technology with IBM’s Cloud platform, businesses can more quickly and efficiently run compute-heavy workloads, such as artificial intelligence, deep learning and high performance data analytics.
NVIDIA announced that Tencent Cloud will adopt NVIDIA® Tesla® GPU accelerators to help advance artificial intelligence for enterprise customers. NVIDIA’s AI computing technology is used worldwide by cloud service providers, enterprises, startups and research organizations for a wide range of applications.
NVIDIA unveiled the NVIDIA® Jetson™ TX2, a credit card-sized platform that delivers AI computing at the edge — opening the door to powerfully intelligent factory robots, commercial drones and smart cameras for AI cities. Jetson TX2 offers twice the performance of its predecessor, or it can run at more than twice the power efficiency, while drawing less than 7.5 watts of power. This allows Jetson TX2 to run larger, deeper neural networks on edge devices. The result: smarter devices with higher accuracy and faster response times for tasks like image classification, navigation and speech recognition.
With a new cluster of specialized graphics processing units (GPUs) now installed, the University of Massachusetts Amherst is poised to attract the nation’s next crop of top Ph.D. students and researchers in such fields as artificial intelligence, computer vision and natural language processing, says associate professor Erik Learned-Miller of the College of Information and Computer Sciences (CICS).
Kinetica Delivers Advanced In-Database Analytics, Opening the Way for Converged AI and BI Workloads Accelerated by GPUs
Kinetica, provider of the fast, in-memory database accelerated by GPUs, announced the availability of in-database analytics via user-defined functions (UDFs). This capability makes the parallel processing power of the GPU accessible to custom analytics functions deployed within Kinetica.
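The idea behind in-database UDFs is that custom analytics logic is registered with the database and executed next to the data, rather than exporting records to an external tool. The sketch below illustrates that pattern in plain Python; it is a conceptual illustration only, not Kinetica's actual UDF API (Kinetica ships its own proc framework), and all names here are hypothetical.

```python
# Conceptual sketch of the in-database UDF pattern: custom analytics
# code registered in a catalog and run against columnar data in place.
# Hypothetical names throughout -- this is NOT the Kinetica API.

from statistics import mean, pstdev

# A tiny "table" in columnar form, the layout GPU databases favor
# because each column can be processed in parallel.
table = {"sensor_id": [1, 2, 3, 4], "reading": [10.0, 12.0, 11.0, 47.0]}

# Stand-in for the database's UDF catalog.
udf_registry = {}

def register_udf(name):
    """Decorator that records a function in the hypothetical UDF catalog."""
    def wrap(fn):
        udf_registry[name] = fn
        return fn
    return wrap

@register_udf("zscore")
def zscore(column):
    """Custom analytics function: standardize a numeric column."""
    mu, sigma = mean(column), pstdev(column)
    return [(x - mu) / sigma for x in column]

# "Execute" the UDF over the reading column, as the engine would
# after the function is deployed; the outlier row scores highest.
scores = udf_registry["zscore"](table["reading"])
```

In a real deployment the registry, data movement and parallel execution are handled by the database itself; the user supplies only the function body.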
One Stop Systems, Inc. (OSS), a leader in PCI Express® (PCIe®) expansion technology, introduced two new deep learning appliances, the OSS-PASCAL4 and OSS-PASCAL8. The OSS-PASCAL8 is a 170 TeraFLOP engine with 80GB/s NVIDIA® NVLink™ for the largest deep learning models.
MapD, a leader in GPU-powered analytics, announced significant new feature and performance enhancements to its Core database and Immerse visual analytics platform. The new capabilities extend the company’s pioneering work in using GPUs to both query and visualize billions of records with millisecond latency. The performance characteristics of MapD’s approach are anywhere from 75 to 3,500 times faster than traditional CPU-powered databases.
I recently caught up with Mike Perez, Vice President of Services at Kinetica, to talk about GPU-accelerated databases and discuss how Kinetica's new Install Accelerator and Application Accelerator programs are helping customers quickly integrate Kinetica into their environments.
Kinetica Unveils Accelerator Solutions for Installing and Deploying Applications on its GPU-Accelerated Database
Kinetica, provider of the fastest in-memory database accelerated by GPUs, announced the immediate availability of the Install Accelerator and Application Accelerator programs, two new software and services offerings that leverage the power of GPUs to help customers quickly ingest, explore and visualize streaming data sets, including for Internet of Things (IoT) use cases.