Comparative Testing of GPU Servers with New NVIDIA RTX30 Video Cards in AI/ML Tasks

In early September 2020, NVIDIA debuted its GeForce RTX 30 family of graphics cards, built on the second-generation Ampere RTX architecture. NVIDIA broke with its tradition of selling each new generation of cards at a higher price than its predecessors, which means that the cost of training models has remained more or less the same.

Penguin Computing Announces OriginAI Powered by WekaIO

Penguin Computing, a division of SMART Global Holdings, Inc. (NASDAQ: SGH) and a leader in high-performance computing (HPC), artificial intelligence (AI), and enterprise data center solutions, announced that it has partnered with WekaIO™ (Weka) to provide NVIDIA GPU-Powered OriginAI, a comprehensive, end-to-end solution for data center AI that maximizes the performance and utility of high-value AI systems.

NVIDIA DGX Station A100 Offers Researchers AI Data Center-in-a-Box

NVIDIA today announced the NVIDIA DGX Station™ A100 — the world’s only petascale workgroup server. The second generation of the groundbreaking AI system, DGX Station A100 accelerates demanding machine learning and data science workloads for teams working in corporate offices, research facilities, labs or home offices everywhere.

NetApp AI and Run:AI Partner to Speed Up Data Science Initiatives

NetApp, a leading cloud data services provider, has teamed up with Run:AI, a company virtualizing AI infrastructure, to enable faster AI experimentation with full GPU utilization. The partnership allows teams to speed up AI by running many experiments in parallel, with fast access to data and virtually limitless compute resources. Run:AI enables full GPU utilization by automating resource allocation, while the NetApp® ONTAP® AI proven architecture allows every experiment to run at maximum speed by eliminating data pipeline bottlenecks.

NVIDIA Advances Performance Records on AI Inference

NVIDIA today announced its AI computing platform has again smashed performance records in the latest round of MLPerf, extending its lead on the industry’s only independent benchmark measuring AI performance of hardware, software and services.

Interview: Global Technology Leader PNY

The following whitepaper download is a reprint of a recent interview with our friends over at PNY, discussing a variety of topics affecting data scientists working on big data problem domains, including how “Big Data” is becoming increasingly accessible via big clusters with disk-based databases, small clusters with in-memory data, single systems with in-CPU-memory data, and single systems with in-GPU-memory data. Answering our inquiries were Bojan Tunguz, Senior System Software Engineer, NVIDIA, and Carl Flygare, NVIDIA Quadro Product Marketing Manager, PNY.

Transform Raw Data to Real Time Actionable Intelligence Using High Performance Computing at the Edge

In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time actionable intelligence using HPC at the edge. The imperative now is to move processing closer to where the data is being sourced and to apply high-performance computing edge technologies so that real-time insights can drive business actions.

Spell MLOps Platform Launches ‘Spell for Private Machines’ to Streamline DevOps and Foster Deeper Team Collaboration for Enterprises

Spell – a leading end-to-end machine learning platform that empowers businesses to get started with machine learning projects and make better use of their data – announced its new Spell for Private Machines integration. With Spell for Private Machines, enterprise teams spearheading machine learning projects can use their privately owned GPUs or CPUs alongside cloud resources for experimentation, results and collaboration, reducing the time, money and resources usually spent in-house.

The Essential Guide: Machine Scheduling for AI Workloads on GPUs

This white paper by Run:AI (a virtualization and acceleration layer for deep learning) addresses the challenges of expensive and limited compute resources and identifies solutions for optimizing those resources, applying concepts from the worlds of virtualization, high-performance computing (HPC), and distributed computing to deep learning.