NetApp AI and Run:AI Partner to Speed Up Data Science Initiatives

NetApp, a leading cloud data services provider, has teamed up with Run:AI, a company virtualizing AI infrastructure, to enable faster AI experimentation with full GPU utilization. The partnership allows teams to speed up AI development by running many experiments in parallel, with fast access to data and virtually limitless compute resources. Run:AI enables full GPU utilization by automating resource allocation, while the NetApp® ONTAP® AI proven architecture lets every experiment run at maximum speed by eliminating data pipeline bottlenecks.
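The idea of running many experiments in parallel against a shared pool of GPUs can be sketched as follows. This is an illustrative toy, not Run:AI's actual (proprietary) scheduler; the function and variable names are our own, and a real launcher would set `CUDA_VISIBLE_DEVICES` and start a training process instead of just recording the pairing.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def run_experiments_in_parallel(experiments, num_gpus):
    """Run experiments concurrently, each pinned to a free GPU slot.

    Illustrative sketch only: each worker blocks until a GPU index is
    available, "runs" its experiment, then returns the GPU to the pool.
    """
    free_gpus = queue.Queue()
    for gpu_id in range(num_gpus):
        free_gpus.put(gpu_id)

    def run_one(experiment):
        gpu_id = free_gpus.get()      # block until a GPU slot frees up
        try:
            # In a real setup: set CUDA_VISIBLE_DEVICES=gpu_id and launch
            # the training job. Here we just record the pairing.
            return (experiment, gpu_id)
        finally:
            free_gpus.put(gpu_id)     # release the GPU for the next job

    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        return list(pool.map(run_one, experiments))
```

With eight experiments and four GPUs, at most four run at once and every experiment is assigned a valid GPU index, which is the full-utilization property the partnership is aiming at.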

NVIDIA Advances Performance Records on AI Inference

NVIDIA today announced its AI computing platform has again smashed performance records in the latest round of MLPerf, extending its lead on the industry’s only independent benchmark measuring AI performance of hardware, software and services.

Interview: Global Technology Leader PNY

The following whitepaper download is a reprint of our recent interview with PNY, covering a variety of topics affecting data scientists working on big data problem domains, including how "Big Data" is becoming increasingly accessible across big clusters with disk-based databases, small clusters with in-memory data, single systems with in-CPU-memory data, and single systems with in-GPU-memory data. Answering our inquiries were Bojan Tunguz, Senior System Software Engineer at NVIDIA, and Carl Flygare, NVIDIA Quadro Product Marketing Manager at PNY.

Transform Raw Data to Real Time Actionable Intelligence Using High Performance Computing at the Edge

In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time actionable intelligence using HPC at the edge. The imperative now is to move processing closer to where the data is sourced and apply high performance computing edge technologies so that real-time insights can drive business actions.

Spell MLOps Platform Launches ‘Spell for Private Machines’ to Streamline DevOps and Foster Deeper Team Collaboration for Enterprises

Spell – a leading end-to-end machine learning platform that empowers businesses to launch machine learning projects and make better use of their data – announced its new Spell for Private Machines integration. With Spell for Private Machines, enterprise teams spearheading machine learning projects can use their privately owned GPUs or CPUs alongside cloud resources for experimentation, results, and collaboration, reducing the time, money, and resources typically spent in-house.

The Essential Guide: Machine Scheduling for AI Workloads on GPUs

This white paper by Run:AI (a virtualization and acceleration layer for deep learning) addresses the challenges of expensive, limited compute resources and identifies solutions for optimizing them, applying concepts from the worlds of virtualization, High-Performance Computing (HPC), and distributed computing to deep learning.