Interview: Global Technology Leader PNY

The following whitepaper download is a reprint of our recent interview with our friends at PNY, covering a variety of topics affecting data scientists working on big data problems, including how “Big Data” is becoming increasingly accessible, from large clusters with disk-based databases and small clusters with in-memory data to single systems with in-CPU-memory data and single systems with in-GPU-memory data. Answering our inquiries were Bojan Tunguz, Senior System Software Engineer, NVIDIA, and Carl Flygare, NVIDIA Quadro Product Marketing Manager, PNY.

Transform Raw Data to Real Time Actionable Intelligence Using High Performance Computing at the Edge

In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time actionable intelligence using HPC at the edge. The imperative now is to move processing closer to where data is being generated and to apply high performance computing technologies at the edge so that real-time insights can drive business actions.

Interview: Global Technology Leader PNY

We recently caught up with our friends at PNY to discuss a variety of topics affecting data scientists working on big data problems, including how “Big Data” is becoming increasingly accessible, from large clusters with disk-based databases and small clusters with in-memory data to single systems with in-CPU-memory data and single systems with in-GPU-memory data. Answering our inquiries were Bojan Tunguz, Senior System Software Engineer, NVIDIA, and Carl Flygare, NVIDIA Quadro Product Marketing Manager, PNY.
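As a rough illustration of the last two tiers mentioned above, here is a minimal sketch contrasting in-CPU-memory and in-GPU-memory data handling on a single system. It assumes a CUDA-capable GPU and the RAPIDS cuDF library are available; the file name and column names are hypothetical and are not drawn from the interview.

```python
# Minimal sketch: in-CPU-memory vs. in-GPU-memory data on a single system.
# Assumes a CUDA-capable GPU and RAPIDS cuDF; "sales.csv" with "region" and
# "revenue" columns is a hypothetical example file.
import pandas as pd
import cudf

# Single system, in-CPU-memory: the dataframe lives in host RAM and the
# group-by aggregation runs on the CPU.
cpu_df = pd.read_csv("sales.csv")
cpu_total = cpu_df.groupby("region")["revenue"].sum()

# Single system, in-GPU-memory: the same data is loaded into GPU memory and
# the group-by aggregation runs on the GPU.
gpu_df = cudf.read_csv("sales.csv")
gpu_total = gpu_df.groupby("region")["revenue"].sum()

print(cpu_total)
print(gpu_total)
```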

Spell MLOps Platform Launches ‘Spell for Private Machines’ to Streamline DevOps and Foster Deeper Team Collaboration for Enterprises

Spell – a leading end-to-end machine learning platform that empowers businesses to get started with machine learning projects and make better use of their data – announced its new Spell for Private Machines integration. With Spell for Private Machines, enterprise teams spearheading machine learning projects can use their privately owned GPUs or CPUs alongside cloud resources for experimentation, results, and collaboration, reducing the time, money, and resources usually spent in-house.

The Essential Guide: Machine Scheduling for AI Workloads on GPUs

This white paper by Run:AI (a virtualization and acceleration layer for deep learning) addresses the challenges of expensive and limited compute resources and identifies solutions for optimizing those resources, applying concepts from the worlds of virtualization, high-performance computing (HPC), and distributed computing to deep learning.
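To make the scheduling idea concrete, below is a minimal, self-contained toy sketch of queue-based GPU allocation for AI jobs. It is an illustrative example only, not Run:AI's implementation; the job names and GPU counts are hypothetical.

```python
# Toy sketch of FIFO machine scheduling for GPU-bound AI jobs.
# Illustrative only; not Run:AI's scheduler. Job names and GPU counts
# are hypothetical.
from collections import deque

class ToyGpuScheduler:
    def __init__(self, total_gpus):
        self.free_gpus = total_gpus
        self.queue = deque()   # jobs waiting for GPUs: (name, gpus_needed)
        self.running = {}      # job name -> GPUs allocated

    def submit(self, name, gpus_needed):
        # Queue the job, then try to start anything that now fits.
        self.queue.append((name, gpus_needed))
        self._dispatch()

    def finish(self, name):
        # Release GPUs when a job completes, then start queued jobs.
        self.free_gpus += self.running.pop(name)
        self._dispatch()

    def _dispatch(self):
        # Start queued jobs in FIFO order while enough GPUs are free.
        while self.queue and self.queue[0][1] <= self.free_gpus:
            name, gpus = self.queue.popleft()
            self.free_gpus -= gpus
            self.running[name] = gpus

sched = ToyGpuScheduler(total_gpus=4)
sched.submit("train-resnet", 2)
sched.submit("hyperparam-sweep", 4)   # waits until enough GPUs free up
sched.finish("train-resnet")
print(sched.running)                  # {'hyperparam-sweep': 4}
```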
