How Astera Labs is Revolutionizing Semiconductor Product Development—100% in the Cloud

For any established semiconductor product developer, designing a next-generation PCIe 5.0 chipset in less than a year is no small feat. For a brand-new startup with no compute infrastructure other than laptops, however, it is a huge ask. That’s why, with time being of the essence, Astera Labs decided to take a chance on the efficiencies it would gain from a 100% cloud-based approach.

Six Platform Investments from Intel to Facilitate Running AI and HPC Workloads Together on Existing Infrastructure

Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. Six platform investments from Intel will help reduce obstacles and make HPC and AI deployment even more accessible and practical.

DAOS Delivers Exascale Performance Using HPC Storage So Fast It Requires New Units of Measurement

Forget what you previously knew about high-performance storage and file systems. New I/O models for HPC such as Distributed Asynchronous Object Storage (DAOS) have been architected from the ground up to make use of new NVM technologies such as Intel® Optane™ DC Persistent Memory Modules (Intel Optane DCPMMs). With latencies measured in nanoseconds and bandwidth measured in tens of GB/s, new storage devices such as Intel DCPMMs redefine the measures used to describe high-performance nonvolatile storage.

Using Converged HPC Clusters to Combine HPC, AI, and HPDA Workloads

Many organizations still follow the older practice of deploying AI and HPDA on separate, dedicated clusters, which leads to underutilization of those clusters. Converging these workloads onto a single HPC cluster can reduce (or potentially eliminate) capital expenditures and lower OPEX. This sponsored post from Intel’s Esther Baldwin, AI Strategist, explores how organizations are using converged HPC clusters to combine HPC, AI, and HPDA workloads.

Penguin Computing Unveils Powerful New Solution That Delivers Geographically Dispersed Data at Landmark Speeds While Also Providing Easy Access

Penguin Computing, a subsidiary of SMART Global Holdings, Inc. (NASDAQ: SGH) and a leading provider of high performance computing (HPC), artificial intelligence (AI), enterprise data center and cloud solutions, announced the availability of the Accelion™ managed data access platform.

insideHPC Market Survey Results: The Intersection of AI and HPC

Hot off the press from our venerable sister publication insideHPC is the new 2018 AI/HPC Perceptions Survey, fielded to gain insights into the HPC community’s perceptions of the intersection of HPC and AI. The survey was executed in October 2017 — and again in February 2018 — with a total of 201 responses.

New Containers on NVIDIA GPU Cloud Help Developers Instantly Deploy Fully Optimized AI and HPC Software

NVIDIA announced that a new advanced data center GPU — the NVIDIA® Tesla® V100 GPU, based on NVIDIA’s Volta architecture — is available through major computer makers and has been chosen by major cloud providers to deliver artificial intelligence and high performance computing.

High Performance Computing: Answering the Big Data Dilemma

In this special guest feature, Jeff Reser, Global Product Marketing Manager of SUSE, suggests that as HPC’s prowess in business expands, so does its ability to solve a variety of data management problems. Individuals struggling to tackle Big Data’s most complex challenges should increasingly look at HPC to deliver the power and sophistication required to manage large volumes and varieties of data.

Take Our HPC & AI Survey and Win an Echo Show Device

“We invite you to take our survey on the intersection of HPC & AI. In return, we’ll send you a free report with the results and enter your name in a drawing to win one of two Echo Show devices with Amazon Alexa technology.”

The Importance of Vectorization Resurfaces

Vectorization offers potential speedups in codes with significant array-based computations—speedups that amplify the improved performance obtained through higher-level, parallel computations using threads and distributed execution on clusters. Key features for vectorization include tunable array sizes to reflect various processor cache and instruction capabilities and stride-1 accesses within inner loops.