
Transform Raw Data to Real-Time Actionable Intelligence Using High Performance Computing at the Edge

In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time actionable intelligence using HPC at the edge. The imperative now is to move processing closer to where the data is sourced, and to apply high performance computing edge technologies so that real-time insights can drive business actions.

Hats Over Hearts

It is with great sadness that we announce the death of Rich Brueckner. His passing is an unexpected and enormous blow to both his family and the HPC Community. Rich was an institution in the HPC community. You couldn’t go to an event without seeing his red hat bobbing in the crowd, usually trailed by a fast-moving video crew. He’d be darting into booths, conducting interviews, and then speeding away to his next appointment.

New Study Details Importance of TCO for HPC Storage Buyers

Total cost of ownership (TCO) is often assumed to be an important consideration for buyers of HPC storage systems. Because TCO is defined differently by HPC users, it’s difficult to make comparisons based on a predefined set of attributes. With this fact in mind, our friends over at Panasas commissioned Hyperion Research to conduct a worldwide study that asked HPC storage buyers about the importance of TCO in general, and about specific TCO components that have been mentioned frequently in the past two years by HPC storage buyers.

Heterogeneous Computing Programming: oneAPI and Data Parallel C++

Sponsored Post: What you missed at the Intel Developer Conference, and how to catch up today. By James Reinders. In the interests of full disclosure … I must admit that I became sold on DPC++ after Intel approached me (as a consultant, three years retired from Intel) asking if I’d help with a book on […]

Supercomputers and Machine Learning: A Perfect Match

In this contributed article, technology writer Gilad David Maayan suggests that when considering complex or very large data sets, using the largest and most powerful computers in the world sounds ideal. High-performance computing is a perfect match for complex machine learning and big data models. These supercomputers can easily process billions of calculations, improving the capabilities of machine learning technologies.

2nd Generation Intel® Xeon® Platinum 9200 Processors Offer Leadership Performance and Advance AI

Simulation, modeling, data analytics, and other workloads commonly use high performance computing (HPC) to advance research and business in many ways. However, as converged workloads involving AI grow in adoption, HPC systems must keep pace with evolving needs. 2nd Generation Intel® Xeon® Platinum processors, with built-in AI acceleration technologies, offer leadership performance to speed the most demanding HPC workloads.

How Astera Labs is Revolutionizing Semiconductor Product Development—100% in the Cloud

For any established semiconductor product developer, designing a next-generation PCIe 5.0 chipset in less than a year is no small feat. For a brand-new startup with no compute infrastructure other than laptops, however, it is a huge ask. That’s why, with time being of the essence, Astera Labs decided to take a chance on the efficiencies it would gain from a 100% cloud-based approach.

Six Platform Investments from Intel to Facilitate Running AI and HPC Workloads Together on Existing Infrastructure

Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. Six platform investments from Intel will help reduce obstacles and make HPC and AI deployment even more accessible and practical.

DAOS Delivers Exascale Performance Using HPC Storage So Fast It Requires New Units of Measurement

Forget what you previously knew about high-performance storage and file systems. New I/O models for HPC such as Distributed Asynchronous Object Storage (DAOS) have been architected from the ground up to make use of new NVM technologies such as Intel® Optane™ DC Persistent Memory Modules (Intel Optane DCPMMs). With latencies measured in nanoseconds and bandwidth measured in tens of GB/s, new storage devices such as Intel DCPMMs redefine the measures used to describe high-performance nonvolatile storage.

Beyond the Delta: Compression is a Must for Big Data

In an era of big data, high-speed, reliable, cheap, and scalable databases are no luxury. Our friends over at SQream invest a lot of time and effort into providing their customers with the best performance at scale. As such, SQream DB uses state-of-the-art HPC techniques. Some of these techniques rely on adapting existing algorithms to external technological […]
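
The delta encoding the title alludes to is one of the classic building blocks of columnar compression. As a minimal sketch only (not SQream DB's actual implementation), the idea is to store the first value of a column and then only the successive differences, which are small for sorted or slowly changing data:

```python
# Hypothetical illustration of delta encoding -- not SQream DB's
# actual implementation, just the general technique.

def delta_encode(values):
    """Store the first value, then successive differences."""
    if not values:
        return []
    deltas = [values[0]]
    for prev, cur in zip(values, values[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Reverse the encoding by accumulating the differences."""
    values = []
    total = 0
    for d in deltas:
        total += d
        values.append(total)
    return values

# Sorted or slowly changing columns (e.g. timestamps) yield small
# deltas, which downstream bit packing can store in far fewer bits.
timestamps = [1000, 1001, 1003, 1006, 1010]
encoded = delta_encode(timestamps)   # [1000, 1, 2, 3, 4]
assert delta_decode(encoded) == timestamps
```

In practice a database would follow the delta pass with bit packing or a general-purpose compressor, which is where the "beyond the delta" improvements come in.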