
Infographic: The Fastest Supercomputers Ever Built and Who Built Them

Supercomputers are an important tool across computational science, including weather forecasting, climate research, quantum mechanics, molecular modeling, and cryptanalysis, because they can process information far more quickly than a traditional computer. The United States has historically been a leader in supercomputer development, but other countries’ technology and research have been catching up. The HP research team consulted data from the supercomputer ranking project TOP500 to visualize the most powerful supercomputers in the world, as well as their country and company of development.

insideBIGDATA Guide for Higher Education

The goal of this Guide, sponsored by Dell Technologies, is to provide direction for enterprise thought leaders on ways of leveraging big data technologies in support of analytics proficiencies designed to help institutions work more independently and effectively across a few distinct areas in higher education: student success and workforce readiness, simplified systems and processes, and accelerated research.

Knowledge Graphs 2.0: High Performance Computing Emerges

In this contributed article, editorial consultant Jelani Harper discusses how the increasing reliance on knowledge graphs parallels that of Artificial Intelligence, for three reasons: they are the most effective means of preparing data for statistical AI, creditable knowledge graph platforms utilize supervised and unsupervised learning to accelerate numerous processes, and their smart inferences are a form of machine intelligence.

NVIDIA A100, A40 and NVIDIA RTX A6000 Ampere Architecture-Based Professional GPUs Transform Data Science and Big Data Analytics

Scientists, researchers, and engineers are solving the world’s most important scientific, industrial, and big data challenges with AI and high-performance computing (HPC). Businesses, even entire industries, harness the power of AI to extract new insights from massive data sets, both on-premises and in the cloud. NVIDIA Ampere architecture-based products, like the NVIDIA A100 or the NVIDIA RTX A6000, are designed for the age of elastic computing and deliver the next giant leap by providing unmatched acceleration at every scale, enabling innovators to push the boundaries of human knowledge and creativity.

Penguin Computing Announces OriginAI Powered by WekaIO

Penguin Computing, a division of SMART Global Holdings, Inc. (NASDAQ: SGH) and a leader in high-performance computing (HPC), artificial intelligence (AI), and enterprise data center solutions, announced that it has partnered with WekaIO™ (Weka) to provide NVIDIA GPU-Powered OriginAI, a comprehensive, end-to-end solution for data center AI that maximizes the performance and utility of high-value AI systems.

Examining Architectures for the Post-Exascale Era

On Wednesday, November 11th, at 9am PST, a group of researchers and industry players on the leading edge of a new approach to HPC architecture will gather to explore the topic in a webinar titled “Disaggregated System Architectures for Next Generation HPC and AI Workloads.”

Transform Raw Data to Real Time Actionable Intelligence Using High Performance Computing at the Edge

In this special guest feature, Tim Miller from One Stop Systems discusses the importance of transforming raw data into real-time actionable intelligence using HPC at the edge. The imperative now is to move processing closer to where the data is being sourced, and to apply high-performance computing edge technologies so that real-time insights can drive business actions.

Hats Over Hearts

It is with great sadness that we announce the death of Rich Brueckner. His passing is an unexpected and enormous blow to both his family and the HPC Community. Rich was an institution in the HPC community. You couldn’t go to an event without seeing his red hat bobbing in the crowd, usually trailed by a fast-moving video crew. He’d be darting into booths, conducting interviews, and then speeding away to his next appointment.

New Study Details Importance of TCO for HPC Storage Buyers

Total cost of ownership (TCO) is often assumed to be an important consideration for buyers of HPC storage systems. Because TCO is defined differently by HPC users, it’s difficult to make comparisons based on a predefined set of attributes. With this fact in mind, our friends over at Panasas commissioned Hyperion Research to conduct a worldwide study that asked HPC storage buyers about the importance of TCO in general, and about specific TCO components that have been mentioned frequently in the past two years by HPC storage buyers.

Heterogeneous Computing Programming: oneAPI and Data Parallel C++

Sponsored Post. What you missed at the Intel Developer Conference, and how to catch up today. By James Reinders: “In the interests of full disclosure … I must admit that I became sold on DPC++ after Intel approached me (as a consultant, 3 years retired from Intel) asking if I’d help with a book on […]”
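
For readers who haven’t yet looked at the programming model the book covers, here is a minimal sketch of DPC++, Intel’s SYCL-based language for oneAPI, showing a vector add written once and dispatched to whatever device the runtime selects. This example uses the public SYCL 2020 API rather than material from the book, so treat it as illustrative only.

```cpp
// Illustrative DPC++ (SYCL 2020) vector add: the same kernel can run on a
// CPU, GPU, or other accelerator chosen by the runtime. Build with Intel's
// oneAPI compiler, e.g. `icpx -fsycl vector_add.cpp`.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  // A queue submits work to a device picked by the default selector.
  sycl::queue q{sycl::default_selector_v};
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  { // Buffer scope: data is copied back to the host vectors on destruction.
    sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
    sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
    sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(ba, h, sycl::read_only);
      sycl::accessor B(bb, h, sycl::read_only);
      sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);

      // One work-item per element.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  } // Implicit host-device synchronization when the buffers go out of scope.

  std::cout << "c[0] = " << c[0] << " (expect 3)\n";
  return 0;
}
```

The single-source style is the point: host and device code live in one C++ file, and the same parallel_for can target CPUs, GPUs, and FPGAs without vendor-specific rewrites.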