Infinidat Expands InfiniBox Line with New Solid-State Array to Deliver High Performance for the Most Demanding Enterprise Applications

Infinidat, a leading provider of enterprise-class storage solutions, announced the new InfiniBox SSA™, a groundbreaking solid-state array that delivers the industry’s highest performance for the most demanding enterprise applications. The InfiniBox SSA is powered by Infinidat’s proven deep learning software algorithms and extensive DRAM cache. It will consistently deliver performance and latency results that surpass all-flash arrays (AFAs), while providing the same acclaimed customer experience, 100 percent availability, and uncompromising reliability of the InfiniBox.

Four Reasons On-premises Object Storage is Right for Today’s Businesses

In this special guest feature, Marcel Hergaarden, senior manager for product marketing at Red Hat, explains why he believes on-premises object-based storage is the correct approach for organizations that want better control over their data and greater cost savings.

The Value of Data Now vs. Data Later

In this contributed article, Fluency CEO and Founder Chris Jordan discusses the looming end of Moore’s law. An estimated 90% of the world’s data has been produced in the last two years, yet companies analyze only 12% of it. With big data continuing to grow, how can more innovative data storage solutions, such as the cloud, respond effectively to this level of growth?

New Study Details Importance of TCO for HPC Storage Buyers

Total cost of ownership (TCO) now rivals performance as a top criterion for purchasing high-performance computing (HPC) storage systems, according to an independent study published by Hyperion Research. The report, commissioned by our friends over at Panasas®, a leader in HPC data storage solutions, surveyed data center planners and managers, storage system managers, purchasing decision-makers and key influencers, as well as users of HPC storage systems.

New Study Details Importance of TCO for HPC Storage Buyers

Total cost of ownership (TCO) is often assumed to be an important consideration for buyers of HPC storage systems. Because TCO is defined differently by HPC users, it’s difficult to make comparisons based on a predefined set of attributes. With this fact in mind, our friends over at Panasas commissioned Hyperion Research to conduct a worldwide study that asked HPC storage buyers about the importance of TCO in general, and about specific TCO components that have been mentioned frequently in the past two years by HPC storage buyers.

Qumulo Offers Free Cloud Software to help Fight COVID-19 Outbreak

Today Qumulo announced it is offering its cloud-native file software, for free, to public- and private-sector medical and healthcare research organizations that are working to minimize the spread and impact of COVID-19.

insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads – Part 3

Artificial Intelligence (AI) and Deep Learning (DL) represent some of the most demanding workloads in modern computing history, presenting unique challenges to compute, storage and network resources. In this technology guide, insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads, we’ll see how traditional file storage technologies and protocols like NFS can starve AI workloads of data, reducing application performance and impeding business innovation. A state-of-the-art AI-enabled data center should work to concurrently and efficiently service the entire spectrum of activities involved in DL workflows, including data ingest, data transformation, training, inference, and model evaluation.

insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads – Part 2

Artificial Intelligence (AI) and Deep Learning (DL) represent some of the most demanding workloads in modern computing history, presenting unique challenges to compute, storage and network resources. In this technology guide, insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads, we’ll see how traditional file storage technologies and protocols like NFS can starve AI workloads of data, reducing application performance and impeding business innovation. A state-of-the-art AI-enabled data center should work to concurrently and efficiently service the entire spectrum of activities involved in DL workflows, including data ingest, data transformation, training, inference, and model evaluation.

Pure Makes Customers’ “AI-First” Infrastructure a Reality

Pure Storage (NYSE: PSTG), a fast-growing data storage company, announced a host of new and improved AI solutions that provide enterprise customers with the features and functionality needed to execute increasingly complex AI initiatives at any phase or scale. Built on Pure’s industry-leading file and object system, FlashBlade™, and its joint AI-Ready Infrastructure (AIRI™) offering with NVIDIA, customers can develop and deploy AI rapidly to keep pace with modern business.

insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads

Artificial Intelligence (AI) and Deep Learning (DL) represent some of the most demanding workloads in modern computing history, presenting unique challenges to compute, storage and network resources. In this technology guide, insideBIGDATA Guide to Optimized Storage for AI and Deep Learning Workloads, we’ll see how traditional file storage technologies and protocols like NFS can starve AI workloads of data, reducing application performance and impeding business innovation. A state-of-the-art AI-enabled data center should work to concurrently and efficiently service the entire spectrum of activities involved in DL workflows, including data ingest, data transformation, training, inference, and model evaluation.