Big Data Reliability with Lustre


Data is money for businesses, now more than ever. With Big Data analytics, that means big money, and companies don’t want to risk losing their data and the value it represents. Lustre has proven its performance over the years, and, lately, companies are adopting Lustre on their HPC clusters for Big Data analytics. But can Lustre keep a company’s data safe and available? Does Lustre’s latest release have what it takes to protect enterprise Big Data?

Big Data High Availability with Lustre

The Lustre community has made significant advancements to the Lustre software over the last few years, adding the data reliability enhancements that users have requested for enterprise deployments, where data availability is top of mind. Many of these enhancements have been contributed by developers at Intel.

The result is not only an extremely fast parallel file system, but also a highly reliable storage solution that is being deployed for mission-critical applications, including seismic processing and analysis, regional climate and weather modeling, and banking. Such installations cannot tolerate downtime; any outage carries a significant cost. Lustre’s new enhancements have been designed to meet the strictest requirements these institutions and installations demand.

Different file systems use different mechanisms to maintain data availability. The Hadoop Distributed File System (HDFS) replicates data across multiple disks distributed across multiple servers in a network. Replication methodologies differ, but the process can degrade performance and increase latency in a file system, since every file or block replication consumes additional bandwidth, which reduces the bandwidth available to the application. Synchronous replication is especially known for substantial latency degradation. In approaches where replication impacts performance, the network and servers are often overprovisioned to offset the loss.

Lustre uses a mature high availability (HA) design pattern that is well known and understood within the IT industry. Metadata servers (MDS) and object storage servers (OSS) are deployed in cooperative HA cluster pairs, with each pair attached to a reliable, scalable storage system. If a server fails, the system migrates its storage targets to the surviving partner. Besides maintaining high availability, this approach does not impact performance: when all of the servers are online, the maximum bandwidth is available to applications, because there is no replication overhead[i]. And, since Lustre’s approach maintains performance, overprovisioning is not required.
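As an illustration of this active/passive pattern, here is a hedged sketch of how a storage target can be formatted so either server in an HA pair may serve it. The file system name, node names, and device paths are hypothetical examples; production deployments typically automate the takeover with a cluster manager rather than running these mounts by hand:

```shell
# Format an object storage target (OST) on shared storage, declaring
# both servers of the HA pair as service nodes for this target.
# (fsname "bigfs", NIDs, and device paths are hypothetical.)
mkfs.lustre --fsname=bigfs --ost --index=0 \
    --mgsnode=mgs@tcp0 \
    --servicenode=oss1@tcp0 --servicenode=oss2@tcp0 \
    /dev/mapper/ost0

# Normal operation: the primary server mounts the target and serves I/O.
mount -t lustre /dev/mapper/ost0 /mnt/lustre/ost0   # run on oss1

# Failover: the surviving partner mounts the same shared device and
# clients reconnect; no data is copied, because the target itself moves.
mount -t lustre /dev/mapper/ost0 /mnt/lustre/ost0   # run on oss2
```

This is what distinguishes the approach from replication: recovery is a change of ownership over shared storage, not a resynchronization of data.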

Lustre development is a community responsibility, driven not by corporate earnings but by the needs of the user community. Thus, Lustre developers have focused on adding the high-priority reliability and availability features that enterprise users have wanted. But that doesn’t mean the Lustre community is not working on replication.

Increasing Reliability with Replication

Adding replication to Lustre will only further strengthen its reliability and availability characteristics. A replication feature is currently in the works within the Lustre development community. But Lustre’s approach, as with other enhancements, will be innovative, offering users flexibility and options for how their data is recorded, so they can best meet the requirements of their application or project.

Developers in the Lustre project are first defining a strategy for file layouts that can be arbitrary and decided at run time, instead of requiring a pre-determined layout strategy when the file system is first set up, as some file systems do today. That will give users the flexibility to choose the best file layout for their workload when they’re ready to run it, and then extend that layout to replicated data across the file system. Here are some examples:

  • An application that emphasizes throughput performance, such as a very large scale streaming I/O workload, is more likely to benefit from a striped file layout. The emphasis is on persisting data to storage as quickly as possible, and the data may not need long-term storage, so replication may not even be necessary.
  • An application that processes data that is vital to business operations, or upon which human life depends, might set different requirements on the performance and persistence of the data. A company will likely replicate such data for added reliability, because losing it can have immediate negative consequences, both in the cost to restore operations and in the outcomes for which the data is used.
  • Critical data sets will also typically have longevity requirements. The information must be stored reliably for long periods of time. Ensuring that both availability and longevity requirements are met for permanent production data requires redundant data replication across multiple storage systems.
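Per-file and per-directory layout control of the kind described above is already visible in Lustre’s existing `lfs setstripe` interface, which the replication work is expected to build on. A hedged sketch, assuming a file system mounted at the hypothetical path /mnt/lustre:

```shell
# Streaming-throughput workload: stripe files in this directory across
# all available OSTs (-c -1) with a 4 MB stripe size, chosen at run
# time rather than when the file system was formatted.
lfs setstripe -c -1 -S 4M /mnt/lustre/streaming

# Small-file or latency-sensitive data: a single stripe per file.
lfs setstripe -c 1 /mnt/lustre/results

# Inspect the layout actually applied to a file or directory.
lfs getstripe /mnt/lustre/streaming
```

New files inherit the layout of the directory they are created in, so an application can select the layout that fits its workload simply by choosing where it writes.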

Lustre’s maturity is boosting its momentum and adoption, according to recent statements from Intel and IDC. Earl Joseph, an HPC industry analyst with IDC, has stated: “Along with IBM’s General Parallel File System (GPFS), Lustre is the most widely used file system. But Lustre is experiencing healthy growth in terms of market share while GPFS remains flat. Lustre is also supported by a large number of OEMs, providing the HPC community with a strong base for growth.” Enterprises are engaging with Lustre and trusting their data to its reliability.

Learn more about Intel® Solutions for Lustre Software

[i] In a failure scenario, applications may experience a pause in I/O until the storage targets have been migrated, but on Lustre this happens quickly.
