Myth Busting: The Truth About Disaggregated Storage


The rise of any nascent technology often generates some amount of confusion, so it’s time to dispel some fallacies around disaggregated storage.

As data centers try to keep pace with mind-boggling, exponential growth in data generation, composable disaggregated infrastructure (CDI) has emerged as one of the hottest new trends in storage.

CDI burst onto the scene as IT managers tried to determine the right data management systems for their needs: on-premises or cloud; public cloud, private cloud or a combination of the two. Regardless of where the physical infrastructure lives, the name of the game now is delivering more performance, efficiency and cost savings from that infrastructure by disaggregating compute, storage and network into virtual resource pools, then provisioning those resources on the fly for better asset utilization and simplified operations.

With any significant pioneering technology comes some amount of confusion and misconception. CDI is no different, and some fallacies have begun to creep into the conversation.

In this post, we’ll attempt to distinguish between fact and fiction. 

Myth 1: Disaggregated Storage Is New and Unproven

This one is verifiably false.

Disaggregated storage describes the decoupling of storage and compute resources from a server without altering the logical connections between them. The practice of composing separate computing and storage resources can be traced to data centers operated by the world’s largest hyperscalers, and disaggregation itself goes back even further, to the mainframe era.

These hyperscalers adopted disaggregated storage after finding that it greatly increased asset utilization and lowered total cost of ownership (TCO); before that, the costs of overprovisioning and day-to-day operations added up fast.

Today, when people discuss disaggregated storage, they’re often referring to NVMe™ over Fabrics (NVMe-oF™), an extension of the NVMe protocol. This open standard sparked a great deal of excitement by delivering faster, more efficient connectivity between servers and storage, which enabled more flexibility to scale up and/or scale out and resulted in better resource utilization. Much of the industry has shown its faith in NVMe through growing market adoption and continued enhancements to product lines.
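To make this concrete, here is a minimal sketch of what attaching disaggregated storage over NVMe-oF can look like on a Linux host. It assumes the open-source nvme-cli tool is installed and that an NVMe/TCP target has already been exported at a hypothetical address and subsystem name; the exact steps vary by fabric and vendor.

```python
# A minimal sketch (not a vendor-specific solution) of attaching a remote
# NVMe-oF namespace from a Linux host, assuming the open-source nvme-cli
# tool is installed and an NVMe/TCP target is already exported at the
# hypothetical address and subsystem name below.
import subprocess

TARGET_ADDR = "192.0.2.10"                      # hypothetical target IP
TARGET_PORT = "4420"                            # common NVMe/TCP port
SUBSYS_NQN = "nqn.2023-01.example.com:pool0"    # hypothetical subsystem NQN

# Ask the target which NVMe subsystems it exposes over TCP.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to one subsystem; its namespaces then show up locally as
# /dev/nvmeXnY block devices, usable like direct-attached SSDs.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Once connected, the remote capacity behaves like local flash, which is exactly why the approach appealed to hyperscalers chasing higher utilization.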

What all this means is that any suggestion that disaggregated storage is somehow untested or risky is simply untrue.

Myth 2: Proprietary CDI Solutions Are Easier

In some ways, this assertion is true.

Contracting with a vendor that provides proprietary CDI technology means an enterprise can get off the ground quickly. The vendor develops and controls its own end-to-end solution as well as the product’s evolution.

Still, there’s a lot of baggage involved. Proprietary systems often force rigid rules on users and can lock clients into walled gardens.

We believe CDI should be open. Open standards are typically good for ecosystems, and especially for end users who seek flexibility and choice among competing vendors. Sure, creating standards requires a lot of time-consuming collaboration and consensus-building among competitors, but end users are ultimately better off because they face fewer compatibility issues. This also helps ensure a technology’s organic growth for years.

Standardization also makes a technology more accessible to smaller players. Disaggregated storage hatched inside the data centers of hyperscalers, but those deep-pocketed companies employ legions of engineers to decouple resources. Many smaller organizations lack equivalent resources, so having multiple vendors offering multiple open-source solutions helps level the playing field.

What it boils down to is that the storage sector must make CDI accessible to organizations of all sizes. To do this, we need to provide open-source solutions.

Myth 3: Orchestration Layers Are Scary

This is another one of those half-truths. 

A lot of confusion has crept into the sector about orchestration layers, which govern the links and interactions between cloud and on-premises components, including networking, servers and virtual machines. Over the last few years, a growing number of IT chiefs have asked: “How do I write orchestration?” or “Who’s the partner I should choose for orchestration?”

Some of the misconceptions about orchestration likely stem from organizations that take a do-it-yourself approach to building their own end-to-end solution. When they struggle to accomplish this, they call it “complicated.”

The complexity and importance of an orchestration layer depend largely on the type of workloads and the kind of offerings involved. If an organization’s offerings are focused on a certain capability and don’t see much dynamic change over time, then there may not be a need for dynamic reuse of resources. 

On the other hand, some organizations, say a cloud company overseeing a multitude of different offerings, might see a large amount of dynamic change within a 24-hour or 30-day period. These companies need to manage resources as efficiently as possible, and orchestration is likely to play a big role.

The point is that disaggregation is the foundation that separates resources, while orchestration is what brings them back together in a composable, manageable way that best fits an organization’s needs.
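To illustrate that compose-and-release cycle, the sketch below uses purely hypothetical Python classes, not any real vendor API, to show an orchestration layer drawing devices from disaggregated pools, binding them into a logical node, and returning them for reuse.

```python
# Conceptual illustration only: hypothetical classes showing how an
# orchestration layer composes a logical node from disaggregated pools
# and later releases the resources for reuse. Not a real vendor API.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    free: list  # identifiers of devices still available in the pool

    def allocate(self, count: int) -> list:
        taken, self.free = self.free[:count], self.free[count:]
        return taken

    def release(self, devices: list) -> None:
        self.free.extend(devices)

@dataclass
class ComposedNode:
    cpus: list
    drives: list

compute = ResourcePool("compute", [f"cpu{i}" for i in range(8)])
storage = ResourcePool("storage", [f"nvme{i}" for i in range(16)])

# Compose a node for a bursty workload, use it, then hand resources back.
node = ComposedNode(cpus=compute.allocate(2), drives=storage.allocate(4))
print(node)
compute.release(node.cpus)
storage.release(node.drives)
```

How elaborate this layer needs to be follows directly from the workload discussion above: static offerings may never exercise the release-and-recompose path, while highly dynamic ones exercise it constantly.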

The Future of CDI

While CDI may seem like a relatively new idea, the underlying concepts are well established; hyperscalers have been doing this for years.

Additionally, it’s a technology that’s poised to become a favorite solution in enterprise data centers of every size.   

For large enterprises, CDI enables the intelligent, dynamic allocation of resources, which is a must for controlling costs, boosting performance, optimizing IT resources and maximizing efficiency.

About the Author

Scott Hamilton, Senior Director, Product Management & Marketing at Western Digital, leads product management and marketing for the Platforms Business Unit, which offers a diverse product portfolio spanning external storage, specialized storage servers and composable disaggregated infrastructure. Scott has more than 30 years of experience in storage, IT and communications technologies. Prior to joining Western Digital, Scott was Vice President, Product Management & Advanced Solutions at Quantum. Scott holds a bachelor’s degree in electrical engineering from Texas A&M University and a master’s degree from the University of Pennsylvania.

