Computational Storage Reinvigorates Storage in a Novel Way

In this special guest feature, Scott Shadley, VP at NGD Systems, discusses the SNIA Computational Storage (CS) working group’s technical progress since its inception and how it plans to make computational storage mainstream over the next year. Scott has spent over 20 years in the semiconductor and storage space in manufacturing, design and marketing. His experience spans over 15 years at Micron and time at STEC. His efforts have helped lead to products in the market with over $300M in revenue. In addition to his position as VP at NGD Systems, Scott also serves on the board of directors for the Storage Networking Industry Association (SNIA).

It’s not very often that a new technology gets its own “official definition” from a well-established standards organization. While the term “definition” may sound very “grammar school” (bringing up old, scary memories of spelling and vocabulary tests), to earn such a definition the technology needs to be quite a breakthrough. And with that, we would like to formally introduce Computational Storage (CS) – a storage architecture that throws the legacy computer storage playbook out the window to offer a faster, more affordable and power-efficient way to store and analyze petabytes of data.

This is NOT Your Father’s Enterprise Storage

In a nutshell, Computational Storage is an IT architecture in which data is processed at the storage device level to reduce the amount of data that has to move between the storage and compute planes. As such, the technology provides a faster and more efficient way to address the unique challenges of our data-heavy world – easing pressure on scarce bandwidth and delivering very low latency by cutting data movement, which can make analytics responses as much as 20 to 40 times faster.
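
To make the architecture concrete, here is a minimal, purely illustrative sketch in Python. The storage and csd objects and the execute() offload call are assumptions rather than any real vendor or NVMe interface; the only point is how much data has to cross the host bus in each path.

```python
# Conceptual sketch of the computational storage idea. The "csd" object
# and its execute() method are hypothetical stand-ins, not a real
# vendor or NVMe API; what matters is what crosses the host bus.

def host_side_filter(storage, predicate):
    """Traditional path: every record moves to the host, then gets filtered."""
    results = []
    for record in storage.read_all():      # the full data set crosses the bus
        if predicate(record):
            results.append(record)
    return results

def device_side_filter(csd, predicate):
    """Computational storage path: the drive filters in place and
    returns only the matching records, so far less data moves."""
    return csd.execute(predicate)          # hypothetical offload call
```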

If you think about it, the history of enterprise storage is like the fairy tale Goldilocks and the Three Bears – at one point it was too hot, then it was too slow, and Computational Storage is just right! Thanks to new flash technology aided by NVMe and NVMe over Fabrics, storage got faster…but not fast enough. Why? Because today’s datacenters, including the massive hyperscale datacenters, rely primarily on traditional server hardware built around the Von Neumann architecture – a 70-year-old design used in nearly all general-purpose servers that has, frankly, never experienced the right kind of change.

Previous attempts to deliver new ways of implementing Von Neumann have not met with much success. Now that we have applications such as AI and ML that work with huge amounts of raw data (structured and unstructured) and require high computing power to “learn,” compute capability has become the bottleneck. In the traditional scale-out model, this issue is addressed by adding nodes for more distributed compute power and more memory. Unfortunately, adding server nodes is costly from both CapEx and OpEx perspectives. Adding more nodes also lengthens interconnects, increasing the time required for data movement and analytics.

Enter Computational Storage – a technology that can deftly organize raw information from sensors (think self-driving cars, video surveillance cameras, traffic signals) into meaningful data. Because the data barely moves, real-time analysis becomes practical and performance improves as input/output bottlenecks shrink. As AI, machine learning and IoT workloads spew forth mind-blowing amounts of data (IDC says that by 2025 the total amount of data will exceed 163 zettabytes – with 95% of it generated by IoT devices), this new technology is the true missing link.

A recent survey by Dimensional Research of more than 300 computer storage professionals brought this challenge to light, showing that bottlenecks can occur at under 10 terabytes. Computational storage adds processing power alongside each host CPU, allowing an organization to ingest all the data it can generate while sending back only what is truly necessary, keeping the “pipes” as open as possible. This lets organizations gather all the raw data needed for analytics and pull out only what delivers value. By comparison, organizations that must move the entire data set around see value-added results delayed. This approach maximizes efficiency, reduces power consumption and lowers operational costs. The resulting “sort, transform and send” model enables real-time data applications that are fast, comprehensive and meaningful.
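
As a rough, hedged illustration of that trade-off (the 10 TB data set, the 5 GB/s link and the 5% reduction factor below are assumed figures, not numbers from the survey), shrinking what has to move to the host shrinks transfer time in direct proportion:

```python
# Back-of-the-envelope illustration of "sort, transform and send".
# All numbers are assumptions chosen only to show the shape of the saving.

raw_bytes        = 10e12   # assume 10 TB of raw data sitting on the drive
link_bytes_per_s = 5e9     # assume ~5 GB/s of effective host link bandwidth
reduction        = 0.05    # assume the drive returns only 5% of the data

full_move_s    = raw_bytes / link_bytes_per_s              # ship everything
reduced_move_s = raw_bytes * reduction / link_bytes_per_s  # ship results only

print(f"move everything:   {full_move_s:7.0f} s")    # 2000 s
print(f"move results only: {reduced_move_s:7.0f} s") # 100 s, a 20x reduction
```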

Official Definition:

The official definition of Computational Storage took only a year to formulate – a fairly speedy process that testifies to the urgent need for such a radically different computer storage technology. A little over a year ago, the Storage Networking Industry Association (SNIA) – a non-profit organization made up of 198 member companies spanning information technology – convened to figure out how to define and set standards around computational storage. After several months of meetings and a bit of debate, it has now formulated the official definition:

  • “…Computational Storage architectures enable improvements in application performance and/or infrastructure efficiency through the integration of compute resources: directly with storage, near the storage or between the host and the storage. These compute resources are outside of the traditional compute and memory architecture.
  • The goal of these architectures is to: enable parallel computation; reduce I/O traffic; and/or to alleviate other constraints on existing compute, memory, storage, and I/O.”

Industries that benefit from Computational Storage

Computational Storage becomes easier to understand and appreciate when applied to several use cases. One industry that requires the power and efficiency of Computational Storage is automotive: today’s ‘smart’ automobiles and the upcoming fully autonomous cars must process loads of data (up to 28TB per day) for analytics, and falling behind could impact the driver’s safety. Some of the companies designing CS architectures can deliver the technology in a small form factor that works alongside SSDs to process loads of data, which suits space-constrained edge environments such as automobiles. Despite the small form factor, a CS solution can offer a 20x or greater improvement in capability, allowing AI-enabled systems to read and analyze data as never before.

Hyperscale data centers that operate at the scale of thousands of physical servers and millions of virtual machines (think Amazon) must execute a wide variety of workloads in parallel. These data centers are starting to use Computational Storage Drives (CSDs) to process petabytes of data, realizing the benefits of smaller form factors that take up less space and power yet still deliver enormous compute power. As such, the tiny but mighty CSDs help increase the computing power of hyperscale architectures running artificial intelligence (AI) and machine learning (ML) applications, which often require operations such as real-time, complex and parallel indexing and pattern matching.
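
A hedged sketch of that parallelism: if each drive can run a pattern match against its own data (the drive.grep() call below is a hypothetical stand-in, not a real API), the host simply fans the request out and merges the small result sets.

```python
# Hypothetical sketch of fanning a pattern match out to many CSDs.
# drive.grep(pattern) is an illustrative stand-in for a drive-local
# search, not a real API; the host only merges the small result sets.

from concurrent.futures import ThreadPoolExecutor

def parallel_scan(drives, pattern):
    """Each drive scans its own data locally; the host merges the matches."""
    with ThreadPoolExecutor() as pool:
        per_drive = pool.map(lambda drive: drive.grep(pattern), drives)
        return [match for matches in per_drive for match in matches]
```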

Retail establishments that need to analyze massive amounts of point-of-sale data in real time – for fraud detection purposes, for instance – may also benefit from the quicker response times of CS. “These applications must scan massive amounts of data to identify the subset of information that is relevant to the query, before executing the analytics request. Moving this volume of data out of the storage system, across the network, and into main host memory incurs time and latency penalties that real-time analytics applications can ill afford,” reports Storage Switzerland.

Content delivery networks (CDNs) are another market leveraging CS technology. Here the technology can help with encryption and Digital Rights Management (used to verify that a user is allowed to access the content). In this case, CS provides better data management by safely unlocking content without sharing the keys. The ability to provide a 40x improvement in key matching per server rack is just the start of this work.

In conclusion – less movement is more:

The fact is, less data movement is critical in today’s data-intensive world. Data movement costs more than just time; it costs money, resources and, sometimes, wasted analytics. Now is the time to take the next step in storage and implement NVMe Computational Storage Drives (NVMe CSDs – or SSDs with intelligence). Following the path of the data and noting how much more is done inside the drive itself shows where data movement time is saved – increasing efficiency and reducing host CPU and memory loads.
