
The Five Blocks of the HPE WDO Solution

This is the sixth and final entry in an insideBIGDATA series that explores the intelligent use of big data on an industrial scale. The series, compiled in a complete Guide, also covers the exponential growth of data, the changing data landscape, and the HPE Workload and Density Optimized (WDO) System. This final entry focuses on the five blocks of the HPE WDO solution.

The HPE WDO system is built from five block types: compute blocks, storage blocks, control blocks, network blocks, and rack blocks.

In addition to these standard blocks, HPE has developed accelerator blocks designed to optimize solution deployment, workload performance, and storage. Unlike the standard compute block (one HPE Apollo 2000 chassis with four XL170r Gen9 servers) and the standard storage block (one HPE Apollo 4200 Gen9 with 28 LFF HDDs or SSDs), optional dense compute, dense storage, or accelerator blocks can be combined to address a wide range of requirements: hot/cold storage, high-latency/low-latency compute, NoSQL, deep learning, and more.

Examples of accelerator blocks include the Moonshot 1500 chassis with 30 m510 cartridges for compute, the Apollo 2000 with XL170r servers and 512GB of memory, the Apollo 2000 with XL190r servers and GPUs, the Apollo 4200 with 6 or 8TB LFF HDDs, and the Apollo 4510 with 3, 4, 6, or 8TB LFF HDDs.


The control block comprises three HPE DL360 Gen9 servers, with an optional fourth server acting as an edge or gateway node, depending on the customer's enterprise network requirements.

The network block consists of two HPE 5940-32QSFP+ 40Gb switches and one HPE 5900AF-48G-4XG-2QSFP+ 1Gb switch. These network blocks can be swapped for 25Gb/100Gb network blocks from HPE or other vendors with similar configurations.

Finally, the rack block consists of either a 1200mm or 1075mm rack and its accessories.
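As a rough illustration of the block-based approach described above, a deployment can be thought of as a bill of materials assembled from block types. The sketch below is hypothetical (not an HPE tool); block names and server counts are taken from the examples in this article, and the helper code itself is an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    """One WDO building block; 'servers' is the node count it contributes."""
    name: str
    category: str  # compute | storage | control | network | rack
    servers: int = 0

# Standard and supporting blocks mentioned in the article
STANDARD_COMPUTE = Block("Apollo 2000 (4x XL170r Gen9)", "compute", servers=4)
STANDARD_STORAGE = Block("Apollo 4200 Gen9 (28x LFF)", "storage", servers=1)
CONTROL          = Block("3x DL360 Gen9", "control", servers=3)
NETWORK          = Block("2x 5940 40Gb + 1x 5900AF 1Gb", "network")
RACK             = Block("1200mm rack", "rack")

def total_servers(blocks):
    """Sum the server counts across all chosen blocks."""
    return sum(b.servers for b in blocks)

# A small hypothetical deployment: two compute blocks, three storage
# blocks, plus the control, network, and rack blocks.
deployment = ([STANDARD_COMPUTE] * 2 + [STANDARD_STORAGE] * 3
              + [CONTROL, NETWORK, RACK])
print(total_servers(deployment))  # 4*2 + 1*3 + 3 = 14
```

Because compute and storage are separate block types, the compute and storage counts above can be scaled independently, which is the core of the WDO system's asymmetric scale-out model.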

Summary

The HPE Elastic platform for big data analytics provides optimal support for modern data processing frameworks including Hadoop, Spark, and NoSQL. A deployment option of this framework, the HPE WDO system enables flexible, independent scale-out of compute and storage, and is ideally suited to deploying and consolidating big data workloads on a multi-tenant analytics platform. Thanks to YARN's multi-tenant capabilities in conjunction with HPE solutions, it is now possible to leverage workload-optimized servers for a variety of use cases, including deep learning with GPU-based accelerator blocks, or fast NoSQL performance in a small footprint with HPE Moonshot.


For existing big data deployments using conventional symmetric architectures, organizations can transition to a truly elastic platform with the HPE WDO system to reduce datacenter footprint, lower operating costs, optimize performance, and efficiently manage rapid data growth. The underlying solution is a common foundation that enables organizations to operate big data infrastructure as a service. To that end, HPE provides deployment accelerators with HPE Insight CMU and BlueData EPIC software to rapidly provision and deploy an elastic, multi-tenant infrastructure.

The modularity of HPE solutions provides flexibility: blocks can be combined to satisfy workload, density, form-factor, compute, memory, and storage needs in a hybrid environment.


You can also download the complete report, “insideBIGDATA Guide to the Intelligent Use of Big Data on an Industrial Scale,” courtesy of Hewlett Packard Enterprise.
