How MPSTOR Delivers Software Defined Storage Across Multiple Services


Most traditional storage providers integrate their solution with compute. MPSTOR takes a different approach by integrating virtualization into the software stack to provide better, more robust infrastructure management. We caught up with William Opperman, CEO and Founder of MPSTOR, to learn more.

insideBIGDATA: William, please tell me about MPSTOR. What products do you offer and who is your audience?

William Opperman: We were founded in 2006 in Cork, Ireland, and we’re truly a pioneer in the field of Software Defined Storage (SDS). Our flagship product is our Orkestra™ cloud management platform that provides data centers with the ability to deliver a unique range of multi-tiered compute and storage services and Service Level Agreements (SLAs) at the proper scale to data center storage consumers and end users. Orkestra enables automated delivery of “Anything as a Service,” allowing cloud operators to create and deliver cost effective, differentiated services.

insideBIGDATA: I understand you just released a new version of your cloud management platform, Orkestra. What can your customers look forward to?

William Opperman: We have just released the Orkestra VMDK version, which allows a full OpenStack cloud to run under VMware. This means a VMware data center operator running the Orkestra IaaS VMDK platform can enhance their existing VMware installations and quickly provide users with an OpenStack-based cloud and self-service dashboard while maintaining their existing VMware infrastructure.

We are continuing our integration of Orkestra with OpenStack and VMware and have two exciting new products ready for release this quarter. The first upgrades Orkestra to support VMware ESXi as a hypervisor, allowing virtual machines qualified under VMware ESXi to be managed by Orkestra. The second is a version of our Orkestra Storage Array Management (SAM) solution running under VMware as a virtual machine.

We are also developing new technologies, built on MPSTOR’s SDS capability, that will allow data-intensive scalable compute such as data analytics to offload storage-specific tasks to the storage array (Storage Offload Engines).

insideBIGDATA: What sets Orkestra apart from the competition out there?

William Opperman: MPSTOR’s founding principles were rooted in “Self Organizing Storage,” an early form of SDS. This has allowed MPSTOR to create the most comprehensive software defined storage solution in the marketplace. True SDS requires four essential components:

• Out-of-band “high availability” controller
• Storage provider plane
• In-band control plane
• Delivery layer providing storage to all consumer types

Most solutions from storage vendors simply provide an automation framework around their storage offering, with limited scope for delivering that storage. MPSTOR offers all four components and greater flexibility in delivering storage to the various consumer types.
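To make the four components concrete, here is a minimal illustrative sketch of how the planes could relate to one another. All class and method names here are hypothetical, invented for illustration; they are not MPSTOR or Orkestra APIs.

```python
# Hypothetical sketch of the four SDS planes; names are illustrative only.

class StorageProviderPlane:
    """Registers back-end arrays and the storage tiers each one offers."""
    def __init__(self):
        self.backends = {}  # backend name -> set of tiers offered

    def register(self, name, tiers):
        self.backends[name] = set(tiers)

class ControlPlane:
    """In-band plane: turns a provisioning request into a placement."""
    def __init__(self, providers):
        self.providers = providers

    def provision(self, tier, size_gb):
        for name, tiers in self.providers.backends.items():
            if tier in tiers:
                return {"backend": name, "tier": tier, "size_gb": size_gb}
        raise ValueError(f"no backend offers tier {tier!r}")

class DeliveryLayer:
    """Exposes a provisioned volume to a consumer type (VM, bare metal, ...)."""
    def attach(self, volume, consumer_type):
        return dict(volume, consumer=consumer_type)

class HAController:
    """Out-of-band controller: removes failed backends from the pool."""
    def __init__(self, providers):
        self.providers = providers

    def evict(self, backend):
        self.providers.backends.pop(backend, None)

# One end-to-end flow: register a backend, provision a volume, deliver it.
providers = StorageProviderPlane()
providers.register("array-a", ["ssd", "hdd"])
control = ControlPlane(providers)
volume = control.provision("ssd", 100)
attached = DeliveryLayer().attach(volume, "virtual-machine")
print(attached["backend"], attached["consumer"])  # array-a virtual-machine
```

The point of the separation is that the out-of-band controller and the provider plane can evolve independently of the in-band request path, while the delivery layer decides how each consumer type actually sees the storage.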

Unlike the competition, MPSTOR did not follow the path of integrating storage with compute; instead, it integrated virtualization into its storage software. This allows MPSTOR to provide a better infrastructure management solution: compute, storage, and networking are all virtualized and can run on a single node, scale out across a data center, or any combination of the two.

insideBIGDATA: Scalable cloud management obviously has an angle in the Big Data world. How do you see Orkestra best fitting in?

William Opperman: “Software Defined Everything” is a framework for building “Anything as a Service.” Orkestra will continue its roadmap from provisioning storage to virtual machines, real machines, storage-centric services, and tenant spaces, through to provisioning storage within storage-centric applications like Hadoop.

MPSTOR is currently working on SDS-enabled “data analytics as a service.” SDS, with its knowledge and control of storage and compute locality, promises to deliver an optimised virtual implementation of Hadoop and other data-intensive scalable compute and storage frameworks.

insideBIGDATA: What about the future of Big Data? Where do you see data analytics heading?

William Opperman: With the growth of data analytics we will see two classes of “compute bound” operations, one “compute centric” and one “storage centric.” This means that storage, alongside its traditional role of storing data, will need to offer storage offload functions that process data without moving it.

Traditional implementations of Hadoop try to solve the compute/IO issue through very tight coupling between processing and storage. That tight-coupling paradigm is at odds with virtualization and with the physical disaggregation of compute, storage, and networking.

Disaggregation, which enables software defined everything, is a prerequisite for “data analytics as a service.” Storage offload processing, together with a normalized API between compute and the storage array to drive it, is the future of high-performance yet disaggregated scalable compute and storage.
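The offload idea above can be shown with a small, purely illustrative sketch: rather than reading all the data across the network and filtering on the compute node, the compute side ships a predicate to an array-side engine, and only matching rows travel back. The `offload_filter` function is a hypothetical stand-in for such an engine, not a real API.

```python
# Hypothetical storage offload: the predicate runs where the data lives,
# so only matching rows cross the network to the compute node.

def offload_filter(storage_rows, predicate):
    """Stands in for an array-side offload engine: filter at the storage."""
    return [row for row in storage_rows if predicate(row)]

# Compute side: 1000 rows live on the "array"; ship the predicate over.
rows = [{"id": i, "temp": 20 + i} for i in range(1000)]
hot = offload_filter(rows, lambda r: r["temp"] > 1015)
print(len(hot))  # 4 rows returned instead of 1000
```

A normalized API for this kind of pushdown is what lets compute and storage stay disaggregated without paying the full data-movement cost on every query.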

Big data offers Big Iron a route back to relevance, making its block storage matter again in a virtualized world where “cheap” object storage is increasingly used in data processing environments.
