Adaptive Computing Introduces Big Workflow to Accelerate Insights


Over at the Adaptive Computing Blog, Jill King writes that Big Workflow brings together HPC and Big Data to accelerate insights.


We’re proud to announce Big Workflow, an industry term coined by Adaptive Computing for an approach that accelerates insights by more efficiently processing intense simulations and big data analysis. Big Workflow derives its name from its ability to solve big data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments, and locations.

Our Moab HPC Suite and Moab Cloud Suite are an integral part of the Big Workflow solution, which unifies all data center resources, optimizes the analysis process, and guarantees services, shortening the time to discovery.

While current solutions solve big data challenges with only cloud or only HPC, Adaptive Computing utilizes all available resources—including bare metal and virtual machines, technical computing environments (e.g., HPC, Hadoop), cloud (public, private, and hybrid) and even agnostic platforms that span multiple environments, such as OpenStack—as a single ecosystem that adapts as workloads demand.

Traditional IT operates in a steady state, with maximum uptime and continuous equilibrium. Big data interrupts this balance, creating a logjam to discovery. Big Workflow optimizes the analysis process to deliver an organized workflow that greatly increases throughput and productivity, and reduces cost, complexity and errors. Even with big data challenges, the data center can still guarantee services that ensure SLAs, maximize uptime and prove services were delivered and resources were allocated fairly.

Rob Clyde, CEO of Adaptive Computing, believes the explosion of big data, coupled with the collision of HPC and cloud, is driving the evolution of big data analytics. He said, “A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately, and cost effectively, but also provides a distinct competitive advantage.”

DigitalGlobe, a leading global provider of high-resolution Earth imagery solutions, uses Moab to dynamically allocate resources, maximize data throughput, and monitor system efficiency as it analyzes its archive of Earth imagery, which spans more than 4.5 billion square kilometers of global coverage. With Moab at the core of its data center, DigitalGlobe has broken down silos of isolated resources and increased its maximum workflow capacity, allowing it to operate at global scale on the timelines its customers need and helping decision makers better understand the planet in order to save lives, resources, and time.

“Moab enables our responsiveness when disaster strikes,” said Jason Bucholtz, principal architect at DigitalGlobe. “With Big Workflow, we have been able to gain insights about our changing planet more rapidly—all without adding new resources to our existing infrastructure.”

According to a hands-on Adaptive Computing survey of more than 400 data center managers, administrators, and users, 91% believe some combination of big data, HPC, and cloud should occur for a better big data solution. This finding underscores the intensifying collision between big data, HPC, and cloud, and is supported by the International Data Corporation (IDC) Worldwide Study of HPC End-User Sites.

“Our 2013 study revealed that a surprising two-thirds of HPC sites are now performing big data analysis as part of their HPC workloads, as well as an uptick in combined uses of cloud computing and supercomputing,” said Chirag Dekate, Ph.D., research manager, High-Performance Systems at IDC. “As there is no shortage of big data to analyze and no sign of it slowing down, combined uses of cloud and HPC will occur with greater frequency, creating market opportunities for solutions such as Adaptive’s Big Workflow.”

With all this news, let’s recap:

  • Big Workflow unifies data center resources, optimizes the analysis process, and guarantees services.
  • DigitalGlobe processes geospatial big data within 90 minutes with Big Workflow to aid first responders during natural disasters.
  • 91% of organizations believe some combination of big data, HPC, and cloud should occur for a better big data solution.

To learn more about Big Workflow, visit the Adaptive Computing website or contact an Adaptive Computing representative today.

You can also download the whitepaper.
