Interview: Adaptive Computing Brings Big Workflow to the Data Center

High Performance Computing (HPC) has traditionally taken place in the realms of research and government. Now Adaptive Computing is heralding a new trend it calls “Big Workflow” that makes it possible for enterprise customers to gain insight into enormous amounts of data through HPC. We sat down with Jill King, VP of Marketing at Adaptive Computing, to get a better understanding of these topics and more.

insideBIGDATA: Adaptive Computing is all about workload management in the data center. How has the technology evolved in the last couple of years?

Jill King: The traditional HPC market has always been steeped in research, higher education and government. There have previously been few instances where HPC has crossed over into the enterprise, but because of big data’s prevalence there and the need to perform big data analysis more quickly and accurately, more organizations are turning to HPC as a big data solution.

insideBIGDATA: Which markets do you see using your products as this technology evolves?

Jill King: While our software has always had a strong presence in research, higher education and government, we’re now seeing greater adoption of Moab in a wider range of industries, including oil and gas, manufacturing, technology software, healthcare and bioinformatics.

insideBIGDATA: Your website mentions Big Data, HPC, and the cloud. How have these technologies merged from Adaptive’s point of view?

Jill King: To solve big data challenges, organizations can no longer take a siloed approach that leverages just the cloud or just HPC; the answer is combining those resources to get to results faster.

insideBIGDATA: This term, “Big Workflow” is a new one. Who came up with it and what does it mean?

Jill King: It was a combined effort from the Adaptive team. Our thought process was that Big Data + a better Workflow = Big Workflow. We coined it as an industry term to denote a faster, more accurate and more cost-effective big data analysis process. It is not a product or trademarked name of Adaptive’s, and we hope it becomes a common term in the industry that is synonymous with a more efficient big data analysis process.

insideBIGDATA: Which organizations can benefit from this particular technology? Any recent success stories you’d like to share?

Jill King: Organizations that benefit in particular from Big Workflow are mid-market enterprises in the aforementioned industries with existing investments in cloud, HPC, and big data.

One of our recent success stories is with DigitalGlobe, a leading global provider of high-resolution Earth imagery solutions that uses Moab to analyze its archived Earth imagery. Each year, the company adds two petabytes of raw imagery to its archives – which already contain more than 4.5 billion square kilometers of global coverage – and turns that into eight petabytes of new product.

When a natural disaster strikes, DigitalGlobe can quickly and efficiently perform the heavy computational lifting required to place imagery in the hands of first responders in less than 90 minutes. This is a case of Moab guaranteeing a mission-critical SLA, and just one of many exciting examples of what Big Workflow can do.
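To make the scheduling idea behind that SLA concrete, here is a minimal, illustrative sketch in Python. It is not Moab’s actual API or configuration, and the job names and parameters are hypothetical; it only shows the general principle of dispatching deadline-bound work (such as disaster-response imagery that must ship within 90 minutes) ahead of routine, best-effort jobs.

# Illustrative sketch only -- not Moab's interface. Deadline-bound jobs are
# ordered ahead of best-effort work so their SLA can still be met.
from datetime import datetime, timedelta

class Job:
    def __init__(self, name, runtime_min, deadline=None):
        self.name = name                # hypothetical job identifier
        self.runtime_min = runtime_min  # estimated runtime in minutes
        self.deadline = deadline        # SLA deadline, or None for best-effort

def dispatch_order(jobs):
    # Deadline-bound jobs first (earliest deadline first),
    # then best-effort jobs, shortest first.
    return sorted(jobs, key=lambda j: (j.deadline is None,
                                       j.deadline or datetime.max,
                                       j.runtime_min))

if __name__ == "__main__":
    now = datetime.now()
    queue = [
        Job("nightly-archive-reprocess", runtime_min=240),
        Job("disaster-response-imagery", runtime_min=60,
            deadline=now + timedelta(minutes=90)),
        Job("batch-analytics", runtime_min=120),
    ]
    for job in dispatch_order(queue):
        print(job.name, "deadline:", job.deadline)

In a production workload manager the same intent would be expressed through queue policies, priorities and SLA or QoS settings rather than application code; the sketch simply shows why deadline-aware ordering matters.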

insideBIGDATA: As you see it, what is the role of the data scientist in this arena?

Jill King: The data scientist is the mastermind behind extracting the right data to get the results the business needs to gain a competitive edge. He or she identifies the optimal path to tackle big data and guides this process from analysis to results. I see the data scientist as a hybrid: part analyst, part engineer and part IT administrator.

insideBIGDATA: As these technologies emerge and evolve, how do you see this role changing going forward?

Jill King: Many enterprises are still creating custom workflows because a data scientist is required to perform the simulation and big data analysis. As the market matures, we see that manual process becoming more automated.

insideBIGDATA: This is really exciting stuff. What’s down the road for Adaptive? What do you have up your collective sleeves?

Jill King: We plan to continue streamlining the simulation and big data analysis process to make it more robust and less manual, making it much easier for IT to respond to the business’s big data needs. We have many new advancements rolling out in 2014, so stay tuned.

 

Click HERE to download the white paper to better understand Adaptive Computing’s vision for meeting the unique workflow requirements of big data applications.
