Big Workflow: More than Just Intelligent Workload Management for Big Data


Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure. A new approach, known as “Big Workflow,” is being created by Adaptive Computing to address the needs of these applications. It is designed to unify public clouds, private clouds, MapReduce-style clusters, and technical computing clusters. Specifically, Big Workflow will:
• Schedule, optimize and enforce policies across the data center
• Enable data-aware workflow coordination across storage and compute silos
• Integrate with external workflow automation tools
Such a solution will provide a much-needed toolset for managing big data applications: shortening timelines, simplifying operations, maximizing resource utilization, and preserving existing investments.
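To make the "data-aware" coordination idea above concrete, the following is a minimal sketch of placement logic that prefers a cluster already holding a job's input data over one that would require moving data across storage silos. All names here (`Cluster`, `place_job`, the cluster inventory) are illustrative assumptions, not Adaptive Computing APIs.

```python
# Hypothetical sketch of data-aware workload placement. The scheduler
# prefers a cluster that already holds the job's dataset, falling back
# to any cluster with sufficient free capacity.
from dataclasses import dataclass, field


@dataclass
class Cluster:
    name: str
    free_cores: int
    datasets: set = field(default_factory=set)  # datasets stored locally


def place_job(job_dataset, cores_needed, clusters):
    """Return the name of the chosen cluster, or None if the job must queue.

    Preference order:
      1. A cluster that already stores the input data (avoids a transfer
         across storage silos).
      2. Otherwise, any cluster with enough free cores.
    """
    local = [c for c in clusters
             if job_dataset in c.datasets and c.free_cores >= cores_needed]
    candidates = local or [c for c in clusters if c.free_cores >= cores_needed]
    if not candidates:
        return None  # no capacity anywhere: hold the job in the queue
    best = max(candidates, key=lambda c: c.free_cores)  # least-loaded wins
    best.free_cores -= cores_needed
    return best.name


# Example inventory: an analytics cluster and a technical computing cluster.
clusters = [
    Cluster("hadoop", free_cores=64, datasets={"clickstream"}),
    Cluster("hpc", free_cores=256, datasets={"seismic"}),
]
```

In this sketch, a job over `clickstream` lands on the `hadoop` cluster even though `hpc` has more free cores, because data locality outranks raw capacity; a real policy engine would weigh many more factors (transfer cost, priorities, SLAs).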

Until recently, the majority of computing in the data center consisted of workloads that did not demand a sophisticated resource allocation strategy. Their signature characteristic was that they were assumed to run indefinitely: applications such as email serving, web hosting, and CRM systems, with no definitive end to their lifetime. Because allocation was simple and permanent, many of these applications could share the same server, without any need for scheduling or resource allocation beyond that of a single server node. The overriding goal for these applications was to maximize uptime.
