Run:ai Launches Full-Stack Solution for Hyper-Optimized Enterprise AI Built on NVIDIA DGX Systems


Run:ai, a leader in compute orchestration for AI workloads, announced the launch of the Run:ai MLOps Compute Platform (MCP) powered by NVIDIA DGX™ Systems, a complete, full-stack AI solution for enterprises. Built on NVIDIA DGX systems and Run:ai Atlas software, Run:ai MCP is an end-to-end AI infrastructure platform that brings the hardware and software layers of AI development and deployment together into a single, seamlessly orchestrated solution, accelerating a company’s ROI from artificial intelligence.

Organizations are increasingly turning to AI to grow revenue and improve efficiency. However, each level of the AI stack, from hardware to high-level software, can create challenges and inefficiencies, with multiple teams competing for the same limited GPU computing time. “Shadow AI,” where individual teams buy their own infrastructure or rely on costly cloud compute resources, has become common. This decentralized approach leads to idle resources, duplicated effort, higher costs and delayed time to market. Run:ai MCP is designed to overcome these potential roadblocks to successful AI deployments.

“Enterprises are investing heavily in data science to deliver on the promise of AI, but they lack a single, end-to-end AI infrastructure to ensure access to the resources their practitioners need to succeed,” said Omri Geller, co-founder and CEO of Run:ai. “This is a unique, best-in-class hardware/software AI solution that unifies our AI workload orchestration with NVIDIA DGX systems — the universal AI system for every AI workload — to deliver unprecedented compute density, performance and flexibility. Our early design partners have achieved remarkable results with MCP, including a 200-500% improvement in GPU utilization and ROI, which demonstrates the power of this solution to address the biggest bottlenecks in the development of AI.”

“AI offers incredible potential for enterprises to grow sales and reduce costs, and simplicity is key for businesses seeking to develop their AI capabilities,” said Matt Hull, vice president of Global AI Data Center Solutions at NVIDIA. “As an integrated solution featuring NVIDIA DGX systems and the Run:ai software stack, Run:ai MCP makes it easier for enterprises to add the infrastructure needed to scale their success.”

Run:ai MCP powered by NVIDIA DGX systems with NVIDIA Base Command is a full-stack AI solution that is available from distributors, is simple to install, and comes with world-class enterprise support, including direct access to NVIDIA and Run:ai experts.

With MCP, compute resources are gathered into a centralized pool that can be managed and provisioned by one team, yet delivered to many users with self-service access. A cloud-native operating system helps IT manage everything from fractions of NVIDIA GPUs to large-scale distributed training. Run:ai’s workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed. The solution provides MLOps tooling while leaving developers free to use their preferred tools through integrations with Kubeflow, Airflow, MLflow and more; a hedged example of how a fractional-GPU job might be submitted is sketched below.
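To make the fractional-GPU idea concrete, here is a minimal, hypothetical sketch of how a data science team might submit a training pod that asks the scheduler for half a GPU on a Kubernetes cluster managed this way. The annotation key ("gpu-fraction"), the scheduler name ("runai-scheduler"), the namespace and the container image are illustrative assumptions, not confirmed details of the Run:ai product.

```python
# Hypothetical sketch only: submit a training pod requesting a fraction of a GPU.
# The annotation key, scheduler name, namespace and image are assumptions for
# illustration and may not match Run:ai's actual API.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig to reach the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="resnet-train",
        annotations={"gpu-fraction": "0.5"},  # assumed fractional-GPU request
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed workload-aware scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:23.10-py3",  # example image
                command=["python", "train.py"],
            )
        ],
    ),
)

# Create the pod in the team's namespace; the scheduler decides when and
# where it runs based on the pooled GPU resources.
client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)
```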

This bundle is the latest in a series of collaborations between Run:ai and NVIDIA, including the certification of Run:ai’s Atlas Platform on the NVIDIA AI Enterprise software suite, which is included with NVIDIA DGX systems.
