Domino Data Lab Announces Hybrid MLOps Architecture to Future-Proof Model-Driven Business at Scale


Domino Data Lab, provider of a leading Enterprise MLOps platform trusted by over 20 percent of the Fortune 100, announced its new Nexus hybrid Enterprise MLOps architecture, which will allow companies to rapidly scale, control, and orchestrate data science work across different compute clusters: in different geographic regions, on premises, and even across multiple clouds.

Despite the attention paid to cloud migration, concerns over cost, security, and regulations compel a growing majority of enterprises to adopt AI infrastructure strategies that straddle on-premises data centers and the cloud. Forrester Consulting found that 66 percent of IT decision makers have already invested in hybrid support for AI workload development, and 91 percent plan to do so within two years. The new Nexus architecture enables Enterprise MLOps for this new reality. It delivers the portability and cost management for AI development and deployment that enterprises require, and the flexibility that data science teams need, to accelerate breakthrough innovations at scale.

“Though the shift to cloud is on, a growing number of enterprises have some type of on-premises and cloud-based architecture currently in place,” said Melanie Posey, Research Director for Cloud & Managed Services Transformation at 451 Research, a part of S&P Global Market Intelligence. “The reality is that cost optimization persists as an ongoing issue for both cloud veterans and cloud beginners.”

Nexus is a highly scalable hybrid Enterprise MLOps platform architecture delivering enterprises the best of both worlds: the cost benefits of on-premises infrastructure and the flexibility to quickly scale to the cloud using a single control point. Customers gain maximum cost optimization by leveraging owned on-premises NVIDIA GPUs, and the ability to move workloads to cloud-based GPUs when additional capacity is needed, all without sacrificing reliability, security, or usability.
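
The release does not describe Nexus's scheduling internals, but the core idea, a single control point that prefers owned on-premises GPUs and bursts workloads to cloud GPUs only when local capacity runs out, can be sketched in a few lines. Everything below (GpuPool, HybridControlPlane, submit) is a hypothetical illustration of that placement logic, not Domino's actual API.

```python
# Hypothetical sketch of burst-to-cloud placement behind a single control point.
# These classes are illustrative only; they are not Domino's Nexus API.
from dataclasses import dataclass


@dataclass
class GpuPool:
    name: str
    total_gpus: int
    used_gpus: int = 0

    def has_capacity(self, gpus: int) -> bool:
        # True if this pool can still fit the requested number of GPUs.
        return self.used_gpus + gpus <= self.total_gpus

    def allocate(self, gpus: int) -> None:
        self.used_gpus += gpus


@dataclass
class HybridControlPlane:
    """Single control point that prefers owned on-prem GPUs and
    only sends work to cloud GPUs when on-prem capacity is exhausted."""
    on_prem: GpuPool
    cloud: GpuPool

    def submit(self, workload: str, gpus: int) -> str:
        # Try the cheapest (owned) pool first, then burst to the cloud pool.
        for pool in (self.on_prem, self.cloud):
            if pool.has_capacity(gpus):
                pool.allocate(gpus)
                return f"{workload} -> {pool.name} ({gpus} GPUs)"
        raise RuntimeError(f"no capacity available for {workload} ({gpus} GPUs)")


if __name__ == "__main__":
    plane = HybridControlPlane(
        on_prem=GpuPool("on-prem-dgx", total_gpus=8),
        cloud=GpuPool("cloud-region-a", total_gpus=64),
    )
    print(plane.submit("train-churn-model", gpus=6))  # fits on premises
    print(plane.submit("tune-hyperparams", gpus=4))   # bursts to the cloud
```

In practice a hybrid control plane also has to account for data locality, security policy, and workload portability, but the placement preference shown here, owned hardware first and cloud only for overflow, is the cost lever the announcement emphasizes.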

“Enterprise data science and IT organizations are consistently asking for more infrastructure flexibility, to optimize compute spend, data security, and avoid vendor lock-in,” said Nick Elprin, CEO and co-founder of Domino Data Lab. “Our Nexus architecture will help our customers unleash data science while future-proofing their infrastructure investments.”

Domino Expands Collaboration with NVIDIA as First Nexus Launch Partner

Domino has already begun development of Nexus with NVIDIA as a launch partner, an effort which will include specific solution architectures validated for NVIDIA technologies, with a release targeted for later this year. Today, enterprise IT teams can learn how to scale data science workloads by taking a free, immediately available, hands-on lab that includes the Domino Enterprise MLOps Platform and the NVIDIA AI Enterprise software suite, accessed on NVIDIA LaunchPad.

To enable further competitive advantage through innovative AI-enabled use cases, Domino has also joined the NVIDIA AI Accelerated program, which enables software and solution partners to leverage the NVIDIA AI platform and its expansive libraries and SDKs to build accelerated AI applications for customers. Domino continues to collaborate with NVIDIA on streamlining the development, deployment, and management of GPU-trained models across a variety of computing platforms, from on-premises infrastructure to edge devices, leveraging Domino and the NVIDIA AI platform, which includes NVIDIA AI Enterprise and NVIDIA Fleet Command. Hybrid MLOps is a continuation of Domino's vision to design innovative, flexible solutions that balance customer needs for the most effective data science work.

“Enterprises are looking for AI solutions that deliver on performance and costs, with a strategy that aligns with their IT policies and practices,” said Manuvir Das, vice president of Enterprise Computing at NVIDIA. “NVIDIA’s collaboration with Domino Data Lab provides customers a powerful hybrid MLOps solution with the flexibility required to maximize productivity throughout the AI development and deployment lifecycle.”
