
Verta Insights Study Reveals that Fewer than Half of Companies Are Ready to Scale Real-time AI Within Three Years

Verta Inc., a leading provider of enterprise model management and operational artificial intelligence (AI) solutions, released findings from the 2022 State of Machine Learning Operations study, which surveyed more than 200 machine learning (ML) practitioners about their use of AI and ML models to drive business success. The study was conducted by Verta Insights, the research practice of Verta Inc., and found that although companies across industries are poised to significantly increase their use of real-time AI within the next three years, fewer than half have adopted the tools needed to manage the anticipated expansion.

In fact, only 45% of survey respondents reported that their company has a data or AI/ML platform team in place to support getting models into production, and just 46% have an MLOps platform in place to facilitate collaboration across stakeholders in the ML lifecycle, suggesting that the majority of companies are unprepared to handle the anticipated increase in real-time use cases.

The survey also revealed that just over half (54%) of machine learning models deployed today enable real-time or low-latency use cases or applications, versus 46% that enable batch or analytical applications. However, real-time use cases are set for a sharp increase, according to the study: more than two-thirds (69%) of participants reported that real-time use cases would grow within the next three years, including 25% who expect a “significant increase” over that period.

“We launched Verta Insights to better understand the critical challenges and emerging issues that organizations face as they seek to realize value from AI-driven business initiatives,” said Rory King, Head of Marketing and Research at Verta. “As we had hypothesized, our MLOps study identified capabilities, such as MLOps platform adoption and the formalization of ML platform teams and governance committees, that leading performers use to their advantage.”

When asked to report how frequently their organizations met financial targets and their success rate in shipping AI-enabled features to intelligent applications, leaders were more than twice as likely to ship AI products or features and three times more likely to meet their required service level agreements (SLAs) than their peers.

“Every smart device has intelligence built into it, and consumers now expect that their interactions with companies take place online, in real time. Over time, we’ve seen how consumer norms have raised expectations for intelligent, digitally based business-to-business interactions as well,” said Manasi Vartak, CEO and Founder of Verta. “As AI adoption scales dramatically, organizations will need to augment their technology stack to include operational AI infrastructure if they intend to achieve top-line benefits through intelligent equipment, systems, products and services.”

Vartak explained that most organizations have spent years investing in foundational aspects of machine learning, such as hiring data science talent to build and train models, and acquiring the associated technology stacks to support them. This is beginning to change.

“The term ‘MLOps’ is often used to describe a model’s lifecycle from initial build through its intended use, but in reality, very few organizations and their enabling technologies are designed to perform the actual operational aspects of machine learning,” Vartak said. “Instead, most companies have focused their efforts on establishing a strong foundation for mastering batch, analytical workloads that are not suited to running real-time, critical applications.”

Technology stacks for operationalizing ML to support real-time applications differ from those used to build and train models, Vartak noted. The latter rely heavily on massive computational power, such as graphics processing units (GPUs), combined with specialized analytics engines for large-scale data processing. By contrast, operational machine learning needs to be treated as agile software that must undergo rigorous testing, be subject to stringent security measures, operate with high reliability and make predictions with sub-millisecond response times.

“The demand for more ML platform teams signals a shift in the market, as it underscores the need for unique skills and technology to achieve operational AI,” Vartak said. “Realizing the value that these teams bring will ensure that companies fare much better in delivering real-time responsiveness to their customers, adhering to Responsible AI principles and complying with the coming wave of AI regulations.”

Sign up for the free insideBIGDATA newsletter.
