Improving Your Odds of ML Success with MLOps


In this special guest feature, Harish Doddi, CEO, Datatron, discusses what CEOs need to understand about using MLOps. He also shares insights on how to use MLOps to gain competitive advantage and provides tips on how to implement it. Over the past decade, Harish has focused on AI and data science. Before Datatron, he worked on the surge pricing model for Lyft, the backend for Snapchat Stories, the photo storage platform for Twitter, and designing and developing human workflow components for Oracle. Harish completed his master's degree in computer science at Stanford, where he focused on systems and databases.

Machine learning is a lofty, unreachable goal in some executives' minds; they see it as a nice-to-have that's just too complex to execute. It doesn't have to be that way, however, thanks to Machine Learning Operations (MLOps). MLOps systematizes the ML lifecycle, defining processes that make ML development more productive and reliable. But that alone is no guarantee of success. Business leaders need to understand several important aspects of MLOps to make it work for their enterprises.

Difficulties with MLOps

Enterprises need to dedicate people and resources to MLOps. Sometimes, enterprise leaders just assume it will all magically work out, but that's not a given. Specifically, you need to invest in people with the right skill sets. MLOps is a skill that needs to be developed: to do it well, you need to understand the machine learning portion of the AI models, but you also need to understand the operations portion. It can be difficult to find people who understand all of this.

To implement MLOps successfully, you'll need to do some planning in advance. Carefully consider the various contingencies and possible outcomes before you initiate deployment, so your organization is prepared.

MLOps success requires culture change

Generally, when machine learning is involved, many different people take part. One of the most important adjustments to an organization's culture when introducing MLOps is being able to demonstrate the separation of duties. Traditionally, AI was seen as a project for the AI group or the data science group – but that's no longer true.

You'll need to make sure you can carefully separate duties – because in some cases, the data scientists' priorities aren't the same as the business leaders' priorities, and those may differ again from an operations standpoint.

These days, there are many stakeholders involved: operations, engineering, line of business – even the regulatory compliance people. Each of them will have different priorities, so a one-size-fits-all approach won't work. How you bridge this priorities gap is a key question. Because everyone needs to work together, this is the culture adjustment to aim for first.

MLOps: Understanding what’s important

It's crucial to understand that when someone develops the first version of the model, it's not the final version; it's the first draft. When stakeholders push the draft to production and see how it behaves, they learn from it and take those lessons back to the development environment. It's a highly iterative process.

In a production environment, many things are changing, including data and user behavior. The things they observe in development, they may not observe in production – and they may unlock new insights. So, it's important to remember that it's always an iterative process.
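As a concrete illustration of how that feedback loop can be fed, here is a minimal Python sketch of a production drift check that compares an input feature's live distribution against its training baseline. The feature, the population-stability-index metric, and the 0.2 threshold are hypothetical choices for illustration, not a prescribed approach.

```python
import numpy as np

def psi(baseline, production, bins=10):
    """Population Stability Index: a rough measure of how far the
    production distribution of a feature has drifted from training."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid log(0) for empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical usage: flag the model for review when drift is high
baseline_values = np.random.normal(12, 3, 10_000)    # stand-in for training data
production_values = np.random.normal(15, 4, 10_000)  # stand-in for live traffic
if psi(baseline_values, production_values) > 0.2:     # 0.2 is a common rule of thumb
    print("Feature drift detected – route this model back to development")
```

A check like this is what turns "we'll learn from production" into a routine signal rather than a surprise.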

A second important point is that you need to adopt MLOps processes. This journey is key for AI success, because things become more difficult the longer it takes to adopt these best practices. Here's one example: If data scientists have the right set of tools in their development environment, they can move quickly and iterate fast on their models, but they don't see the same thing in the production environment once other teams get involved. That mismatch unnecessarily creates friction between the data science teams and the rest of the organization – engineering, operations, infrastructure and others. That is why the faster you can adopt best practices and standardization, the better off you'll be in terms of easing friction that could occur down the line.
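One common form of that standardization is agreeing on the contract a model must satisfy before it leaves development, so engineering and operations can deploy and monitor every model the same way. The interface below is a hypothetical sketch of such a convention; the class names, methods, and placeholder scoring logic are assumptions, not a reference to any particular platform.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class DeployableModel(ABC):
    """Minimal contract every team-built model must satisfy so the
    serving and monitoring layers can treat all models uniformly."""

    @abstractmethod
    def predict(self, features: Dict[str, Any]) -> Any:
        """Score a single request; the serving layer only calls this."""

    @abstractmethod
    def metadata(self) -> Dict[str, str]:
        """Name, version and owner, for deployment and audit records."""

class ChurnModel(DeployableModel):
    """Hypothetical example of a model that honors the contract."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # placeholder cutoff

    def predict(self, features):
        score = 0.1 * features.get("support_tickets", 0)  # placeholder logic
        return {"churn": score > self.threshold, "score": score}

    def metadata(self):
        return {"name": "churn", "version": "0.3.0", "owner": "data-science"}
```

With a shared contract like this, a model that works in the data scientist's environment behaves the same way when operations deploys it.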

The third important point is that auditing happens across huge volumes of data and across different business units, and it can happen at the model level, too. You need to be able to show evidence and accountability for any questions the auditing team might ask. For instance, if the model loses money during a particular time period, you need to be able to explain why that happened and what actions, if any, were taken.
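One way to have that evidence on hand is to record every scoring request with enough context to reconstruct a decision later. The sketch below is a simplified illustration, not a specific product's API; the field names, file-based storage, and example values are assumptions.

```python
import datetime
import hashlib
import json

def log_prediction(model_name, model_version, features, prediction,
                   log_file="audit_log.jsonl"):
    """Append one audit record per prediction so reviewers can tie an
    outcome back to the exact model version and inputs that produced it."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        # Hash of the inputs helps prove the record wasn't altered later
        "features_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,
        "prediction": prediction,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a pricing model
log_prediction("surge_pricing", "1.4.2", {"hour": 18, "zone": "downtown"}, 1.8)
```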

Reap the MLOps benefits

Why do many AI and ML deployments fail? It's more often an issue of culture and process than of technology. To successfully adopt ML, you need the right systems, resources and skills, and this is where MLOps can provide a significant advantage. The above recommendations will help you make the needed shifts in culture and remind you of the iterative process involved. These changes will help you deploy MLOps and reap all the ensuing business benefits.

