Why We Need ML Ops: 4 Things to Consider When Testing AI


In this special guest feature, Stephan Jou, CTO of Interset, a Micro Focus company, explores what businesses should consider when deploying production ML pipelines and testing AI. Interset is a leading-edge cybersecurity and In-Q-Tel portfolio company that uses machine learning and behavioral analytics. Jou holds an M.Sc. in Computational Neuroscience and Biomedical Engineering and a dual B.Sc. in Computer Science and Human Physiology, all from the University of Toronto. He has held advisory positions on NSERC Strategic Networks, helps set goals for NSERC Strategic Research Grant research topics in analytics and security for Canada, and was an invited participant in the 2018 G7 Multistakeholder Conference on Artificial Intelligence.

MLOps, a compound of “machine learning” and “operations,” is an emerging best practice in the enterprise space that helps data science leaders effectively develop, deploy, and monitor data models. According to new research, the MLOps market is expected to keep growing in the coming years, reaching almost $4B by 2025. With such rapid growth, it’s important that businesses prioritize MLOps innovation now.

Why is MLOps so important?

A recent study found that 89 percent of global senior IT decision-makers surveyed believe that AI and machine learning are critical to how organizations run their IT operations.

While it is tempting to think of a machine learning model as a black box, in reality it is a pipeline with many components. Just as DevOps emerged from the need to provide a framework for the software development lifecycle, MLOps has emerged as a framework and set of best practices for developing and deploying machine learning systems. Machine learning development and deployment comprises a complex set of people, processes, and technologies that, as in the world of software development, has a lifecycle that needs to be managed, monitored, and optimized in order to be effective. Now that businesses have accepted the value of AI and ML, they should focus on extracting the promised value from those ML systems through MLOps.

How can businesses better test AI?

With MLOps adoption in the enterprise showing no signs of slowing down, here are four ways companies can start testing AI more effectively and efficiently:

Focus on model deployment

Machine learning mathematical models have a lifecycle that spans from hypothesis to testing, to learning, to coding, to staging, to production. The entire end-to-end deployment process needs to be tracked, monitored, and, ideally, automated.

These mathematical models need to be tested and reproduced on new datasets not seen during the initial development, both pre-production and continuously afterwards to detect model drift (when the conditions or assumptions of the original model no longer apply). Like source code and regression tests for software, models need to be version controlled and automatically, continuously tested.
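
As a concrete illustration, here is a minimal sketch of what such an automated drift check might look like, assuming a scikit-learn-style model; the baseline score and tolerance are illustrative placeholders rather than any particular team's tooling:

```python
# Minimal sketch: re-score a released model on data it never saw during
# development and compare against the accuracy recorded at release time.
# BASELINE_ACCURACY and DRIFT_TOLERANCE are illustrative placeholders.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.91   # accuracy recorded when this model version shipped
DRIFT_TOLERANCE = 0.05     # acceptable drop before raising an alert

def check_for_drift(model, X_new, y_new):
    """Score the model on unseen data and flag a drop beyond tolerance."""
    current_accuracy = accuracy_score(y_new, model.predict(X_new))
    drifted = (BASELINE_ACCURACY - current_accuracy) > DRIFT_TOLERANCE
    return current_accuracy, drifted
```

In a pipeline, a check like this would run automatically against each new batch of labelled data, much as a regression test runs on each code commit, with the model artifact itself pulled from version control or a model registry.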

Prioritize model security and governance

Attacks against AI and machine learning models continue to be exposed both by hackers and by leaders in the research community. As MLOps grows in prominence within the IT industry, it’s important that professionals incorporate security into the entire AI lifecycle. Given machine learning’s dependency on data, data privacy and ethical considerations must be evaluated frequently. Many AI attacks rely on vulnerabilities that can be easily prevented through regular reviews and testing.

Monitor model performance

In production, because machine learning is rarely binary and is judged by predictive accuracy, it is crucial to monitor model performance. Businesses should continually question how accurately the machine learning model is performing in production on actual data. IT professionals should also measure whether performance is decaying or improving over time. For example, a model that executes quickly on small amounts of data might struggle with the large number of data points seen in production, or with new or changed data conditions that increase the computational load. It is important to have monitoring systems in place to measure and record model performance and scalability.
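
As a minimal sketch of such monitoring, assuming labelled outcomes eventually arrive for production predictions (the window size and alert threshold here are purely illustrative):

```python
# Minimal sketch: track a rolling accuracy over recent production predictions
# and flag decay. Window size and threshold are illustrative values.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window_size=1000, alert_threshold=0.85):
        self.outcomes = deque(maxlen=window_size)   # recent correct/incorrect flags
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual):
        """Record whether a production prediction turned out to be correct."""
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def is_decaying(self):
        accuracy = self.rolling_accuracy()
        return accuracy is not None and accuracy < self.alert_threshold
```

A monitor like this can feed dashboards or alerts, and a sustained decay signal can be wired to trigger investigation or retraining.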

Automate to scale

Automation through MLOps is critical to scale machine learning-based production systems. As AI becomes more and more democratized and important to businesses, and not the exclusive domain of large companies like Google, Facebook and Amazon, MLOps will become a critical requirement for the mass deployment and management of those AI systems.

During the initial stages of model development, many of the tasks mentioned above are performed by human data scientists or data engineers, using manual tooling and processes. While this is acceptable during the initial exploratory development phase, over-reliance on human and manual methods will be unnecessarily limiting in production, especially as the number of models grows to the hundreds, or thousands.
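
To make the contrast concrete, the sketch below expresses the same train-evaluate-promote steps a data scientist might run by hand as a loop that could be scheduled across many model configurations; the configurations and promotion threshold are illustrative, not a real model registry:

```python
# Minimal sketch: a hand-run train/evaluate/promote workflow expressed as an
# automatable loop over model configurations. Configs and threshold are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_CONFIGS = [
    {"name": "logreg_v1", "estimator": LogisticRegression(max_iter=1000)},
    {"name": "forest_v1", "estimator": RandomForestClassifier(n_estimators=100)},
]
PROMOTION_THRESHOLD = 0.9  # illustrative acceptance bar

def run_pipeline(config, X_train, X_val, y_train, y_val):
    """Train one configuration, evaluate it, and decide whether it qualifies for promotion."""
    model = config["estimator"].fit(X_train, y_train)
    score = model.score(X_val, y_val)
    return {"name": config["name"], "score": score, "promote": score >= PROMOTION_THRESHOLD}

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    # With hundreds or thousands of configurations, this loop is scheduled and
    # monitored by the MLOps pipeline rather than run interactively by a person.
    for cfg in MODEL_CONFIGS:
        print(run_pipeline(cfg, X_train, X_val, y_train, y_val))
```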

Currently, MLOps tools and practices are dramatically impacting the IT world, increasing productivity through automation and intelligence and giving enterprises a stronger advantage over competitors. Decision makers and IT leaders must consider the role MLOps will play in their business and prioritize model performance, security, and scalability as MLOps continues to evolve and grow in the market.
