The ModelOps Movement: Streamlining Model Governance, Workflow Analytics, and Explainability


The gains that enterprises stand to realize from use cases of cognitive computing and machine learning are as manifold as they are lucrative. Organizations can employ these technologies to optimize the management of distributed retail or branch locations, supply relevant recommendations for tempting cross-selling and up-selling opportunities, and process workflows more effectively and efficiently at scale to boost customer satisfaction.

What many are beginning to realize, however, is that these gains only materialize when firms can solve the core challenges that have long dogged statistical Artificial Intelligence: model governance, explainability, and workflow analytics.

The ModelOps movement either directly or indirectly addresses each of these three potential barriers to cognitive computing success. According to Indico CEO Tom Wilde, it’s not merely an enterprise savior in this regard but an impending requisite for employing these technologies in production settings across industries.

“As a vendor, if you haven’t built this into your product natively, you’re in trouble,” Wilde reflected about ModelOps. “It’s quite difficult to retrofit if you haven’t thought through how you’re going to expose the inner workings [of models] in a business-friendly way to satisfy the needs of the enterprise for explainability.”

Data Lineage and Model Governance

The explainability issue is paramount in many verticals like finance or insurance in which there are regulations for disclosing how organizations arrived at decisions impacting customers—like approval or denial for extending credit offers, for example. ModelOps delivers insight into this concern partly by elucidating data lineage about how models function with both production data and initial training datasets, which naturally inform the way they operate online. Some dedicated ModelOps solutions give companies “the ability to monitor their models and provide training datasets they used for building those into the platform,” Datatron CEO Harish Doddi revealed. “With this metadata the system inherently knows the assumptions in which the model was built and the different scenarios for the model to be effective.”

Such intelligent platforms can surface alerts when models crafted with data for a customer base in California are deployed for a customer base in Ohio, for example. The workflow analytics notion also impacts this concept by providing insight into “how custom models are performing and how accurate they are,” Wilde mentioned. Poor results serve as the impetus for organizations to trace production outputs to initial training data to make adjustments.
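
To make the idea concrete, here is a minimal sketch, in Python, of the kind of training-versus-production check described above; the state field, thresholds, and data are illustrative assumptions rather than any particular ModelOps platform’s API.

```python
# Minimal sketch of a training-vs-production population check.
# The metadata field (customer state) and thresholds are illustrative assumptions.
from collections import Counter

def population_mismatch(train_values, prod_values, threshold=0.5):
    """Flag a deployment when production traffic is dominated by categories
    that were rare or absent in the training data."""
    train_counts = Counter(train_values)
    train_total = sum(train_counts.values())
    train_freq = {k: v / train_total for k, v in train_counts.items()}

    prod_counts = Counter(prod_values)
    prod_total = sum(prod_counts.values())
    # Share of production traffic coming from under-represented categories
    unseen = sum(v for k, v in prod_counts.items() if train_freq.get(k, 0.0) < 0.01)
    return (unseen / prod_total) > threshold

# A model trained almost entirely on California customers, now serving Ohio traffic
train = ["CA"] * 980 + ["NV"] * 20
prod = ["OH"] * 900 + ["CA"] * 100
if population_mismatch(train, prod):
    print("ALERT: production population differs from training assumptions")
```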

Training in Production

Workflow analytics yield answers to common questions like how many documents are processed in healthcare, for example, as well as how accurate that processing is and the total time AI systems spend on it. With provenance spanning from production to training settings, users with unsatisfactory results can trace this information to the models producing them to “completely introspect and remediate a machine learning model within a dashboard,” Wilde noted. This capability to curate machine learning models with both production data and the original training data used to construct them was described by Doddi as a ModelOps trend directly impacting enterprise AI.

Before, businesses “couldn’t detect things in an online fashion,” Doddi explained. “Something would happen on the business side and people would have to backtrack it, like damage control.” Now, however, interactive ModelOps dashboards surface alerts that monitor risk, drift, and bias, making that process preemptive so organizations can prevent undesired outcomes. Consequently, ModelOps solutions give organizations the liberty to “develop models in whatever frameworks or tools you’re comfortable with and let us worry about the production,” Doddi remarked. “We’ll also provide governance capabilities of connecting the production and development world so that you have that centralized dashboard view of what’s going on across your models.”
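
The following sketch illustrates one way such a centralized, preemptive check might look, scoring drift for every registered model with a two-sample Kolmogorov-Smirnov test; the registry structure, score distributions, and threshold are assumptions made for illustration, not a depiction of Datatron’s product.

```python
# Hedged sketch of a centralized drift check across multiple registered models.
import numpy as np
from scipy.stats import ks_2samp

# Illustrative registry: each model keeps the score distribution from training time
registry = {
    "credit_risk_v3": {"train_scores": np.random.default_rng(0).normal(0.4, 0.1, 5000)},
    "churn_model_v7": {"train_scores": np.random.default_rng(1).normal(0.6, 0.1, 5000)},
}

def check_drift(model_name, production_scores, p_threshold=0.01):
    """Two-sample KS test between training-time and production score distributions."""
    baseline = registry[model_name]["train_scores"]
    stat, p_value = ks_2samp(baseline, production_scores)
    if p_value < p_threshold:
        return f"ALERT [{model_name}]: score distribution drifted (KS={stat:.3f})"
    return f"OK [{model_name}]: no significant drift detected"

rng = np.random.default_rng(42)
print(check_drift("credit_risk_v3", rng.normal(0.40, 0.1, 2000)))  # generated like training data
print(check_drift("churn_model_v7", rng.normal(0.75, 0.1, 2000)))  # shifted upward vs. training
```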

Anomaly Detection

Compelling solutions in this space actually govern machine learning models with machine learning models. Doddi referenced a model trained for anomaly detection that users can leverage in ModelOps platforms to do “continuous analysis on different metrics the system is computing, and also look at the logs for the production data and training data and detect any sort of anomalies.”

This capability enables organizations to discern why a specific cognitive computing “model is meant to behave in a certain way and it did so two weeks back versus the model’s behavior this week,” Doddi said. Marketing models based on customer micro-segmentation could be impacted by specific campaigns or varying customer attributes, for example. In this case, the anomaly detection model would notify users of the change in behavior of their underlying AI models so they can take action.
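
A minimal sketch of that idea, assuming a simple z-score detector over a model’s own metric history, might look like the following; the metric, windows, and threshold are illustrative rather than drawn from any vendor’s implementation.

```python
# Minimal sketch of "governing models with models": an anomaly check over a
# model's own metric history (e.g., daily offer-acceptance rate).
import statistics

def metric_anomaly(history, recent, z_threshold=3.0):
    """Flag the recent window if its mean sits far outside the historical distribution."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1e-9  # guard against a flat history
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > z_threshold, z

# Two weeks back the segmentation model approved roughly 30% of offers; this week ~55%
baseline_weeks = [0.29, 0.31, 0.30, 0.28, 0.32, 0.30, 0.31,
                  0.29, 0.30, 0.32, 0.31, 0.30, 0.29, 0.31]
this_week = [0.52, 0.56, 0.55, 0.58, 0.53, 0.54, 0.57]

anomalous, score = metric_anomaly(baseline_weeks, this_week)
if anomalous:
    print(f"ALERT: underlying model behavior changed (z = {score:.1f})")
```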

Explainability

By pairing workflow analytics with specific metrics centered on explainability, organizations can use ModelOps for unparalleled understanding of how specific cognitive computing models are impacting production results. For instance, utilizing workflow analytics to gauge the productivity of insurance brokers seeking to rapidly generate quotes “helps you determine what kind of ROI that you’re generating from this investment in automation,” Wilde commented. “It also helps you measure and determine over time if you’re improving your quality of service.”

Detailed metrics about the way customized models (relying on transfer learning to be trained by individual users) are performing offer granular insight into the explainability issue of why models deliver the results they do. These metrics go beyond alerts to complement workflow outcomes with fine-grained visibility into specific models and “what do I do to understand their predictions and perhaps modify or update those predictions,” Wilde noted. “The two are linked together obviously because a more effective set of models will lead to a higher accuracy, which leads to a better business outcome.”
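
One hedged way to picture the linkage Wilde describes is to join per-model accuracy metrics with a workflow-level outcome and flag the models dragging that outcome down; the metric names and figures below are invented purely for illustration.

```python
# Illustrative sketch: tie per-model accuracy to a workflow outcome (quote turnaround)
# so that weak models, not just weak workflows, become visible.
workflow_metrics = [
    {"model": "quote_extractor_v2", "accuracy": 0.96, "avg_quote_minutes": 4.2},
    {"model": "risk_classifier_v5", "accuracy": 0.81, "avg_quote_minutes": 11.7},
]

def models_needing_attention(metrics, min_accuracy=0.90, max_minutes=6.0):
    """Return models whose accuracy or turnaround time undermines the workflow's ROI."""
    return [
        m["model"] for m in metrics
        if m["accuracy"] < min_accuracy or m["avg_quote_minutes"] > max_minutes
    ]

print(models_needing_attention(workflow_metrics))  # ['risk_classifier_v5']
```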

Human Intervention

The merit of coupling workflow analytics with model metrics for explainability is illustrated by a straight-through processing use case that ascertains whether models have high enough confidence scores to process results downstream without human intervention. For instance, a healthcare worker on-boarding patients with a workflow based on a deep learning model may want to assess its performance for specific fields via workflow analytics. “If those metrics are suggesting to you that some of the fields are low confidence and not straight through, you want to be able to go back to your model and focus just on those fields and how to improve your training data approach to improve those fields,” Wilde observed.
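
A minimal sketch of such a straight-through processing gate, assuming per-field confidence scores from the underlying model, might look like this; the field names and threshold are illustrative assumptions.

```python
# Minimal sketch of a straight-through-processing gate driven by field-level confidence.
def route_document(field_confidences, threshold=0.90):
    """Return the low-confidence fields; an empty list means the document goes straight through."""
    return [field for field, conf in field_confidences.items() if conf < threshold]

# Illustrative extraction confidences for a patient on-boarding document
patient_record = {"name": 0.99, "date_of_birth": 0.97, "insurance_id": 0.72, "diagnosis_code": 0.88}

needs_review = route_document(patient_record)
if needs_review:
    print(f"Route to human review; revisit training data for fields: {needs_review}")
else:
    print("Straight-through: process downstream automatically")
```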

Although humans can always monitor the specific outcomes of models using techniques similar to those described by Wilde, automation’s objective is to minimize human involvement while maintaining the accuracy of model predictions and the correctness of ensuing actions. ModelOps effectively monitors both to facilitate model governance, explainability, and workflow analytics for in-production cognitive computing models. Consequently, it steadily decreases the level of human involvement in machine learning processes. It may begin with alerts resulting in “waiting for human action,” Doddi said. “But when a sufficient number of human actions come to the system, then we know for this type of alert the human is behaving like this. With AI, as new alerts come in the future, we give a suggestive approach that this is what we think you should do.”

And, once the confidence levels are high enough, the models can eventually take those actions without human intervention.
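
A simple sketch of that progression, from waiting on humans, to suggesting the dominant action, to acting autonomously once confidence is high enough, might look like the following; the alert types, actions, and thresholds are assumptions made for illustration only.

```python
# Hedged sketch of the alert-resolution loop: record human responses per alert type,
# suggest the dominant action once enough examples exist, and auto-execute only
# above a high confidence bar.
from collections import Counter, defaultdict

history = defaultdict(Counter)  # alert_type -> Counter of human actions taken

def record_human_action(alert_type, action):
    history[alert_type][action] += 1

def resolve(alert_type, min_examples=20, auto_threshold=0.95):
    actions = history[alert_type]
    total = sum(actions.values())
    if total < min_examples:
        return ("wait_for_human", None)          # not enough evidence yet
    action, count = actions.most_common(1)[0]
    confidence = count / total
    if confidence >= auto_threshold:
        return ("auto_execute", action)          # confident enough to act alone
    return ("suggest", action)                   # surface as a recommendation

for _ in range(30):
    record_human_action("score_drift", "retrain_model")
record_human_action("score_drift", "ignore")

print(resolve("score_drift"))  # ('auto_execute', 'retrain_model') at roughly 97% confidence
```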

About the Author

Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.
