Monitaur Launches GovernML to Guide and Assure Entire AI Life Cycle 

Monitaur, an AI governance software company, announced the general availability of GovernML, the latest addition to its ML Assurance platform, designed for enterprises committed to responsible AI. Offered as a web-based, SaaS application, GovernML enables enterprises to establish and maintain a system of record of model governance policies, ethical practices, and model risk across their entire AI portfolio. 

As AI deployments accelerate across industries, so too do efforts to establish regulations and internal standards that ensure fair, safe, transparent, and responsible use. 

  • Entities ranging from the European Union to New York City and the state of Colorado are finalizing legislation that codifies into law practices espoused by a wide range of public and private institutions. 
  • Corporations are prioritizing the need to establish and operationalize governance policies across AI applications in order to demonstrate compliance and protect stakeholders from harm.

“Good AI needs great governance,” said Monitaur founding CEO Anthony Habayeb. “Many companies have no idea where to start with governing their AI. Others have a strong foundation of policies and enterprise risk management but no real enabled operations around them. They lack a central home for their policies, evidence of good practice, and collaboration across functions. We built GovernML to solve for both.” 

The Importance of AI Governance Today 

Effective AI governance requires a strong foundation of risk management policies and tight collaboration between modeling and risk management stakeholders. Too often, conversations about managing the risks of AI focus narrowly on technical concepts like model explainability, monitoring, or bias testing. This narrow focus understates the broader business challenge of life cycle governance and overlooks the need to prioritize policies and enable human oversight. 

“While there are foundations for risk management and model governance in some sectors, the execution of these is quite manual,” offered David Cass, former banking regulator for the Federal Reserve and CISO at IBM. “We are now seeing more models, with increasing complexity, used in more impactful ways, across more sectors that are not experienced with model governance. We need software to distribute the methods and execution of governance in a more scalable way. GovernML takes what is best of proven methods, adds for the new complexity of AI, and software-enables the entire life cycle.” 

“The emergence of and necessity for AI governance is not simply a result of AI investments or AI regulations; it is a clear example of a broader need to synergize risk, governance and compliance software categories overall,” said Bradley Shimmin, chief analyst, AI Platforms, Analytics, and Data Management at Omdia. “Considering software as a stand-alone industry and comparing its regulation relative to other major sectors or industries, software’s impact-to-regulation ratio is an outlier. GovernML offers a very thoughtful approach to the broader AI problem; it also puts Monitaur in an attractive position for future expansion within this much broader theme.” 

GovernML for Building and Managing Policies for AI Ethics 

Available today, GovernML’s integration into the Monitaur ML Assurance platform supports a full life cycle AI governance offering, covering everything from policy management through technical monitoring, testing, and human oversight. 

By centralizing policies, controls, and evidence across all advanced models in the enterprise, GovernML makes it possible to manage responsible, compliant, and ethical AI programs. 

Highlights enable business, risk and compliance, and technical leaders to: 

  • Create a comprehensive library of governance policies that map to specific business needs, including the ability to immediately leverage Monitaur’s proprietary controls based on best practices for AI and ML audits. 
  • Provide centralized access to model information and proof of responsible practice throughout the model life cycle. 
  • Embed multiple lines of defense and appropriate segregation of duties in a compliant, secure system of record. 
  • Gain consensus and drive cross-functional alignment around AI projects. 
