The Need for Transparency and Explainability in EU AI Regulation

In this contributed article, Anita Schjøll Abildgaard, CEO and Co-Founder of Iris.ai, argues that while legislators work towards governance that enables appropriate, effective oversight without stifling innovation, organizations working on AI technology have a responsibility to develop it ethically. With an emphasis on transparency and explainability, companies can ensure AI technology offers the most benefit to the most people.

The ModelOps Movement: Streamlining Model Governance, Workflow Analytics, and Explainability

In this contributed article, editorial consultant Jelani Harper discusses how the ModelOps movement addresses, directly or indirectly, three potential barriers to cognitive computing success: model governance, explainability, and workflow analytics.

Genesis of a Model Intelligence Platform – Truera

In this start-up highlight piece, we discuss how a CMU professor and his former grad student are ushering in a new era of responsible AI and helping companies address bias in their AI models. This is the short story of the genesis of Truera.

XAI: Are We Looking Before We Leap?

This four-part report, “XAI: Are We Looking Before We Leap?,” from our friends over at SOSA highlights the challenges and advances in regulation, relevant use cases, and the emerging technologies taking the mystery out of AI.

New Survey of Data Science Pros Finds that AI Explainability is their Top Concern

In late October 2020, ahead of its annual Wing Data Science Summit, venture capital firm Wing conducted its “Chief Data Scientist Survey” of 320 of the most senior data scientists at global corporations and venture-backed startups. AI explainability emerged as their leading concern.

Addressing AI Trust, Systemic Bias & Transparency as Business Priorities

Our friend Dr. Stuart Battersby, CTO of Chatterbox Labs (an enterprise AI company), reached out to us to share how his company built a patented AI Model Insights Platform (AIMI) to address the lack of explainability and trust, systemic bias, and vulnerabilities within any AI model or system.

Truera Launches Model Intelligence Platform to Solve Machine Learning’s Black Box Problem

Truera, which provides the Model Intelligence platform, emerged from stealth to launch its technology solution that removes the “black box” surrounding machine learning (ML) and provides intelligence and actionable insights throughout the ML model lifecycle. The platform is already deployed at, and delivering value to, a number of early Fortune 100 customers in banking and insurance.
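
Truera's platform itself is proprietary, but the underlying question it tackles, how much each input feature drives a black-box model's predictions, can be sketched with open-source tooling. Below is a minimal illustration using scikit-learn's permutation importance on synthetic data; it assumes scikit-learn is installed and does not reflect Truera's actual methods.

```python
# Illustrative only: a generic, model-agnostic attribution sketch,
# not Truera's proprietary method.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A synthetic "black box": a boosted-tree classifier on toy data.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy degrades; bigger drops mean the model leans
# more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Permutation importance is model-agnostic: it treats the classifier purely as a prediction function, which is what makes it applicable to "black box" models in the first place.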

AI Transparency will Lead to New Approaches

In this contributed article, nationally recognized entrepreneur and software developer Charles Simon contends that transparency, coupled with the necessary algorithms to mimic vision, hearing, learning, planning, and even imagination, will enable AI to evolve into its next phase.

Do You Trust and Understand Your Predictive Models?

To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, our friends over at H2O.ai have written an exciting eBook, “An Introduction to Machine Learning Interpretability, Second Edition.” This report defines key terms, introduces the human and commercial motivations for the techniques, and discusses predictive modeling and machine learning from an applied perspective, focusing on the common challenges of business adoption, internal model documentation, governance, validation requirements, and external regulatory mandates.
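
For a concrete taste of what “interpretability techniques” can look like in code, here is a minimal sketch of one classic, model-agnostic approach, the global surrogate: a small, human-readable decision tree trained to mimic a complex model's predictions. This is a generic illustration built on scikit-learn and synthetic data, not an excerpt from the H2O.ai eBook.

```python
# Illustrative sketch of a "global surrogate": a shallow, readable tree
# trained to mimic a complex model. Generic example, not from the eBook.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=1)

# The opaque model whose behavior we want to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Train a shallow tree on the black box's *predictions*, not the true
# labels, so the tree approximates the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

The fidelity score indicates how faithfully the surrogate tracks the black box; a low-fidelity surrogate should not be trusted as an explanation.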