The Need for Transparency and Explainability in EU AI Regulation

In this contributed article, Anita Schjøll Abildgaard, CEO and Co-Founder of Iris.ai, argues that while legislators work towards governance that enables appropriate, effective oversight without stifling innovation, organizations building AI technology have a responsibility to develop it ethically.
By emphasizing transparency and explainability, companies can ensure AI technology offers the most benefit to the most people.

Video Highlights: Introduction to Explainable AI

Responsible AI is reaching new heights these days. Companies have started exploring Explainable AI as a means to better explain model results to senior leadership and increase their trust in AI algorithms. This workshop presentation, conducted by Supreet Kaur, Assistant Vice President at Morgan Stanley, provides an overview of the field, its importance today, and some practical techniques you can use to implement it.
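To make "practical techniques" concrete, here is a minimal sketch of one widely used model-agnostic explainability method, permutation feature importance, using scikit-learn. This is an illustrative example, not taken from the workshop itself; the dataset and model choices are assumptions for demonstration only.

```python
# Sketch: permutation feature importance with scikit-learn.
# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops mean the model relies more heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Rank features by mean importance, most influential first.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Rankings like these give non-technical stakeholders a simple, model-agnostic answer to "what is the model paying attention to?", which is often the first step toward the trust the presentation discusses.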

Video Highlights: Minimize Risk and Accelerate MLOps With ML Monitoring and Explainability

In the presentation below, Amit Paka, Chief Product Officer and Co-founder of our friends over at Fiddler AI, speaks at the Machine Learning in Finance Summit about the importance of monitoring and explainable AI (XAI).
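One basic building block of ML monitoring is detecting when live feature distributions drift away from the training data. The sketch below illustrates this with a two-sample Kolmogorov-Smirnov test from SciPy; it is a hedged, generic example, not a description of Fiddler AI's implementation, and the synthetic data and 0.01 threshold are assumptions for illustration.

```python
# Sketch: feature-drift detection via a two-sample KS test.
# Compare a training-time reference window against a production window.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference window
live_feature = rng.normal(loc=0.5, scale=1.0, size=5000)   # shifted production window

stat, p_value = ks_2samp(train_feature, live_feature)
drifted = p_value < 0.01  # flag drift when the distributions differ significantly
print(f"KS statistic={stat:.3f}, p={p_value:.2e}, drift={drifted}")
```

In practice a monitoring system runs checks like this per feature on a schedule and alerts when drift is detected, since drifted inputs are a leading indicator of degraded model performance.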

Genesis of a Model Intelligence Platform – Truera

In this start-up highlight piece, we discuss how a CMU professor and his former grad student are ushering in a new era of responsible AI and helping companies address bias in their AI models. This is the short story of the genesis of Truera.

XAI: Are We Looking Before We Leap?

This four-part report, “XAI: Are We Looking Before We Leap?,” from our friends over at SOSA highlights the challenges and advances in regulation, relevant use cases, and emerging technologies taking the mystery out of AI.

New Survey of Data Science Pros Finds that AI Explainability is their Top Concern

In late October 2020, venture capital firm Wing conducted a survey, “Chief Data Scientist Survey,” of 320 of the senior-most data scientists at both global corporations and venture-backed startups, in advance of its annual Wing Data Science Summit. AI explainability came out on top as the leading concern.

Addressing AI Trust, Systemic Bias & Transparency as Business Priorities

Our friend Dr. Stuart Battersby, CTO of Chatterbox Labs (an Enterprise AI Company), reached out to us to share how his company built its patented AI Model Insights Platform (AIMI) to address the lack of explainability and trust, systemic bias, and vulnerabilities within any AI model or system.

Truera Launches Model Intelligence Platform to Solve Machine Learning’s Black Box Problem

Truera, which provides the Model Intelligence platform, emerged from stealth to launch its technology solution that removes the “black box” surrounding machine learning (ML) and provides intelligence and actionable insights throughout the ML model lifecycle. The platform is already deployed at, and delivering value to, a number of early customers among the Fortune 100 in banking and insurance.

AI Transparency will Lead to New Approaches

In this contributed article, nationally recognized entrepreneur and software developer Charles Simon contends that transparency – coupled with the necessary algorithms to mimic vision, hearing, learning, planning, and even imagination – will enable AI to evolve into its next phase.