The Catalyst for Widespread AI Adoption


In this special guest feature, Dr. Josh Sullivan from Modzy discusses why understanding how AI comes to a decision is no longer a matter of curiosity but an essential element of nearly all AI applications. Modzy is an enterprise AI platform for machine learning model deployment, security, management, and governance at scale. Modzy also includes a marketplace of powerful pre-trained and re-trainable AI models from dozens of machine learning companies. Previously, Josh led Booz Allen’s analytics practice, driving the vision, strategy, investments, and delivery of complex technology and analytics programs for clients across public sector and commercial industry verticals. A central focus of Josh’s career has been improving how technology and analytics can be applied to advance the nation’s capabilities in cyberspace, healthcare, national intelligence, and delivering services to citizens.

Simple AI already powers much of the technology we use every day: cell phones that authenticate users via facial recognition, cars with self-driving capabilities, chatbots, smart household appliances, automated machinery. Whether the average person knows it or not, they are likely engaging with some form of self-learning technology.

This presents both opportunities and challenges. While these technologies increase efficiency, removing the human element from everyday decision-making implies a certain level of trust in the machines making those decisions.

The issue of understanding how AI comes to a decision is no longer just about curiosity; it must be an essential element of nearly all AI applications. AI is being used to power decisions that significantly impact lives. For example, AI could be the deciding factor in whether an individual’s mortgage loan application is approved or not. Decisions about a patient’s course of treatment are informed by AI-accelerated analysis. In high-stakes decisions, reasoning must be traceable to enable AI auditability.

One technique with very real potential to transform the world of machine learning is explainability, which is emerging as a catalyst for AI adoption. Explainability strives to make sense of what led an AI model to a certain prediction or decision. Understanding how machine learning reasons against real-world data helps build trust between people and models. Explainability ultimately enables transparency.

We have a unique perspective on explainability, informed by our research on adversarial machine learning: if an adversary knows how to fool an AI model, they’ve figured out how the model thinks. Our solution uses adversarial AI to quickly probe how a model makes its predictions, then explains the model’s outputs by surfacing the input features that most affect those predictions. This process sheds light on which factors an AI model weighs most heavily when making a prediction.
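The core idea of surfacing the most influential input features can be illustrated with a simple perturbation-based sketch. Everything below is hypothetical: the `predict` function is a stand-in linear scorer, and this is a generic sensitivity probe, not Modzy’s actual adversarial method.

```python
def predict(features):
    """Stand-in model: a fixed linear scorer (illustrative only)."""
    weights = [0.8, 0.1, -0.5, 0.05]
    return sum(w * x for w, x in zip(weights, features))

def feature_importance(predict_fn, features, eps=1e-3):
    """Estimate each feature's influence by nudging it slightly
    and measuring how much the model's prediction shifts."""
    base = predict_fn(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += eps
        scores.append(abs(predict_fn(perturbed) - base) / eps)
    return scores

scores = feature_importance(predict, [1.0, 2.0, 0.5, 3.0])
```

The largest scores mark the inputs the model is most sensitive to, which is the information an explainability tool would present to a reviewer.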

Explainability is also critical to making AI governable, accountable, and compliant. For highly regulated industries, transparency, explainability, and auditability are prerequisites for meeting compliance requirements.

The impact of explainability extends beyond a basic understanding of how a model operates. An explainability tool can also pinpoint features of a model that may introduce bias, enabling retraining to mitigate negative effects. Understanding which data influenced a model’s prediction, and where bias enters, yields information that can then be used to train robust models that are more trusted, reliable, and hardened against adversarial attacks.
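One way a tool might operationalize the bias check described above is to compare per-feature importance scores against a list of protected attributes. This is a minimal sketch under assumed inputs: the feature names, scores, and threshold are all illustrative, not taken from any real system.

```python
def flag_bias(importances, feature_names, protected, threshold=0.2):
    """Return protected features whose importance exceeds a threshold,
    signalling the model may be leaning on them and warrant retraining."""
    flagged = []
    for name, score in zip(feature_names, importances):
        if name in protected and score >= threshold:
            flagged.append(name)
    return flagged

flags = flag_bias(
    importances=[0.8, 0.1, 0.5, 0.05],
    feature_names=["income", "zip_code", "age", "tenure"],
    protected={"zip_code", "age"},
)
```

A non-empty result would prompt a closer audit of the training data and, potentially, retraining with that feature removed or reweighted.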

To truly leverage the power of AI and drive widespread adoption, we must be able to trust AI-powered decisions and predictions. Without understanding outputs and decision processes, there will never be true confidence in AI-enabled decision-making. Explainability is critical in moving forward into the next phase of AI adoption.
