How to Transfer Fundamental AI Advances into Practical Solutions for Healthcare


In this special guest feature, Dave DeCaprio, CTO and Co-founder, discusses what it really takes to make AI that physicians trust. Dave has more than 20 years of experience transitioning advanced technology from academic research labs into successful businesses. His experience spans genome research, pharmaceutical development, health insurance, computer vision, sports analytics, speech recognition, transportation logistics, operations research, real-time collaboration, robotics, and financial markets. Dave has been involved in several successful startups and has consulted for and advised both small and large organizations on how to innovate with technology for maximum impact. Dave graduated from MIT with a degree in Electrical Engineering and Computer Science and currently lives in Austin, TX.

Healthcare is one of the most cited applications for AI. When researchers announce a new algorithmic advance, they don’t say, “This will further optimize click-through rates.” They always lead with “This could help doctors better identify the right treatments for their patients.”  Unfortunately, statements like this gloss over the significant challenges that go beyond the algorithms when applying AI to healthcare. 

What does it really take to make AI that physicians trust?

Two years ago, the Centers for Medicare & Medicaid Services (CMS) decided to find out and launched the Artificial Intelligence (AI) Health Outcomes Challenge, the largest healthcare-focused AI challenge in history. The $1.6 million contest prioritized creating “explainable artificial intelligence solutions to help front-line clinicians understand and trust AI-driven data feedback.” This was not an academic exercise — unplanned hospital admissions and adverse events are a $200 billion problem that impacts nearly 32% of Medicare beneficiaries. The contest attracted more than 300 of the world’s leading technology, healthcare and pharmaceutical organizations, including IBM, Mayo Clinic, Geisinger, Merck, Accenture and Deloitte.

The competition produced some cutting-edge AI algorithms. But competitors also had to invest significant effort in other areas — understanding the nuances of healthcare data, producing trustworthy explanations for the predictions, and rigorously evaluating algorithmic bias and AI ethics concerns — all issues paramount to making AI a reality in healthcare.

Understanding the nuances of healthcare data

The challenge was powered by 10 years of historical Medicare claims data. Buried within the hundreds of different files and billions of data points were many salient details of patients' health histories, but uncovering those required an understanding of the intricate details of how the Medicare system works.

For example, a specialized coding system is used to describe home health visits. Hidden within that coding system is information about a patient's ability to perform "activities of daily living." This information can be useful in predicting adverse events, but only if the data science team knows how to extract it from the codes, which requires combining healthcare domain expertise with knowledge drawn from the research literature. That extraction is critical when engineering features that are as informative as possible for the learning algorithms.
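As a rough illustration of this kind of feature engineering, the sketch below derives patient-level "activities of daily living" (ADL) features from home-health claim codes. The code format, the position that carries the ADL level, and the severity mapping are all invented for illustration — the real Medicare coding system is far more intricate and requires genuine domain expertise to interpret.

```python
# Hypothetical: assume the third character of a home-health claim code
# encodes the patient's ADL impairment level. This mapping is invented
# for illustration, not the real Medicare specification.
ADL_LEVELS = {"A": 0, "B": 1, "C": 2}  # none / partial / severe (assumed)

def adl_features(claim_codes):
    """Aggregate per-claim ADL levels into patient-level model features."""
    levels = [
        ADL_LEVELS[code[2]]
        for code in claim_codes
        if len(code) >= 3 and code[2] in ADL_LEVELS
    ]
    if not levels:
        return {"adl_max": None, "adl_mean": None, "adl_claims": 0}
    return {
        "adl_max": max(levels),                    # worst observed impairment
        "adl_mean": sum(levels) / len(levels),     # typical impairment
        "adl_claims": len(levels),                 # how much evidence we have
    }

print(adl_features(["1HBX1", "1HCX2", "2HAX0"]))
```

The point is not the arithmetic but the dependency: without knowing which characters carry clinical meaning, the codes are opaque strings and the signal stays buried.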

Building trustworthy, data-backed predictions

Accuracy can be measured with a few simple statistics, but building AI that clinicians trust requires hundreds of hours of engagement with practicing clinicians. No matter how many iterations of an interface are built, the technology ultimately has to pass the strict eye test of physicians, nurses, care managers and social workers. Scrutinizing clinicians will demand transparency, along with specific evidence from patient records for each feature used to justify any prediction. Keep in mind, too, that medical professionals are seeking technology that can accurately summarize a patient's entire history, not just produce a single prediction.
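One way to make that demand concrete is to pair every prediction with the record entries behind each contributing feature, so a clinician can verify the claim rather than trust a bare number. The sketch below is a minimal, hypothetical version of that idea — the feature names, weights, and record entries are invented, and real systems would draw contributions from the model itself.

```python
# A hedged sketch: each contributing feature in a risk prediction is
# tied back to the specific patient-record evidence that triggered it.
def explain_prediction(risk, contributions, record):
    """Rank contributions by magnitude and attach source evidence.

    contributions: list of (feature, weight) pairs (assumed given by the model)
    record: mapping of feature -> human-readable evidence from the chart
    """
    ranked = sorted(contributions, key=lambda fw: abs(fw[1]), reverse=True)
    return {
        "risk": risk,
        "evidence": [
            {"feature": f, "weight": w, "source": record.get(f, "no record found")}
            for f, w in ranked
        ],
    }

# Invented example data for illustration only.
record = {
    "recent_inpatient_stay": "Inpatient claim, 2020-03-02, 4-day stay",
    "polypharmacy": "11 active prescriptions in claims history",
}
report = explain_prediction(
    0.42,
    [("polypharmacy", 0.08), ("recent_inpatient_stay", 0.19)],
    record,
)
print(report)
```

The design choice worth noting: evidence comes from the patient's own record, not from generic feature descriptions, because that is what a scrutinizing clinician will check first.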

Addressing fairness and bias in healthcare

Algorithmic fairness has been in the spotlight recently, and is particularly important in healthcare settings where AI models could be used to help prioritize limited resources for early interventions. Many experts are beginning to acknowledge that healthcare data, and in particular, the underlying data used to train these models, reflects our healthcare system’s historical biases and inequities in terms of access and delivery of healthcare.

A 2019 study by Ziad Obermeyer of UC Berkeley and colleagues at Brigham and Women's Hospital demonstrated that using predicted healthcare costs as a proxy for health needs produces racially biased models — at the same predicted cost, Black patients were sicker than white patients — and proposed methods to avoid this bias. Findings like these must be taken into account when evaluating final models for bias based on race, ethnicity, gender, age and disability status.
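A simple audit in the spirit of that study asks: among patients assigned the same score, does measured health need differ by group? If it does, the label the model was trained on is acting as a biased proxy. The sketch below runs that check on toy data — the scores, needs, and groups are invented, and a real audit would use validated clinical measures of need.

```python
from collections import defaultdict

def need_by_group_at_score(scores, needs, groups, lo, hi):
    """Mean measured health need per group among patients scored in [lo, hi).

    If one group's mean need is systematically higher at the same score,
    the training label is a biased proxy for actual need.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for s, n, g in zip(scores, needs, groups):
        if lo <= s < hi:
            totals[g] += n
            counts[g] += 1
    return {g: totals[g] / counts[g] for g in counts}

# Toy data for illustration: four patients with identical scores but
# different measured need (e.g., a chronic-condition count).
audit = need_by_group_at_score(
    scores=[0.5, 0.5, 0.5, 0.5],
    needs=[3, 3, 5, 5],
    groups=["group_a", "group_a", "group_b", "group_b"],
    lo=0.4, hi=0.6,
)
print(audit)  # equal scores, unequal need -> biased proxy label
```

Repeating this across score bins and across race, ethnicity, gender, age and disability status is the kind of rigorous bias analysis the challenge criteria called for.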

Transferring fundamental AI advances into actionable insights

The CMS AI Challenge included all of these factors in their judging criteria. To their credit, CMS understood that a standard Kaggle-style competition based on getting the highest accuracy metrics numbers on a leaderboard wasn’t going to address the core issues confronting AI in healthcare. The competition demonstrated the necessity for pairing strong algorithms with a deep understanding of the domain, highlighted by transparent and explainable visualizations of the predictions and rigorous analyses of algorithmic bias. These criteria are exactly what is needed to transfer fundamental AI advances into practical working solutions for healthcare.


