AI and Healthcare: Figure Out the Problem First

When someone goes to the hardware store to buy a drill, they are really looking for the best solution for making a hole. The drill just appears to be the best tool for the job. These days, along with blockchain, deep learning, natural language processing, computer vision and a slew of other fancy terms, artificial intelligence (AI) is bandied about as a panacea for the challenges that healthcare faces. But like the drill, these technologies are simply tools to solve some problems, and are not always the best tools for a specific problem.

Some problems can be best solved using a set of established rules; the use of rules can be categorized as a deterministic approach. Other problems need predictive or probabilistic approaches that get more accurate as additional data is fed into the system. For instance, decision support for treating a disease like gout can be based on a simple rules-based protocol (such as IF this THEN that). Probabilistic tools such as deep learning would likely be overkill. It’s true that as new guidance emerges for the treatment of gout, the rules will have to be updated manually. But if those changes are infrequent, using a rules-based approach rather than a probabilistic AI approach will likely be not only cheaper, but more predictable and accurate.
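A rules-based protocol of this kind can be sketched as an ordered list of IF/THEN statements evaluated against patient data. The rules below are illustrative placeholders invented for this sketch, not clinical guidance:

```python
# Minimal sketch of a deterministic, rules-based decision-support step.
# The conditions, thresholds, and wording are illustrative only.

def recommend(patient):
    """Return the first recommendation whose condition matches the patient."""
    rules = [
        # (condition, recommendation) -- IF this THEN that
        (lambda p: p["flare_active"] and not p["on_ult"],
         "treat acute flare; defer urate-lowering therapy"),
        (lambda p: p["serum_urate"] > 6.0 and p["flares_per_year"] >= 2,
         "consider starting urate-lowering therapy"),
    ]
    for condition, recommendation in rules:
        if condition(patient):
            return recommendation
    return "no rule matched; refer to clinician"

patient = {"flare_active": False, "on_ult": False,
           "serum_urate": 7.2, "flares_per_year": 3}
print(recommend(patient))  # -> consider starting urate-lowering therapy
```

When guidance changes, an engineer edits the `rules` list by hand. That is the trade-off described above: every recommendation is explicit and auditable, but nothing updates itself from new data.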

Other complex tasks such as automating the work of medical scribes or patient coordinators will need sophisticated AI techniques like machine learning, natural language processing and language comprehension. Similarly, to help a radiologist make accurate readings of the volume of images coming out of the latest imaging solutions, sophisticated probabilistic approaches that leverage computer vision, image processing and machine learning are indeed applicable. Where probabilistic algorithms can be trained automatically with new data to adapt, deterministic rules require careful manual modification by experts and engineers to accommodate new scenarios. The good news is that techniques are emerging that marry the two.

Other important considerations when deciding which tools to use are the level of accuracy needed in the solution, and whether the solution will have human oversight. If a chatbot, for instance, is being used to reschedule an appointment or have a conversation about insurance authorization with only the patient, probabilistic approaches that get it wrong occasionally may be acceptable. But if the chatbot is automating a clinical conversation, poor accuracy could lead to serious consequences. In these situations, you either need human oversight to review the chatbot's conclusions, or you need deterministic approaches where the chatbot doesn't make mistakes (but can't handle every possible scenario). A radiologist reading an image to make a diagnosis that will affect a patient's treatment is very different from a coordinator scheduling a visit or a CT scan, and appropriate AI approaches and human oversight will vary.
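One common way to operationalize this oversight decision is to route a probabilistic model's output based on its confidence and on whether the conversation is clinical. The thresholds and labels below are assumptions chosen for illustration, not recommended values:

```python
# Sketch: routing a chatbot's output by confidence and conversation type.
# Thresholds are illustrative assumptions; real values need validation.

def route(confidence, clinical=False,
          admin_threshold=0.7, clinical_threshold=0.95):
    """Decide whether the bot acts on its own or escalates to a human."""
    threshold = clinical_threshold if clinical else admin_threshold
    if confidence < threshold:
        return "escalate to human"
    # Even high-confidence clinical output gets human review here.
    return "draft for human review" if clinical else "handle automatically"

print(route(0.82))                 # administrative task, confident enough
print(route(0.82, clinical=True))  # same confidence, clinical context
```

The point of the sketch is that the same model output is treated differently depending on the stakes: an administrative reschedule goes through, while a clinical answer at the same confidence is escalated.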

Predictive, probabilistic approaches provide valuable and actionable insights that are already being adopted to engage patients earlier in their care journeys. But for clinical scenarios that involve diagnosis or treatment planning, extreme care is warranted. An algorithm that assists a radiologist is quite different from one that intends to replace a radiologist.

User experience is also often overlooked when using AI, especially with the probabilistic ‘black-box’ methods such as machine learning. When a machine learning algorithm — a black box — suggests a treatment or diagnosis with no explanation on how it arrived at that recommendation, both provider and patient can be hesitant to accept that guidance. On the other hand, deterministic rules-based solutions can clearly spell out why a recommendation is being made, since they are essentially a consistent, explicit set of logical statements. To address this user experience challenge, product managers have to come up with creative approaches to win users’ confidence. For instance, if a ‘black-box’ suggests a diagnosis for a patient, it might also show the profile of other similar patients that contributed to its learning.
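The similar-patients idea above can be approximated with a nearest-neighbor lookup over the model's training cases, shown alongside the black-box suggestion. The features and records below are synthetic placeholders, and distance in raw feature space is a simplification:

```python
# Sketch: surfacing the most similar training cases next to a black-box
# suggestion, one way to build user confidence. Data is synthetic.
import math

def nearest_cases(query, cases, k=2):
    """Return the k stored cases closest to the query in feature space."""
    return sorted(cases,
                  key=lambda c: math.dist(query["features"], c["features"]))[:k]

cases = [
    {"id": "A", "features": [61, 7.1], "diagnosis": "gout"},
    {"id": "B", "features": [34, 4.8], "diagnosis": "no gout"},
    {"id": "C", "features": [58, 6.9], "diagnosis": "gout"},
]
query = {"features": [60, 7.0]}  # e.g. age and serum urate of a new patient
for case in nearest_cases(query, cases):
    print(case["id"], case["diagnosis"])  # the two most similar cases
```

A display like this does not open the black box, but it gives the clinician concrete, comparable cases to judge the suggestion against.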

AI-enabled healthcare will certainly get more robust over time and enhance the effectiveness of everyone in the broader ecosystem, and even improve outcomes for patients — that is, after all, the ultimate goal. The good news is that we are well on our way down that path. But hard questions should be asked when considering AI approaches. These questions include but are not limited to: What is the problem being solved? Can it be solved in a cheaper or faster way without AI? How accurate does the AI have to be to contribute meaningfully as a solution? How can the users — clinicians or patients — trust a solution based on AI? Will the AI-based solution have human oversight, and how will the need for that oversight change over time?

About the Author

Kulmeet Singh is the CEO and Founder of Twistle. Kulmeet has spent the last decade in healthcare IT strategy, M&A, and product creation, starting with the founding of Medremote, a company focused on changing the economics of medical transcription using the cloud, speech recognition and machine learning. He has degrees in Economics from the University of Chicago and in Computer Science from Columbia University.
