Research Highlights: Using Theory of Mind to improve Human Trust in Artificial Intelligence


Artificial Intelligence (AI) systems are threaded throughout modern society, informing decisions in low-risk interactions such as movie recommendations and chatbots as well as high-risk environments like medical diagnosis, self-driving cars, drones, and military operations. But it remains a significant challenge to develop human trust in these systems, particularly because the systems themselves cannot explain, in terms humans can grasp, how a recommendation or decision was reached. This lack of trust becomes problematic in critical situations involving finances or healthcare, where AI decisions can have life-altering consequences.

To address this issue, eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI develops models whose explanations aim to shed light on the underlying mechanisms of AI systems, bringing transparency to the process so that results can be interpreted by expert and non-expert end users alike.

New research by a team of UCLA scientists aims to boost human trust in these increasingly common systems by improving upon existing XAI approaches. Their study was recently published in the journal iScience: “CX-ToM: Counterfactual explanations with theory-of-mind for enhancing human trust in image recognition models.”

“Humans can easily be overwhelmed by too many, or too detailed, explanations. Our interactive communication process helps the machine understand the human user and identify user-specific content for explanation,” says Song-Chun Zhu, the project’s principal investigator and a professor of statistics and computer science at UCLA.

Zhu and his team at UCLA set out to improve existing XAI models by posing explanation generation as an iterative process of communication between the human and the machine. They use the Theory of Mind (ToM) framework to drive this communication dialog. ToM explicitly tracks three important aspects at each dialog turn: (a) the human’s intention (or curiosity); (b) the human’s understanding of the machine; and (c) the machine’s understanding of the human user.
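
As an illustration of this kind of turn-by-turn bookkeeping, the minimal Python sketch below maintains the three mental-state estimates across an explanation dialog. It is not the authors’ CX-ToM implementation; the DialogState fields, the `machine` and `user` objects, and their methods are hypothetical names used only to make the idea concrete.

```python
# Illustrative sketch only (not the authors' CX-ToM implementation).
# The `machine` and `user` objects and their methods are hypothetical
# stand-ins for the image recognition model and the human participant.

from dataclasses import dataclass, field

@dataclass
class DialogState:
    """Mental-state estimates maintained at each dialog turn."""
    human_intention: str = "unknown"                            # (a) what the user is curious about
    human_model_of_machine: dict = field(default_factory=dict)  # (b) user's current picture of the machine
    machine_model_of_human: dict = field(default_factory=dict)  # (c) machine's current picture of the user

def explanation_dialog(machine, user, max_turns=5):
    """Run an iterative explanation dialog until the user is satisfied."""
    state = DialogState()
    for turn in range(max_turns):
        question = user.ask()                                   # user asks about a prediction
        state.human_intention = machine.infer_intent(question)  # update (a)
        explanation = machine.explain(question, state)          # tailor content to the user via (c)
        feedback = user.respond(explanation)                    # user signals understanding or confusion
        state.human_model_of_machine["last_explanation"] = explanation  # update (b)
        state.machine_model_of_human[turn] = feedback           # update (c)
        if feedback == "satisfied":
            break
    return state
```

In this sketch, each turn refines the machine’s estimate of what the user already knows, so later explanations can be shorter and more targeted rather than repeating detail the user has already absorbed.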

“In our framework, we let the machine and the user solve a collaborative task, but the machine’s mind and the human user’s mind each have only partial knowledge of the environment. Hence, the machine and the user need to communicate with each other in a dialog, using their partial knowledge; otherwise they would not be able to optimally solve the collaborative task,” said Arjun Reddy Akula, the UCLA Ph.D. student who led this work in Prof. Zhu’s group. “Our work will make it easier for non-expert human users to operate and understand AI-based systems, and will improve human trust in them. We believe our interactive ToM-based framework provides a new way of thinking about designing XAI solutions.”

The group’s latest development is the culmination of five years of research in their UCLA laboratory.
