Research Highlights: Using Theory of Mind to Improve Human Trust in Artificial Intelligence

eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI methods produce explanations that aim to shed light on the underlying mechanisms of AI systems, bringing transparency to the process and making results easier to interpret for experts and non-expert end users alike. New research by a team of UCLA scientists aims to boost human trust in these increasingly common systems by building on XAI with a Theory of Mind approach. Their study was recently published in the journal iScience.
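To make the idea of an "explanation" concrete, here is a minimal sketch of one common XAI pattern, a global surrogate model: a simple, human-readable model is trained to mimic a complex black-box model so its overall decision logic can be inspected. This is a generic illustration only, not the UCLA team's Theory of Mind method; the dataset and model choices are hypothetical.

```python
# A minimal surrogate-model sketch (generic XAI illustration, not the study's method).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A "black-box" model whose internal reasoning is hard to read directly.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow decision tree trained to mimic the black box's predictions
# acts as a human-readable explanation of its overall behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Print the surrogate's rules so an end user can inspect the decision logic.
print(export_text(surrogate, feature_names=list(X.columns)))
```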

XAI: Are We Looking Before We Leap?

This four-part report, “XAI: Are We Looking Before We Leap?,” from our friends over at SOSA highlights the challenges and advances in regulation, relevant use cases, and the emerging technologies taking the mystery out of AI.