Generative AI Models Are Built to Hallucinate: The Question is How to Control Them

In this contributed article, Stefano Soatto, Professor of Computer Science at the University of California, Los Angeles, and a Vice President at Amazon Web Services, discusses how generative AI models are designed and trained to hallucinate, making hallucinations a natural product of any generative model. Rather than trying to prevent generative AI models from hallucinating, he argues, we should be designing AI systems that can control them. Hallucinations are indeed a problem – a big problem – but one that an AI system that includes a generative model as a component can control.
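The article argues for system-level control rather than a specific mechanism, but a minimal sketch can illustrate the idea of a generative model as one component inside a controlling system. Everything below is hypothetical and illustrative – the names generate_draft, KNOWN_FACTS, and answer_with_control are not from the article – and it simply shows a wrapper that checks a model's draft against known facts before deciding how to surface it.

```python
# Illustrative sketch only: the article calls for system-level control of
# hallucinations; this toy wrapper is one hypothetical way to picture that.

KNOWN_FACTS = {
    "capital of france": "Paris",
    "boiling point of water at sea level": "100 degrees Celsius",
}

def generate_draft(prompt: str) -> str:
    """Stand-in for a generative model; free to 'hallucinate' an answer."""
    # A real model would sample text here; this toy version just guesses.
    return KNOWN_FACTS.get(prompt.lower(), "42")  # confidently wrong fallback

def answer_with_control(prompt: str) -> str:
    """System wrapper: the generative model is one component; a grounding
    check decides whether its draft is returned as-is or flagged."""
    draft = generate_draft(prompt)
    grounded = KNOWN_FACTS.get(prompt.lower())
    if grounded is not None and draft == grounded:
        return draft                      # draft is supported by known facts
    return f"[unverified] {draft}"        # surface, but mark, the hallucination

if __name__ == "__main__":
    print(answer_with_control("capital of France"))       # grounded answer
    print(answer_with_control("population of Atlantis"))  # flagged as unverified
```

The point of the sketch is where the control lives: not inside the generative model, which is left free to generate, but in the surrounding system that decides what to do with its output.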

Research Highlights: Using Theory of Mind to Improve Human Trust in Artificial Intelligence

eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI develops methods that produce explanations aimed at shedding light on the underlying mechanisms of AI systems, bringing transparency to the process and making results interpretable by expert and non-expert end users alike. New research by a team of UCLA scientists focuses on boosting human trust in these increasingly common systems by greatly improving upon XAI. Their study was recently published in the journal iScience.

UCLA DataFest Winners Announced, Presentations Posted

For the annual UCLA DataFest, students worked hard with data pertaining to the monumental challenge we are all facing: COVID-19. This year’s virtual version of ASA DataFest at UCLA brought forth unforeseen challenges and wonderful opportunities. In a typical year, this beloved tradition is a competition in which groups of three to five students have just 48 hours to make sense of a huge data set and present their findings in five minutes, using just two slides.