Generative AI Models Are Built to Hallucinate: The Question is How to Control Them

In this contributed article, Stefano Soatto, Professor of Computer Science at the University of California, Los Angeles and a Vice President at Amazon Web Services, argues that generative AI models are designed and trained to hallucinate: hallucinations are an inherent product of any generative model. Rather than trying to prevent generative AI models from hallucinating, we should be designing AI systems that can control those hallucinations. Hallucinations are indeed a problem, and a big one, but one that an AI system that includes a generative model as a component can control.