OpenAI's latest research paper diagnoses exactly why ChatGPT and other large language models can make things up, a failure known in the world of artificial intelligence as "hallucination." It also reveals why the problem may be unfixable, at least for the consumer-facing chatbots most people use.
The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that hallucinations aren't just an unfortunate side effect of the way AIs are currently trained, but are mathematically inevitable.
The issue can partly be explained by mistakes in the underlying data used to train the AIs. But through a mathematical analysis of how these systems learn, the researchers show that even with perfect training data, hallucinations would still arise.
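To get a feel for the intuition (this is a toy illustration, not the paper's actual proof, and all the numbers and names in it are made up for the example), consider facts that follow no learnable pattern, like individual birthdays. A model that hasn't reliably memorized a given fact can at best fall back on the base rate, and if it is forced to always give an answer, it will be wrong most of the time; only admitting uncertainty avoids the error, but a benchmark that scores "I don't know" as zero rewards the guesser.

```python
"""Toy Monte Carlo sketch of the intuition: a model that must always answer
will state falsehoods on pattern-free facts it hasn't memorized, even if its
training data contained no errors. Illustrative only; not the paper's proof."""
import random

random.seed(0)

DAYS = 365          # possible birthdays (base-rate distribution is uniform)
N_QUERIES = 100_000 # hypothetical benchmark questions about unmemorized facts

def always_guess() -> tuple[float, float]:
    """Policy 1: always answer by sampling from the base rate."""
    correct = 0
    for _ in range(N_QUERIES):
        truth = random.randrange(DAYS)   # the person's real birthday
        guess = random.randrange(DAYS)   # best available without memorization
        correct += (guess == truth)
    accuracy = correct / N_QUERIES
    hallucination_rate = 1 - accuracy    # every wrong answer is a confident falsehood
    return accuracy, hallucination_rate

def abstain() -> tuple[float, float]:
    """Policy 2: admit uncertainty on every unmemorized fact."""
    return 0.0, 0.0                      # zero benchmark score, but zero falsehoods

if __name__ == "__main__":
    acc_g, hall_g = always_guess()
    acc_a, hall_a = abstain()
    print(f"always guess : score {acc_g:.4f}, hallucination rate {hall_g:.4f}")
    print(f"abstain      : score {acc_a:.4f}, hallucination rate {hall_a:.4f}")
    # On a 0/1-graded benchmark, guessing scores higher than abstaining,
    # so the incentives favor the policy that hallucinates.
```

Running the sketch, the guessing policy scores slightly better than the abstaining one while being wrong about 99.7% of the time, which is the basic pressure the researchers describe: a system that never says "I don't know" is rewarded for confident guesses, and some of those guesses are inevitably false.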
In other words, you're fu**ed.