Why language models hallucinate
OpenAI’s new research explains why language models hallucinate: standard benchmarks grade on accuracy alone, so models are rewarded for confident guessing rather than honest uncertainty. The findings show how improved evaluations can enhance AI reliability, honesty, and safety.
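To make "improved evaluations" concrete, here is a minimal sketch of a scoring rule in which a confident wrong answer costs more than an explicit abstention, so guessing stops being the dominant strategy. This is a hypothetical illustration of the incentive argument, not OpenAI's actual scoring scheme; `score_answer`, `wrong_penalty`, and the example items are invented for demonstration.

```python
# Hypothetical eval scorer: +1 for correct, 0 for abstaining,
# -wrong_penalty for a confident wrong answer. Names and values
# are illustrative, not taken from the OpenAI paper.

def score_answer(answer: str | None, correct: str,
                 wrong_penalty: float = 2.0) -> float:
    """Score one eval item; None means the model abstained."""
    if answer is None:  # model said "I don't know"
        return 0.0
    return 1.0 if answer.strip() == correct.strip() else -wrong_penalty


# Under plain accuracy (no penalty, no credit for abstaining),
# always guessing maximizes expected score. With a real penalty,
# abstaining beats guessing whenever confidence is low enough.
items = [("Paris", "Paris"), (None, "Bern"), ("Oslo", "Stockholm")]
total = sum(score_answer(a, c) for a, c in items)
print(total)  # 1.0 + 0.0 - 2.0 = -1.0
```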
Why it matters: This belongs in Radar because it points to a concrete shift in how AI systems are built, evaluated, and operated. The practical question is not whether the headline sounds impressive but whether it changes real workflows: developer tooling, agent safety, model evaluation, governance, or the cost of maintaining AI-assisted work.
Take: Worth tracking, but not swallowing whole. "Why language models hallucinate" is useful as a signal only if the mechanism, limits, and real operational impact survive scrutiny.