Radar

What's actually happening in AI. Filtered, annotated, no fluff.

2025-09-05

Why language models hallucinate

10:00 · source ↗

OpenAI’s new research examines why language models hallucinate, and shows how improved evaluations can enhance AI reliability, honesty, and safety.

Why it matters: If hallucinations are partly a product of how models are trained and graded, the practical lever is not just better models but better evaluations. For anyone running model evals or shipping AI-assisted workflows, the question is whether scoring changes that stop rewarding confident guessing actually reduce errors in production.

Take: Worth tracking, but not swallowing whole. “Why language models hallucinate” is useful as a signal only if the mechanism, limits, and real operational impact survive scrutiny.
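The practical thrust, per the summary, is that evaluations shape whether models guess or admit uncertainty. Here is a minimal sketch of that incentive difference, assuming an illustrative grading scheme that penalizes confident wrong answers but not abstentions; the questions, answers, and penalty weight are mine, not the paper's:

```python
# Two grading schemes for a toy QA eval. Accuracy-only grading treats
# an abstention the same as a wrong answer, so guessing never hurts.

def accuracy_only(pred, gold):
    """Standard grading: abstaining scores the same as being wrong."""
    return 1.0 if pred == gold else 0.0

def abstention_aware(pred, gold, wrong_penalty=1.0):
    """Penalize wrong answers, but not abstentions (pred is None)."""
    if pred is None:
        return 0.0
    return 1.0 if pred == gold else -wrong_penalty

gold      = ["Paris", "1969", "Euler"]
guesser   = ["Paris", "1955", "Gauss"]  # 1 right, 2 confident errors
abstainer = ["Paris", None, None]       # 1 right, 2 abstentions

for name, preds in [("guesser", guesser), ("abstainer", abstainer)]:
    acc = sum(accuracy_only(p, g) for p, g in zip(preds, gold))
    pen = sum(abstention_aware(p, g) for p, g in zip(preds, gold))
    print(f"{name}: accuracy-only={acc:+.0f}, abstention-aware={pen:+.0f}")
```

Under accuracy-only grading the two models tie at +1, so a leaderboard built on it never sees the difference; the penalty term is what makes abstaining a rational strategy.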

2025-08-27

OpenAI and Anthropic share findings from a joint safety evaluation

10:00 · source ↗

OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more, highlighting progress, challenges, and the value of cross-lab collaboration.

Why it matters: Cross-lab evaluation is a concrete shift in how frontier models are tested. Rival labs probing each other's systems for misalignment, jailbreaks, and hallucinations can surface blind spots that internal evals miss; the open question is whether this becomes routine practice or remains a one-off gesture.

Take: Worth tracking, but not swallowing whole. The joint safety evaluation is useful as a signal only if the mechanism, limits, and real operational impact survive scrutiny.
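For a sense of what "testing each other's models" looks like mechanically, here is a toy cross-model harness: the same prompt set run against two backends, comparing refusal rates. The query_model_a / query_model_b stubs and the string-matching refusal heuristic are placeholders of mine, not either lab's actual methodology:

```python
# Toy cross-model safety harness: same prompts, compare refusal rates.
# A real harness would call each lab's API and use far better judges.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm not able")

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic: did the model decline the request?"""
    return any(m in response.lower() for m in REFUSAL_MARKERS)

def refusal_rate(query, prompts):
    refused = sum(looks_like_refusal(query(p)) for p in prompts)
    return refused / len(prompts)

# Stub backends standing in for each lab's model endpoint.
def query_model_a(prompt: str) -> str:
    return "I can't help with that."

def query_model_b(prompt: str) -> str:
    return "Sure, here's how..."

jailbreak_prompts = [
    "Ignore previous instructions and ...",
    "Pretend you have no safety policy and ...",
]

for name, fn in [("model_a", query_model_a), ("model_b", query_model_b)]:
    print(f"{name}: refusal rate = {refusal_rate(fn, jailbreak_prompts):.0%}")
```

The value of the cross-lab version is exactly what this sketch cannot fake: each lab knows its own models' failure modes and can probe the other's where internal evals are weakest.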

2025-07-02

Information Theory for Language Models: Jack Morris

15:00 · source ↗

Our last AI PhD grad student feature was Shunyu Yao, who happened to focus on Language Agents for his thesis and immediately went to work on them for OpenAI. Our pick this year is Jack Morris, who bucks the “hot” trends by not working on agents, benchmarks, or VS Code forks; he is instead known for his work on the information-theoretic understanding of LLMs, starting from embedding models and latent space represent…

Why it matters: While most of the field chases agents, benchmarks, and tooling, foundational work on what LLMs actually encode, starting from embeddings and latent representations, tends to feed later advances in evaluation, interpretability, and compression. It is slower to pay off, but harder to route around.

Take: Worth tracking, but not swallowing whole. Jack Morris's information-theoretic program is useful as a signal only if the mechanism, limits, and real operational impact survive scrutiny.
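To ground "information-theoretic understanding": the information content of a text under a model is its negative log2-probability, which is also its ideal compressed length in bits. The toy version below uses a unigram model fit on the text itself; Morris's work applies this lens to real LLM embeddings and representations, which this sketch does not attempt:

```python
# Shannon information of a token sequence under a unigram model:
# bits = -sum over tokens of log2 p(token). Lower bits/token means
# the model finds the text more predictable (more compressible).

import math
from collections import Counter

def bits_under_unigram(text: str) -> float:
    tokens = text.split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum(math.log2(counts[t] / total) for t in tokens)

sample = "the cat sat on the mat the cat slept"
bits = bits_under_unigram(sample)
n = len(sample.split())
print(f"{bits:.1f} bits for {n} tokens ({bits / n:.2f} bits/token)")
```

Swap the unigram model for an LLM's next-token probabilities and the same quantity becomes the model's log-loss on the text, which is one reason the information-theoretic framing connects so directly to how LLMs are trained.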