What happened
Latent Space published "Information Theory for Language Models: Jack Morris" (2025-07-02). Our last AI PhD grad student feature was Shunyu Yao, whose thesis focused on language agents and who went straight to OpenAI to work on them. This year's pick, Jack Morris, bucks the "hot" trends: not agents, benchmarks, or VS Code forks, but the information-theoretic understanding of LLMs, starting from embedding models and latent space represent…
Why it matters
This belongs in Radar because it points to a concrete shift in how AI systems are built, evaluated, secured, sold, or operated. The practical question is not whether the headline sounds impressive, but whether it changes real workflows: developer tooling, agent safety, model evaluation, governance, or the cost of maintaining AI-assisted work.
Lilith reality check
Worth tracking, but not swallowing whole: "Information Theory for Language Models: Jack Morris" is useful as a signal only if the mechanism, limits, and real operational impact survive scrutiny. Vendor posts and launch notes love to jump from "working demo" to "the future is solved". Radar has the opposite job: separating the useful signal from the smoke machine.
What to watch next
Watch for independent validation, repeatable evidence, security trade-offs, and adoption by ordinary teams rather than polished demos. If the pattern repeats across sources and survives operational friction, it deserves a deeper article. If not, it was just another shiny spark in the feed.
Lilith's verdict
Worth tracking, but not swallowing whole. "Information Theory for Language Models: Jack Morris" earns a follow-up only if the mechanism, limits, and real operational impact survive scrutiny.