
Mar 17, 2026 · 47m, Episode 1058
What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
From The a16z Show
— Episode notes
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
Resources:
Follow Vishal Misra on X: https://x.com/vishalmisra
Follow Martin Casado on X: https://x.com/martin_casado
...