Essential AI Reading List

What this is

A curated list of high-signal sources for staying current on AI, LLMs, agents, and tooling.

Blogs & Personal Sites

  • Simon Willison (simonwillison.net) — Essential for tracking the fast-moving practical side of LLM tooling, prompt engineering, and open-source integration.
  • Lilian Weng (lilianweng.github.io) — Unrivaled for thorough, well-cited technical deep dives on AI architectures, memory, and reasoning methods.
  • Jay Alammar (jalammar.github.io) — The gold standard for developing visual intuition about complex transformer architectures and model mechanics.
  • Sebastian Raschka (sebastianraschka.com) — Bridges the gap between research and code with highly reproducible tutorials on LLM training, fine-tuning, and evaluation.
  • Chip Huyen (huyenchip.com) — Leading perspective on the infrastructure, MLOps, and systems engineering required to put AI into production.
  • Eugene Yan (eugeneyan.com) — Focused on the applied ML patterns and practical "how-to" of building reliable, data-driven AI products.
  • Andrej Karpathy (karpathy.ai) — Offers world-class clarity on deep learning fundamentals and the "LLM OS" concept for software engineers.
  • Hamel Husain (hamel.dev) — Expert guidance on the rigors of LLM evaluation, fine-tuning, and building high-quality AI engineering workflows.
  • Vicki Boykis (vickiboykis.com) — Provides a grounded, experienced perspective on ML engineering, data systems, and the reality of deploying models.
  • Jeremy Howard (fast.ai) — Pioneer of the "top-down" code-first approach, making cutting-edge deep learning accessible to traditional software developers.
  • François Chollet (fchollet.com) — Essential for deep thinking on the nature of intelligence, abstraction, and the theoretical limits of current LLM architectures.
  • Tyler Rockwood (rockwotj.com) — High-signal analysis of LLM security, trust boundaries, and practical exploits in AI systems.
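
Most of the blogs above publish RSS or Atom feeds, which makes it easy to follow them all from one reader or script. Below is a minimal stdlib sketch for pulling recent post titles; the feed URLs listed are assumptions and should be verified on each site:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Feed paths are assumptions; verify on each site (most of these blogs expose one).
FEEDS = [
    "https://simonwillison.net/atom/everything/",  # assumed feed URL
    "https://lilianweng.github.io/index.xml",      # assumed feed URL
]

def parse_titles(xml_text: str) -> list[str]:
    """Extract post titles from an RSS 2.0 or Atom feed document."""
    root = ET.fromstring(xml_text)
    atom = {"a": "http://www.w3.org/2005/Atom"}
    # RSS 2.0 uses <item><title>; Atom uses namespaced <entry><title>.
    titles = root.findall(".//item/title") or root.findall(".//a:entry/a:title", atom)
    return [el.text or "" for el in titles]

def fetch_titles(feed_url: str) -> list[str]:
    """Download a feed and return its post titles."""
    with urllib.request.urlopen(feed_url) as resp:
        return parse_titles(resp.read().decode("utf-8", errors="replace"))

if __name__ == "__main__":
    for url in FEEDS:
        for title in fetch_titles(url)[:5]:
            print(title)
```

For serious use, a dedicated feed library or reader handles edge cases (redirects, encoding quirks, CDATA) better than this sketch.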

Newsletters

  • The Batch (deeplearning.ai) — Andrew Ng's weekly digest; high-level synthesis of AI trends and their societal and business impact from an industry legend.
  • AI News (buttondown.com/ainews) — Comprehensive daily aggregator summarizing everything happening across the AI Twitter/X and GitHub ecosystems.
  • Latent Space (latent.space) — Deep-dive podcast and newsletter; excellent for understanding the "AI Engineer" stack and emerging implementation patterns.
  • Import AI (jack-clark.net) — Jack Clark's curated roundup; best-in-class coverage of AI policy, safety, and global research milestones.
  • The Gradient (thegradient.pub) — Thoughtful long-form perspectives and debates on the direction of AI research.
  • TheSequence (thesequence.ai) — Deep-dive technical newsletter with detailed breakdowns of research papers and engineering patterns.
  • TLDR AI (tldr.tech/ai) — Quick, skimmable daily digest of the most important AI tools, papers, and news.
  • Ben's Bites (bensbites.co) — Daily updates focused on the "new and shiny" AI products and creative use cases.
  • Interconnects (interconnects.ai) — Deep, practitioner-level analysis of the newest frontier models and research.
  • AlphaSignal (alphasignal.ai) — Highly technical, signal-heavy newsletter focused on the latest breakthroughs and code repositories.

Research Labs to Follow

  • OpenAI Research — Setting the pace for state-of-the-art model capabilities and safety evaluations as the industry leader in frontier models.
  • Anthropic Research — Pioneers of constitutional AI and mechanistic interpretability, leading research into how models think and how to align them through structural constraints.
  • Google DeepMind — Historical powerhouse of fundamental AI breakthroughs and scientific applications, continuing to produce foundational research spanning from LLMs to AI for science.
  • Meta FAIR — Leading the charge in high-quality open-source models and fundamental research, a crucial source for open-weights models that democratize AI access.
  • Mistral — Proving that small, efficient models can rival giants in performance, essential for tracking the efficiency frontier and high-performance local inference.
  • DeepSeek — Leading the way in cost-efficient, high-performance open models, particularly in reasoning and coding domains.
  • Allen AI (AI2) — Non-profit lab focused on AI for the common good and open science; important for open-dataset initiatives and research free of commercial bias.

Aggregators & Communities

  • Hacker News (AI filter) — The best place for real-time technical debate and discovering new AI developer tools before they go mainstream.
  • r/LocalLLaMA — The primary hub for the open-weights community, unrivaled for practical tips on running and quantizing models locally.
  • r/MachineLearning — High-density source for academic paper discussions and professional ML engineering advice.
  • Papers With Code — Bridges the gap between academic theory and practical implementation by linking papers directly to runnable code.
  • Hugging Face Daily Papers — Curated daily feed that helps filter the sheer volume of new research appearing on arXiv.
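
For the Hacker News entry above, the public Algolia HN Search API (hn.algolia.com/api/v1) makes the "AI filter" scriptable. A minimal stdlib sketch; the query terms are just examples:

```python
import json
import urllib.parse
import urllib.request

ALGOLIA_HN = "https://hn.algolia.com/api/v1/search"

def hn_search_url(query: str, tags: str = "story", hits: int = 10) -> str:
    """Build a search URL for the public Algolia Hacker News API."""
    params = urllib.parse.urlencode(
        {"query": query, "tags": tags, "hitsPerPage": hits}
    )
    return f"{ALGOLIA_HN}?{params}"

def hn_search(query: str, **kwargs) -> list[tuple[int, str]]:
    """Fetch matching stories, returned as (points, title) pairs."""
    with urllib.request.urlopen(hn_search_url(query, **kwargs)) as resp:
        data = json.load(resp)
    return [(hit["points"], hit["title"]) for hit in data["hits"]]

if __name__ == "__main__":
    # Example query terms; tune to taste.
    for points, title in hn_search("LLM agents"):
        print(f"{points:5d}  {title}")
```

The same endpoint supports `search_by_date` for recency-ordered results, which suits a daily scan better than relevance ordering.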

Podcasts

  • Latent Space Podcast — Deep technical conversations with the builders of the AI engineering era; the best source for understanding the actual engineering trade-offs made by leading practitioners.
  • Gradient Dissent (W&B) — Interviews with top ML practitioners about their real-world workflows and challenges, offering deep insight into the production realities of training and deploying models.
  • No Priors — High-level conversations with AI founders and researchers about the future of the industry and the most significant shifts in the technology.
  • Practical AI — Accessible discussions on making AI useful in real-world software development; great for seeing how AI fits into broader software engineering and business contexts.

Contribution Metadata

  • Last reviewed: 2026-03-02
  • Confidence: high