PyTorch Essentials Cheat Sheet: From Zero to Backpropagation

A dense, accurate reference covering tensors, GPU acceleration, autograd, backpropagation, and training loops: everything you need to understand how PyTorch trains models.

March 10, 2026 · 4 min · Joshua Antony

The Spelled-Out Intro to Language Modeling and Transformers

A dense walkthrough of how large language models work, from next-token prediction to tokenization, embeddings, self-attention with causal masking, multi-head attention, and the full transformer architecture. Based on Andrej Karpathy's teaching approach.

March 5, 2026 · 9 min · Joshua Antony