The AI Concepts Podcast
The AI Concepts Podcast is my attempt to turn the complex world of artificial intelligence into bite-sized, easy-to-digest episodes. Imagine a space where you can pick any AI topic and immediately grasp it, like flipping through an Audio Lexicon - but even better! Using vivid analogies and storytelling, I guide you through intricate ideas, helping you create mental images that stick. Whether you're a tech enthusiast, business leader, technologist, or just curious, my episodes bridge the gap between cutting-edge AI and everyday understanding. Dive in and let your imagination bring these concepts to life!
Episodes
7 days ago
This episode tackles the lever that turns powerful LLMs into something you can actually run: quantization. We explore what it means to store model weights with fewer bits, why that can cut memory in half at 8-bit and down to roughly a quarter at 4-bit, and the real tradeoff between compression and capability as rounding error accumulates across billions of parameters. We break down why large models survive this better than small ones, why 8-bit is often near lossless, why 4-bit can still be shockingly strong, and why going below that can make models fall apart. We compare the three practical paths you will see in the wild: GPTQ (layer-wise compression with error compensation), AWQ (protecting the most important weights), and GGUF (the local-friendly format that makes CPU and GPU splitting possible).
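The memory savings described above are simple arithmetic over bits per weight. A minimal sketch (illustrative numbers only; the 7B parameter count is a hypothetical example, and real runtime memory also includes KV cache and activations):

```python
# Back-of-the-envelope memory for model weights at different precisions.
# Ignores KV cache, activations, and runtime overhead.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GB needed to store n_params weights at the given bit width."""
    return n_params * bits_per_weight / 8 / 1e9

params = 7e9  # a hypothetical 7B-parameter model
fp16 = weight_memory_gb(params, 16)  # 14.0 GB baseline
int8 = weight_memory_gb(params, 8)   # 7.0 GB -> half
int4 = weight_memory_gb(params, 4)   # 3.5 GB -> roughly a quarter
print(fp16, int8, int4)
```

This is why 4-bit quantization is the difference between a model that needs a data center GPU and one that fits on a laptop.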
Tuesday Feb 24, 2026
Module 4: Optimization - The GPU Memory Bottleneck
This episode addresses the real bottleneck after you build an LLM: fitting it into hardware that can actually run it. We explore why GPU memory is the scarce resource, how weights, KV cache, and activations compete for that space, and what that means in practice when prompts get long or concurrency spikes. We compare data center GPUs (high bandwidth HBM) versus local machines like the Mac Studio (huge unified memory but slower bandwidth) to show the core tradeoff between capacity and speed. By the end, you will understand how to choose hardware based on your goal, and why the next lever is quantization to shrink models enough to fit, with a closing reflection on perspective when something big feels like it will not fit.
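The KV cache growth the episode describes can be estimated with one formula: one key and one value vector per layer, per head, per token. A rough sketch, with configuration numbers loosely shaped like a 7B-class model (not exact specs for any real model):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per_val: int = 2) -> float:
    # 2x for keys and values; one entry per layer, per head, per token
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_val / 1e9

# Illustrative 7B-class shape at fp16; cache grows linearly with context and batch
print(kv_cache_gb(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=4096, batch=1))
```

Note the linear dependence on `seq_len` and `batch`: long prompts or a concurrency spike can eat more memory than the weights themselves.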
Friday Feb 20, 2026
Module 3: Reinforcement Learning from Human Feedback
This episode addresses how Reinforcement Learning from Human Feedback (RLHF) adds the final layer of alignment after supervised fine-tuning, shifting the training signal from “right vs wrong” to “better vs worse.” We explore how preference rankings create a reward signal (reward models plus PPO) and the newer shortcut (DPO) that learns preferences directly, then connect RLHF to safety through the Helpful, Honest, Harmless goal. We also unpack the “alignment tax,” the trade-off between being safe and being genuinely useful, and close by setting up the next module on running models at scale, starting with GPU memory limits, plus a personal reflection on starting later without being behind.
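The DPO shortcut mentioned above reduces preference learning to a single loss over log-probabilities. A minimal sketch of that loss for one preference pair (the numeric inputs are made up for illustration):

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_chosen: float, ref_rejected: float, beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for one preference pair.

    logp_* are summed log-probs of each response under the policy being trained;
    ref_* are the same quantities under the frozen reference model.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))  # -log sigmoid(margin)

# If the policy prefers the chosen answer more than the reference does,
# the margin is positive and the loss is small.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
```

The appeal is exactly what the episode says: no separate reward model and no PPO loop, just gradient descent on ranked pairs.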
Friday Feb 20, 2026
Module 3: Supervised Fine-Tuning
This episode addresses how we turn a raw base model into something that behaves like a real assistant using Supervised Fine-Tuning (SFT). We explore instruction and response training data, why SFT makes behaviors consistent beyond prompting, and the practical engineering choices that keep fine-tuning efficient and safe, including low learning rates and LoRA-style adapters. By the end, you will understand what SFT solves, and why the next layer (RLHF) is needed to add human preference and nuance.
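The LoRA-style adapters mentioned above keep fine-tuning cheap by freezing the pretrained weight and learning a low-rank correction. A minimal NumPy sketch (dimensions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                       # hidden size and low rank (r << d)

W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

x = rng.normal(size=(d,))
# LoRA: the effective weight is W + B @ A; only A and B get gradient updates
y = W @ x + B @ (A @ x)

assert np.allclose(y, W @ x)  # with B at zero, the adapter starts as a no-op
print(2 * r * d, d * d)       # trainable params: 8192 vs 262144 for full fine-tuning
```

Zero-initializing `B` means training starts from exactly the base model's behavior, which is part of what makes adapters a safe, low-risk way to fine-tune.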
Monday Jan 26, 2026
Module 3: Context Windows & Attention Complexity
This episode addresses the physical and mathematical limits of a model’s "short-term memory." We explore the context window and the engineering trade-offs required to process long documents. You will learn about the quadratic cost of attention where doubling the input length quadruples the computational work and why this creates a massive bottleneck for long-form reasoning. We also introduce the architectural tricks like Flash Attention that allow us to push these limits further. By the end, you will understand why context is the most expensive real estate in the generative stack.
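The quadratic cost is easy to see in the attention score matrix itself: every token compares against every other token. A small sketch of the scaling (head dimension is an illustrative default):

```python
def attention_score_ops(seq_len: int, d_head: int = 64) -> int:
    # Computing Q @ K^T alone takes seq_len * seq_len * d_head multiply-adds
    return seq_len * seq_len * d_head

base = attention_score_ops(2048)
doubled = attention_score_ops(4096)
print(doubled / base)  # doubling the context quadruples the score-matrix work
```

Tricks like Flash Attention do not change this asymptotic count; they reorganize the computation so the full score matrix never has to sit in slow GPU memory at once.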
Sunday Jan 25, 2026
Module 3: The Lifecycle of an LLM: Pre-Training
This episode explores the foundational stage of creating an LLM known as the pre-training phase. We break down the Trillion Token Diet by explaining how models move from random weights to sophisticated world models through the simple objective of next token prediction. You will learn about the Chinchilla Scaling Laws or the mathematical relationship between model size and data volume. We also discuss why the industry shifted from building bigger brains to better fed ones. By the end, you will understand the transition from raw statistical probability to parametric memory.
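The Chinchilla result is often summarized as a rule of thumb: roughly 20 training tokens per parameter for compute-optimal training. A one-line sketch of that heuristic (the 70B figure is just an example):

```python
def chinchilla_tokens(n_params: float) -> float:
    # Chinchilla rule of thumb: ~20 compute-optimal training tokens per parameter
    return 20 * n_params

print(chinchilla_tokens(70e9) / 1e12)  # a 70B model wants on the order of 1.4T tokens
```

This is the arithmetic behind "better fed, not bigger": for a fixed compute budget, a smaller model trained on more tokens often beats a larger, undertrained one.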
Tuesday Jan 06, 2026
Module 2: The MLP Layer - Where Transformers Store Knowledge
Shay explains where a transformer actually stores knowledge: not in attention, but in the MLP (feed-forward) layer. The episode frames the transformer block as a two-step loop: attention moves information between tokens, then the MLP transforms each token’s representation independently to inject learned knowledge.
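The MLP half of that loop is just two linear layers with a nonlinearity in between, applied to each token on its own. A minimal NumPy sketch with illustrative dimensions (the 4x expansion matches the original transformer design):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, the nonlinearity used in most transformer MLPs
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
d_model, d_ff = 64, 256  # hidden size and 4x expanded inner size

W1 = rng.normal(size=(d_model, d_ff)) * 0.02  # expand
W2 = rng.normal(size=(d_ff, d_model)) * 0.02  # project back

def mlp(token_vec):
    # Applied to every token independently: no mixing between positions here
    return gelu(token_vec @ W1) @ W2

out = mlp(rng.normal(size=(d_model,)))
print(out.shape)  # (64,)
```

Because `W1` and `W2` together hold most of a transformer's parameters, this is where the bulk of learned knowledge is stored.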
Monday Jan 05, 2026
Module 2: The Encoder (BERT) vs. The Decoder (GPT)
Shay breaks down the encoder vs decoder split in transformers: encoders (BERT) read the full text with bidirectional attention to understand meaning, while decoders (GPT) generate text one token at a time using causal attention.
She ties the architecture to training (masked-word prediction vs next-token prediction), explains why decoder-only models dominate today (they can both interpret prompts and generate efficiently with KV caching), and previews the next episode on the MLP layer, where most learned knowledge lives.
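The bidirectional vs causal distinction comes down to the attention mask. A tiny sketch showing both patterns for a 4-token sequence:

```python
import numpy as np

seq_len = 4
# Causal (decoder-style, GPT): token i may only attend to positions <= i
causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))
# Bidirectional (encoder-style, BERT): every token sees every other token
bidirectional = np.ones((seq_len, seq_len), dtype=bool)

print(causal.astype(int))  # lower-triangular: each row sees only the past
```

The causal mask is also what makes KV caching work: because past tokens never attend to future ones, their keys and values never change and can be reused at every generation step.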
Monday Jan 05, 2026
Module 2: Multi Head Attention & Positional Encodings
Shay explains multi-head attention and positional encodings: how transformers run multiple parallel attention 'heads' that specialize, why we concatenate their outputs, and how positional encodings reintroduce word order into parallel processing.
The episode uses clear analogies (lawyer, engineer, accountant), highlights GPU efficiency, and previews the next episode on encoder vs decoder architectures.
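Both ideas from the episode fit in a few lines of NumPy: sinusoidal encodings give each position a unique pattern added to its embedding, and the head split is just a lossless reshape of the model dimension (dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 8, 64, 4
d_head = d_model // n_heads

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    # Sinusoidal encodings: each position gets a unique sine/cosine pattern
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.empty((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Word order is reintroduced by adding position information to the embeddings
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)

# Split the model dimension into parallel heads; each head would attend
# independently here, then the per-head outputs are concatenated back together
heads = x.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)  # (4, 8, 16)
concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)

assert np.allclose(concat, x)  # splitting and concatenating is lossless
```

The split costs nothing extra, which is why running four specialized 16-dimensional heads is no more expensive than one 64-dimensional head, and maps perfectly onto GPU parallelism.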
Saturday Jan 03, 2026
Module 2: Inside the Transformer - The Math That Makes Attention Work
In this episode, Shay walks through the transformer's attention mechanism in plain terms: how token embeddings are projected into queries, keys, and values; how dot products measure similarity; why scaling and softmax produce stable weights; and how weighted sums create context-enriched token vectors.
The episode previews multi-head attention (multiple perspectives in parallel) and ends with a short encouragement to take a small step toward your goals.
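The whole mechanism the episode walks through is one short formula, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch with made-up shapes (5 tokens, 16-dimensional projections):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # dot products measure query-key similarity
    weights = softmax(scores)        # each row becomes a probability distribution
    return weights @ V               # weighted sum of values: context-enriched vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (5, 16): one enriched vector per token
```

The `sqrt(d_k)` scaling is the stabilizer mentioned above: without it, dot products grow with dimension and push the softmax into saturated, near-one-hot weights.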