Securing AI Agents from Doing Bad Things
Show notes for AI Explained Part 31 — sandboxing, permission scoping, instruction hierarchy, and the metrics that tell you whether your agent is safe to ship.
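As a taste of the permission-scoping idea the episode covers, here is a minimal sketch of a deny-by-default tool allowlist that gates an agent's tool calls. All names here (`ALLOWED_TOOLS`, `check_permission`, the tool names) are hypothetical illustrations, not code from the episode:

```python
# Minimal permission-scoping sketch: every tool call is checked against
# an explicit allowlist before it runs. All names are hypothetical
# illustrations, not code from the episode.

ALLOWED_TOOLS = {
    "read_file": {"max_calls": 50},   # read-only tool: generous budget
    "web_search": {"max_calls": 10},  # network access: tighter budget
    # "delete_file" is deliberately absent: destructive tools are
    # denied by default rather than blocklisted one by one.
}

def check_permission(tool_name: str, call_counts: dict) -> bool:
    """Return True only if the tool is allowlisted and under its call budget."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        return False  # deny-by-default for any tool not explicitly listed
    return call_counts.get(tool_name, 0) < policy["max_calls"]

counts = {"read_file": 3}
print(check_permission("read_file", counts))     # allowlisted and under budget
print(check_permission("delete_file", counts))   # not allowlisted: denied
```

The deny-by-default shape is the important part: a new or unknown tool is blocked until someone consciously grants it a scope, rather than allowed until someone notices.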
How LLMs actually work — tokenization, embeddings, RAG, fine-tuning, agents — explained for engineers who ship production code, not papers.
31 posts below, newest first.
Feeling overwhelmed by the fear of AI making huge mistakes? In this video, we break it down into simple pieces.
Feeling overwhelmed by high AI API costs and latency? In this video, we break it down into simple pieces.
Feeling overwhelmed by the different layers of AI memory? In this video, we break it down into simple pieces.
Feeling overwhelmed by memory and state tracking? In this video, we break it down into simple pieces.
Feeling overwhelmed by the idea of communicating AIs? In this video, we break it down into simple pieces.
Feeling overwhelmed by APIs and AI Tool Integration? In this video, we break it down into simple pieces.
Feeling overwhelmed by the hype around "AI Agents"? This video is your ultimate guide to finally understanding AI Agents and Agentic RAG, even if you're completely new…
Feeling overwhelmed by complex AI data architectures? In this video, we break it down into simple pieces.
Why is your AI missing the context? In this video, we break down Reranking. We show you the secret 2nd stage of the search pipeline that perfectly orders results from…
Feeling overwhelmed by your AI giving you bad search results? In this video, we break down Hybrid Search—the ultimate strategy of combining exact keywords with deep…
Feeling overwhelmed by Vector Databases and SQL? In this video, we break it down into simple pieces.
Feeling overwhelmed by getting your data ready for AI? In this video, we break it down into simple pieces.
Feeling overwhelmed by the term "RAG"? In this video, we break it down into simple pieces.
Feeling overwhelmed by AI security risks? In this video, we break it down into simple pieces.
Feeling overwhelmed when your AI forgets what you said 20 minutes ago? In this video, we break it down into simple pieces.
Feeling overwhelmed by how AIs solve complex math or logic puzzles? In this video, we break it down into simple pieces.
Feeling overwhelmed by advanced AI prompting techniques? In this video, we break it down into simple pieces.
Feeling overwhelmed by bad AI responses? In this video, we break it down into simple pieces.
Feeling overwhelmed by how complex AIs stay small enough to run on phones? In this video, we break it down into simple pieces.
Feeling overwhelmed by AI safety terminology? In this video, we break it down into simple pieces.
Feeling overwhelmed by the hardware requirements of AI? In this video, we break it down into simple pieces.
Feeling overwhelmed by the term "Fine-Tuning"? In this video, we break it down into simple pieces.
Feeling overwhelmed by how tech giants actually build ChatGPT? In this video, we break it down into simple pieces.
Feeling overwhelmed by confusing API pricing tiers? In this video, we break it down into simple pieces.
Feeling overwhelmed by latency complaints in your AI app? In this video, we break it down into simple pieces.
Feeling overwhelmed by the hundreds of AI models launching every week? In this video, we break it down into simple pieces.
Feeling overwhelmed by high-dimensional matrices and embeddings? In this video, we break it down into simple pieces.
Feeling overwhelmed by Transformer architecture diagrams? In this video, we break it down into simple pieces.
Ever wonder why AI makes simple math or spelling mistakes? This video demystifies tokenization, breaking down how AI cuts up English words into puzzle pieces.
Unlock the secrets of AI in this masterclass! Part 1 simplifies how AI works, especially Large Language Models like ChatGPT, without any code.
Subjects that frequently appear alongside #ai. Click through to see every post on each one.
Large language models — how they think, why they fail, what RAG fixes, and how to evaluate them. The fundamentals every engineer building on top of an LLM should internalise.
The AI Masterclass series: a numbered, beginner-friendly walkthrough of every concept you need to ship LLM-powered applications, from training to inference to RAG to alignment.
Posts written for people who are new to a topic — minimal jargon, real examples, and the context that more advanced material assumes you already have.
Machine learning from the perspective of someone shipping code, not writing papers. Algorithms, training, evaluation, and the practical trade-offs that decide which model you actually use.
The AI Explained series: short, focused episodes on individual AI building blocks — transformers, attention, tokenization, memory, tool use, multi-agent systems, and more.