LearnwithManoj


#ai-agents

How autonomous AI agents reason, plan, use tools, remember context, and stay aligned with your intent. The posts collected here cover the ReAct loop and tool use, agentic RAG, multi-agent orchestration, short-term and long-term memory, semantic caching, and human-in-the-loop patterns for catching mistakes before they ship.
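The ReAct loop mentioned above can be sketched in a few lines: the agent alternates between reasoning (a model turn) and acting (a tool call), feeding each observation back into the context until the model emits a final answer. This is a toy illustration, not a real agent framework — `fake_model` and the tool registry are stand-ins invented for this sketch:

```python
# Toy ReAct loop: reason -> act -> observe, repeated until a final answer.
# `fake_model` stands in for a real LLM call; everything here is illustrative.

TOOLS = {
    # Toy tool; never eval() untrusted input in production.
    "calculator": lambda expr: str(eval(expr)),
}

def fake_model(history):
    # A real agent would send `history` to an LLM; this stub scripts two turns.
    if not any(step.startswith("Observation:") for step in history):
        return {"thought": "I need to compute 6 * 7.", "action": ("calculator", "6 * 7")}
    return {"thought": "I have the result.", "final": "42"}

def react_loop(question, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = fake_model(history)
        history.append(f"Thought: {step['thought']}")
        if "final" in step:
            return step["final"], history
        tool, arg = step["action"]
        observation = TOOLS[tool](arg)          # act, then feed the result back
        history.append(f"Observation: {observation}")
    raise RuntimeError("agent exceeded step budget")

answer, trace = react_loop("What is 6 * 7?")
```

The step budget (`max_steps`) is the part people forget: without it, a model that never emits a final answer loops forever, burning tokens and latency.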

The series is opinionated about what makes an agent useful versus what makes it dangerous: which capabilities to grant, which to gate behind explicit approval, and which to refuse outright. Every post pairs the mental model with the trade-offs that come up the moment you try to put an agent in front of real users — latency, cost, evaluation, and the failure modes that emerge only at scale. Read them in order if you're new to agents, or skip to the topic you need to ship next.
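One way to realise the grant / gate / refuse split is a per-tool capability policy checked before every tool call, with gated tools routed through a human-in-the-loop approver. A minimal sketch — the policy table, tool names, and approver callback are invented for illustration, not a prescribed design:

```python
# Capability policy: each tool is granted outright, gated behind human
# approval, or refused. Tool names and the approver callback are illustrative.
ALLOW, APPROVE, DENY = "allow", "approve", "deny"

POLICY = {
    "search_docs": ALLOW,      # read-only: grant
    "send_email": APPROVE,     # side effects: gate behind explicit approval
    "delete_records": DENY,    # destructive: refuse outright
}

def run_tool(name, args, approver):
    decision = POLICY.get(name, DENY)   # default-deny for unknown tools
    if decision == DENY:
        return f"refused: {name}"
    if decision == APPROVE and not approver(name, args):
        return f"blocked pending approval: {name}"
    return f"executed: {name}({args})"

# A human-in-the-loop approver; here a stub that approves everything.
always_yes = lambda name, args: True
```

Default-deny for tools missing from the table is the load-bearing choice: a newly added capability stays refused until someone deliberately classifies it.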

1 post below, newest first.

Securing AI Agents from Doing Bad Things

Show notes for AI Explained Part 31 — sandboxing, permission scoping, instruction hierarchy, and the metrics that tell you whether your agent is safe to ship.

Subjects that frequently appear alongside #ai-agents. Click through to see every post on each one.

#ai 1 post

How LLMs actually work — tokenization, embeddings, RAG, fine-tuning, agents — explained for engineers who ship production code, not papers.

#ai-explained 1 post

The AI Explained series: short, focused episodes on individual AI building blocks — transformers, attention, tokenization, memory, tool use, multi-agent systems, and more.

#llm 1 post

Large language models — how they think, why they fail, what RAG fixes, and how to evaluate them. The fundamentals every engineer building on top of an LLM should internalise.

#security 1 post

Practical software security for engineers — secrets handling, threat modelling, least privilege, prompt injection, sandboxing, and AI-specific attack surfaces.