Agentic Engineering Weekly for March 14-21, 2026

Something shifted this week: the industry started naming things it couldn't articulate six months ago. Comprehension debt, craft alienation, agentic CD. These aren't buzzwords. They're diagnostic labels for problems practitioners have been feeling but couldn't pin down. When a field starts coining precise vocabulary, it's a sign that scattered experimentation is crystallizing into shared understanding.


My top 3 picks this week


DORA confirms: AI amplifies your bottlenecks, not your velocity

DORA's latest report delivers data that cuts through the "AI makes everything faster" narrative. Their findings show that AI amplifies existing flow: if your delivery pipeline is healthy, AI tooling accelerates it further. If your pipeline is brittle, AI makes the pain worse. The prerequisite for AI-accelerated delivery isn't a better model. It's better engineering practices.

Wes McKinney draws a sharp parallel to Brooks' The Mythical Man-Month: adding more agents to a late project makes it later. The coordination overhead doesn't vanish just because the workers are silicon. Barry O'Reilly observes that most organizations are bolting AI tools onto operating systems built for a pre-AI world, creating more noise and fragmentation rather than flow.

Doc Norton adds historical perspective, pointing to previous technology shifts that promised to eliminate developer bottlenecks but merely relocated them. The pattern is consistent: new tools expose the real constraints. If your deployment pipeline takes two weeks of manual approvals, an agent that writes in ten minutes what used to take ten hours just means delivery takes fourteen days and ten minutes instead of fourteen days and ten hours.
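The arithmetic behind that pattern is worth making explicit. A toy lead-time model (all numbers are illustrative, not taken from the DORA report):

```python
# Toy model: total lead time = coding time + pipeline wait.
# All numbers are illustrative, not from the DORA report.

def lead_time_hours(coding_hours: float, pipeline_hours: float) -> float:
    """Total elapsed time from 'start coding' to 'in production'."""
    return coding_hours + pipeline_hours

PIPELINE = 14 * 24  # two weeks of manual approvals, in hours

# A human writes the code in ten hours; an agent writes it in ten minutes.
before = lead_time_hours(coding_hours=10, pipeline_hours=PIPELINE)
after = lead_time_hours(coding_hours=10 / 60, pipeline_hours=PIPELINE)

speedup = before / after
print(f"before: {before:.1f}h, after: {after:.2f}h, speedup: {speedup:.3f}x")
```

A 60x faster coding step buys roughly a three percent reduction in lead time. The constraint is the pipeline, not the coding.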

Worth reading:


Craft alienation finally has a name, and it's spreading

Hong Minhee coined the term craft alienation, and it's resonating because it names a new kind of fault line. LLM coding assistants split developers not into good and bad, but into those who understand what they build and those who've outsourced that understanding. The identity crisis is real, and it's showing up everywhere from engineering blogs to the New York Times.

Ian Cooper makes a useful distinction between a "coder" and a "programmer." The coder role, translating intent into syntax, now belongs to the agent. But the programmer role, the one that designs, reasons about trade-offs, and makes judgment calls, remains fundamentally human. O'Reilly Radar's third AI Codecon is wrestling with exactly this question: what does craftsmanship look like when the craft's most visible artifact is increasingly machine-generated?

Mo Bitar takes the most provocative stance: there's no skill in AI coding. The models do the heavy lifting and you can't meaningfully change outcomes through clever prompting. Philip Kiely offers an instructive counterpoint. He works at an AI company, tried to get AI to write his book about AI, watched it completely fail, then woke up at 5am for six weeks and wrote it himself. Sometimes the craft resists automation not because the tools are bad, but because the work demands something the tools can't provide.

Worth reading:


Comprehension debt names the cost we've been ignoring

Addy Osmani coined comprehension debt and the concept is sticking because it names something distinct from technical debt. Technical debt is code you know is suboptimal. Comprehension debt is code you can't even evaluate because nobody on the team understands it well enough to judge. An AI can generate a thousand lines of correct, performant code in minutes. The team then owns a thousand lines they didn't write, didn't design, and can't confidently modify.

Osmani and O'Reilly argue that specs become the forcing function in an agent-first world. If consumers can't clearly describe what they want, engineers won't be able to either, and agents certainly won't fill that gap. Spec-driven development isn't bureaucracy: it's the only reliable way to maintain comprehension when the code-generation bottleneck disappears. Gerald Sussman's warning resurfaces here too: in the days of coding agents, understanding what the code does matters more than ever.
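One lightweight way to make a spec a forcing function is to express its acceptance criteria as executable checks that gate the agent's output. A minimal sketch, with hypothetical names and spec clauses of my own (not a format from Osmani or O'Reilly):

```python
# A spec as executable acceptance criteria: the agent's output either
# passes the checks or the change doesn't land. All names are illustrative.

import re
from typing import Callable

# Suppose the spec says: "slugify lowercases, replaces runs of
# non-alphanumerics with single hyphens, and strips edge hyphens."
SPEC: list[tuple[str, Callable[[Callable[[str], str]], bool]]] = [
    ("lowercases input",      lambda f: f("Hello") == "hello"),
    ("hyphenates separators", lambda f: f("a b--c") == "a-b-c"),
    ("strips edge hyphens",   lambda f: f("  trim  ") == "trim"),
]

def verify(candidate: Callable[[str], str]) -> list[str]:
    """Return the names of spec clauses the candidate fails."""
    return [name for name, check in SPEC if not check(candidate)]

# An agent-generated candidate implementation:
def slugify(text: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

print(verify(slugify))  # an empty list means the spec is satisfied
```

The point isn't the slug function; it's that comprehension lives in the spec, which the team wrote and can judge, rather than in the generated code.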

Tom Wojcik asks the measurement question nobody's tracking: what's the real cognitive cost of prolonged AI-assisted coding? We count lines generated, PRs merged, velocity points delivered. We don't count the accumulating gap between what the codebase does and what the team actually understands about it. That gap is comprehension debt, and unlike technical debt, you can't even see it accumulating until something breaks and nobody knows why.

Worth reading:


The winning pattern is separating decisions from execution

Across co-design frameworks, quality processes, and refactoring workflows, a consistent pattern keeps emerging: the teams shipping well with agents have formalized the split between deciding what to build and executing the build. David Whitney's co-design framework makes this explicit: you own the design decisions, the agent owns the execution. Steve Yegge's approach to quality when "vibing" follows the same logic: don't inspect the code, that's not your job; define the quality bar and verify the output meets it.

I'm finding this maps cleanly onto Cynefin: not every AI coding problem is the same type of problem, and choosing which approach fits is the irreducibly human decision. Paint-by-numbers for the obvious, collaborative exploration for the complex. The agent doesn't know whether it's in obvious or complex territory. You do, and that judgment is the thing that can't be delegated.

This pattern also shows up in tooling. AI-assisted refactoring tools like ast-grep separate three phases: detect, preview rewrites, apply. Each phase has built-in test support, keeping the human in the decision loop at every transition. The code review skeptics at Agent Driven Development land in the same place: stop reviewing code line by line, start proving it works through continuous delivery, not ceremony. The common thread is that verification beats inspection when code-generation cost drops to near zero.
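The three-phase shape is easy to sketch outside any particular tool. A toy detect/preview/apply loop in Python (not ast-grep itself, just the pattern it embodies, using regex where a real tool would match on the AST):

```python
# Detect / preview / apply: keep the human at every phase transition.
# A toy rewriter using regex; real tools like ast-grep match on the AST.

import difflib
import re

def detect(source: str, pattern: str) -> list[str]:
    """Phase 1: report matches without touching anything."""
    return re.findall(pattern, source)

def preview(source: str, pattern: str, replacement: str) -> str:
    """Phase 2: show a unified diff of the proposed rewrite."""
    rewritten = re.sub(pattern, replacement, source)
    diff = difflib.unified_diff(
        source.splitlines(), rewritten.splitlines(),
        fromfile="before", tofile="after", lineterm="",
    )
    return "\n".join(diff)

def apply(source: str, pattern: str, replacement: str, approved: bool) -> str:
    """Phase 3: rewrite only after explicit human approval."""
    return re.sub(pattern, replacement, source) if approved else source

code = 'print "hello"\nprint "world"\n'
print(detect(code, r'print "(\w+)"'))                    # what would change
print(preview(code, r'print "(\w+)"', r'print("\1")'))   # how it would change
code = apply(code, r'print "(\w+)"', r'print("\1")', approved=True)
```

Each phase is inspectable on its own, which is exactly where the human decision loop lives: nothing mutates until the preview has been judged.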

Worth reading:


Agentic engineering is crystallizing into a discipline

Six months ago, "working with AI agents" was ad-hoc experimentation. This week, it's getting formal specifications. The MinimumCD community published Agentic Continuous Delivery (ACD), extending their continuous delivery framework with constraints and practices designed specifically for agent-generated changes. Bassim Eledath published an 8-level maturity model. Simon Willison published both a Pragmatic Summit talk and a written guide defining agentic engineering as a set of patterns, not a product category.

The formalization matters because it creates a shared vocabulary. When Willison defines specific patterns from real usage and David Vujic warns against falling into waterfall-style development with agents, they're having a conversation that wasn't possible six months ago. The discipline is developing its own anti-patterns, its own best practices, its own quality criteria. That's what fields look like when they transition from "interesting experiment" to "thing we need to get right."

What's notably absent from most of these frameworks is any claim that agents change the fundamentals. The MinimumCD practice guide is a phased migration from quarterly releases to continuous delivery, now extended with agent-specific guardrails. Vujic's core argument is that agile principles apply more than ever when agents are involved. The discipline is crystallizing not around new principles, but around new applications of old ones.

Worth reading:


Quick Hits


Curated from 30 sources across articles, podcasts, and videos. Week of March 14-21, 2026.