Jellypod, Inc.

The Claude Code Changelog


Claude Code 2.1.116: Faster Resume, Safer rm Rules

This episode breaks down Claude Code 2.1.116’s biggest changes: a much faster /resume flow for huge sessions, plus a critical safety fix that stops broad auto-allow rules from bypassing dangerous-path checks.

We also cover hook behavior in --agent mode, MCP startup improvements, and a new GitHub rate-limit hint that makes automation feel less flaky.



Chapter 1

The 15-second friction that breaks agent flow

Lachlan Reed

Welcome to the show! I’m Lachlan Reed with James Turner, and mate, I wanna start with a very specific kind of misery: you’re deep in a debugging loop, you finally see the shape of the bug, you hit /resume on a big Claude Code session... and then you just stare at a spinner for 15, 20 seconds. On a session that’s ballooned past 40 megabytes, it used to feel like waiting on a build cache that forgot its job.

James Turner

[quickly] That “40 megabytes” part matters. This isn’t tiny-chat overhead. This is a real agentic session that’s been going for hours, maybe days, with tools, branches, retries, all of it. And in those workflows, 15 to 20 seconds is long enough to lose the exact stack frame you were holding in your head.

Lachlan Reed

Exactly. It’s not just waiting -- it’s momentum leakage. I’ve had that thing where I’m halfway back into a nasty frontend state bug, then the tool pauses, and my brain just wanders off like a roo through a busted fence. Version 2.1.116 targets THAT pain: /resume is now up to 67% faster on large sessions.

James Turner

[curious] Up to 67% faster is a chunky number. So what changed -- was it compression, caching, what?

Lachlan Reed

Not magic, thankfully. Two pretty practical things: incremental indexing, and skipping dead-fork entries during startup. Incremental indexing means it doesn’t keep reprocessing the whole giant session from scratch every time. It updates what changed. Much saner.

James Turner

And the “dead-fork entries” bit is the part I think people won’t get unless they’ve lived in these tools. A fork here is basically a branch in the agent’s work history, right? Like, try one approach, then another, then abandon the first one.

Lachlan Reed

Yeah, that’s it. Long sessions grow these abandoned side roads. If your team leans hard on /branch, you can pile up loads of dead forks -- stuff that’s no longer relevant, but still hanging around at startup like old cables in the shed. So skipping those dead entries means less junk to wade through before you’re back in flow.
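To make the two optimizations concrete for readers of the transcript: neither of us has seen Claude Code's internals, but the ideas described above can be sketched as a toy index update. Everything here (entry shape, field names, the `rebuild_index` helper) is hypothetical, not Claude Code's actual code.

```python
# Hypothetical sketch of the two /resume optimizations discussed above:
# (1) incremental indexing: only process entries added since the last index,
# (2) dead-fork skipping: ignore entries from abandoned branches.

def rebuild_index(entries, index, dead_forks):
    """Update `index` in place with new, live entries only.

    entries:    full session log, a list of dicts like
                {"id": int, "fork": str, "text": str}
    index:      {"last_seen": int, "items": {entry_id: text}}
    dead_forks: set of fork names that were abandoned
    """
    for entry in entries:
        if entry["id"] <= index["last_seen"]:
            continue                      # already indexed: incremental part
        if entry["fork"] in dead_forks:
            continue                      # abandoned branch: dead-fork skip
        index["items"][entry["id"]] = entry["text"]
        index["last_seen"] = entry["id"]
```

On a second resume, only entries with ids past `last_seen` are touched, which is why the win scales with session size: the 40 MB sessions have the most old, already-indexed history to skip.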

James Turner

[skeptical] Important caveat, though: “40 megabytes and above.” That tells me this is a large-session optimization, not some universal speed miracle.

Lachlan Reed

Spot on. If your sessions are small, you probably won’t feel a night-and-day change. But for the people with monster sessions -- the ones living in long-running agent workflows -- this is the difference between a tool that feels sticky and one that gets out of your way.

Chapter 2

Why one broad allow rule could silently punch a hole through safety

James Turner

[serious] Okay, now the sharp edge. Before 2.1.116, a sandbox auto-allow rule like Bash(rm:*) could create a nasty hole. It could auto-allow rm or rmdir against dangerous targets -- slash, your home directory, other critical system paths. That is not a theoretical paper cut. That is catastrophic if it fires.

Lachlan Reed

And “Bash(rm:*)” sounds so harmless when you’re setting it up. Like, oh yeah, cleanup workflow, bin some temp files, no worries. But wildcard rules are slippery little devils. You think you’re giving the tool a broom, and suddenly it’s got a bulldozer.
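For context on the rule shapes being discussed: Claude Code permission rules live in settings files such as .claude/settings.json. The exact matcher syntax below is a sketch worth verifying against the permissions docs, but the idea is to scope rm to specific project paths rather than use the wildcard form from the episode:

```json
{
  "permissions": {
    "allow": [
      "Bash(rm ./tmp/*)",
      "Bash(rm -rf ./build/artifacts/*)"
    ]
  }
}
```

The broad form, "Bash(rm:*)", matches rm with any arguments at all, which is exactly the shape the 2.1.116 fix now backstops with the dangerous-path check.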

James Turner

Right. The core bug was that the dangerous-path safety check could get bypassed when those auto-allow rules were in play. So if you had a runaway agent, or even a crafted tool call, it could delete something that should have been absolutely off-limits.

Lachlan Reed

Let me try to say that back in plain English: “allowed” was accidentally stronger than “safe.” And that’s upside down.

James Turner

Exactly. In 2.1.116, the dangerous-path check becomes the hard floor. Even if auto-allow is configured, that check still applies. So “allowed” no longer means “allowed to be catastrophic.”
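The "hard floor" ordering can be sketched in a few lines. This is not Claude Code's implementation; the path list and helper names are made up to illustrate one point: the dangerous-path check runs before any allow rule is consulted, so auto-allow can never override it.

```python
from pathlib import Path

# Hypothetical "hard floor" check: paths are resolved (symlinks, "..",
# "~") on both sides before comparison, so disguised forms still match.
DANGEROUS = {str(Path(p).expanduser().resolve()) for p in ("/", "/etc", "/usr", "~")}

def is_dangerous(target: str) -> bool:
    return str(Path(target).expanduser().resolve()) in DANGEROUS

def permit_rm(target: str, auto_allowed: bool) -> bool:
    if is_dangerous(target):
        return False          # hard floor: allow rules never apply here
    return auto_allowed       # otherwise the configured rules decide
```

The pre-2.1.116 bug was, in effect, checking `auto_allowed` first; swapping the order is what makes "allowed" weaker than "safe" again.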

Lachlan Reed

[hesitates] I’m gonna be the slight devil’s advocate here. Broad allow rules ARE a convenience win in cleanup-heavy workflows. If you’ve got agents constantly removing generated files, temp folders, test artifacts -- typing through prompts every time is a drag.

James Turner

Sure, but you picked “rm” and “rmdir.” Those are the two commands where convenience should come SECOND. If the path is “/” or “$HOME,” I do not care how smooth your cleanup workflow is. The guardrail has to win.

Lachlan Reed

[laughs softly] Yeah, fair. Nobody wants the fastest path to becoming unemployed. And this is one of those fixes where if you notice it, that probably means you were depending on a loophole you really shouldn’t have been depending on.

James Turner

That’s the right framing. This changes behavior for some setups, but on purpose. It closes a bypass that should never have existed.

Chapter 3

The quieter fixes that make agent workflows feel less flaky

Lachlan Reed

The next batch is quieter, but honestly these are the ones that make tooling feel less haunted. There was a frontmatter hook bug: PostToolUse and PreToolUse hooks defined in CLAUDE.md or in agent frontmatter were silently not firing when the session was launched with claude --agent, even though they worked in interactive sessions.

James Turner

[sharply] “Silently not firing” is the bad phrase there. Because if you’re in headless CI, hooks are often doing the boring important stuff -- logging, auditing, policy checks, little guardrails. If those work interactively but not through --agent, teams can think they have coverage when they actually don’t.

Lachlan Reed

Yep. False sense of safety. Which is worse than a loud failure, because a loud failure at least taps you on the shoulder. This one just smiled and nodded.

James Turner

And the fix is mostly “no surprises.” Interactive-session hook behavior stays the same. What changes is the automation path now lines up, which is exactly what you want from consistency.
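For readers who haven't set up hooks: the settings-file form of a PostToolUse hook looks roughly like the sketch below. The frontmatter form discussed in this episode uses the same event names; the matcher value and the audit-log script path here are made up, and exact field names should be checked against the hooks documentation.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/audit-log.sh" }
        ]
      }
    ]
  }
}
```

The 2.1.116 fix is about where such definitions take effect, not their shape: the same hook that fires interactively now also fires under claude --agent.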

Lachlan Reed

There’s also a nice MCP startup improvement. If you’ve got multiple stdio servers configured, resources/templates/list used to get fetched eagerly for every server at startup. Now it’s deferred until the first @-mention.

James Turner

That “first @-mention” detail is huge. It means don’t load all the menus before anyone’s ordered dinner. If I’m not actually referencing a server with @, don’t make startup pay the tax.
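The deferral pattern being described is plain lazy loading, and can be sketched in a few lines. The class and fetcher below are stand-ins, not anything from Claude Code or the MCP SDK.

```python
# Toy sketch of deferred fetching: instead of listing every MCP server's
# resources at startup, fetch (and cache) a server's list only the first
# time that server is @-mentioned.

class ResourceCatalog:
    def __init__(self, servers, fetch):
        self.servers = servers        # configured server names
        self.fetch = fetch            # fetch(server) -> list of resources
        self.cache = {}               # filled lazily, one server at a time

    def on_mention(self, server):
        if server not in self.cache:  # first @-mention pays the cost...
            self.cache[server] = self.fetch(server)
        return self.cache[server]     # ...later mentions hit the cache
```

Startup cost drops from "one fetch per configured server" to zero, and a server you never @-mention is never fetched at all.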

Lachlan Reed

Beautifully put. Less eager fetching, less startup drag, especially in setups with a bunch of servers hanging off the side. And one more tiny but real quality-of-life fix: when gh commands hit GitHub API rate limits, the Bash tool now surfaces a hint.

James Turner

Which sounds small until you’ve watched an agent bonk into GitHub limits and keep retrying like it’s headbutting a locked door. A surfaced hint gives it a chance to back off instead of getting trapped in a dumb loop.
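The changelog doesn't say what the hint text is, but the mechanism is presumably along these lines: scan a failed gh command's output for GitHub's rate-limit wording and append an explicit hint. The regex and hint wording below are assumptions for illustration.

```python
import re

# Hypothetical sketch of surfacing a rate-limit hint: if a failed `gh`
# command's output mentions GitHub's rate limit, append a hint the agent
# can react to, instead of letting it blindly retry.
RATE_LIMIT_RE = re.compile(r"API rate limit exceeded|secondary rate limit", re.I)

def annotate_gh_failure(stderr: str) -> str:
    if RATE_LIMIT_RE.search(stderr):
        return (stderr.rstrip() + "\nHint: GitHub API rate limit hit; "
                "wait for the limit to reset or authenticate with a "
                "token before retrying.")
    return stderr
```

The point is the shape of the fix: turning a generic non-zero exit into a signal the agent can use to back off rather than loop.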

Lachlan Reed

That’s the vibe of this release for me: fewer invisible inconsistencies, fewer startup taxes, fewer “why is this flaky only in automation?” moments.

Chapter 4

What changed, what didn’t, and the one question worth leaving open

James Turner

So if we zoom out, a couple caveats matter. The /resume speedup is most noticeable on large sessions -- 40 megabytes and up. Smaller sessions are not suddenly strapped to a rocket. This is a targeted fix for heavy users, not a universal turbo button.

Lachlan Reed

And on the safety side, there is a real behavior change. If somebody intentionally relied on sandbox auto-allow bypassing the dangerous-path check... well, that door is shut now. Properly shut. Deadbolt on.

James Turner

[reflective] Which I think is good, but it does expose the bigger tension. People want AI coding tools to feel frictionless: faster resumes, fewer prompts, less babysitting. That’s the whole promise of agentic workflows.

Lachlan Reed

But the more autonomous the tool gets, the less I want “she’ll be right” energy around rm and rmdir. Like, I love speed. I really do. I build products; I hate sluggishness. But some prompts are speed bumps in front of a cliff.

James Turner

And the hooks fix kind of sits right in the middle of that debate. Interactive behavior didn’t change, which is nice and boring. But making --agent mode consistent means your guardrails actually travel with the automation path instead of evaporating when things go headless.

Lachlan Reed

[softly] That might be the whole story, actually. The best improvements here aren’t just “faster” or “safer.” They’re more PREDICTABLE. Resume does less useless work. Dangerous paths stay dangerous. Hooks fire where you thought they fired.

James Turner

So the open question I keep coming back to is this: if these tools keep getting more autonomous, what do we value more when the two are in tension -- fewer prompts and faster flow, or stricter always-on guardrails even when they occasionally block the fastest path?

Lachlan Reed

[warmly] Yeah. Because a tool that interrupts your flow is annoying... but a tool that never interrupts itself might be worse. Thanks for listening, and we’ll catch you next time.