Why I’m More Excited About Dreaming Than 2x Token Limits
Everyone’s talking about the same headline from Code with Claude: 2x token limits. And it’s genuinely exciting news. More context means longer conversations, bigger codebases, richer documents, all handled in a single pass. The AI community has been waiting for this, and the celebration is well-deserved.
But I’ll be honest: that’s not what caught my attention.
What I’m actually excited about is Dreaming.
The Part of the Announcement Nobody’s Talking About
Anthropic’s Managed Agents offering quietly introduced something that I think is far more significant in the long run: agents that review their own sessions and rewrite their own memory between runs.
That’s what Dreaming is. It’s not just a catchy name; it’s a fundamentally different approach to how agents improve over time.
Right now, if you want a well-tuned agent, you have to do it yourself. I spend a couple of hours every weekend going through my agents’ sessions: pulling out the patterns that worked, killing the ones that didn’t, updating system prompts, refining instructions. It’s part craft, part debugging, part intuition. And it works, but it doesn’t scale.
Dreaming automates exactly that loop.
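To make the loop concrete, here is a minimal sketch of what "review sessions, keep what worked, rewrite memory" might look like. Everything here is hypothetical: `Session`, `AgentMemory`, and `dream` are illustrative names, not Anthropic's API, and the failure diagnosis is stubbed out where a real system would call a model.

```python
# Hypothetical sketch of a "dream" pass between agent runs: review past
# sessions, distill lessons from failures, and rewrite standing instructions.
# None of these names come from Anthropic's Managed Agents offering.
from dataclasses import dataclass, field

@dataclass
class Session:
    task: str
    transcript: str
    succeeded: bool

@dataclass
class AgentMemory:
    instructions: list[str] = field(default_factory=list)

def dream(memory: AgentMemory, sessions: list[Session]) -> AgentMemory:
    """Offline pass between runs: keep what worked, patch what didn't."""
    lessons = []
    for s in sessions:
        if not s.succeeded:
            # In a real system an LLM would diagnose the transcript; here
            # we just record the failed task as something to watch for.
            lessons.append(f"Past failure on: {s.task}. Review approach before retrying.")
    # Rewrite memory: old instructions plus newly distilled lessons,
    # deduplicated so repeated failures don't bloat the prompt.
    merged = list(dict.fromkeys(memory.instructions + lessons))
    return AgentMemory(instructions=merged)

sessions = [
    Session("refactor auth module", "...", succeeded=True),
    Session("migrate database schema", "...", succeeded=False),
]
memory = dream(AgentMemory(["Prefer small, reviewable diffs."]), sessions)
print(memory.instructions)
```

The structural point is the function signature: memory goes in, sessions go in, a rewritten memory comes out, with no human in the loop. That is the step I currently do by hand on weekends.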
Why Self-Improving Agents Change Everything
The current generation of AI agents is impressive. They can write code, draft documents, search the web, call APIs, and chain complex multi-step tasks together. But anyone who works with them closely knows the other side of that story: they can also be frustratingly inconsistent, confidently wrong, and stubbornly repetitive in their mistakes.
The reason is simple: they don’t learn from their own failures. Each session starts fresh. Whatever went wrong last time (a misunderstood instruction, a bad tool call, a flawed reasoning pattern) is forgotten. The agent has no memory of its own history, no mechanism to reflect, and no way to improve without a human stepping in.
Dreaming changes that. Agents that can review their own sessions, identify what worked and what didn’t, and update their own memory and instructions between runs are agents that get better over time, automatically, without requiring a human to babysit the improvement loop.
That’s not a small upgrade. It’s a different category of system.
The Open Source Side: Hermes
For those who prefer to stay on the open source side of things, it’s worth noting that Hermes offers a similar promise. The same core idea — agents that self-diagnose and self-improve — is being explored outside of Anthropic’s ecosystem too.
This matters because it signals that self-improving agents aren’t a proprietary bet. They’re a direction the field is moving in, across both commercial and open source tracks. The question isn’t whether this becomes mainstream; it’s how quickly.
What This Means for Teams Building with AI
If you’re building products or workflows on top of AI agents today, this shift has real implications.
First, the cost of agent maintenance drops. Right now, keeping an agent well-tuned requires ongoing human attention. As self-improvement becomes a standard capability, that overhead shrinks, and the agents you deploy stay sharp without constant intervention.
Second, the performance ceiling rises. An agent that learns from its own mistakes compounds its improvements over time. The gap between a freshly deployed agent and a well-run one will widen, and the teams that start building with self-improving agents now will have a meaningful head start.
Third, the nature of agent oversight changes. You’ll spend less time fixing what went wrong and more time setting the direction for where the agent should go. The human role shifts from debugger to strategist.
The Headline Isn’t Always the Story
A 2x token limit is the kind of announcement that’s easy to celebrate: it’s concrete, it’s measurable, and it makes existing workflows immediately better. I get why it’s dominating the conversation.
But Dreaming is the kind of capability that quietly reshapes what’s possible. It’s the difference between a tool that’s powerful and a system that grows. In the long run, systems that grow tend to win.
That’s what I’m watching.

