Anthropic just shipped the most significant redesign of Claude Code since its launch. The desktop app now includes a session sidebar for juggling multiple conversations, drag-and-drop layout control, an integrated terminal that doesn't require jumping to a separate window, and a proper file editor. It's a legitimate upgrade—if you've been using Claude Code in the terminal, this feels like a breath of fresh air.
The Repeatable Routines Feature
Alongside the redesign, Anthropic rolled out "repeatable routines" as a research preview. Think of these as scheduled automation for development tasks—things like running tests on a timer, automatically reviewing PRs when they land, or triggering code refactoring at specific intervals. It's a nod to the fact that AI coding assistants aren't just for ad-hoc prompting anymore; they're becoming part of the development loop.
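Anthropic hasn't publicly documented the routine format, so treat this as a generic analogy rather than Claude Code's actual interface: the "run tests on a timer" pattern reduces to a scheduler loop like the sketch below. The `run_tests` command, the `pytest` invocation, and the interval are all placeholder assumptions.

```python
import subprocess
import time

def run_tests() -> int:
    """Placeholder task: run a test suite and return its exit code."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode

def run_on_interval(task, interval_seconds: float, iterations: int) -> list:
    """Invoke `task` every `interval_seconds`, collecting each return value.

    A real routine would run indefinitely (or on a cron/CI schedule);
    `iterations` is bounded here only to keep the sketch finite.
    """
    results = []
    for _ in range(iterations):
        results.append(task())
        time.sleep(interval_seconds)
    return results
```

In practice this job belongs in cron or a CI scheduler rather than a hand-rolled loop; the point of a built-in routines feature is that the assistant owns this loop for you.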
The real question isn't whether Claude Code looks better. It's whether the model underneath still performs at the level developers expect.
The Performance Controversy
Here's where things get uncomfortable. As Anthropic shipped this glossy redesign, a growing chorus of developers reported that Claude Opus 4.6—and Claude Code itself—feel noticeably slower and less capable than they were a few months ago. The accusations range from subtle quality degradation to outright "nerfing."
Anthropic employees have publicly pushed back, denying that the company intentionally degrades model performance to manage capacity. But the timing is awkward: the redesign lands the same week users are questioning whether the product they're using is actually getting worse.
There's a reasonable explanation: compute constraints during high demand could lead to prioritization changes that affect model routing and quality. But transparency around this is nearly nonexistent. Developers build workflows around these tools. They deserve to know if their tooling is changing underneath them.
What This Means
The desktop redesign is genuinely useful. The session management alone makes multi-project work much less painful. But the surface-level improvements risk being overshadowed if the underlying model quality keeps slipping—in perception or in reality.
Meanwhile, OpenAI is pushing forward with GPT-5.4-Cyber, and the broader competitive landscape isn't waiting. Anthropic needs to solve both problems: ship a better UI and restore confidence that the model inside hasn't been quietly dialed back. The redesign is the easy part.