Anthropic just released a desktop code editor built around Claude. It's called Claude Code, it's free during beta, and according to developer Theo, it's not as good as Cursor. That matters: understanding why it falls short is more useful than knowing it exists.
What Claude Code Actually Is
It's a lightweight code editor with Claude integrated directly into the interface. You can ask Claude to generate code, explain existing code, or refactor something you've written. The model has access to your project files and can reference your codebase when answering questions. It runs locally on your machine, and during beta, it's free to use.
On the surface, that sounds useful. In practice, it's missing the features that make Cursor indispensable for developers who've already switched. The difference isn't obvious until you've used both, but once you have, it's hard to go back.
What Makes Cursor Better
Cursor isn't just an AI chatbot bolted onto a code editor. It's a rethink of how AI should interact with code. The killer feature is multi-file editing. You can highlight code across multiple files, ask Cursor to refactor or extend it, and it applies changes across all of them simultaneously. Claude Code doesn't do that. You're working file-by-file, which breaks the flow.
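Why does working file-by-file break the flow? Because a change like a rename only makes sense if it lands everywhere at once. Here's a minimal sketch of the idea, assuming a batch of (path, old snippet, new snippet) edits; the function name and approach are illustrative, not how either Cursor or Claude Code actually implements it.

```python
from pathlib import Path

def apply_edits(edits):
    """edits: list of (path, old, new) tuples.

    Validate every edit first, then write, so a failed match
    leaves no file half-changed (all-or-nothing).
    """
    staged = []
    for path, old, new in edits:
        text = Path(path).read_text()
        if old not in text:
            raise ValueError(f"{path}: expected snippet not found")
        staged.append((path, text.replace(old, new)))
    # Only reached if every edit validated; now commit the writes.
    for path, updated in staged:
        Path(path).write_text(updated)
```

The all-or-nothing guarantee is the point: a rename that lands in three files but not the fourth is worse than no rename at all, and that's exactly the failure mode a file-by-file workflow invites.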
The other thing Cursor does well is context management. It indexes your entire codebase and understands relationships between files. When you ask it to add a feature, it knows which files need to change and how they connect. Claude Code has file access, but it doesn't have the same level of contextual awareness. You end up explaining more and getting less precise results.
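To make "indexes your entire codebase" concrete, here's a toy sketch of the idea using a simple bag-of-words relevance score. Real tools, Cursor included, use embeddings and dependency graphs; this only shows what it means to rank files against a request instead of making the user paste context by hand. All names here are hypothetical.

```python
import re
from collections import Counter

def index_files(files):
    """files: dict of path -> source text. Returns path -> token counts."""
    return {path: Counter(re.findall(r"[A-Za-z_]\w+", text.lower()))
            for path, text in files.items()}

def relevant_files(index, query, top_n=3):
    """Rank files by how many query tokens they contain."""
    terms = re.findall(r"[A-Za-z_]\w+", query.lower())
    scores = {path: sum(counts[t] for t in terms)
              for path, counts in index.items()}
    return sorted((p for p in scores if scores[p] > 0),
                  key=lambda p: -scores[p])[:top_n]
```

Even this crude version captures the difference in experience: ask to "fix the login flow" and the tool surfaces the auth code on its own, rather than waiting for you to explain which files matter.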
Cursor also has inline suggestions that feel like GitHub Copilot but with better reasoning. It predicts what you're trying to write and offers completions that actually make sense in the context of your project. Claude Code has basic autocomplete, but it's not in the same league.
Where Claude Code Might Still Be Worth Using
If you're not a professional developer, Claude Code is less intimidating than Cursor. The interface is simpler, the learning curve is gentler, and during the beta it costs nothing. For someone learning to code or working on small personal projects, it's a solid starting point.
The other use case is experimentation. Because it's built directly by Anthropic, it gets new Claude features faster than third-party tools. If you want to test Claude's latest capabilities in a coding context, Claude Code is the fastest way to do it.
But for professional work - shipping features, maintaining a large codebase, working under deadlines - Cursor is still the better tool. The efficiency gains from multi-file editing and better context management are too significant to ignore.
What This Says About AI Coding Tools
The interesting thing isn't that Claude Code exists. It's that it launched and immediately got compared unfavourably to Cursor. That tells you where the bar is now. It's not enough to integrate an AI model into a code editor. The integration has to be smarter than a chatbot with file access.
Developers want tools that understand their workflow, not just their code. Multi-file editing isn't a nice-to-have. It's foundational. If an AI tool can't track changes across a project, it's forcing you to think like a machine instead of letting the machine think like you.
The other lesson is about speed of execution. Cursor wasn't built by an AI lab. It was built by developers who used AI tools daily and got frustrated with the limitations. They shipped a product that solved their own problems, and it turned out thousands of other developers had the same problems. That's the pattern to watch - the best tools come from people who live in the problem space, not from companies trying to find a use case for their models.
What Actually Matters in AI Code Editors
Context. The tool needs to understand your entire project, not just the file you're editing. Cursor does this. Claude Code doesn't, at least not yet.
Speed. If waiting for the AI to respond breaks your flow, you'll stop using it. Cursor is fast. Claude Code is fast enough, but not noticeably faster than using the Claude web interface, which raises the question of why you'd use the desktop app.
Trust. The tool needs to produce code you can rely on without constant verification. This is where model quality matters, but also where UX matters. Cursor shows you exactly what it's changing and why. Claude Code is less transparent, which makes it harder to trust.
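The transparency Cursor gets credit for here is, at its core, showing a reviewable diff before applying anything. That part is simple enough to sketch with Python's standard-library difflib; this is an illustration of the pattern, not either tool's actual review UI.

```python
import difflib

def preview_change(path, before, after):
    """Return a unified diff a user can review before accepting the edit."""
    diff = difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    return "".join(diff)
```

A tool that always routes AI edits through a preview like this earns trust cheaply: the user sees exactly which lines change before anything touches disk.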
The worst thing an AI coding tool can do is slow you down. If you're explaining the same context repeatedly, or manually applying changes the AI should have handled, you're better off without it. That's the test. Does the tool make you faster, or does it add cognitive overhead?
Should You Try Claude Code?
If you're already using Cursor, probably not. The switching cost isn't worth it. You'd be trading a tool you know for one that's objectively less capable in the areas that matter most.
If you're not using an AI code editor yet, Claude Code is worth trying as a starting point. It's free, it's simple, and it'll show you what's possible. But once you've outgrown it - and you will outgrow it quickly - Cursor is where you'll end up.
For Anthropic, this is a decent first version. But the gap between "decent first version" and "tool developers actually switch to" is significant. Cursor didn't get there by being first. It got there by being better. Claude Code will need to do the same.