I've been using Claude Code as my main AI coding tool for the past few months. Before that I was mostly in Cursor, and I still use Cursor for certain things. But Claude Code has become the tool I reach for first when I'm doing real frontend work - building features, refactoring, debugging, all of it.
This isn't a tutorial or a feature overview. It's how I actually use it, what works, what doesn't, and where I still switch back to Cursor.
Claude Code runs in the terminal. There's no fancy GUI, no sidebar panel, just a command line interface that can read your files, run commands, and make changes to your codebase. That sounds limiting but it's actually the thing that makes it good.
When I start working on something, I usually do one of these:
For a new feature, I describe what I want and give it context about the codebase. On a recent e-commerce project, I needed a product quick-view modal:
```
Look at how the product card works in src/components/ProductCard and the
existing modal component in src/components/ui/Modal. I need a quick-view
modal that opens when you click "Quick View" on a product card. It should
show the product images, price, variant selector, and add-to-cart button.
Follow the same data fetching pattern as the product detail page.
```
It read through the existing components, matched the patterns I was already using, and built the whole thing. The variant selector even handled out-of-stock states because it saw how I'd done it on the product detail page.
For debugging, I paste the error and point it at the relevant files:
```
I'm getting a hydration mismatch on the cart page. Here's the error:
[paste error]. Check src/app/cart/page.tsx and the CartItems component.
```
It'll read through the component tree, find the mismatch, and fix it. In this case it turned out the cart count was reading from localStorage on the server render. Claude Code spotted it in about 10 seconds - that would've taken me a while to trace through manually.
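The fix for that class of bug follows a standard pattern: anything that only exists in the browser has to be read after hydration, not during render. Here's a minimal sketch of the guard — the storage key and function names are mine, not from the actual project:

```typescript
// Grabs the browser storage object if one exists. On the server
// (or in plain Node) there is no localStorage, so this returns undefined.
function getStorage(): { getItem(key: string): string | null } | undefined {
  return (globalThis as any).localStorage;
}

// Safe read: the server render and the first client render both see 0,
// so the markup matches and hydration succeeds. The real value is then
// loaded after mount, where it's fine for it to differ:
//
//   const [count, setCount] = useState(0);
//   useEffect(() => { setCount(readCartCountFromStorage()); }, []);
function readCartCountFromStorage(): number {
  const storage = getStorage();
  if (!storage) return 0; // server render: no browser storage
  const raw = storage.getItem("cart-count"); // hypothetical key
  return raw !== null ? Number(raw) : 0;
}
```

The key idea is that the initial render must be deterministic across environments; browser-only state gets filled in afterwards.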
For refactoring, I describe what I want changed and let it figure out the scope:
```
We're using a mix of fetch() calls and our api client across the app.
Migrate everything to use the api client from src/lib/api.ts. Check all
files in src/app/ and src/components/ for raw fetch calls.
```
It found every raw fetch call across the project, updated them to use the API client, and handled the different response formats. That kind of find-and-replace-but-not-really refactor takes ages manually but was done in under a minute.
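For context, an api client like that is usually a thin wrapper around fetch that centralises the base URL, headers, and error handling. Here's a plausible shape for src/lib/api.ts — the real file isn't shown in this post, so treat every name below as an assumption:

```typescript
// Hypothetical sketch of src/lib/api.ts - names and shape are assumptions.
const BASE_URL = "/api";

type RequestOptions = {
  method?: string;
  body?: string;
  headers?: Record<string, string>;
};

async function request<T>(path: string, opts: RequestOptions = {}): Promise<T> {
  const res = await fetch(`${BASE_URL}${path}`, {
    method: opts.method ?? "GET",
    body: opts.body,
    headers: { "Content-Type": "application/json", ...opts.headers },
  });
  // One place to handle failures instead of per-call-site checks
  if (!res.ok) throw new Error(`API error ${res.status} on ${path}`);
  return res.json() as Promise<T>;
}

export const api = {
  get: <T>(path: string) => request<T>(path),
  post: <T>(path: string, body: unknown) =>
    request<T>(path, { method: "POST", body: JSON.stringify(body) }),
};

// The refactor then turns
//   const res = await fetch("/api/products/1");
//   const product = await res.json();
// into
//   const product = await api.get<Product>("/products/1");
```

The win isn't the wrapper itself — it's that response parsing and error handling stop being copy-pasted and subtly inconsistent across call sites.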
Reading your codebase and matching patterns. This is the biggest difference from other AI tools I've used. When I ask Claude Code to build something, it doesn't just generate generic React code. It reads my existing files and writes code that looks like mine - same naming conventions, same component structure, same Tailwind patterns.
Multi-file changes. Need to rename a component, update all its imports, adjust the tests, and update the barrel export? One prompt. It handles the coordination across files without me having to think about what else needs updating.
Understanding context. I can say "the product grid feels too cramped on desktop" and it knows to look at the grid component, find the gap and column classes, and adjust them. I don't have to tell it which file or which class.
Running commands and checking its own work. It'll run the build after making changes to make sure nothing broke. If the build fails, it reads the error and fixes it. This loop of "make change, verify, fix" is something I used to do manually and it's nice to have it automated.
Explaining code. When I'm working in an unfamiliar part of the codebase, I ask it to explain what a file does and how it connects to everything else. It's better at this than searching through docs because it can read the actual implementation.
Pixel-perfect UI work. It can build a component that's structurally correct, but the visual details are often off: spacing that's almost right but not quite, colours that are close to the design system but don't quite match, animations that technically work but feel wrong. I always need to tweak the visual output.
Knowing when to stop. Sometimes I ask it to fix one thing and it decides to "improve" five other things while it's there. I didn't ask for those improvements. Now I have to review a bigger diff than I expected and check that the "improvements" didn't break anything.
Complex state management. For simple useState/useEffect patterns it's fine. But when I need something with complex derived state, race conditions, or optimistic updates, the code it writes usually works on the first render but breaks on edge cases. I end up rewriting the state logic myself.
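To make that concrete, here's the kind of state logic I mean: an optimistic quantity update with rollback, where generated code tends to nail the happy path and miss the failure path. All names here are mine, a sketch rather than production code:

```typescript
type CartItem = { id: string; qty: number };

type CartState = {
  items: CartItem[];
  pending: Set<string>; // ids whose server update is still in flight
};

type CartAction =
  | { type: "optimistic-set-qty"; id: string; qty: number }
  | { type: "server-confirmed"; id: string }
  | { type: "server-failed"; id: string; previousQty: number };

function cartReducer(state: CartState, action: CartAction): CartState {
  switch (action.type) {
    case "optimistic-set-qty":
      // Apply immediately in the UI, but remember it's unconfirmed
      return {
        items: state.items.map((i) =>
          i.id === action.id ? { ...i, qty: action.qty } : i
        ),
        pending: new Set(state.pending).add(action.id),
      };
    case "server-confirmed": {
      const pending = new Set(state.pending);
      pending.delete(action.id);
      return { ...state, pending };
    }
    case "server-failed": {
      // The part that usually gets skipped: roll back to the last
      // value the server actually acknowledged
      const pending = new Set(state.pending);
      pending.delete(action.id);
      return {
        items: state.items.map((i) =>
          i.id === action.id ? { ...i, qty: action.previousQty } : i
        ),
        pending,
      };
    }
  }
}
```

The happy path is three lines; the pending-tracking and rollback are the part that protects you when the network misbehaves, and that's exactly where I end up rewriting.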
CSS edge cases. Grid layouts that need to work across breakpoints, sticky headers with specific scroll behaviour, animations that interact with layout - these are the things where it generates code that looks right in the chat but doesn't work in the browser.
Staying up to date. It sometimes writes patterns that are slightly outdated. I've caught it using the old getServerSideProps pattern instead of the App Router, or suggesting libraries that have been abandoned. You need to know enough to catch these.
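The getServerSideProps case is worth spelling out, since it's easy to miss in review. In an App Router project, data fetching belongs in the page's server component, not in a Pages Router export. A rough illustration — fetchProducts is a stand-in, and a real page would return JSX:

```typescript
type Product = { id: string; name: string };

// Stand-in for a real data source.
async function fetchProducts(): Promise<Product[]> {
  return [{ id: "1", name: "Mug" }];
}

// Pages Router style - the outdated pattern it sometimes reaches for:
//
//   export async function getServerSideProps() {
//     const products = await fetchProducts();
//     return { props: { products } };
//   }

// App Router style: the page itself is an async server component
// (e.g. app/products/page.tsx) and just fetches directly.
export default async function ProductsPage() {
  const products = await fetchProducts();
  return products; // a real page returns JSX built from `products`
}
```

Both compile and both "work", which is why you need to know the current idiom to catch the stale one.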
Over time I've landed on a few patterns that get better results:
Point it at examples. Instead of describing what I want in detail, I point it at something similar that already exists. "Build a wishlist page that works like the cart page but without quantity selectors" gets better results than describing the page from scratch.
Be specific about what NOT to do. "Don't add any new dependencies" or "Don't change the existing components, only the page file" or "Don't refactor anything, just fix the bug". Without these boundaries it tends to over-engineer.
Give it the error, not your diagnosis. When debugging, I paste the actual error message rather than saying "I think the problem is X". My diagnosis is often wrong and sends it down the wrong path. The raw error gives it better signal.
Break big tasks into steps. Instead of "build me a complete authentication system", I go step by step: "create the login page UI", then "add the form validation", then "connect it to the API". Each step builds on the last and I can course-correct along the way.
Tell it your constraints. "This needs to work without JavaScript for the initial render" or "Don't use any client components, keep this as a server component" or "This needs to be accessible, include proper ARIA attributes and keyboard navigation". Without these it takes the path of least resistance.
I use both. Here's roughly how I split them:

Claude Code for:

- Features and refactors that span many files
- Debugging that needs to trace through the component tree
- Anything that involves running commands or multi-step workflows

Cursor for:

- Inline suggestions while I type
- Quick, targeted edits where I already know exactly what to change
- Reviewing a change with a visual diff before committing
The mental model I've landed on: Cursor is for when I'm driving and want AI help along the way. Claude Code is for when I want to hand over the wheel and describe where to go.
There are things Cursor does that Claude Code can't - the inline suggestions while you type, the visual diff preview, the Cmd+K quick edits. And there are things Claude Code does that Cursor can't - running shell commands, orchestrating multi-step workflows, reading and modifying dozens of files in a single conversation.
One thing that made a real difference was setting up a CLAUDE.md file in my project root. It's a file that Claude Code reads at the start of every conversation to understand the project.
Mine lists the project's conventions, including the linting rules (no any types, use import type, etc.).

Since adding this, the code it generates is noticeably better. It uses the right import paths, follows the project conventions, and doesn't suggest things that would fail the linter.
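For illustration, here's a cut-down example of what a file like that might contain. The specifics below are invented apart from the linting rules and the api client, which come up elsewhere in this post:

```markdown
# Project notes for Claude Code

## Conventions
- TypeScript strict mode: no `any` types, use `import type` for type-only imports
- Use the shared api client in `src/lib/api.ts` - no raw `fetch()` calls
- Tailwind for styling; follow the patterns already in `src/components/ui`

## Commands
- `npm run dev` - local dev server
- `npm run build` - must pass before a change is considered done
```

Keeping it short matters — it's read at the start of every conversation, so it should be the rules you'd repeat to a new teammate, not full documentation.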
Claude Code has genuinely changed how I work. I ship features faster, I spend less time on boilerplate, and I catch bugs quicker. The refactoring capabilities alone save me hours every week.
But it hasn't replaced knowing how to code. It's made me more productive, not more capable. I still need to understand React, know how the browser works, care about accessibility, think about performance. The AI handles the typing, but the thinking is still on me.
The developers who get the most out of these tools are the ones who already know what good code looks like. If you can't tell whether the AI's output is correct, you're going to ship bugs. If you can, you're going to ship faster.
That's the real value. Not replacing developers - making good developers faster.