I've been writing code for over a decade. I started with Angular, moved to React, built things from scratch, debugged production issues at 2am, stared at CSS until it worked. I knew my codebase inside out because I wrote it.
These days a growing chunk of my time goes to reading code I didn't write. Not code from teammates - code from AI. I describe what I want, the AI writes it, and I review it. Accept, reject, tweak, accept, reject. I still write plenty of code myself - the tricky parts, the architecture decisions, the stuff that needs a human brain. But the balance is shifting. More reviewing, less writing.
It hit me the other day - part of my job is starting to feel like being a TSA agent for code.
And it's not just engineers feeling the shift. Designers are watching this happen too. With tools like Figma MCP, v0.dev, and AI that can turn a screenshot into working code, the handoff between design and engineering is getting blurry. Designers used to throw a Figma file over the wall and wait two sprints for it to come back looking nothing like the mockup. Now an engineer can paste a Figma link into Claude Code and have a working component in minutes.

The roles are merging in weird ways. Engineers are making design decisions because the AI gives them options. Designers are wondering if they even need engineers for the straightforward stuff. Everyone's job description is shifting and nobody's quite sure where it lands.
Think about what a TSA agent does. They don't design airports. They don't build planes. They don't decide where flights go. They stand at a checkpoint and screen things. Is this bag safe? Does this person look right? Wave them through or pull them aside.
That's increasingly part of my job now. Claude Code writes a component. I read it. Does the logic look right? Are the edge cases handled? Any security issues? Is it accessible? Wave it through or send it back.
I'm still using my knowledge and experience. I still write code every day. But a growing portion of my work is reviewing and steering rather than typing. And that shift feels significant.

It happened gradually. First it was autocomplete suggestions - just tab to accept a line here and there. Then Copilot started writing whole functions. Then Cursor could edit multiple files at once. Then Claude Code could build entire features from a description.
At each step I thought "this is just a tool, I'm still the one in charge." And that's still true - I decide the architecture, I write the complex logic, I make the calls on what approach to take. But the proportion of keystrokes that are mine versus the AI's has definitely changed.
The thing is, the reviewing part isn't trivial. Ten years of writing code means I can spot problems quickly. I know what good React looks like. I know when a component is doing too much. I can see accessibility gaps, performance issues, security holes. Experience makes me effective at both writing and reviewing.
But I do wonder where this trend goes. If the balance keeps shifting, at what point does "engineer who uses AI tools" become "reviewer who occasionally writes code"?
Here's what I've noticed. For the straightforward stuff - a form component, a data fetching hook, some API glue code - I reach for AI almost every time now. It's faster. The output is fine. Why wouldn't I?
But I've caught myself reaching for AI on things I could easily write myself. Last week I needed a custom hook with a debounced callback. Nothing fancy. A year ago I'd have written it without thinking. This time I paused, then just described it to Claude Code.
Not because I couldn't write it. Because it was faster. And that's the rational choice. But each time I make it, I wonder if the muscle memory is fading a little. I still write plenty of code - the architecture, the state machines, the performance-critical bits. But the easy stuff? I've basically outsourced it. And the line between "easy" and "hard" keeps moving.
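For what it's worth, the hook I described is basically a debounce wrapper. Here's a minimal sketch of the core logic - the `debounce` name and signature are mine, not from any library, and a real React hook would wrap this in `useRef`/`useCallback` so the timer survives re-renders:

```typescript
// Returns a function that delays calling `fn` until `delayMs` ms have
// passed with no new calls. Each new call cancels the pending one, so
// only the last call in a burst actually fires.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delayMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage: rapid successive calls collapse into one.
const log = debounce((msg: string) => console.log(msg), 200);
log("first");
log("second"); // after 200ms of quiet, prints "second"
```

Ten lines, nothing clever. Which is exactly the point: it's the kind of thing I used to type from memory and now describe in a sentence instead.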
I think about the junior developers coming up right now. They're learning to code in a world where AI writes most of it. Their first instinct when they hit a problem isn't to debug - it's to prompt. They've never had to build something from a blank file and figure it out line by line.
When they review AI-generated code, what are they comparing it against? They don't have ten years of handwritten code in their head. They don't have the pattern recognition that comes from making mistakes and fixing them. They're screening luggage without knowing what a bomb looks like.
That sounds harsh but it's a real concern. Code review only works when the reviewer knows more than whatever wrote the code. If your entire career has been reviewing AI output, your ceiling is the AI's ceiling. You can't catch mistakes you don't understand.
I read Brittany Chiang's piece "Welcome to the AI Parade" a few weeks ago and it put words to something I'd been feeling. She talks about the loss of craft - that satisfaction of starting with a blank file and building something line by line, understanding every decision, knowing exactly why each piece is there.
I felt that. I miss it.
There was something valuable about the struggle. Not the suffering - I don't miss spending hours on a CSS bug. But the process of thinking through a problem, trying an approach, realising it's wrong, trying another, and eventually landing on something clean. That process taught me things that reviewing AI output doesn't.
When the AI writes the code, I lose the thinking. I get the answer without the journey. And the journey was where the learning happened.
I want to be clear about this. I use AI tools every day. Claude Code, Cursor, Copilot - they're all in my workflow. I ship faster than I ever have. I take on bigger tasks because the AI handles the boilerplate. My output has genuinely increased.
I'm not arguing we should go back to writing everything by hand. That would be like arguing we should go back to horses because cars changed the nature of travel.
But I think we need to be honest about how the job is changing. We're not "10x engineers" because AI made us faster. We're engineers whose role is shifting - part builder, part reviewer, part AI wrangler. And the reviewer part keeps growing.
I don't have a clean answer. I'm still figuring this out myself. But a few things I've been trying:
I still write some things by hand. Not everything, but the hard parts. The state management logic, the performance-critical paths, the accessibility patterns. The things where understanding matters more than speed. It keeps the muscle memory alive.
I treat AI as a first draft, not a final product. Even when the output looks right, I rewrite sections in my own style. Not because the AI's version is wrong, but because rewriting forces me to understand it. If I can't rewrite it, I don't understand it well enough to ship it.
I'm honest about what I don't understand. When the AI generates something and I'm not sure why it works, I stop and figure it out before accepting it. The temptation is to wave it through because it passes the tests. But that's how you end up with a codebase nobody understands.
I push back on "AI-first" as a mandate. At work, there's pressure to use AI for everything. Move faster, ship more. I push back when it doesn't make sense. Some problems need human thinking first and AI second. The hard part is knowing which ones.
Actually, maybe the metaphor isn't quite right. TSA agents don't improve the security system. They just operate within it. Engineers, even in this new AI-assisted world, can still shape the tools, influence the architecture, make decisions that matter.
Maybe a better metaphor is an editor at a newspaper. The reporters (AI) write the stories. The editor shapes them, catches errors, maintains quality standards, and decides what gets published. An editor's job is valuable and skilled, but it's different from being a reporter.
I'm not sure that makes me feel better. But at least it's more accurate.
The question I keep coming back to: is this a transition or a destination? Are we on our way to something better, or is this just what the job is now? I genuinely don't know. But I think it's worth sitting with the question instead of just waving it through.