The Nostalgia Problem: Six Months of AI-Generated Code
For the last six months, I’ve barely written a line of code by hand. Instead I’ve managed a platoon of AI agents: breaking down features, reviewing their output, merging pull requests. By every metric that matters to a product manager, it’s been a success.
Then yesterday I sat reading a pull request. The code was elegant: clean abstractions, thoughtful error handling, the kind of work I’d have been proud to write a year ago. What I felt, reading it, wasn’t pride. It was nostalgia.
I missed writing code.
The Shift from Craftsperson to Conductor #
The shift is easy to describe but harder to sit with. I specify what needs building, the agents build it, I review the results. The relationship is managerial: delegating, approving, occasionally redirecting when the implementation misses the mark. Metrics look great. But metrics capture what shipped, not what happened in the making of it.
When you write code yourself, you make thousands of small decisions: where to break a function, what to name a variable, whether to extract a helper or inline the logic. Individually trivial, but they accumulate into something: a relationship with the codebase. You know it because you built it, choice by choice.
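To make the scale of those decisions concrete, here’s a trivial, hypothetical sketch (the function names and logic are invented for illustration): the same check written two ways, inline versus extracted into a helper. Neither version is wrong; choosing between them, and choosing the helper’s name, is exactly the kind of micro-decision that used to pass through my hands thousands of times.

```python
# Option 1: inline. The logic is fully visible at the call site.
def register(email: str, existing: set[str]) -> None:
    if "@" not in email or email in existing:
        raise ValueError(f"invalid or duplicate email: {email}")
    existing.add(email)

# Option 2: extract a helper. The call site now reads as intent,
# and the helper's name is itself another small decision.
def is_registrable(email: str, existing: set[str]) -> bool:
    return "@" in email and email not in existing

def register_v2(email: str, existing: set[str]) -> None:
    if not is_registrable(email, existing):
        raise ValueError(f"invalid or duplicate email: {email}")
    existing.add(email)
```

When an agent writes this code, one of the two versions simply arrives. The decision still got made; it just didn’t get made by you.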
When the AI writes the code, that process disappears entirely. The PR arrives complete. You review it, approve it, merge it. Committed under your name, but not really yours.
The Intimacy Problem #
Intimacy with a codebase comes from struggle: from debugging why the test fails at, yes, 2 AM, from refactoring the same function three times before finding the right abstraction, from typing each line and feeling whether it fits.
AI-generated code skips all that. The struggle happens somewhere else: in the agent’s context window, in whatever reasoning process produces the output. You see the result, not the journey.
One developer described it well: “I just don’t have the same insight as I would if I wrote the code, no ownership, even if it was committed in my name.”1 I kept assuming that feeling would fade as I got more used to the workflow. It hasn’t.
Fifty-nine percent of developers use AI-generated code they don’t fully understand.2 When I first read that, I assumed it was a skills problem, people moving too fast. I’m less sure now. The tools are so good at producing plausible code that it becomes genuinely hard to know where your understanding stops.
What You Can Control #
You can’t stop technology from advancing, and there’s no real argument for doing so. But you can decide what you preserve in the process.
My first instinct, when I noticed the gap, was to tell myself I’d adjust. That the ownership feeling was just a transition cost I hadn’t paid yet, that once I’d reviewed enough AI code, it would start to feel like mine. Six months in, I’m not sure that’s true. The code gets reviewed, merged, forgotten. My understanding of it doesn’t deepen the way it used to when I was the one who built it.
The nostalgia I felt reading that PR wasn’t just sentimentality. I’d optimized so hard for output that I’d traded away the felt experience of the work itself without noticing until it was gone. The agents are very good at producing outcomes. Outcomes are what we’re measured on. That’s the trap.
The Paradox of Success #
I shipped more features, fewer bugs, faster. By every measure a sprint report cares about, the last six months have been a success. And something essential still feels missing.
The more effective the tools become, the more we risk losing what makes programming satisfying: the direct manipulation of ideas through code. Not the output of that manipulation, but the activity itself. Programming has always been a form of thinking. You understand a problem differently when you’ve had to implement a solution rather than just review one. AI agents compress that process: they deliver the product but skip the understanding that comes from building it.
If we’re saving time with AI agents, the question is what we’re doing with it. Deeper architecture work? Or just shipping more features?
Regaining Ownership #
Review like an author, not an auditor. There’s a difference between approving code and understanding it. When I catch myself reading a PR the way I’d skim a documentation page, looking for obvious problems rather than internalizing the logic, I’ve stopped owning it. Tracing execution paths, questioning design choices, identifying what the agent might have missed: this is slower, but it’s the difference between code that’s in your repository and code that’s actually in your head.
Write the load-bearing parts yourself. Core business logic, security boundaries, the pieces where a subtle misunderstanding will cost you three days in six months. Not because the agent will write them badly, but because you need to know them well enough to fix them when things go wrong. I’ve started treating this as a deliberate choice rather than an admission of inefficiency. Some code is worth the time it takes to build by hand.
Keep architectural decisions as yours to make. Decompose complex problems yourself before engaging agents. Decide how components fit together, where the boundaries are, what the system is actually doing. If you outsource the architecture along with the implementation, you end up owning neither.
A Practice Worth Trying #
This week, pick one feature that would normally go to an agent and write it yourself. Not as a protest, but as a calibration. Feel the decisions as you make them. Notice where your first instinct is wrong and you have to backtrack.
I tried this recently. It took longer than I expected, and the code wasn’t obviously better. But I understood it. I knew why every line was there. That’s a feeling I’d mostly stopped having, and I hadn’t quite registered how much I missed it until it came back.
The agents aren’t going away. Some of what’s been lost probably isn’t coming back either. But the ratio of delegation to craft is still a choice, and it’s worth making deliberately rather than by default.