I’ve stopped letting AI tools name things in my code. Not because the AI’s names are bad in some obvious way — they look fine. They’re plausible. They follow conventions. But they’re consistently worse than what I’d come up with after thirty seconds of thinking.
The realization snuck up on me. I’d accept AI-suggested names, ship the code, then notice three weeks later that I couldn’t quite remember what processUserData did, while my hand-named enrichProfileWithSubscriptionState from the same time period was instantly understandable.
The pattern
The AI's naming defaults skew toward generic plausibility:

- handleEvent over recalculateOrderTotalOnLineItemChange
- processData over dedupeAndNormalizeAddresses
- validateInput over rejectMalformedPayloadOrTooLargePayload
- helper files full of functions that aren't really helpers
- utils.ts for things that aren't utilities
Each AI-generated name is technically defensible. The functions named that way work. The code reviews pass. But six months later, the code reads like a generic tutorial instead of a specific solution.
Why this happens
The model’s training data has many examples of generic names because much of what’s been written is generic. Tutorials, sample code, throwaway scripts — all use generic names because the content is generic.
When the model encounters your specific business logic, the patterns it falls back to are the generic ones. It doesn’t know your domain well enough to name things in domain-specific terms. It picks words that mean roughly the right thing, which means they’re approximate, which means they’re worse than specific.
The cost compounds
Names are read more than they’re written. A name written once is read hundreds of times — in autocomplete, in stack traces, in code review, in IDE navigation, in git blame, in documentation, in conversations with teammates.
A slightly worse name doesn’t matter much each time it’s read. It matters across hundreds of reads.
A specific example: a function I named applyPromotionEligibility(cart, customer) is instantly clear about what it does and what changes. The same function, AI-named, would have been applyDiscount(cart, customer). Both work. But "promotion eligibility" is the actual concept in our business; "discount" is the generic approximation. Engineers reading the codebase understand the first one immediately and have to stop and think about the second.
Multiplied by hundreds of reads across a codebase’s lifetime, the cost of the generic name is real.
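A sketch of what such a function might look like (the eligibility rules and data shapes here are invented; only the signature comes from the example above). Note why the generic name would actively mislead: the function decides which promotions a customer qualifies for, and never touches a price, which is exactly what applyDiscount would suggest it does.

```typescript
interface Cart {
  items: { sku: string; price: number }[];
  eligiblePromotions: string[];
}
interface Customer {
  tier: string;
  tenureMonths: number;
}

// "Eligibility" is the domain concept: the function marks which
// promotions this customer can receive. No price changes here, so
// the generic name applyDiscount would describe the wrong behavior.
// (These particular rules are hypothetical.)
function applyPromotionEligibility(cart: Cart, customer: Customer): Cart {
  const promos: string[] = [];
  if (customer.tier === "gold") promos.push("gold-free-shipping");
  if (customer.tenureMonths >= 12) promos.push("loyalty-10-percent");
  return { ...cart, eligiblePromotions: promos };
}
```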
What I do instead
For naming, my workflow:
- Write the code with a placeholder name
- Once the code works, sit with it for thirty seconds
- Ask: what is this actually doing? What's the simplest, most specific name for that?
- Rename to that
- Often: rename a second time after using it for a few hours
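Sketched in TypeScript, the workflow might play out like this (the subscription shapes are invented; the final name is the hand-picked one from earlier in the post):

```typescript
interface Subscription { plan: string; active: boolean }
interface Profile { email: string; plan?: string; isSubscriber?: boolean }

// Step 1: written under a placeholder (say, tmpApplySub) so naming
// doesn't block getting the code to work.
// Steps 2-4: once it works, ask what it actually does. It enriches a
// profile with the customer's subscription state, so rename to exactly
// that.
function enrichProfileWithSubscriptionState(
  profile: Profile,
  sub: Subscription,
): Profile {
  return { ...profile, plan: sub.plan, isSubscriber: sub.active };
}
```

The second rename, after a few hours of real use, usually fixes the part of the name that only call sites reveal.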
The thirty seconds of thought produces names that are, on average, much better than what AI suggests. Not because I’m a naming wizard — because thirty seconds of focused thought beats automatic plausibility.
For most other code tasks, AI is fast and useful. For naming specifically, the speed gain costs you in long-term readability.
What about IDE rename refactors?
A counterargument: “if the name is bad, just rename later.” LSP-driven rename is fast.
True, but the rename rarely happens. Engineers see a not-quite-right name and let it slide. The original name persists. The collective effect is a codebase full of slightly-off names.
The version that gets named carefully on first write doesn’t accumulate this drift. Better to spend the time at write-time than to rely on a future cleanup that won’t happen.
Where AI-naming is fine
A few places where I let AI name things without much friction:
Local variables in obvious code. A loop counter, a temporary value used only on the next line. The name doesn’t matter; the local context makes it obvious.
Test cases. Test names like it("returns 0 for empty array") are conventional. AI suggestions are fine.
Throwaway prototypes. Code I’m going to delete next week doesn’t deserve careful naming.
Boilerplate that follows known conventions. A standard React component file, a standard Rails controller method — the names follow conventions and the AI’s defaults are correct.
For these, AI naming is fine. The code is local in scope or short-lived; the cost of mediocre naming doesn’t compound.
For everything else — long-lived code, public APIs, data shapes that other developers will read — I name by hand.
The signal it sends
There’s a meta-observation here. When I let AI name things, the codebase reads as more generic. When I name things carefully, the codebase reads as a specific solution to a specific problem.
The first reads like “this is what code generally looks like.” The second reads like “this is what this team actually does.”
Both are valid choices. For codebases that are generic — common SaaS patterns, well-trodden frameworks — generic naming may be fine. For codebases solving novel problems, specific naming carries information about what’s novel.
I prefer the second. It makes the codebase a teaching artifact about the problem domain. AI-generic naming makes the codebase a teaching artifact about generic code patterns. The former is more valuable for engineers reading the code.
Summary
AI tools are great for many parts of coding. Naming is the part where the speed gain costs you the most. The thirty-second pause to name carefully pays back many times over the life of the code.
This isn’t about distrusting AI. It’s about understanding which parts of programming benefit from AI’s strengths (fast typing, broad patterns) and which benefit from human attention (specificity, context, judgment).
Naming is firmly in the second category. It always has been. AI tools made it tempting to skip the careful naming step; the cost of skipping shows up later.