Tinker AI

A pattern I’ve noticed lately: certain categories of code are pulling back toward human authorship even as AI tooling matures. Engineers are deliberately writing some things by hand that AI could generate.

The pattern is more interesting than “AI couldn’t do this part.” It’s about specific places where hand-writing matters.

Categories pulling back

A few specific places where I see hand-authorship returning:

Critical-path business logic. The code that defines what your product does. The pricing rules, the eligibility checks, the core workflows. Engineers are increasingly hand-writing these even though AI could scaffold them.

Architectural foundations. Module boundaries, public APIs, type structures. Hand-designed; AI fills in implementations.

Anything with “weird” requirements. Code that has to do something specific to your business that AI’s defaults miss. Easier to hand-write than to argue with the AI.

Performance-critical inner loops. Hot paths where every cycle matters. Manual tuning beats AI generation.

Code that defines team conventions. The first instance of a pattern. Once written, AI scales it; the first one is human.

Documentation of subtle decisions. Comments explaining why something is unusual. AI’s defaults are generic; subtle reasons need human articulation.
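As a sketch of that last category, here is the kind of why-comment AI's defaults won't supply. The variable and backstory are entirely hypothetical, invented for illustration:

```python
# Deliberately a list, NOT a set: insertion order matters for the
# downstream ledger export, and a past refactor that switched this to
# a set shipped a subtle ordering bug. (Hypothetical example — the
# point is the articulated reason, which only the author knows.)
ledger_entries: list[str] = []
```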

Why hand-writing matters here

Several reasons why specific categories are pulling back:

The value is in the specifics. Critical business logic encodes what makes your product yours. Generic AI defaults miss this; specific human authorship captures it.

The cost of being wrong is high. Hot paths, security, payment logic — wrong code is expensive. Hand-writing is more careful.

The thinking matters more than the typing. When the design is the hard part, AI doesn’t help. The typing is incidental.

Team conventions need deliberation. Conventions are decisions. AI doesn’t decide; humans do. Once decided, AI follows.

Subtle context resists transfer. Some context exists in the writer’s head. Hand-writing externalizes it; AI generation doesn’t.

What this looks like in practice

A typical workflow I see:

For a new feature:

  1. Senior engineer hand-writes the architectural foundation
  2. The foundation includes type definitions, key interfaces, error handling pattern
  3. AI tools fill in implementations following the foundation
  4. Review focuses on whether implementations match the foundation’s intent

The foundation is small; the implementations are larger. The split: hand-write the part that defines what the system should do; let AI generate the part that does it.
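A minimal sketch of that split, assuming Python. Every name here (`Notification`, `Notifier`, `ConsoleNotifier`) is hypothetical, invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol

# --- Hand-written foundation: types, the key interface, and the
# --- error-handling pattern. Small, but it defines the contract.

@dataclass(frozen=True)
class Notification:
    recipient: str
    subject: str
    body: str

class NotifyError(Exception):
    """Raised for any delivery failure; callers handle one error type."""

class Notifier(Protocol):
    def send(self, note: Notification) -> None:
        """Deliver one notification or raise NotifyError."""

# --- AI-filled implementation: larger, mechanical, and constrained
# --- by the foundation's contract above.

class ConsoleNotifier:
    def send(self, note: Notification) -> None:
        print(f"to={note.recipient} subject={note.subject}")
```

Review then asks one question: does `ConsoleNotifier` honor the contract `Notifier` defines?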

For a typical bug fix:

  1. Engineer investigates manually (AI assists with research)
  2. Engineer writes the fix by hand (small but careful)
  3. AI generates tests covering the fix
  4. Engineer verifies the tests check the right thing

The fix is hand-written because the cost of fixing the wrong thing is high. The tests are AI-generated because the test patterns are mechanical.
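To make that division concrete, a sketch under invented assumptions: a hypothetical compounding-discount bug, a hand-written one-line fix, and AI-style generated tests that the engineer must verify actually exercise the fix:

```python
def apply_discounts(price: float, discounts: list[float]) -> float:
    """Apply percentage discounts, capping the combined discount at 100%."""
    # Hand-written fix: cap the total discount instead of letting
    # stacked discounts push the price below zero. (Hypothetical bug.)
    total = min(sum(discounts), 1.0)
    return round(price * (1.0 - total), 2)

# AI-generated tests. The engineer's job is step 4: confirm these
# check the capping behavior, not just the happy path.
def test_apply_discounts():
    assert apply_discounts(100.0, [0.10, 0.20]) == 70.0
    assert apply_discounts(100.0, [0.60, 0.60]) == 0.0  # capped at 100%
    assert apply_discounts(50.0, []) == 50.0
```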

What this isn’t

Some misreadings to avoid:

It’s not “AI is bad.” The same engineers who use AI heavily for some work hand-write other parts. They’re picking the right tool for each task.

It’s not “going back to pre-AI workflows.” The hand-writing is still done with AI tools available. The choice is deliberate, not nostalgic.

It’s not “AI failed at programming.” AI is genuinely productive for a lot of code. The pull-back is in specific categories, not generally.

The pattern is “use AI where it adds value; hand-write where it doesn’t or where careful authorship matters.”

A specific example

I refactored a payment retry system recently. AI tools could have written most of it.

I chose to hand-write:

  • The retry policy interface (architectural)
  • The error classification logic (subtle business rules)
  • The exponential backoff math (small but critical)

I let AI generate:

  • The Stripe API integration boilerplate
  • The tests for the policy
  • The logging instrumentation
  • The documentation comments

About 30% hand-written, 70% AI-assisted. The 30% is the part that matters.
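As a hedged sketch of what that hand-written 30% might look like, in Python: the policy interface, the error classification, and the backoff math with jitter. The names (`RetryPolicy`, `classify`, `ExponentialBackoff`) and the thresholds are hypothetical, not the actual system:

```python
import random
from enum import Enum, auto
from typing import Protocol

class Outcome(Enum):
    RETRYABLE = auto()      # transient: timeouts, 5xx, rate limits
    NOT_RETRYABLE = auto()  # permanent: card declined, bad request

def classify(status: int) -> Outcome:
    """Hand-written business rules: which failures merit a retry.
    (Illustrative thresholds only — a real system encodes its own
    specific failure modes here.)"""
    if status == 429 or status >= 500:
        return Outcome.RETRYABLE
    return Outcome.NOT_RETRYABLE

class RetryPolicy(Protocol):
    def delay(self, attempt: int) -> float:
        """Seconds to wait before the given retry attempt."""

class ExponentialBackoff:
    """Exponential backoff with full jitter; base and cap hand-tuned."""
    def __init__(self, base: float = 0.5, cap: float = 30.0) -> None:
        self.base, self.cap = base, cap

    def delay(self, attempt: int) -> float:
        return random.uniform(0.0, min(self.cap, self.base * 2 ** attempt))
```

Everything around this core — the API boilerplate, tests, logging, doc comments — is the mechanical 70% that AI generates against these definitions.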

If I’d let AI write all of it, the implementation would have been generic. The retry policy would have been a typical exponential-backoff-with-jitter — fine in general, not optimal for our specific case. The error classification would have been generic — fine for typical APIs, missing our specific failure modes.

The result of the split: a system that fits our needs, implemented efficiently around a carefully written core.

The teaching point

For engineers thinking about how to use AI tools:

The skill isn’t “use AI for everything.” It’s “know what to write yourself.”

For most people, this skill develops gradually. Notice when AI’s output is generic where it shouldn’t be. Notice where you find yourself rewriting AI suggestions extensively. These are signals that you should have written that code yourself.

Over time, you develop intuition: “this part needs me; this part doesn’t.” The AI’s productivity gains apply where they apply; you keep your fingers on the parts that matter.

A measurable signal

If you find yourself constantly rewriting AI suggestions, you might be using AI for things you should write directly. The friction of correction may exceed the benefit of generation.

If you find yourself accepting AI suggestions without much review, you might be missing the parts where careful authorship matters. The lack of friction may indicate insufficient attention.

The right balance varies by task. Pay attention to the friction. Use it as a signal.

Closing

The “AI generates everything” narrative is incomplete. The reality, for engineers using AI tools well, is more nuanced. Some code benefits from generation; some benefits from authorship.

Knowing which is which is one of the underrated skills of effective AI-assisted engineering. It’s not “use AI more” or “use AI less.” It’s “use AI deliberately.”

If you’re wondering whether you’re using AI well: notice the categories above. Are you hand-writing the parts where authorship matters? Are you AI-generating the parts where speed matters? The split is the skill.

Hand-writing isn’t a step backward. It’s a deliberate choice for parts of work where it adds value. AI tools enable this — they handle the bulk so you can focus your attention on the parts that earn it.