Tinker AI

A pattern I’ve heard from at least five different engineering managers in the past six months: “Our juniors are shipping more, but I’m not sure they’re learning.”

The concern is specific. AI coding tools are good at exactly the kinds of tasks that used to be junior-developer apprenticeship work — boilerplate, test scaffolds, simple refactors, fixing the obvious bug. A junior engineer in 2022 spent the first six months grinding through this kind of work and emerged with the muscle memory of how a codebase actually fits together. A junior engineer in 2026 lets Cursor write the boilerplate and moves to the next ticket. The output velocity is higher; the embedded learning is unclear.

Whether this matters depends on what you think coding skill is for, and what you think a software engineer’s job is becoming.

What the optimists say

The case that AI tools accelerate junior development:

Faster feedback loops. A junior who’d previously have spent two days writing CRUD endpoints by hand now ships them in two hours, gets review feedback faster, and learns from the review. The grinding part wasn’t the learning — the review was.

Higher-leverage tasks earlier. The junior who’s not bogged down in scaffolding can spend more time on the design questions, the cross-cutting concerns, the parts of engineering that actually require thought. They’re getting senior-level reps earlier in their career.

Better than copy-pasting Stack Overflow. This is the analogy worth taking seriously. Juniors have always offloaded mechanical work to references. AI is a more capable reference. The skill of “knowing what to copy and verifying it works” is similar to the skill of “knowing what to ask AI for and verifying it’s right.”

Discovery of patterns through volume. Seeing 100 AI-generated implementations of similar patterns might teach pattern recognition faster than writing 10 by hand.

These are reasonable arguments. The first three are observable in some teams. The fourth is more speculative.

What the pessimists say

The case that AI tools hurt junior development:

You learn by writing code, not reviewing it. The act of producing the code — the typing, the back-and-forth with the type checker, the small mistakes you fix — is where pattern recognition happens. Reviewing AI output isn’t the same. You learn to assess code; you don’t learn to write it.

Confident output is worse than slow trial and error. When a junior writes a function and it doesn’t work, they debug. They learn what doesn’t work and why. When AI writes a function that doesn’t work, the junior often can’t tell — the output looks right. They learn to trust the output, which is the wrong lesson.

Atrophy of fundamentals. Knowing how to write a for loop without thinking is a fundamental that gets worn into your hands by writing many for loops. Juniors who never wrote those loops by hand have less fluent fundamentals when they need them — like during a whiteboard interview, or when the AI is wrong about a basic thing.

Inability to recognize bad code. A junior whose normal mode is “AI writes, I review” doesn’t develop the visceral sense of “this code feels wrong” that comes from writing many bad versions of code. Code review skill is downstream of writing skill, not a substitute for it.

The senior gap widens. Senior engineers who learned without AI have foundations that AI-native juniors don’t. Eventually those juniors will need to be seniors. The path from “AI-assisted junior” to “senior who can architect a system” isn’t obvious because the missing skills aren’t visible in day-to-day work.

These are also reasonable arguments. The first two are observable. The third is observable in degree. The fourth and fifth are speculative.

What the data sort of says

There’s no good study on this yet. The closest evidence:

Self-reported confidence vs measured ability. A few internal team studies have found that junior engineers report feeling more productive with AI tools, but their performance on “AI off” tasks (whiteboarding, debugging without help, explaining code they wrote) is mixed. The pattern is consistent with both “AI is helping them ship more” and “AI is making their underlying skill less developed.”

Onboarding time to productive contribution. Some teams report shorter ramps for new juniors with AI tools. Other teams report no difference. Probably depends heavily on the codebase and the mentorship culture.

Attrition and growth trajectories. Too soon to tell. The first cohort of juniors who started their careers heavily AI-assisted is two or three years in. Their long-term trajectories will be informative in another two or three years. We don’t know yet.

The honest summary: there’s a real concern, the evidence is incomplete, and reasonable people on the same team can look at the same juniors and reach different conclusions.

What the seniors I trust are doing

I asked five senior engineers I respect what they’re advising junior teammates. The answers were more aligned than I expected:

Use AI for tasks you’ve done before. If you’ve written this kind of code three times by hand, let AI do the fourth. The repetition has done its job. Time to move on.

Don’t use AI for tasks you’ve never done. Write the first three implementations yourself. Even if they’re slow and bad. The badness is part of the learning.

Always read AI output carefully. Not “scan, accept” but “read, understand, decide whether to keep.” If you can’t explain why each line is the way it is, you didn’t learn from it.

Maintain a “no-AI hour” once a week. Pick something you’d normally use AI for and do it without. Not as punishment — as practice. The skill of writing code from scratch is a skill, and skills need use to stay sharp.

Pair with senior engineers without AI in the room. When learning a new codebase or pattern, have a session with a senior where neither of you uses AI. The conversation forces both of you to articulate what’s actually happening, which is the highest-bandwidth way to teach and learn.

These aren’t rules. They’re heuristics that try to capture the nuance: AI tools are useful, AI tools have failure modes, the failure modes are more dangerous for people still building foundations.

The personal angle

I went through this trade-off as a senior, not a junior. Adopting AI tools after eight years of writing code by hand is different from starting your career on AI tools.

What I’ve noticed about my own use:

  • My fluency with patterns I established before AI is unchanged
  • My fluency with patterns I’ve only used through AI is meaningfully worse — I can recognize them but can’t produce them cleanly from memory
  • My ability to evaluate code is sharper than ever, because I’m reviewing more code per day than I was before
  • My ability to write code from scratch on hard problems is unchanged — those problems weren’t AI-friendly anyway

If I were starting my career today, the second bullet is what I’d worry about. The patterns you don’t deeply learn now are the patterns you can’t critique later.

Where this leaves managers

For engineering managers reading this: the honest answer is that you can’t fully avoid the trade-off. AI tools deliver real productivity gains; banning them is harder than it sounds and probably costs you hiring competitiveness anyway.

What you can do:

  • Make sure juniors are building foundations, even if it costs short-term velocity. Code katas, paired programming without AI, regular sessions where AI is off.
  • Watch for the failure modes. Juniors who can ship features but can’t explain them. Juniors who panic when AI is unavailable. These are signals.
  • Accept that the answer to “is this generation of juniors as capable?” is unknowable for a few more years. Don’t be the manager who decides early it’s fine, or the one who decides early it’s not.

Both extremes are confidently wrong. The middle ground — careful use, deliberate skill-building, honest assessment — is harder to execute and probably right.