I’ve been mentoring junior engineers in environments where AI tools are used heavily. The pattern that’s emerging: the engineers who’ll be excellent in three years are the ones who treat AI suggestions as starting points to evaluate, not solutions to apply. The engineers who treat AI suggestions as authoritative are heading toward stagnation.
This isn’t a Luddite take. The tools are useful. The skill that matters is using them critically.
What “AI skepticism” looks like
A junior who’s developing this skill exhibits patterns like:
- Reads AI-generated code carefully before accepting
- Questions when the AI’s suggested approach doesn’t match what they were thinking
- Notices when the AI is being confidently wrong
- Asks “is this actually right?” rather than “does this run?”
- Pushes back when AI suggestions don’t fit the codebase’s patterns
- Catches edge cases the AI missed
A junior who’s not developing this skill:
- Accepts AI output without reading it carefully
- Treats the AI’s confidence as accuracy
- Ships code they can’t fully explain
- Defers to the AI when their instinct disagrees
- Uses AI to skip past hard understanding rather than work through it
The first pattern compounds positively over time. The second pattern compounds negatively.
Why this matters more for juniors
For a senior engineer, AI suggestions land on a base of strong code intuition. When the AI suggests something wrong, the senior notices because the wrongness violates patterns they’ve internalized over years. The skepticism is automatic.
For a junior, the intuition is still being built. AI suggestions land on a base of “I’m not sure what’s right.” When the AI is wrong, the junior may not notice because the patterns aren’t yet internalized. The wrong code becomes the pattern they internalize.
This is the trajectory risk. Junior engineers who outsource pattern formation to AI in their formative years end up with weaker pattern intuition than juniors who built the patterns themselves. Three years later, this shows.
What I tell juniors
The principles I emphasize:
Read the AI’s output, line by line. If you don’t understand a line, ask questions until you do. Don’t ship code you can’t fully explain.
When the AI’s suggestion surprises you, slow down. Surprise often means the AI is being clever in a way that doesn’t fit, or that you don’t yet understand a concept. Either way, slow examination pays off.
When the AI’s suggestion matches what you would have written, you’ve validated your intuition. That’s good. Note it.
When the AI’s suggestion is meaningfully different from what you would have written, do the comparison. Sometimes the AI is right and you learn something. Sometimes you’re right and the AI is generic. Either way, the comparison builds judgment.
Never accept code you can’t defend in review. When a senior asks “why does this work?”, you should be able to answer. If the answer is “the AI suggested it,” you’ve outsourced your own learning.
A specific failure I’ve seen
A junior I worked with was asked to add caching to a slow query. They asked Cursor for help. Cursor suggested adding a Redis-based cache with TTL. Junior implemented it.
In review, the senior asked: “Why Redis? We already have an in-memory cache for this kind of data.”
Junior didn’t have an answer. The AI had picked Redis because Redis is the popular caching choice in its training data. The team’s actual pattern was in-memory because the data was small and the performance benefit of an external cache wasn’t worth the complexity.
The junior wasn’t wrong for getting AI help. They were wrong for not asking “why Redis?” themselves before accepting the suggestion. A 30-second pause to think about the project’s context would have led to a different question to the AI: “Add caching to this query using our existing in-memory pattern from src/cache/.” The AI would have followed.
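To make the tradeoff concrete, the team’s in-memory pattern might look something like this sketch. Everything here is hypothetical illustration, not the actual src/cache/ code: `ttl_cache` and `slow_query` are invented names, and the point is only that for small data, a dictionary with expiry timestamps does the job without an external service to deploy, connect to, and monitor.

```python
import functools
import time

def ttl_cache(ttl_seconds=60):
    """Minimal in-memory TTL cache decorator (a hypothetical stand-in
    for a team's existing caching helper)."""
    def decorator(fn):
        store = {}  # maps argument tuple -> (expires_at, value)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: serve the cached value
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30)
def slow_query(user_id):
    # stand-in for the real database call the junior was caching
    return {"user_id": user_id}
```

A senior reading this in review can see the whole caching story in twenty lines. The Redis version would add a client library, a connection config, serialization, and an operational dependency, all to cache data small enough to live in process memory.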
The lesson the junior took from this: ask the AI in a way that includes the project’s context. The deeper lesson: think about the project’s context before asking the AI, so you can frame the question correctly.
What seniors should be doing
For senior engineers mentoring juniors, the implications:
Review AI-assisted code with extra care. The AI’s confidence can mask the junior’s lack of confidence. Push past “does it work?” to “does the junior understand why?”
Ask “why this approach?” frequently. When a junior submits AI-assisted code, ask why they chose the approach they did. If the answer is “the AI suggested it,” that’s a coaching moment.
Pair occasionally to model the skepticism. Show juniors how you read AI output. Show them where you push back. The skepticism is observable; juniors can copy it.
Don’t accept “the AI did it” as a substitute for understanding. The bar for being able to defend code in review is the same regardless of who or what wrote it.
The compounding effect
The skill of AI skepticism compounds in interesting ways. Juniors who develop it:
- Learn faster because they treat each AI suggestion as a teaching moment
- Build pattern intuition that lets them spot issues earlier
- Develop independent judgment that lets them work without AI when needed
- Become reliable code reviewers because they can spot the same kinds of issues in others’ code
Juniors who don’t develop it:
- Stagnate at “can use AI to produce passable code” without progressing past it
- Don’t develop the deep judgment that distinguishes senior engineers
- Become brittle when AI tools change or aren’t available
- Struggle in code reviews because they don’t see what the AI missed
These trajectories diverge slowly. At six months, both juniors look productive. At three years, the difference is large.
What junior-focused tooling should do
The AI tools could help here. Some ideas:
Show alternatives, not just one answer. When the AI generates code, show 2-3 different approaches with tradeoffs. Making the junior choose forces engagement.
Explain why this approach. Auto-generate a brief rationale: “I chose Redis here because… but you could also use in-memory if…” The rationale invites questioning.
Highlight uncertainty. When the AI is less confident (rare patterns, ambiguous specifications), say so. Currently the AI’s tone is uniformly confident regardless of underlying certainty.
Encourage questions. A “what should I ask about this code?” suggestion would prompt the kind of critical engagement that builds skill.
These would help. None are standard yet. Current AI tooling defaults to “produce a confident answer” because that’s what benchmarks reward. What benchmarks don’t measure: whether the answer builds long-term engineer skill.
My summary advice
For juniors using AI tools:
- The tools are useful. Use them.
- Treat every AI suggestion as a draft, not a final answer.
- Read everything you ship.
- When you can’t explain why, you haven’t done your job yet.
- The skill that matters is judgment, not speed.
The juniors who follow this will be excellent in three years. The ones who don’t will be replaceable.
A note to organizations
The companies that hire juniors should think about how AI tools fit into their early-career development. Default settings (“write fast, ship lots”) select for the wrong skills. Settings that emphasize critical evaluation (“understand deeply, ship correctly”) select for the skills that matter.
This is partly culture, partly tooling, partly mentorship. A company that gets it right has juniors who, after two years, look more capable than juniors at peer companies. The advantage compounds in ways that matter for the long run.
Don’t let AI tools substitute for the work of becoming a great engineer. Use them; learn from them; don’t outsource the learning itself.