Tinker AI

A friend who runs engineering at a 60-person company told me last quarter that they’d quietly stopped using their standard coding interview. The reason: too many candidates were obviously using AI tools through their second monitor, and the company’s interview wasn’t designed to detect this. The candidates who were honest about using AI tools were doing roughly as well as the candidates who weren’t, and the company couldn’t tell whether either group was actually capable of writing code without help.

This is the dilemma every hiring team is now facing. The traditional take-home and live-coding formats were designed in a world where candidates couldn’t have a senior engineer whispering in their ear during the interview. They can now. Adapting takes more thought than most teams have put in so far.

The wrong responses

A few common reactions that don’t work:

“Ban AI tools from interviews.” Practically impossible. You can’t tell what’s running on a candidate’s other monitor or in another tab. Candidates who use AI when they shouldn’t will just lie. Candidates who don’t will be at a disadvantage relative to those who do. The honest candidates are penalized.

“Make problems harder so AI can’t solve them.” This makes the interview a test of “can you find a problem AI can’t solve” rather than a test of programming ability. The candidate’s ability to recognize an AI-resistant problem is uncorrelated with their ability to do the job.

“Test AI usage skills directly.” Some teams have shifted to “use AI to solve this in real time, narrating your process.” This is closer to the work but raises a different problem: AI tool fluency varies more by recent practice than by underlying engineering ability. A candidate who’s been heads-down on a Rust project for six months may have less recent AI tool experience than a junior developer who’s been “vibe coding” with Cursor for the same period.

“Just ignore the issue.” Some teams have done this — kept the old interview and hoped AI wouldn’t pollute the signal too much. In my friend’s data, AI use produced enough noise that they couldn’t tell good candidates from mediocre ones. Ignoring the issue doesn’t make it go away.

None of these is a clean answer. Hiring in 2026 with AI tools requires either changing what you’re testing or accepting more noise.

The shift in what to test

The question worth thinking about: what does an engineer’s job look like in 2026, and what abilities does that job require?

For the engineers I’ve watched succeed at AI-augmented work, the abilities that matter:

Specifying clearly. Knowing what they want before they ask, expressing it precisely enough for the AI to act on. This is the skill the new interview should test, because it’s the skill that separates effective AI use from ineffective AI use.

Reviewing AI output critically. Catching subtle bugs, recognizing when output is plausible-but-wrong, knowing when to reject and re-prompt vs. correct manually. This is harder to interview for than to observe, but possible.

Architectural judgment. Deciding what shape a system should take, what pieces fit together, where the boundaries should live. AI tools don’t do this; humans still do. An engineer who can architect well is more valuable than one who can produce code quickly with AI.

Debugging real systems. Reading logs, forming hypotheses, narrowing scope, validating against evidence. AI helps but doesn’t substitute for the underlying skill.

Communicating with humans. Writing PR descriptions, explaining decisions, mentoring teammates. AI doesn’t help much here, and the skill matters more as more code becomes AI-generated.

The traditional algorithmic coding interview tests “can you write code from a verbal description without aids.” That’s a real skill but it’s a smaller part of the actual job in 2026 than it was in 2018. The interview should reflect the change.

Interview formats that test the right things

Some formats I’ve seen work, with AI tools allowed:

Format 1: Specify-then-build

The candidate is given a vague feature request. They have 90 minutes. They’re told they can use AI tools.

The first 30 minutes are spent producing a written specification — what they understand the requirement to mean, what edge cases they identified, what assumptions they’re making, what they’re proposing to build. No code yet.

The remaining 60 minutes are spent building, with AI tools allowed.

The interview rates the candidate on:

  • Quality of the spec (is it precise? does it cover edge cases? does it ask the right clarifying questions?)
  • Quality of the implementation given the spec (does it match the spec? is it well-organized?)
  • The candidate’s articulation of where they used AI and where they didn’t, and why

This tests “can you frame a problem clearly enough that an AI can help you solve it” — which is most of the job.

Format 2: Review and revise

The candidate is given an existing piece of code that has bugs and design issues. AI-generated code is fine; human-written code is fine. They have 60 minutes to identify problems and propose fixes. AI tools allowed.

The interview rates:

  • How many real issues they identified
  • Whether their proposed fixes address root causes or just symptoms
  • The quality of their reasoning about the code

This tests “can you critically review code, including code AI might have generated” — a core skill in modern engineering.
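To make this concrete, here’s the kind of snippet a review-and-revise exercise might hand a candidate (a hypothetical example of mine, not from any real interview bank): code that looks plausible and passes a casual read, but has a shared-state bug, plus the root-cause fix a strong candidate should land on.

```python
# Hypothetical review-exercise snippet: plausible-looking code with a
# subtle bug. Python evaluates the default list once, at function
# definition time, so every call without an explicit `tags` argument
# mutates the same shared list.
def append_tag(tag, tags=[]):  # bug: mutable default shared across calls
    """Append a tag and return the running list of tags."""
    tags.append(tag)
    return tags

# Root-cause fix: use None as the sentinel and create a fresh list per
# call, so no state leaks between calls.
def append_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

A symptom-level answer copies the list somewhere to hide the problem; a root-cause answer explains *why* the default is shared and removes the shared object entirely. That gap between the two answers is exactly what this format is designed to surface.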

Format 3: Live debugging

The candidate is given a reproducible bug in a small project. They have 45 minutes to find the cause and propose a fix. AI tools allowed.

The interview rates:

  • How they approached the investigation (did they form hypotheses? did they verify before fixing?)
  • Whether they identified the root cause vs. a symptom
  • How they used AI — did they ask it to “find the bug,” which usually fails, or did they use it for specific lookups during a structured investigation?

This tests problem-solving rigor, which AI tools amplify but don’t replace.

Format 4: Pair on a real problem

The candidate works alongside an interviewer on a 90-minute task in a real codebase. AI tools allowed for both. The interviewer is observing how the candidate works, not how fast.

The interview rates:

  • How the candidate communicates intent before acting
  • How they handle disagreement (with the interviewer or with AI suggestions)
  • Whether they’re verifying their work or running on autopilot

This is closer to actual engineering than any algorithmic interview. It’s also harder to standardize, which is why most companies don’t do it. For senior hires, it’s worth the cost.

What to drop

The formats that don’t work in 2026 should probably be dropped:

LeetCode-style algorithmic problems. AI solves these in seconds. Even if you ban AI use, the test is mostly “did you study LeetCode” rather than “can you engineer.” Drop these unless the role specifically requires algorithmic problem-solving (a small fraction of roles).

“Build a feature in 90 minutes.” AI tools make speed a less interesting differentiator. Either drop or restructure to focus on quality of approach (Format 1 above).

Take-homes designed pre-AI. A 4-hour take-home from 2019 is now a 30-minute take-home with AI. The candidates who do it in 30 minutes look the same as the ones who do it in 4 hours, but you’ve selected for the AI-fluent rather than the careful.

The interviewer side

Interviewers also need different skills now. Specifically:

Recognizing AI-generated code. Some patterns are recognizable: certain comment styles, certain helper-function-naming conventions, certain over-engineering tendencies. Not always reliable but useful as a starting question.
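As an illustration of those patterns (a caricature of mine, not a reliable detector — all names below are invented): the over-engineered shape that often flags AI output next to the direct version a human under time pressure tends to write. Both work; the difference is in the excess structure.

```python
# Caricature of a common AI-generated shape: a generic helper, verbose
# names, and docstrings that restate the code, all for a one-line task.
def process_data_items(input_data_list):
    """Process each item in the input data list and return the results."""
    processed_results = []
    for data_item in input_data_list:
        processed_results.append(transform_single_item(data_item))
    return processed_results

def transform_single_item(data_item):
    """Transform a single data item by doubling it."""
    return data_item * 2

# The direct version a human tends to write in an interview.
def double_all(items):
    return [x * 2 for x in items]
```

None of this proves anything on its own — plenty of humans over-engineer — which is why it works better as a prompt for follow-up questions than as a verdict.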

Asking probing follow-ups. “Walk me through this part” surfaces whether the candidate understood what they wrote. AI-generated code that the candidate doesn’t really understand falls apart under follow-up.

Distinguishing “uses AI well” from “uses AI as a crutch.” Both candidates produce working code. Only one of them can debug it when something breaks.

These are skills interviewers develop with practice. Hiring committees should explicitly train for them, not just assume the senior engineer doing the interview can tell.

What’s not changing

Some things about hiring stay the same:

Cultural fit and team chemistry matter. No tool change affects this.

Communication skills matter. Writing well-structured comments, PR descriptions, and design docs is unchanged.

Domain knowledge transfer. A candidate’s ability to learn your specific domain quickly is unaffected by AI tooling.

Track record signaling. Past work, references, contributions to public projects — all still valuable.

The interview format change is real but bounded. The qualities you’re trying to identify are largely the same ones you’ve always been looking for; the methods need updating, but the underlying signal hasn’t changed much.

The honest summary

If your interview process hasn’t been updated since 2023, it’s probably underperforming in 2026. Candidates who use AI tools effectively look like top candidates by metrics that no longer measure what they used to. Candidates who don’t use AI in interviews are systematically disadvantaged regardless of their underlying ability.

The fix isn’t a tweak. It’s reconsidering what an engineer’s job is now and testing for that. The interview formats above are starting points, not solutions. Each company will have to adapt them to their specific stack, role, and team.

The companies that adapt thoughtfully will hire better than the ones that pretend nothing has changed. The gap between thoughtful and thoughtless hiring has gotten larger, not smaller, with AI tools. That’s the real shift to plan for.