
The “10x faster with AI” claim is a fixture of LinkedIn posts and conference talks. Sometimes it’s “5x,” sometimes “20x,” sometimes “I shipped a year of work in a weekend.” The claims are remarkably consistent in pattern even when the specific number varies.

I want to take this seriously rather than dismiss it. Engineers making these claims aren’t lying. They’ve experienced something. Understanding what they’re actually measuring is more useful than rolling your eyes at the number.

What “10x faster” probably means

After listening carefully to a few people who make these claims, these are the patterns I’ve identified:

They’re comparing peak speed, not sustained speed. “I built this entire feature in 30 minutes that would have taken a day.” Both numbers are accurate, but the day estimate includes lunch, meetings, context switches, and ambient distraction. The 30-minute number is uninterrupted heads-down focus. The 10x is partly real productivity gain and partly the difference between a focused pace and a typical-day pace.

They’re comparing best-case AI vs. average-case manual. The 30-minute AI build was a feature in the AI’s sweet spot — well-specified, pattern-following, in a familiar stack. The day estimate is averaged across all kinds of work, including the hard parts where AI doesn’t help. Comparing best-case to average-case produces favorable ratios.

They’re not counting time spent learning the tool. The 30 minutes doesn’t include the previous month spent learning Cursor’s prompting patterns. Once amortized, the savings are still real but smaller per task.

They’re not counting the rework. Some of the AI-generated code from the 30-minute build will need revision next week, next month, or in a bug investigation. Some won’t. The accounting only counts the up-front time, not the lifecycle cost.

They’re including “tasks I wouldn’t have done.” A surprising number of “10x” claims involve tasks the engineer wouldn’t have done at all without AI. “I added these 5 features over the weekend” isn’t really a productivity comparison, because without AI those features would never have been built. The right comparison is “0 features in a weekend without AI vs. 5 features with AI,” which is a division by zero: undefined, not 5x.

What the data actually shows

A few real measurements from people who’ve tracked this carefully:

My own three-month tracking (covered in another article): roughly 20-25% net productivity gain on my work mix. Not 10x, not even 2x.

A Stripe internal study cited at a conference: developers who used AI tools reported subjectively feeling 30-50% faster, but measured PR throughput showed roughly 15% increase. The gap between subjective and measured was meaningful.

A GitHub-published study on Copilot: reported 55% faster on a specific controlled task (writing an HTTP server), with ranges from 15% to 80% depending on the developer. The 55% headline is the median for one task; the numbers don’t generalize cleanly to overall productivity.

Various team-level reports: 10-25% range is the most common when measured rather than self-reported. A few teams report higher, usually in contexts heavy on greenfield work or boilerplate.

The honest range is 10-30% on actual work, with peaks higher on AI-friendly tasks. The “10x” number is real for specific moments and doesn’t generalize.

Why the inflation happens

Three structural reasons “10x” claims are common:

Selection bias on tasks. The tasks people remember as 10x experiences are the ones where AI happened to nail it. The tasks where AI was net-zero or slightly worse are forgotten; what stays available in memory skews toward the dramatic successes.

Speed of typing isn’t most of engineering. Engineers who go from “typing 60 words per minute” to “AI types at machine speed” experience a 10x speedup at the typing layer. The typing layer is maybe 20% of the work. The other 80% is thinking, reading, debugging, communicating — none of which gets a 10x boost. But the typing speedup is what’s visible and what gets quoted.
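This is Amdahl’s law applied to a workflow: the overall speedup is capped by the fraction of the work you actually accelerate. A back-of-the-envelope sketch in Python, assuming the rough 20/80 split above (the split is a guess, not a measurement):

```python
def overall_speedup(accelerated_fraction: float, local_speedup: float) -> float:
    """Amdahl's law: one fraction of the work gets faster by
    local_speedup; everything else runs at the old pace."""
    return 1 / ((1 - accelerated_fraction) + accelerated_fraction / local_speedup)

# Typing is ~20% of the job and gets a 10x boost; the other 80%
# (thinking, reading, debugging, communicating) is unchanged.
print(overall_speedup(0.2, 10))  # ~1.22, i.e. a 22% gain, not 10x
```

Even with infinitely fast typing, the ceiling is 1 / 0.8 = 1.25x, squarely inside the measured 10-30% range.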

Confidence overshoot. Engineers excited about a new tool exaggerate its impact in conversation. This is true of every new tool, not just AI. The exaggeration gets repeated, normalized, and treated as a baseline that other engineers feel pressure to match in their own claims.

Who this matters for

For individuals: don’t beat yourself up if you’re not 10x with AI. The honest gain is 15-30% on average, and that’s a real productivity win that doesn’t need exaggeration. Aiming for 10x means optimizing for a metric that doesn’t exist.

For teams adopting AI tools: budget for 15-30% productivity gain in your planning, not 5x or 10x. Plans built on inflated expectations produce timelines that don’t get hit and morale that suffers when reality intrudes.

For engineering leaders making buying decisions: vendors will quote the high end of these numbers. Mentally discount to about 1/4 to 1/3 of the headline figure for budget purposes. The tools are still worth the cost; just don’t expect the marketing’s rate of return.

For engineers worried about being replaced: 10x productivity, if real, would be reshaping the job market far faster than it is actually changing. The fact that the gain is 1.2-1.3x rather than 10x is part of why developer hiring hasn’t collapsed despite the explosion in AI tooling. The tools amplify engineers; they don’t replace them.

The conversation worth having

When someone tells you “AI made me 10x faster,” the productive question isn’t “show me your data” (that reads as defensive). It’s “what specifically did you do faster, and what does that 10x apply to?”

Usually the answer is something specific: “I wrote a CRUD endpoint that would have taken me 90 minutes in 8 minutes.” That’s 11x for that specific task in that specific context. Real, useful, true.

The mistake is when “11x for this specific task” gets generalized to “11x for engineering” — the same engineer’s 11x on CRUD is 1.1x on debugging and 0.9x on system design. The average is much lower than the peak.

The marketing for AI tools collapses these into a single number. The engineering reality has variance, and treating that variance as the useful information, rather than headlining the peak, produces better decisions.

My own honest claim

If I had to put a number on my own AI productivity gain: roughly 25% on my normal work mix, ranging from “maybe 0%” on hard architecture work to “maybe 200%” on test scaffolding and CRUD. The mix-weighted average is the relevant number for budgeting.
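As a sketch of that mix-weighted arithmetic, here’s the computation with illustrative stand-in numbers (the shares and per-task speedups are assumptions for the example, not my tracked data):

```python
# (task type, share of time under the old workflow, speedup with AI)
# Illustrative numbers, not measurements.
mix = [
    ("test scaffolding / CRUD", 0.15, 3.0),  # "maybe 200%" faster = 3x
    ("routine feature work",    0.40, 1.2),
    ("debugging",               0.30, 1.1),
    ("hard architecture work",  0.15, 1.0),  # "maybe 0%"
]

# Weight by time: total time with AI is the sum of share / speedup.
time_with_ai = sum(share / speedup for _, share, speedup in mix)
print(1 / time_with_ai)  # ~1.24, i.e. roughly a 25% net gain
```

A couple of 3x pockets barely move the total when most of the time sits in 1.0-1.2x territory.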

That 25% is real. It’s worth the $20/month subscription and the time spent learning the tools. It’s not the 10x my LinkedIn feed claims, and pretending otherwise has costs:

  • It makes me feel inadequate when my real numbers are 25% (because clearly I’m not doing it right)
  • It makes my plans miss when I assume more than 25% gain
  • It makes me distrust the engineers I respect when their claims sound inflated

The honest framing is more useful than the marketing framing. AI tools provide a real, modest, measurable productivity gain. The gain is unevenly distributed across task types. The gain doesn’t compound to multipliers in the 5-10x range without specific narrow framing that doesn’t generalize.

That’s the claim worth making. It’s not as catchy. It’s more useful.