Aider with Claude Haiku for cheap, fast iteration
Published 2026-04-30 by Owner
Aider’s documentation, like most AI tooling docs, defaults to “use the smartest model.” For Aider that means Claude 3.5 Sonnet or GPT-4o. They work, they’re good, and at roughly $3 per million input tokens, they cost real money on a heavy day.
The thing the docs don’t emphasize: Claude 3.5 Haiku is fast enough and accurate enough for the majority of edits Aider gets asked to do. At $0.80 per million input tokens, it’s roughly 4x cheaper. For boilerplate-heavy iteration, that ratio reshapes what the tool feels like.
This is the routing setup I’ve landed on after a month of mixed-model use.
The cost gap, real numbers
Across 80 Aider sessions in March, my actual API spend by model:
| Model | Sessions | Avg input tokens | Avg cost/session |
|---|---|---|---|
| Claude 3.5 Sonnet | 48 | 41,200 | $0.18 |
| Claude 3.5 Haiku | 32 | 38,500 | $0.04 |
Same kind of work, different models, ~4.5x cost difference per session. At my usage rate, switching half my Sonnet sessions to Haiku saved roughly $20 over the month. Not life-changing, but real.
Where it does matter: heavy iteration days. If you’re doing a 6-hour session of stepwise edits — adding tests, generating boilerplate, applying mechanical refactors — Sonnet adds up to $8-15 in a day. Haiku for the same workload is $1-3.
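Those day-rate ranges are easy to sanity-check. A back-of-envelope in shell, assuming a heavy iteration day pushes roughly 3M input tokens through the API (an assumed volume, not a measured one) and using the per-million input prices quoted above — output tokens are ignored for simplicity, which is why the real numbers land a bit higher:

```shell
# Rough cost check for a heavy iteration day.
# ASSUMPTION: ~3M input tokens/day; output tokens ignored for simplicity.
TOKENS_M=3          # millions of input tokens in a heavy day
SONNET_PRICE=3.00   # $ per million input tokens (Sonnet)
HAIKU_PRICE=0.80    # $ per million input tokens (Haiku)

sonnet_cost=$(awk -v t="$TOKENS_M" -v p="$SONNET_PRICE" 'BEGIN { printf "%.2f", t * p }')
haiku_cost=$(awk -v t="$TOKENS_M" -v p="$HAIKU_PRICE" 'BEGIN { printf "%.2f", t * p }')

echo "Sonnet: \$${sonnet_cost}/day"   # Sonnet: $9.00/day
echo "Haiku:  \$${haiku_cost}/day"    # Haiku:  $2.40/day
```

That puts Sonnet at the low end of the $8-15 range and Haiku squarely in $1-3 before output tokens are counted.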
When Haiku is good enough
Tasks Haiku handles cleanly:
- Adding test cases against an existing function (when the test pattern is established in the file)
- Writing migrations from a schema description
- Generating boilerplate (routes, controllers, validators) where the pattern is clear from existing files
- Renaming or moving things
- Translating type definitions between languages (TS to Zod, Python to Pydantic)
- Adding logging or instrumentation
Tasks where I switch back to Sonnet:
- Anything requiring multi-file reasoning beyond the obvious
- Refactoring where the new structure isn’t a clean mapping from the old
- Bug fixes where the cause isn’t in the file Aider is editing
- Code that needs to be cleverly small (algorithmic problems, performance-sensitive paths)
- Any prompt where I’m tempted to write more than 200 words to explain the task
The dividing line: if I can describe the task in one sentence and the answer is mostly mechanical, Haiku. If I’m explaining intent, constraints, and edge cases at length, Sonnet.
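The heuristic is mechanical enough to state as code. Here's a toy shell function — entirely hypothetical, not part of Aider, with an arbitrary 30-word cutoff standing in for the one-sentence test:

```shell
# Hypothetical helper: pick a model by prompt length, following the
# "one sentence -> Haiku, long brief -> Sonnet" rule of thumb.
# The 30-word threshold is arbitrary; tune it to taste.
pick_model() {
  local words
  words=$(printf '%s' "$1" | wc -w)
  if [ "$words" -le 30 ]; then
    echo "claude-3-5-haiku-20241022"
  else
    echo "claude-3-5-sonnet-20241022"
  fi
}

pick_model "Rename AuthContext to AuthSession and update imports"
# -> claude-3-5-haiku-20241022
```

I don't actually automate this; the point is only that the dividing line is crisp enough to write down.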
The setup
Aider supports specifying the model via a flag, an environment variable, or a per-session command. I use a shell alias for each:

```shell
# In .zshrc or .bashrc
alias aider-fast='aider --model claude-3-5-haiku-20241022'
alias aider-smart='aider --model claude-3-5-sonnet-20241022'
alias aider-arch='aider --model claude-3-5-sonnet-20241022 --architect'
```
aider-fast is the default for casual iteration. aider-smart for harder tasks. aider-arch for the architect mode (Sonnet plans, a faster model edits — covered below).
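If you'd rather not manage aliases, Aider also reads a YAML config file, .aider.conf.yml, from the repo root or your home directory, with keys that mirror the CLI flags. A minimal sketch that defaults new sessions to Haiku (the file name and the model key match Aider's documented config support; treat the rest as an assumption about your setup):

```shell
# Write a minimal .aider.conf.yml defaulting new sessions to Haiku.
# Keys in this file mirror Aider's CLI flags (model == --model).
cat > .aider.conf.yml <<'EOF'
model: claude-3-5-haiku-20241022
EOF

grep model .aider.conf.yml
# -> model: claude-3-5-haiku-20241022
```

With the default set this way, the aliases only need to cover the exceptions (aider-smart, aider-arch).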
The --architect mode is one of Aider’s underused features. It uses two models: a “smart” one for planning and a “fast” one for executing the plan. Aider 0.55+ has good defaults; you can also configure explicitly:
```shell
aider --architect \
  --model claude-3-5-sonnet-20241022 \
  --editor-model claude-3-5-haiku-20241022
```
For tasks that need careful thinking but mostly mechanical edits — large refactors, schema migrations across files — this gives you Sonnet’s planning quality at closer to Haiku’s cost.
Switching models mid-session
Inside Aider, the slash command /model <name> switches models without restarting:

```
> /model claude-3-5-haiku-20241022
Aider v0.55.0
Model: claude-3-5-haiku-20241022
> Add tests for the parseUrl function in this file
```
I use this when I start a session with Sonnet for a planning question, then switch to Haiku once I know what I want and the work becomes mechanical:
```
> Help me think through the auth flow refactor
[discussion with Sonnet about the approach]
> /model claude-3-5-haiku-20241022
> OK, let's start. Rename the AuthContext to AuthSession in src/auth/context.ts and update imports.
```
This pattern — Sonnet for thinking, Haiku for typing — is most of where the cost savings come from in practice.
When Haiku produces bad output, what to do
Haiku’s failure mode is different from Sonnet’s. Sonnet, when wrong, is wrong with confidence and detail. Haiku, when wrong, is more often “almost right but missing a piece” — it’ll write a function that compiles but doesn’t handle the edge case you described, or generate a test that passes but doesn’t actually verify the behavior.
When I see this:
- Don’t ask Haiku to fix it; the fix often has the same gap.
- Switch to Sonnet with /model claude-3-5-sonnet-20241022.
- Re-ask the same question.
- After the fix, switch back to Haiku for the next mechanical step.
The friction of switching is small enough that I don’t lose iteration speed. The benefit is I don’t burn 30 minutes trying to coax Haiku into solving a problem it can’t.
Where this isn’t worth the trouble
If you’re a casual Aider user — a couple of hours a week — the cost difference is negligible and the cognitive overhead of model switching isn’t worth it. Stick with Sonnet, which is “good enough” everywhere.
If you’re a heavy user, model switching becomes second nature within a few days, and the savings on a busy month justify the discipline. The bigger benefit is actually responsiveness: Haiku finishes responses noticeably faster than Sonnet, and in a back-and-forth iteration session that speed matters as much as the savings.
What this is not
This is not “use a small model for everything.” Haiku is good for a real fraction of tasks, not all tasks. The discipline is knowing which fraction you’re in. If you find yourself fighting Haiku to produce something Sonnet would have produced cleanly, you’ve miscategorized the task. Switch up, finish, switch back.
The cost optimization shouldn’t cost you correctness. Both matter; one isn’t worth sacrificing for the other.