Tinker AI

13 min read · Reviewed 2026-05-11 by tinker-editor

Aider 2026 review: the git-first terminal AI pair you actually own

8.0 / 10
Aider is the open-source AI pair-programmer that lives in your terminal, edits files in your local git repo, and commits each change with a meaningful message. Free, BYOK, and refreshingly auditable — but the lack of inline completion and the per-token bill are real tradeoffs.

The verdict

Pros

  • 100% open source (Apache 2.0) and free; you only pay your model provider
  • Auto-commits every change with a descriptive message — clean git history is the default, not the exception
  • Repo map gives Claude/GPT/Gemini real cross-file context on monorepos without sending the whole tree
  • Supports 100+ models out of the box (Claude Opus/Sonnet, GPT-4o/5, Gemini, DeepSeek, Mistral, Ollama-hosted local models)
  • Auto-runs your linter and tests after each change and self-repairs detected failures in the same loop

Cons

  • Terminal-only — there is no inline completion, no chat panel sitting next to your code, no GUI
  • BYOK math gets real fast: a typical coding hour on Claude Sonnet 4.5 is $1–3 in API costs, and heavy users land at $50–150/month
  • Onboarding has more sharp edges than a polished editor — model config, repo scoping, and the chat command surface all need reading the docs
  • No native multi-agent or background-task model — one chat, one task at a time

Best for: Terminal-native developers, open-source-first teams, regulated shops that need full agent auditability, and anyone pairing Aider with their existing editor as a multi-file refactor tool.

Worst for: GUI-first developers, beginners who want one-click setup, or teams that need a managed billing relationship rather than a per-token API tab.


What Aider is, in one minute

Aider is a Python command-line program that turns your terminal into an AI pair-programmer. You run aider in a git repository, talk to it in plain language, and it edits files in your local checkout, runs tests, and commits each change with a generated commit message. The model is whichever you point it at — Claude Sonnet, GPT-5, Gemini, DeepSeek, a local Ollama model — via your own API key.
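A minimal session looks like this. The flag names below follow Aider's current docs but should be verified with aider --help; the model alias and key are examples:

```shell
pip install aider-chat                # the PyPI package is aider-chat
export ANTHROPIC_API_KEY=sk-ant-...   # BYOK: your own provider key
cd ~/src/myproject                    # must be a git repository
aider --model sonnet                  # "sonnet" is aider's alias for the latest Claude Sonnet
```

From there you type requests in plain language at the chat prompt, and each completed change lands as a commit in your checkout.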

The project is open source under the Apache 2.0 license, created by Paul Gauthier, and as of this review the GitHub repo sits at roughly 44,600 stars, with its latest release on April 25, 2026. Hacker News mention volume is around 860 in the last 30 days, which puts it in the same conversational tier as the major commercial AI editors despite being a small-team open-source effort.

The product position is unusual and worth saying clearly upfront. Aider is not an editor. It does not draw inline completions. It does not have a chat panel that lives next to your code. It is a CLI that you keep open in a second terminal pane while you work in whatever editor you already use — Neovim, JetBrains, VS Code, even Cursor. That positioning — a focused multi-file refactor and Q&A tool that complements an existing editor rather than replacing one — is the thing it does better than anything else.

What changed in 2026

Aider in May 2026 is a more capable tool than it was a year ago, but the shape has not changed. The 2024 version was already a multi-file editor with auto-commit and a repo map. The 2026 version is the same primitives plus more polish.

Three things are worth naming.

First, model coverage has expanded to over 100 supported models. Claude Sonnet 4.5 and Claude Opus 4 are first-class. GPT-5 is supported, as is the GPT-4o family. Gemini 1.5 Pro and 2.0 are supported. Local-model paths via Ollama, LM Studio, and llama.cpp work out of the box. The DeepSeek and Mistral families are included. The bring-your-own-endpoint pattern means any OpenAI-compatible API gateway (Together, Fireworks, Groq, OpenRouter) plugs in via a config flag.
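Model selection is a single flag. The identifiers below are illustrative of the pattern, not an exhaustive or guaranteed-current list; check Aider's model docs for the real names:

```shell
aider --model sonnet                      # Anthropic alias
aider --model gpt-4o                      # OpenAI
aider --model gemini/gemini-1.5-pro       # Gemini, via a provider-prefixed name
aider --model ollama_chat/deepseek-coder  # a local model served by Ollama
```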

Second, the auto-test-and-lint loop has matured. Aider runs your project’s lint and test suite after every change, parses the output, and self-repairs detected failures in the same conversation. This was always a feature in name, but over 2025 the failure-parsing became reliable enough that the loop closes most of the time without manual prompting. For a Python project with a pytest suite and a ruff config, the experience is “ask for the change, get a passing test, get a clean lint, get a commit” without intervention.
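Wiring up the pytest/ruff case described above is a sketch like this (flag names per Aider's docs; verify with aider --help, since --auto-lint is reportedly on by default):

```shell
# Run lint and tests after every change; aider feeds failures back to the model.
aider --auto-lint --lint-cmd "ruff check --fix" \
      --auto-test --test-cmd "pytest -q"
```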

Third, the image and webpage primitives let you paste a screenshot, a Figma export, or a URL into the chat, and Aider will use the visual or fetched content as additional context for the model. This is useful for “here’s the design, build me the component” or “here’s the error screen from the staging environment, find the bug.” It is not unique to Aider — Cursor and Cline have similar primitives — but it has been around in Aider longer and is well-supported.

What hasn’t changed is the philosophy. Aider treats your git history as the system of record. Every change is a commit. Every commit has a generated message. If you want to revert, you git revert. If you want to inspect what the AI did, you git log and git show. There is no proprietary state, no IDE database, no telemetry pipeline you have to opt out of.

The git-first workflow is the moat

The single feature that keeps Aider users on Aider is the git-first workflow.

Other AI editors think of changes as “edits in flight” that the human accepts or rejects in a review queue. Cursor’s Composer, Cline’s Plan/Act, Copilot’s agent mode — they all stage diffs in some kind of pending state, ask you to accept, then write the changes. The human is the gate.

Aider inverts that. Each change is committed immediately, with a generated commit message, in a clean Aider commit. The human is not a gate; the human is a reviewer of recorded history. If you don’t like a change, you git revert or git reset. If you want to keep some changes and drop others, you git rebase -i. The interface for managing AI work is the same interface you already use for managing human work.

The benefits of this design are concrete. First, the audit trail is honest. There is no “Aider thought about this for 30 seconds and then did something that doesn’t show up in git” — every operation has a commit. Second, the rollback story is simple. git revert HEAD undoes the last AI change. git reset HEAD~5 undoes the last five. Third, multi-task isolation is free. Aider on branch A and Aider on branch B don’t conflict; they’re just two terminal sessions making commits to two branches.
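The rollback story can be demonstrated without Aider at all, because it is plain git. A self-contained sketch in a scratch repo, with commit messages standing in for Aider's generated ones:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email aider@example.com && git config user.name aider
echo "v1" > app.py && git add app.py && git commit -qm "aider: add app.py"
echo "v2" > app.py && git commit -qam "aider: rewrite app.py"
git revert --no-edit HEAD >/dev/null   # undo the last "AI" commit, keeping history
cat app.py                             # prints v1
```

The revert itself is a new commit, so the audit trail records both the AI change and the human's rejection of it.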

The cost of this design is also concrete. Your git history fills with AI commits — typically one per request, sometimes several for a single multi-file task. Teams that care about a curated history will need to squash before merging. Aider’s docs cover the squash-merge workflow, but the default aider config produces a busier log than a human-written one.
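The squash-merge workflow the docs describe is standard git; a sketch, with example branch names:

```shell
# Collapse a busy Aider session into one reviewable commit before merging.
git checkout main
git merge --squash feature/rename-email   # stage the branch's net diff without committing
git commit -m "Rename User.email to User.contact_email"
```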

The other cost is that Aider’s model of “complete the request, commit, return prompt” doesn’t fit work that requires intermediate review. If you want to see the proposed change before it lands, you have to ask Aider to “show me the diff first,” then iterate. The default optimizes for fast forward motion. The careful-review path exists but is opt-in.

Repo map and how Aider sees your codebase

Aider’s second moat is the repo map. When you start Aider in a project, it scans the codebase, builds a structural index of files, classes, and functions, and includes the most relevant pieces of that index in every prompt to the model.

The implementation is more clever than a flat file listing. Aider uses tree-sitter to parse source files into ASTs, extracts the symbol graph, and selects which symbols to include based on which files have been touched, which the user has mentioned, and the model’s available context window. On a 50,000-file monorepo, the repo map fits into the prompt budget of a Claude Sonnet 4.5 request without overflowing.
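The idea can be approximated in a few lines. This is a toy stand-in for the tree-sitter pass, using grep instead of a real AST parse (illustrative only; Aider's actual index is far richer and ranks symbols by relevance):

```shell
set -e
src=$(mktemp -d)
cat > "$src/auth.py" <<'EOF'
class TokenAuth:
    pass

def parse_config(path):
    pass
EOF
# A crude "repo map": top-level symbols per file, the kind of skeleton
# aider sends to the model instead of whole file bodies.
grep -HnE '^(class|def) ' "$src"/*.py
```

Even this crude skeleton is enough for a model to answer "where is parse_config defined" without seeing the file bodies, which is the token-saving trick the real repo map exploits.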

What this means in practice: Aider can answer “where is parse_config called from” or “rewrite the auth middleware to use the new token format” on a real project without you having to manually attach files. The model gets enough structural context to know where to look, and Aider’s per-request file attachment (/add and /drop) lets you scope manually when the auto-selection misses.

The tradeoff is the same one every codebase-aware tool makes: there is a cost in tokens to send the repo map, and a cost in latency to build it on first run. On a fresh checkout of a large repo, the first Aider command is slower than the second, because the AST parse and symbol extraction is not cached yet. After the first run, subsequent commands hit the cache and are fast.

Models, costs, and the hidden BYOK math

The advertised price for Aider is $0. The actual price for Aider is whatever your model provider charges, and that math is the single most important thing to understand before adopting it for primary use.

Claude Sonnet 4.5 — the most common Aider model in 2026 — costs $3 per million input tokens and $15 per million output tokens through Anthropic’s direct API. A typical coding hour with Aider uses 200,000 to 400,000 tokens of input (because the repo map and prior conversation are re-sent each turn) and 30,000 to 100,000 tokens of output. That works out to $1–$3 per coding hour with Claude Sonnet, in line with public benchmarks.
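The arithmetic is worth checking. A back-of-envelope sketch using the token ranges and prices above (rounding the result to $1–$3/hour gives the monthly figures discussed next):

```shell
# Claude Sonnet 4.5 direct-API pricing from above: $3/M input, $15/M output.
awk 'BEGIN {
  lo = (200000/1e6)*3 + (30000/1e6)*15    # light hour:  0.60 + 0.45
  hi = (400000/1e6)*3 + (100000/1e6)*15   # heavy hour:  1.20 + 1.50
  printf "per hour: $%.2f to $%.2f\n", lo, hi
  printf "80 hours/month: $%.0f to $%.0f\n", lo*80, hi*80
}'
```

Note that input tokens dominate at the low end and output tokens at the high end, which is why long conversations (more re-sent context) push costs up faster than long answers do.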

For someone using Aider as their primary tool four hours a day, twenty days a month, that’s $80–$240 per month in API spend. A heavy user landing at $150 per month is not unusual. Compare that against $20 Cursor Pro or $60 Cursor Pro+ — Aider is cheaper at low volume and roughly even or more expensive at high volume, because Cursor amortizes Composer 2 across paying users and absorbs some of the model cost.

The escape hatch is local models. Ollama running Qwen 2.5 Coder, DeepSeek Coder, or a Llama 3 variant on a developer-class machine drops the per-request cost to electricity. The quality gap is real — local models in 2026 are roughly where GPT-3.5 was in 2023 for serious coding work — but for Q&A, repo navigation, or boilerplate generation, they are good enough and the cost is zero. Aider is the only major AI tool where this BYOK path is fully supported.
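The local path is a short sketch. The model name is an example, and the endpoint variable follows Aider's Ollama docs; verify both against current documentation:

```shell
ollama pull qwen2.5-coder                        # example local model
export OLLAMA_API_BASE=http://127.0.0.1:11434    # default Ollama endpoint
aider --model ollama_chat/qwen2.5-coder
```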

The third option is OpenRouter, Together, Fireworks, or Groq — third-party model gateways that often price below Anthropic and OpenAI direct, sometimes with free tiers. Aider’s --model flag accepts any OpenAI-compatible endpoint, which means moving between gateways is one config change.
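Switching gateways is, as described, one config change. A sketch, where the endpoint and model identifiers are examples rather than guaranteed-current names:

```shell
# OpenRouter via aider's provider-prefixed model name
aider --model openrouter/deepseek/deepseek-chat

# Any OpenAI-compatible endpoint, e.g. a Together-style gateway
export OPENAI_API_BASE=https://api.together.xyz/v1
export OPENAI_API_KEY=your-gateway-key
aider --model openai/meta-llama/Llama-3-70b-chat-hf
```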

The honest summary on cost: Aider’s marketing says “free.” The reality is “free if you bring your own model and you understand the per-token economics.” Heavy users on premium models will spend more on Aider than on a Cursor subscription. Light users or local-model users will spend less or nothing.

Where Aider shines

Three concrete scenarios where Aider is the best tool for the job.

Multi-file refactors with clean history. “Rename User.email to User.contact_email everywhere, update every caller, update the migration, update the tests.” Aider does this in 30–90 seconds, produces one commit per logical step, and leaves you with a git log that reads like a junior engineer wrote it. The auto-test-and-lint loop catches any callers that broke. Cursor’s Composer can do the same task, but the result is a single huge diff that you have to manually split for a clean PR. Aider’s natural output is the clean PR.

Open-source contributions where audit matters. When you submit a patch to a project that has a “no AI-generated code” policy or an “AI-generated code must be disclosed” policy, Aider’s commit messages and the per-step audit trail make the disclosure honest and easy. The PR description can link to the Aider session log. The reviewer can see exactly what the AI proposed and what the human accepted. Cursor and Copilot make this story harder; Aider makes it the default.

Working in a real codebase you don’t control. Maintainers of large open-source projects often work on a checkout of someone else’s repo. Aider’s “no proprietary state, no opinionated IDE setup, drop into any directory and run” model is friction-free in this context. You don’t need to install an extension, configure a project, or trust a cloud index — you run aider and it works.

Where Aider falls short

Three weaknesses to name honestly.

The lack of inline completion is real. Aider does not draw suggestions in your editor as you type. If you want the “Tab to accept the next line” experience, you need a separate tool — Copilot, Codeium, Cursor’s Tab — running alongside Aider. Most Aider users do exactly this, and the doubled tooling cost is a price they pay. For developers who want a single integrated setup, Aider’s CLI-only model is a non-starter.

The chat surface has more friction than a polished editor. Aider’s commands (/add, /drop, /diff, /run, /test, /web, /ask) are powerful but require reading the docs. The first two hours of using Aider feel less smooth than the first two hours of using Cursor, because the muscle memory is different and the visual feedback is sparser. Once the muscle memory builds, productivity is competitive. The onboarding cliff is the cost.

Single-task model. Aider runs one conversation at a time per terminal session. There is no native multi-agent, no background task queue, no parallel-run model the way Cursor 3.0’s Agents Window or Cline’s parallel sessions support. Power users open multiple terminal panes with different aider instances on different branches, which works but is not the same as a built-in multi-agent UI.
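The multi-pane pattern pairs naturally with git worktrees, which give each aider instance its own checkout on its own branch. A sketch, with example paths and branch names:

```shell
# Terminal 1: main checkout, working on task A
cd ~/src/proj && aider

# Terminal 2: a second checkout of the same repo on a new branch
git -C ~/src/proj worktree add -b task-b ../proj-task-b
cd ~/src/proj-task-b && aider
```

Each session commits to its own branch, so the two never conflict until you choose to merge.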

Aider vs Claude Code, OpenCode, and Cline

The CLI AI coding assistant category in 2026 has stratified, and Aider occupies one of its three or four real positions.

Aider vs Claude Code. Claude Code is Anthropic’s official CLI agent, launched in early 2025 and now well-established. The differences: Claude Code is Anthropic-only (no model choice, no BYOK to other providers), priced as part of the Claude.ai subscription bundle, and tightly integrated with Claude’s tool-use and computer-use capabilities. Aider is provider-neutral, priced per-token at the provider you choose, and lighter on the agent-runtime side. Public benchmarks suggest Aider uses roughly a quarter of the tokens per task that Claude Code does, because Aider’s repo map is more parsimonious. Claude Code is the better choice if you’re already on the Claude subscription and want zero config. Aider is the better choice if you want provider neutrality, lower per-token cost, or local-model support.

Aider vs OpenCode. OpenCode is the newer, smaller, more terminal-native open-source CLI agent. It targets the same audience as Aider but with a more minimal surface and a more modern Rust implementation. OpenCode is faster to start, has a smaller dependency footprint, and a sharper command surface. It is also younger, with fewer integrations, less battle-tested model coverage, and a smaller community. Aider is the better choice for stability and breadth. OpenCode is the better choice if you want the lightest possible CLI and you don’t need every feature.

Aider vs Cline. Cline is the VS Code extension comparison — same target audience (autonomy-first, BYOK, open-source), different surface. Cline lives inside VS Code; Aider lives in the terminal. Cline has Plan/Act and a sidebar UI; Aider has the chat command surface. Cline’s MCP marketplace and computer-use are richer; Aider’s git-first workflow and repo map are sharper. The honest read: if you live in VS Code, Cline is the better default. If you live in Neovim, JetBrains, or any terminal-first setup, Aider is the better default.

Verdict: who should use Aider

Use Aider if any of these apply: you live in the terminal, you want a clean git history as the audit log, you contribute to open-source projects with strict AI disclosure rules, you need provider neutrality on models, you want local-model support without a side project, or you want the lowest-friction tool for multi-file refactors that ship as clean PRs.

Skip Aider if you want one-click setup, an integrated editor experience, inline completion, or a managed billing relationship instead of a per-token API tab. The polish gap with Cursor or Windsurf is real, and Aider does not try to close it — it tries to be the best tool at a different job.

The honest summary: Aider is the open-source backbone of the AI coding tools category. Most professional developers using AI for serious work have an Aider checkout somewhere on their machine, even if their primary editor is a commercial product, because the git-first workflow and the cleanness of the audit trail are uniquely useful for certain tasks. Treat it as a complement to your editor, not a replacement, and the math works out.