Tinker AI

AI coding and git: commits, branches, and the etiquette of an AI co-author

Published 2026-05-11 by Owner

Most git workflows were designed around one person per branch, one commit message per idea. AI-assisted coding breaks that assumption in subtle ways. The agent writes code that you didn’t type. It edits across multiple files in one turn. Left unchecked, it can rewrite history in ways that are hard to untangle.

None of this is a reason to avoid AI tools for coding. But the workflow habits that work well for purely human coding transfer only partially. A few adjustments save real headaches.

The challenges cluster around two things: attribution and safety. Attribution — who wrote this, and is that reflected in the commit? Safety — what’s the blast radius if the agent goes off-script, and how quickly can you recover? Both are solvable with conventions that don’t add much friction once they become habit.

Commit attribution: who actually wrote this?

GitHub and GitLab both support the Co-Authored-By trailer, introduced to handle pair programming but equally applicable to AI sessions:

git commit -m "Add rate-limit handling to API client

- Retry on 429 with exponential backoff
- Surface error details in ApiError type

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>"

The trailer appears in the commit detail view and counts the AI toward the contribution graph (or not, depending on whether the email is registered — usually it isn’t, which is fine). The more important use is human: a new maintainer scanning the log can see which commits had AI involvement and calibrate expectations accordingly.

Whether to include it depends on your team’s norms. A reasonable policy: add Co-Authored-By when the AI wrote more than half the lines in the commit, skip it when the AI only suggested a rename. The threshold is arbitrary but any consistent policy beats no policy.

What should the commit message itself contain? The “why” stays human, always. The AI knows what it did; it doesn’t know why the task existed, what the ticket said, or what was decided in the meeting before the coding session started. Commit messages like “AI implementation of feature” are almost useless. Messages like “Skip null check on user.id — auth middleware guarantees non-null after login” are useful even if an AI wrote the code. Write the message yourself, or review and edit the one the AI drafts.

Some teams ask the AI to draft the commit message as part of the session, then the developer edits it before committing. That works well as long as editing actually happens — rubber-stamping an AI-drafted message erases the human context that makes commit history searchable six months later.

A practical structure that works across team sizes:

<one-line summary in imperative mood>

<optional context paragraph: what problem this solves and why this approach>

Co-Authored-By: <AI name> <noreply@anthropic.com>

The paragraph slot is where the human judgment lives. It’s also where reviewers look first.

Branch-per-task hygiene: never let the agent edit on main

The strongest safeguard in AI-assisted git workflows is also the simplest: the agent never touches main. Every AI-assisted task gets its own branch.

git checkout -b feat/add-rate-limit-retry
# run the AI session here
git push -u origin feat/add-rate-limit-retry
# open PR for review

This matters more with AI than with a junior developer, because an AI agent will cheerfully make dozens of file edits in a single turn without pausing to check whether any of them conflicts with something in flight on main. On main, there’s no checkpoint. On a branch, the PR diff is the checkpoint.

A secondary benefit: one task per branch keeps the blast radius of a bad AI session contained. If the agent went in the wrong direction for 20 minutes and generated 400 lines of unhelpful code, a git checkout main && git branch -D feat/bad-idea costs nothing. On main, the equivalent is a messy revert commit that pollutes the history.

Branch naming doesn’t need special AI-specific conventions. feat/, fix/, chore/ prefixes are enough. The convention is about what the task is, not who did it.
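The never-touch-main rule can also be enforced mechanically rather than by discipline alone. One option is a local pre-commit hook that refuses to commit while main is checked out; the agent hits the hook like any other `git commit` caller. A self-contained sketch (the hook body, branch names, and file names are all illustrative):

```shell
#!/bin/sh
# Illustrative: a local pre-commit hook that blocks direct commits on main.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main                      # -b needs Git 2.28+
git config user.email dev@example.com
git config user.name Dev

# Install the hook: fail if HEAD is on main.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
branch="$(git symbolic-ref --short HEAD 2>/dev/null)"
if [ "$branch" = "main" ]; then
  echo "Refusing to commit directly to main; create a branch first." >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo hello > file.txt
git add file.txt
if git commit -q -m "direct to main" 2>/dev/null; then
  echo "hook did not fire"
else
  echo "blocked on main"                 # hook rejected the commit
fi

git checkout -q -b feat/demo             # same staged change, task branch
git commit -q -m "on a branch"           # succeeds
```

Local hooks aren’t shared through the repository by default, so a team would distribute this via `core.hooksPath` or a tool like pre-commit; server-side branch protection is the stronger backstop.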

One argument for keeping the scope of AI sessions narrow: the wider the task, the longer the agent runs before you see a diff, and the harder the review is. A PR that says “refactor authentication module” and touches 24 files is opaque to review regardless of whether a human or AI wrote it. A PR that says “extract token validation into a standalone function” and touches 4 files is clear. AI sessions compound this effect because the agent will happily generate scope, whereas a human developer gets tired. Narrow the task before the session starts; it makes the branch and the PR both easier to manage.

The “AI rewrote my history” recovery

This happens. An AI agent asked to “clean up these commits” or “squash before merging” can run git rebase -i or git reset --hard and lose work. The changes may look gone.

They usually aren’t. Git’s reflog records every place HEAD pointed, including positions from before destructive operations:

git reflog
# output like:
# a1b2c3d HEAD@{0}: rebase -i (finish): returning to refs/heads/feat/my-task
# 9f8e7d6 HEAD@{1}: rebase -i (squash): squash commits
# 3c4d5e6 HEAD@{2}: commit: WIP: fix edge case in parser
# 7a8b9c0 HEAD@{3}: commit: Add initial parser implementation

The commits before the rebase are still in Git’s object database. To get back to the state before the rebase ran:

git reset --hard HEAD@{3}

Replace 3 with whatever index shows the last commit before the destructive operation. Reflog entries expire after 90 days by default (30 days for entries no longer reachable from any branch, which is what lost commits become), so this recovery works as long as the loss is discovered reasonably soon.
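A less destructive variant of the same recovery: instead of moving the branch straight away with `reset --hard`, park the old position on a rescue branch and inspect it first. A self-contained sketch (the repository, file, and commit names are made up, and the reflog index depends on what actually ran):

```shell
#!/bin/sh
# Illustrative: recover a commit lost to `reset --hard` by parking the
# pre-destruction position on a rescue branch before overwriting anything.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main                      # -b needs Git 2.28+
git config user.email dev@example.com
git config user.name Dev

echo one > f.txt; git add f.txt; git commit -q -m "first"
echo two > f.txt; git commit -aq -m "second"

git reset -q --hard HEAD~1               # "second" now looks gone

git branch rescue 'HEAD@{1}'             # the reflog still points at it
git log --oneline rescue                 # inspect before touching main
git reset -q --hard rescue               # looks right: restore the tip
git branch -q -d rescue
```

The rescue branch costs nothing and removes the risk of resetting to the wrong reflog index, which would compound the damage.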

The practical prevention: never give an AI agent access to git history-rewriting commands (rebase, reset --hard, push --force) unless you’ve committed everything you care about first. For tools like Claude Code or Cursor Agent that can run arbitrary shell commands, this means being explicit in the prompt: “don’t rebase; if you want to clean up commits, note what you’d do and I’ll handle it.”

If the loss is already done and the reflog approach above doesn’t surface the right commit, check the stash too: git stash list shows anything the agent might have stashed before a destructive operation. It’s less common, but it’s been the recovery path more than once.

Rebasing during an AI session: timing matters

Rebasing mid-session is a context-drift problem. Here’s why: an AI agent’s context window holds the state of files as they were when it read them. If a rebase runs midway through the session and shifts line numbers or renames functions, the agent’s subsequent edits are based on a mental model that no longer matches the disk. The agent doesn’t re-read files after every action unless forced to.

Two approaches, depending on how the session is going:

Finish the task, then rebase. If the AI session is productive and nearing completion, don’t interrupt it. Finish the logical unit of work, commit, then rebase onto the updated base. This is the cleaner path.

Pause the session, rebase, restart. If a rebase is urgent (a dependency was just merged that affects the exact files being edited), pause the AI session explicitly. Run the rebase, resolve any conflicts manually, then start a fresh AI session with the updated state. Continuing the old session risks the agent making edits against the pre-rebase file state.

The failure mode to avoid: running a rebase in the terminal while the AI session is mid-flight, then feeding the agent’s output (which references pre-rebase state) back into the post-rebase tree. The diffs will look wrong in subtle ways, and the resulting merge conflicts are harder to untangle than if either step had run cleanly.
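The pause-rebase-restart path, condensed into commands. A self-contained sketch (branch names, file names, and the "dependency merged" commit are all invented for illustration):

```shell
#!/bin/sh
# Illustrative: commit in-flight work, rebase the task branch onto the
# moved main, then start a fresh AI session against the new state.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main                      # -b needs Git 2.28+
git config user.email dev@example.com
git config user.name Dev
echo base > shared.txt; git add shared.txt; git commit -q -m "base"

# AI session in flight on a task branch; checkpoint it with a commit.
git checkout -q -b feat/my-task
echo task > task.txt; git add task.txt; git commit -q -m "task work"

# Meanwhile main moves underneath the session.
git checkout -q main
echo update >> shared.txt; git commit -aq -m "dependency merged"

# Pause the AI session, rebase, resolve any conflicts manually.
git checkout -q feat/my-task
git rebase -q main
git log --oneline                        # task work now sits on the new main
# ...start a fresh AI session here, so the agent reads post-rebase files.
```

The checkpoint commit before the rebase matters: it guarantees the reflog has a clean pre-rebase position to fall back to if the rebase goes wrong.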

The one habit that prevents most merge-conflict pain

Commit AI changes at logical steps, not at session end.

An AI session that runs for 45 minutes without a commit can produce changes across 15 files. If another developer (or another AI session on a different branch) touched any of those files during that time, the merge is painful. The longer the session runs without commits, the larger the divergence window.

The discipline that works: every time the AI completes a logical step — “added the parser class,” “updated the API types,” “fixed the failing tests” — commit before moving on.

# after the AI adds the parser class and tests pass:
git add src/lib/parser.ts src/lib/parser.test.ts
git commit -m "Add CSV parser with quoted-field support"

# after the AI updates the API types:
git add src/types/api.ts
git commit -m "Extend ApiResponse type to include pagination metadata"

These commits don’t need to be beautiful. They need to exist. Granular commits in an AI session serve the same function as autosave: they give you recovery points, they make the eventual rebase cleaner, and they interleave more safely with parallel work.

A session that produces 8 commits is easier to review than one that produces 1 commit. The PR diff is identical in size, but git blame and bisect work much better on granular history. If the AI’s change to parser.ts introduced a bug, a granular commit lets git bisect find it in one or two steps. A single giant commit means manually reading 400 lines of diff.
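To make the bisect payoff concrete: with the bug isolated to one small commit, `git bisect run` can find it without any manual diff reading. A sketch (the "bug" marker, step names, and test command are invented stand-ins for a real failing test):

```shell
#!/bin/sh
# Illustrative: granular commits let `git bisect run` isolate a regression
# automatically. The grep stands in for a real test command.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main                      # -b needs Git 2.28+
git config user.email dev@example.com
git config user.name Dev
echo init > log.txt; git add log.txt; git commit -q -m "init"

# Simulate an AI session committed in small steps; step 3 sneaks in the bug.
for i in 1 2 3 4 5; do
  if [ "$i" -eq 3 ]; then
    echo bug > parser.txt
  else
    echo "step $i" >> log.txt
  fi
  git add -A
  git commit -q -m "step $i"
done

git bisect start HEAD HEAD~5 >/dev/null              # bad=now, good=init
git bisect run sh -c '! grep -q bug parser.txt 2>/dev/null' >/dev/null 2>&1
git tag first-bad refs/bisect/bad                    # capture before reset
git bisect reset >/dev/null 2>&1
git log -1 --format=%s first-bad                     # the guilty commit
```

With six commits, bisect needs two or three checkouts; against a single squashed commit it could only tell you what you already knew.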

The counterargument is that granular commits produce messier history before squash-merge. That’s true. If the team uses squash-merge on PRs, the internal granularity collapses at merge time and doesn’t appear in main’s history. Commit often during the session, squash at merge — you get the recovery points without polluting the long-term log.

One habit, significant payoff: treat each AI-completed step like a save point in a game. Commit before moving to the next step.