Zed multi-cursors plus AI: the pattern Cursor can't match
Published 2026-05-11 by Owner
Cursor has a better agent loop for open-ended tasks. Zed has something Cursor’s agent loop structurally cannot replicate: the ability to place 30 cursors at 30 different callsites and ask the AI to suggest the right transformation at each one simultaneously, with each suggestion grounded in its own local context.
This is not a niche power-user trick. For the specific class of problem — mechanical multi-callsite refactors — it is the fastest approach available in any editor today. Understanding when to reach for it, and when not to, is the whole skill.
Setting up a multi-cursor session
Zed’s multi-cursor bindings will be familiar if you’ve used VSCode. The core set:
Cmd-D — select the next occurrence of the current selection, adding a cursor there
Cmd-Shift-L — select all occurrences of the current selection in the file at once
Alt-click — place an additional cursor at any arbitrary position
Cmd-Shift-Alt-↑ / Cmd-Shift-Alt-↓ — add a cursor one line up or down
For a multi-callsite refactor, the standard setup looks like this:
# 1. Select the function name you're replacing
# 2. Cmd-D repeatedly to build up a selection, or
# Cmd-Shift-L to select all occurrences at once
# 3. Scan each cursor visually — verify they all land where you expect
# 4. Invoke the inline AI assistant
The cursors don’t have to be uniform occurrences of the same token. You can alt-click to add one at line 12, Cmd-D to grab three occurrences starting at line 45, and then alt-click again at line 88. The multi-cursor state is additive. This matters for real refactors, which rarely look like “find and replace all” — sometimes only a subset of callsites need the change, and you can scope the cursor set to exactly those.
Pattern matching helps for larger files. Cmd-F in Zed accepts regular expressions. A pattern like fetchUser\( will highlight every callsite that matches; selecting the first result and pressing Cmd-Shift-L puts a cursor at every match in the file. For callsite shapes that are syntactically distinctive, this is faster than building up matches one at a time with repeated Cmd-D.
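To make the matching behavior concrete, here is a sketch of what a pattern like fetchUser\( actually matches in a file mixing real callsites, comments, and strings (the file contents are invented for illustration):

```typescript
// Hypothetical file contents: two real callsites, a comment, and a string.
const lines: string[] = [
  "const u = await fetchUser(id, true);",
  "// fetchUser(id, flag) is deprecated",
  "log('calling fetchUser(...)');",
  "return fetchUser(userId, includeProfile);",
];

// The same pattern you'd type into Cmd-F with regex mode enabled.
const pattern = /fetchUser\(/;

// All four lines match, including the comment and the string literal,
// which is exactly why the verification step matters.
const matches = lines.filter((line) => pattern.test(line));
console.log(matches.length); // 4
```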
One detail worth knowing: the step where you verify cursor positions is not optional. Regex matches can land inside string literals, comments, or test files you weren’t planning to touch. Before invoking the AI, scan the set of highlighted positions; if any cursor doesn’t belong, press Esc to collapse back to a single cursor and rebuild the set, Alt-clicking to add only the positions you actually want. Thirty seconds of verification here prevents minutes of cleanup after.
Using AI to generate variation across cursors
Once cursors are placed, Ctrl-Enter opens the inline AI assistant. The prompt you write applies at every cursor simultaneously. The behavior that makes this valuable: the model receives an independent context window anchored at each cursor position, not a single merged view of the whole file. Each suggestion is generated based on the surrounding code at that cursor’s location.
This is the part Cursor’s architecture cannot match. Cursor’s agent loop processes a task serially — it reads files, reasons about them, makes changes, and reads more files. For multi-callsite work, it visits each callsite in sequence. Mistakes accumulate in a single context. You don’t find out the fourth callsite was handled wrong until the agent has already modified callsites five through eighteen.
Zed’s multi-cursor AI produces N independent suggestions for N cursors, each evaluated in isolation. Critically, you review them before accepting. The inline assistant shows a diff at each cursor; Tab accepts, Esc skips. You move through the set at whatever pace you want, accepting the ones that look right and skipping the ones that need manual attention.
A prompt that works well across cursors has three parts:
// 1. State the old form precisely
// 2. State the new form precisely
// 3. Call out any edge case you expect
Rewrite this callsite from the old 2-argument form:
fetchUser(userId, includeProfile)
To the new options-object form:
fetchUserV2(userId, { include: ['profile'] })
Map: true → include: ['profile'], false → include: []
If includeProfile is a variable, preserve it with a ternary.
The model at each cursor sees that callsite’s actual arguments. A callsite passing the literal true gets include: ['profile']; one passing false gets include: []; one passing a variable shouldInclude gets include: shouldInclude ? ['profile'] : []. Each suggestion is grounded in what’s actually at that position.
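The three outcomes the prompt describes can be sketched as code (fetchUserV2 and the mapping helper are illustrative, not a real API):

```typescript
// The mapping rule from the prompt, expressed as a helper:
// true -> ['profile'], false -> [], variable -> ternary.
function includeFor(includeProfile: boolean): string[] {
  return includeProfile ? ["profile"] : [];
}

// The three transformed callsite shapes:
//   fetchUser(userId, true)          -> fetchUserV2(userId, { include: ['profile'] })
//   fetchUser(userId, false)         -> fetchUserV2(userId, { include: [] })
//   fetchUser(userId, shouldInclude) -> fetchUserV2(userId, { include: shouldInclude ? ['profile'] : [] })

console.log(includeFor(true));  // [ 'profile' ]
console.log(includeFor(false)); // []
```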
A concrete refactor: 31 callsites of a changed logger signature
The scenario: logger.warn(message, meta) is being replaced with logger.warn({ message, ...meta }). The old signature took a message string and a metadata object as separate positional arguments; the new one takes a single merged object. There are 31 callsites spread across 8 files.
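As a sketch of the two signatures side by side (the function shapes here are illustrative, not the actual logger library's interface):

```typescript
type Meta = Record<string, unknown>;

// Old shape: message and metadata as separate positional arguments.
function warnOld(message: string, meta: Meta) {
  return { message, meta };
}

// New shape: a single merged object.
function warnNew(entry: { message: string } & Meta) {
  return entry;
}

// A callsite like logger.warn('slow query', { ms: 1200 }) becomes
// logger.warn({ message: 'slow query', ms: 1200 }):
const after = warnNew({ message: "slow query", ms: 1200 });
console.log(after); // prints: { message: 'slow query', ms: 1200 }
```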
With an agent loop (Cursor Composer or Cline in Act mode): describe the refactor in natural language, let the agent read files and apply changes. Typical wall-clock time: 4–6 minutes. Typical outcome: 27–29 callsites correct, 2–4 with subtle mistakes — wrong spread order, dropped meta field, or a callsite where meta was conditionally undefined and the agent missed the guard.
With Zed multi-cursor:
# Step 1: Open first file, Cmd-Shift-L on `logger.warn(`
# This selects all occurrences in the file
# Step 2: Write the inline AI prompt:
#
# Rewrite this logger.warn callsite from:
# logger.warn(message, meta)
# To:
# logger.warn({ message, ...meta })
#
# If meta might be undefined at this callsite, preserve
# the guard: logger.warn({ message, ...(meta ?? {}) })
# Step 3: Tab through suggestions, accept or skip each one
# Step 4: Repeat for remaining 7 files
Each of the 31 cursors gets its own suggestion. The whole review pass takes about 90 seconds. The edge cases — callsites where meta is inline-constructed, where it’s a variable that might be undefined, where it’s already a spread — are visible as distinct proposals during review, not buried in a diff that went past without stopping.
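The undefined-meta guard from the prompt is easy to sanity-check. In plain JavaScript, spreading undefined in an object literal is already a no-op, so the guarded form mainly documents intent and satisfies strict type checkers. A minimal sketch:

```typescript
// The guarded form suggested in the prompt: safe whether or not meta is set.
function toEntry(message: string, meta?: Record<string, unknown>) {
  return { message, ...(meta ?? {}) };
}

console.log(toEntry("disk full", { device: "sda1" })); // prints: { message: 'disk full', device: 'sda1' }
console.log(toEntry("disk full"));                     // prints: { message: 'disk full' }
```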
The comparison isn’t that multi-cursor is faster in wall-clock time (for 31 callsites across 8 files, it’s roughly equivalent). The difference is the review surface. With the agent loop, you review a final diff that contains all 31 changes at once, and subtle mistakes are easy to miss. With multi-cursor, you make 31 micro-decisions in sequence, and the mistake at callsite 19 is as visible as the mistake at callsite 1.
The multi-cursor approach doesn’t eliminate judgment. It concentrates judgment into a structured review pass over individually presented proposals, rather than a retrospective scan of a large aggregate diff.
Where this fails
Cursors that drift out of sync. Zed anchors cursors to tokens rather than line numbers, so simple transformations are stable. The problem appears when a transformation is multi-line: if accepting a suggestion at cursor A inserts three lines, the visual positions of cursors B through Z shift on screen. The cursors themselves don’t move to the wrong code — they stay anchored to their tokens — but the visual scanning rhythm breaks. For multi-line transformations, accept all suggestions first, then review the diff as a whole rather than trying to track each cursor position.
Local context the AI misses. The context window at each cursor includes the surrounding code at that location, not the whole file. If the correct transformation depends on something defined elsewhere — a type alias, a constant, an import — the model may not see it. Inline that context into the prompt:
// Context for this prompt:
// UserOpts = { timeout: number; retries: number }
// The old callsite used an inline object matching this shape.
//
// Rewrite to pass a UserOpts variable instead of the
// inline object literal. Name the variable `opts`.
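Under that prompt, a callsite that previously passed the inline object would come back as something like this (the surrounding call is hypothetical):

```typescript
// The shared definition inlined into the prompt above:
type UserOpts = { timeout: number; retries: number };

// Before: doFetch({ timeout: 5000, retries: 3 })
// After the requested transformation:
const opts: UserOpts = { timeout: 5000, retries: 3 };
// doFetch(opts);
console.log(opts.timeout); // 5000
```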
This works for one or two shared definitions. If the transformation requires understanding five different cross-file types, the prompt becomes a documentation block and the AI starts making assumptions. That’s the signal to use the agent loop instead.
Transformations with non-local dependencies. If the right output at callsite A depends on what callsite B does — a counter, a shared cache key, a derived value — multi-cursor AI cannot help. Each suggestion is generated independently with no awareness of the other cursors. These cases require either the agent loop or manual editing.
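A minimal sketch of such a dependency (names invented): each callsite must receive the next value of a shared counter, so the correct argument at any one site depends on every site before it, which no independently generated suggestion can know.

```typescript
// Sequential registration: callsite N needs index N.
let nextIndex = 0;
function registerHandler(name: string): number {
  // The returned value depends on global call order, not local context.
  return nextIndex++;
}

const a = registerHandler("auth");
const b = registerHandler("billing");
const c = registerHandler("cache");
// a, b, c are 0, 1, 2; a suggestion generated at the "billing" cursor
// alone has no way to know its index should be 1.
console.log(a, b, c); // 0 1 2
```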
Too many cursors for too much variation. With 5 cursors and 5 structurally different transformation shapes, the review overhead erases the speed gain. Multi-cursor is efficient when the transformation is the same mechanical operation applied to the same call shape with varying arguments. When each callsite needs a genuinely different approach, scope the cursor set to only the uniform subset and handle the outliers separately.
When single-cursor is actually faster
For fewer than 3 callsites, multi-cursor setup takes longer than the alternatives. The overhead — selecting occurrences, verifying positions, composing a prompt that works at N locations — only pays off at scale. For 1–2 callsites, a single inline AI prompt or a manual edit is faster.
For context-heavy transformations, the single-cursor inline assistant wins. Placing a cursor at one callsite and writing a detailed prompt with full context is more reliable than packaging that same context into a prompt that has to work at 30 different positions without assuming too much. The tradeoff is always between how much per-callsite context the model needs and how mechanically uniform the callsite shapes are.
For whole-function rewrites, the agent loop is better. Multi-cursor AI works at the callsite level — the call expression and its immediate surroundings. If the task is “rewrite this entire function to use a new data shape,” that’s a single-location transformation that benefits from the agent having the full file in context, not a multi-cursor spread across 30 callsites.
A rough threshold: if the transformation is the same mechanical operation applied to variations of the same call shape, and there are 3 or more callsites, multi-cursor. If the callsites are fewer, or if each one requires deep local context to transform correctly, single-cursor or the agent loop.
The honest version of the pattern: it handles the boring part of refactoring well. The mechanical callsite transformation where the shape of the output is known but typing it 30 times is tedious and error-prone. The part where mistakes come not from not knowing what to do but from not having enough attention left to do it 30 times in a row. For that class of work, Zed’s multi-cursor plus AI is the clearest path from “I need to update all of these” to “I’ve reviewed every change and accepted the ones that are right.” No other tool currently does this as directly.