Tinker AI

Outcome

The refactor finished in 6 days with no regressions; Zed's speed made large-workspace navigation easier, while AI assistance stayed limited to tests and small rewrites.


I used Zed as the primary editor for a Rust workspace refactor that touched six crates. The goal was to move request parsing and validation out of a large API crate and into smaller crates that could be tested independently.

This was not a case where the AI wrote most of the code. Rust refactors with ownership changes, trait bounds, and crate boundaries still require careful human judgment. The useful part of Zed was more basic: it stayed fast while the workspace was noisy, and its AI panel helped with small slices when the context was precise.

That made the refactor smoother than it would have been in a heavier editor.

The workspace

The project had:

  • 6 Rust crates
  • about 38k lines of Rust
  • a large api crate doing too much
  • request parsing mixed with validation and persistence
  • 214 tests across the workspace
  • CI running clippy, fmt, unit tests, and integration tests

The refactor target was:

  • move parsing into request_parser
  • move validation into domain_validation
  • keep HTTP-specific code in api
  • avoid changing response behavior
  • add tests at the new crate boundaries

The risk was not syntax. The risk was moving logic and accidentally changing behavior.
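As a rough illustration of the boundary the new crates aimed for, here is a minimal sketch of a `request_parser`-style public surface. All names and the line format are invented for this example; the real crate parsed JSON request bodies:

```rust
// Hypothetical public surface of a parsing crate: turn raw input into a
// typed request and nothing more. Validation and persistence stay elsewhere.
#[derive(Debug, PartialEq)]
pub struct ParsedRequest {
    pub action: String,
    pub item_count: usize,
}

#[derive(Debug, PartialEq)]
pub enum ParseError {
    Empty,
    MissingField(&'static str),
}

pub fn parse(raw: &str) -> Result<ParsedRequest, ParseError> {
    if raw.trim().is_empty() {
        return Err(ParseError::Empty);
    }
    let mut action = None;
    let mut item_count = None;
    // Toy "key=value;key=value" format standing in for the real JSON parsing.
    for part in raw.split(';') {
        match part.split_once('=') {
            Some(("action", v)) => action = Some(v.to_string()),
            Some(("items", v)) => item_count = v.parse().ok(),
            _ => {}
        }
    }
    Ok(ParsedRequest {
        action: action.ok_or(ParseError::MissingField("action"))?,
        item_count: item_count.ok_or(ParseError::MissingField("items"))?,
    })
}

fn main() {
    assert_eq!(
        parse("action=create;items=3"),
        Ok(ParsedRequest { action: "create".into(), item_count: 3 })
    );
    assert_eq!(parse(""), Err(ParseError::Empty));
}
```

The point of the shape is that the `api` crate consumes a `Result` and decides what an error means over HTTP; the parser never knows about status codes.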

Why Zed was worth testing

I had been using Cursor for similar work, but large Rust workspaces can make editor performance matter. When every rename triggers language-server work, every second of lag makes you more likely to batch changes and review less carefully.

Zed’s pitch is speed, and this project gave that pitch a fair test.

The setup:

  • Zed as the editor
  • rust-analyzer enabled
  • AI panel connected to Claude Sonnet
  • terminal tests run outside the editor
  • small, manual git commits after each boundary change

I did not use Zed as an autonomous coding agent. I used it as a fast editor with occasional AI help.

The refactor sequence

I split the work into six commits:

  1. Create request_parser crate with no behavior changes.
  2. Move pure parsing helpers and add direct tests.
  3. Create domain_validation crate.
  4. Move validation rules behind a small public API.
  5. Update the api crate to call the new crates.
  6. Remove duplicated tests and add integration coverage.

Zed’s AI helped most in commits 2 and 4, where the task was repetitive but bounded.

Example prompt:

This parser test file has three cases for valid JSON request bodies.
Add tests for:
- missing required field
- invalid enum value
- extra unknown field

Follow the existing test style. Do not change parser behavior.

That produced useful test cases. I still edited names and assertions, but the structure was correct.
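The generated tests followed roughly the shape below. This is a simplified, self-contained sketch: the real suite parsed JSON and used fixture helpers, which I have replaced here with a stand-in parser so the three cases are visible:

```rust
// Stand-in for the real parser: accepts "kind=a" or "kind=b".
// Error variants mirror the three new test cases.
#[derive(Debug, PartialEq)]
enum ParseError {
    MissingField,
    InvalidEnum,
    UnknownField,
}

fn parse_body(body: &str) -> Result<(), ParseError> {
    match body {
        "" => Err(ParseError::MissingField),
        "kind=a" | "kind=b" => Ok(()),
        s if s.starts_with("kind=") => Err(ParseError::InvalidEnum),
        _ => Err(ParseError::UnknownField),
    }
}

fn main() {
    // The three cases from the prompt; in the real file each was a
    // separate #[test] function following the existing naming style.
    assert_eq!(parse_body(""), Err(ParseError::MissingField));
    assert_eq!(parse_body("kind=zzz"), Err(ParseError::InvalidEnum));
    assert_eq!(parse_body("color=red"), Err(ParseError::UnknownField));
}
```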

Where Zed’s speed changed the work

The biggest difference was navigation.

During this refactor I was constantly jumping between:

  • route handlers
  • request DTOs
  • validation functions
  • error mapping
  • tests
  • crate manifests

Zed stayed responsive. Search results came back quickly. Multi-buffer editing made it easy to keep the relevant files open without building a messy tab stack.

That sounds mundane, but refactoring is mostly moving through code without losing the thread. Editor performance is not a cosmetic feature when the project is large enough.

Where the AI helped

Test expansion. Zed’s AI was good at writing additional Rust tests when the first case existed. It followed #[test] naming, fixture helpers, and assertion style reliably.

Error enum cleanup. After moving validation errors into a new crate, I needed to update mappings from domain errors to API errors. The AI helped fill out match arms and suggested a couple of missing cases.
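The mapping work looked roughly like this. The types and variant names here are hypothetical stand-ins for the project's actual error enums:

```rust
// Domain-side errors, owned by the validation crate.
#[derive(Debug)]
enum ValidationError {
    MissingField(&'static str),
    OutOfRange { field: &'static str, max: u32 },
    Forbidden,
}

// HTTP-facing errors, owned by the api crate.
#[derive(Debug, PartialEq)]
enum ApiError {
    BadRequest(String),
    Forbidden,
}

// The api crate owns this mapping, so the validation crate never
// learns about HTTP semantics. Filling out arms like these was the
// kind of bounded, repetitive work the AI handled well.
fn to_api_error(err: ValidationError) -> ApiError {
    match err {
        ValidationError::MissingField(f) => {
            ApiError::BadRequest(format!("missing field: {f}"))
        }
        ValidationError::OutOfRange { field, max } => {
            ApiError::BadRequest(format!("{field} exceeds maximum of {max}"))
        }
        ValidationError::Forbidden => ApiError::Forbidden,
    }
}

fn main() {
    assert_eq!(
        to_api_error(ValidationError::MissingField("name")),
        ApiError::BadRequest("missing field: name".into())
    );
    assert_eq!(to_api_error(ValidationError::Forbidden), ApiError::Forbidden);
}
```

Because `match` on an enum is exhaustive, the compiler flags any variant the mapping misses, which is what surfaced the couple of missing cases.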

Doc comments for public crate APIs. I usually dislike generated comments, but crate boundary functions needed short explanations. Zed’s AI produced a reasonable first draft that I tightened.

Small rewrites after compiler errors. For simple trait-bound or lifetime messages, asking the AI to explain the error was occasionally faster than reading the full diagnostic myself.

Where I kept it out

Crate boundary decisions. The AI wanted to expose too much from the new crates. I kept the public APIs small by hand.

Ownership-sensitive changes. A few helpers passed borrowed request data through validation. Suggestions that cloned the data would have made code compile but changed allocation behavior. I rejected those.
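The distinction is easiest to see in a signature. A sketch, with invented names: the rejected suggestions took `Vec<String>` by value, forcing a clone at every call site, while the borrowed form preserves the old allocation profile:

```rust
struct Request {
    items: Vec<String>,
}

// Borrowing the slice keeps validation allocation-free. The rejected
// AI variant took `items: Vec<String>`, which compiled only because
// callers were made to clone.
fn validate_items(items: &[String]) -> Result<(), String> {
    if items.is_empty() {
        return Err("items must not be empty".into());
    }
    if items.iter().any(|i| i.len() > 64) {
        return Err("item name too long".into());
    }
    Ok(())
}

fn main() {
    let req = Request { items: vec!["a".to_string()] };
    assert!(validate_items(&req.items).is_ok()); // no clone of req.items
    assert!(validate_items(&[]).is_err());
}
```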

Behavior-preservation review. The important question was not “does this compile?” It was “does every invalid request still produce the same public error shape?” That required reading tests and API snapshots.

The numbers

  • Total time: 6 working days
  • Files touched: 41
  • Crates added: 2
  • Lines changed: about 2,800
  • Tests added: 39
  • Tests removed or consolidated: 18
  • CI failures during branch: 7
  • Regressions after merge: 0

The original estimate was about 7 to 8 days. Zed did not create a dramatic time saving. The better claim is that it made the 6 days less jagged. I spent less time waiting on the editor and less time recovering from context loss.

One AI mistake worth calling out

The AI suggested this pattern in one validator:

if request.items.is_empty() {
    return Ok(());
}

That compiled. It also changed behavior. Empty items should have returned a validation error, not a successful no-op.
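The correct shape treats emptiness as a failure. A sketch, with a hypothetical `ValidationError` type standing in for the real one:

```rust
#[derive(Debug, PartialEq)]
enum ValidationError {
    EmptyItems,
}

// Empty input is a validation failure, not a successful no-op.
// The AI's guard clause returned Ok(()) here instead.
fn validate(items: &[u32]) -> Result<(), ValidationError> {
    if items.is_empty() {
        return Err(ValidationError::EmptyItems);
    }
    Ok(())
}

fn main() {
    assert_eq!(validate(&[]), Err(ValidationError::EmptyItems));
    assert!(validate(&[1]).is_ok());
}
```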

The mistake was easy to catch because the old behavior had a test. Without that test, the suggestion would have looked like a harmless guard clause.

That is the recurring lesson with AI in refactors: the dangerous suggestions often look reasonable.

What I would repeat

I would use Zed again for Rust refactors where code navigation matters more than full-agent automation.

The pattern that worked:

  • make crate-boundary decisions manually
  • use the AI for tests and repetitive match arms
  • keep compiler output visible
  • commit after each coherent move
  • reject code that preserves types but changes behavior

Zed is not the strongest AI coding product for large autonomous edits. It is a strong editor that happens to have useful AI features. For this kind of Rust work, that was enough.

Verdict

This case made me more convinced that editor speed still matters in the AI era. A slow editor with a powerful agent can still make refactoring feel heavy. A fast editor with modest AI can be the better tool when the human needs to stay in control.

For Rust specifically, I would use Zed for careful refactors and reach for a more agentic tool only when the task is mechanical, well-tested, and easy to review.