#workflow
36 items tagged #workflow.
Starting Claude Code in the wrong directory of a monorepo costs you context. Here's how to manage session roots, per-package CLAUDE.md files, and cross-package edits without losing your mind.
Claude Code's three permission modes gate different levels of agent autonomy. Getting the mode wrong costs you either hours of unnecessary round-trips or a destructive operation you can't undo.
Claude Code's plan mode produces a written proposal with no file edits or commands. Here's when to use it, how to refine plans through dialogue, and when to skip it.
Claude Code stores conversation history locally. Here's how to resume sessions with --continue and session IDs, when it works well, when it doesn't, and one habit that makes re-entry fast.
Subagents give Claude Code an isolated context window for grunt work. Here's when that matters, when parallel dispatch pays off, and when it backfires.
Most days one AI coding tool is enough. This is about the narrower case where running three in parallel — each doing the thing it was built for — actually earns its cognitive overhead.
Zed lets you place cursors at dozens of callsites simultaneously, then invoke the inline AI assistant to transform each one with local context — a mechanical refactor pattern that Cursor's agent loop can't replicate.
AI editors like Cursor are great inside the editor. For email drafting, browser-side research, and quick model comparisons, a general AI aggregator does what coding-native tools weren't built for. Here's how I split the two.
A small team tested Cursor, Copilot, and Aider as separate review passes before human review. The useful result was not more comments, but better self-review before opening PRs.
Copilot's auto-review feature misses real bugs and flags style nits. Here's a three-pass workflow that uses Copilot for what it's good at and humans for what it isn't.
Asking the AI "why is this broken" produces plausible-but-wrong answers. A four-step structure produces useful ones. Here's the structure and what each step actually does.
Inheriting a 10-year-old codebase is its own kind of work. AI tools won't fix the legacy, but they can dramatically speed up the understanding-what-this-does phase.
Free-form prompting feels like pair programming and often isn't. A four-role structure produces better outcomes and is closer to how human pairing actually works.
AI-generated PR reviews can catch real issues or flood your team with low-signal noise. The difference is in what you ask the AI to do and how you wire it into the human review.
When AI lets one engineer ship 3x more code, the team's bottleneck moves to review. Most teams haven't adjusted. Here's what's happening and what to do.
Reviewing AI output one chunk at a time feels slower than letting it produce a feature and reviewing the diff at the end. Across many sessions, the reverse turns out to be true.
Repos with detailed READMEs work better with AI tools. The market is responding. Here's how documentation expectations have shifted.