GitHub Copilot Chat slash commands: the ones that earn their keep
Published 2026-04-04 by Owner
Copilot Chat exposes slash commands as a way to invoke specific behaviors. The full list (/explain, /fix, /tests, /doc, /optimize, /clean, /new, /help, /clear, /api, /workspace, /vscode) reads like a checklist of plausible features. In practice, three of them carry their weight; the rest add UI clutter.
/tests — high value
Of all the slash commands, this is the one I use most. The flow:
- Open a file (or select a function)
- Run /tests
- Copilot generates a test file or test cases for the selection
The output is generally good. For pure functions and well-bounded code, the tests are usable as-is. For functions with complex dependencies, the tests typically need adjustment but provide a useful skeleton.
What /tests does well that just-asking-for-tests in chat doesn’t: it picks a test framework that matches your project (vitest if you have vitest, jest if you have jest, etc.) without you specifying. It uses your existing test patterns if there are tests in the same directory. The defaults are reasonable.
The pattern that makes this most useful: write the function, run /tests, accept and refine. Don’t ask /tests to validate behavior you haven’t specified — it can only test what’s in the code, so missing edge cases stay missing.
/explain — medium value
/explain produces a natural-language description of selected code. Useful for:
- Understanding code you didn’t write
- Generating a comment block from a complex function
- Catching up on legacy code in an unfamiliar area
The output is verbose by default. Lots of “this function takes an X and returns a Y.” The signal-to-noise ratio is mediocre.
It’s most useful when you ask follow-up questions. Running /explain, then asking “what would happen if the input is empty?”, is a useful pattern. The first command frames the context; the follow-up extracts the specific information you wanted.
For code you wrote yourself, /explain is rarely worth the keystrokes. You already know what it does. For someone else’s code, it’s a faster way to grok unfamiliar logic than reading line-by-line.
/fix — medium value, situational
/fix looks at diagnostics in the current file and proposes fixes. When the diagnostics are clean (type errors, ESLint warnings, simple syntax errors), it works well. The fix is usually correct.
When the diagnostics are subtle (type narrowing issues, complex generic constraints), /fix produces something that suppresses the diagnostic without addressing the underlying issue. The most common pattern: adding a type assertion that silences the error.
I use /fix for the obvious cases — simple typos, unused imports, missing await. For anything that requires reasoning about why the diagnostic exists, I describe the issue in chat instead. The chat-form prompt produces better fixes because it lets me explain context.
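The assertion-that-silences pattern looks something like this. The lookup function `getSetting` is hypothetical, but the shape is typical for a `string | undefined` diagnostic:

```typescript
// Hypothetical config store: Map.get returns string | undefined.
const config = new Map<string, string>([["host", "localhost"]]);

function getSetting(key: string): string {
  const value = config.get(key); // type: string | undefined

  // What /fix tends to produce: an assertion that silences the
  // diagnostic but lets undefined leak through for missing keys.
  //   return value as string;

  // The fix that reasons about *why* the diagnostic exists:
  if (value === undefined) {
    throw new Error(`missing setting: ${key}`);
  }
  return value;
}
```

Both versions make the red squiggle go away; only one of them decides what a missing key should actually do.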
/doc — low value
/doc adds documentation comments to selected code. The output is generally bland — describing types and parameter names without adding context.
Example output for a typical function:
```typescript
/**
 * Calculates the total price for a list of items.
 *
 * @param items - The items to calculate the total for
 * @param taxRate - The tax rate to apply
 * @returns The total price
 */
function calculateTotal(items: Item[], taxRate: number): number { ... }
```
This adds zero information beyond what the type signature already provides. Documentation that doesn’t add information is noise.
When /doc is useful: legacy code in dynamic languages where the types aren’t expressed. JavaScript without types, Ruby, Python without type hints. Here, the doc adds the type information that’s not otherwise available.
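In untyped JavaScript, the same style of comment does carry information, because the types and units live nowhere else. A hypothetical example (the function and its unit conventions are mine, not /doc output):

```javascript
/**
 * Applies a percentage discount to a price.
 *
 * @param {number} price - Unit price in cents
 * @param {number} percent - Discount as a whole number (e.g. 15 for 15%)
 * @returns {number} Discounted price in cents, rounded down
 */
function applyDiscount(price, percent) {
  return Math.floor(price * (1 - percent / 100));
}
```

Here the `@param` types and the cents-vs-dollars convention aren't recoverable from the signature, so the comment earns its place.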
For typed code, skip it.
/optimize — low value, often misleading
/optimize proposes performance improvements to selected code. The proposals are often:
- Genuinely incorrect (“use Array.from instead of map” — they have similar performance)
- Trivially better in microbenchmarks but irrelevant in practice
- Premature optimizations that hurt readability
The flagging quality is mixed. Real performance issues (N+1 queries, accidental O(n²) loops, missing memoization in hot paths) get caught about half the time. Non-issues get flagged regularly.
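The accidental-O(n²) case is the one worth recognizing on sight. A hypothetical dedupe, in both shapes — this is my illustration of the class of issue, not captured /optimize output:

```typescript
// Accidental O(n²): Array.includes scans the output array on every push.
function dedupeSlow(items: string[]): string[] {
  const out: string[] = [];
  for (const item of items) {
    if (!out.includes(item)) out.push(item);
  }
  return out;
}

// O(n): a Set makes membership checks effectively constant-time,
// and Set preserves insertion order, so first occurrences win either way.
function dedupeFast(items: string[]): string[] {
  return [...new Set(items)];
}
```

Both return the same result; only the scaling differs, and only on large inputs.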
If you want real performance review, use a profiler and a human. Don’t rely on /optimize.
/workspace — useful in concept, limited in execution
/workspace supposedly searches your whole codebase to answer questions. In practice, the search is approximate and the results are mediocre on large codebases.
What works: questions about the codebase’s structure (“where is auth handled,” “what tests exist for the user module”). The answer is usually a list of files that’s roughly right.
What doesn’t: questions requiring synthesis across multiple files. “Why does this fail?” “What’s the call chain when X happens?” The model can’t reliably trace through call graphs in a way that reasons about behavior.
For Copilot Enterprise users with the workspace indexing turned on, /workspace is more useful — indexing produces better search. For Copilot Individual or Business without the index, /workspace is a “best guess” search that’s right often enough to be tempting but wrong often enough to mislead.
/clean — tempting and dangerous
/clean rewrites code to be “cleaner.” This sounds appealing. The result is often:
- Reformatted to a different style (which fights with your formatter)
- Refactored in ways that change behavior subtly
- Made “more idiomatic” in ways that don’t match the project’s idioms
I’ve stopped using /clean. The risk-to-benefit ratio is too high. If I want to refactor something, I describe what I want refactored rather than invoking a vague “make it better” command.
/new — useful when you remember
/new scaffolds a new file or project from a description. For project scaffolding, you’d usually use a CLI (create-next-app, cargo new, etc.). For new files within a project, the patterns from the rest of the project usually carry the right shape, so /new adds little.
The narrow case where /new shines: scaffolding a file pattern that’s complex enough to be tedious but not common enough to have a generator. A test fixture file with specific shape, a config file that requires domain knowledge, a migration template. For these, /new saves real time.
What’s missing
A /test-failure command — given a failing test, propose changes to make it pass — would be useful and isn’t in the default set. The chat panel handles this fine (paste the failure, ask), but a slash command would streamline it.
A /diff-explain command — given the changes in a PR, produce a summary — would be useful. The closest is asking in chat, which works but isn’t first-class.
For now, slash commands are a useful but limited toolkit. The three winners (/tests, /explain, /fix) are the ones that fit common workflows well. For the rest, you’d be surprised how often raw chat outperforms them.