GitHub has rolled out a model selection interface for Copilot Chat on Business and Enterprise plans. Users can now switch between Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google), and GPT-4o (OpenAI) from a dropdown in the chat panel, without leaving VS Code or the GitHub web interface.
Individual plan access to the model picker is on the roadmap for this quarter, according to GitHub’s changelog.
Why this matters
Until now, Copilot’s model was a black box. GitHub rotated underlying models without announcing it, and users had no way to request a specific one. The result was unpredictable behavior shifts between sessions, and no recourse if a model update made Copilot worse for your specific workflow.
The model picker changes this. You can now:
- Use Claude for tasks where instruction-following matters most (precise refactors, documentation generation)
- Use GPT-4o for general coding assistance where speed matters
- Use Gemini for long-context tasks where you need to load large files or entire repos into context
That’s an actual choice, not a marketing claim.
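The split above is a decision table, not an API: the picker is a dropdown in the chat panel, and there is no programmatic way to set the model per request. Still, the routing logic teams end up following can be sketched in a few lines. The task labels and the `pick_model` helper below are illustrative, not part of any Copilot interface:

```python
# Illustrative only: Copilot's model picker is a UI dropdown, not an API.
# This sketch just encodes the task-to-model mapping described above.

TASK_TO_MODEL = {
    "refactor": "Claude 3.5 Sonnet",   # precise instruction-following
    "docs": "Claude 3.5 Sonnet",       # documentation generation
    "general": "GPT-4o",               # fast, broadly capable default
    "long-context": "Gemini 1.5 Pro",  # 1M-token window for large inputs
}

def pick_model(task: str) -> str:
    """Return the suggested model for a task, defaulting to GPT-4o."""
    return TASK_TO_MODEL.get(task, "GPT-4o")
```

In practice this lives in a team's internal guidelines rather than code; the point is that the mapping is now something you control rather than something GitHub rotates under you.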
What it changes in practice
For most developers, GPT-4o will remain the default — it’s fast and broadly capable. The picker becomes useful when you hit a specific limitation.
Claude 3.5 Sonnet is notably better at following complex, multi-step instructions without improvising. If you’ve found Copilot tends to “do something close to what you asked” rather than what you actually asked, Claude is worth trying. The tradeoff is slower response times on Copilot’s infrastructure compared to direct API access.
Gemini 1.5 Pro’s 1M token context window is accessible through the picker, which opens up use cases that weren’t possible before: loading an entire large file, or multiple long files, into context without chunking.
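A quick back-of-the-envelope check tells you whether a set of files plausibly fits in that window before you paste them in. The ~4 characters per token ratio below is a rough heuristic for English-heavy text and code, not an exact tokenizer, so treat the result as an estimate:

```python
# Rough check of whether files fit in a 1M-token context window.
# CHARS_PER_TOKEN = 4 is a common heuristic, not an exact tokenizer;
# real token counts vary by content and by model.

CONTEXT_WINDOW = 1_000_000  # tokens (Gemini 1.5 Pro)
CHARS_PER_TOKEN = 4         # heuristic estimate

def estimated_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(texts: list[str], budget: int = CONTEXT_WINDOW) -> bool:
    """True if the combined token estimate stays within the budget."""
    return sum(estimated_tokens(t) for t in texts) <= budget
```

At roughly 4 characters per token, 1M tokens works out to around 4 MB of source text, which is why whole-repo or multi-file contexts become feasible without chunking.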
What it doesn’t change
The model picker applies to Copilot Chat only. Inline completions still use GitHub’s own model selection, which is not user-configurable.
That scope is the more important limitation. Autocomplete, which is how most developers interact with Copilot for the majority of their time, is unaffected; the chat panel is useful but secondary in most workflows.
Enterprise plan admins can restrict which models are available to their organization, which will matter for teams with data governance requirements around specific providers.
The competitive context
The model picker announcement comes as Cursor, Windsurf, and other Copilot competitors have offered model selection for months. Copilot Business at $19/seat has held its market position largely on distribution (GitHub integration, enterprise trust) rather than feature parity.
Adding model selection closes one meaningful gap. The more significant gaps — codebase indexing quality, multi-file editing UX, and background agent capabilities — remain. Those are harder to close with a settings menu.
For teams already on Copilot Business, the picker is a useful addition at no extra cost. For teams evaluating whether to switch from Copilot to a competitor, it narrows the delta but probably doesn’t change the outcome of that evaluation.