I built a real-time multiplayer game over five weeks using Phoenix LiveView and Zed. The choice of Zed was deliberate — I wanted a fast editor with good vim support, and I’d been hearing positive things about its built-in AI panel.
The result: I’d use Zed again for Elixir work. The AI was useful in places but not transformative. The editor itself was the bigger win.
The project
A turn-based multiplayer game with real-time updates:
- Phoenix 1.7 with LiveView
- Ecto + Postgres for persistence
- Phoenix Presence for online player tracking
- About 5500 lines of Elixir
Stack: typical modern Elixir/Phoenix.
What Zed got right (the editor part)
Independent of the AI features, Zed was excellent for this work:
Fast on a small project. Zed’s snappiness shows even on smaller codebases: sub-second startup, instant search, no lag while typing.
Strong vim mode. Better than the IdeaVim plugin in IntelliJ; better than the vim emulator in VS Code. Macros worked. Text objects worked. Vim motions composed cleanly with Zed’s own commands.
Good Elixir-LSP integration. ElixirLS is solid; Zed talked to it cleanly. Go-to-definition, autocomplete, error reporting — all worked.
Clean panes for split work. I often had a test file in one pane, the source in another, and a terminal at the bottom. Zed’s pane management is smooth.
Purely as an editor, Zed beat my prior setups (VS Code with extensions, JetBrains’ RubyMine with the Elixir plugin) by a meaningful margin.
What Zed’s AI did well
The assistant panel was useful in specific ways:
Explaining unfamiliar Elixir patterns. When I encountered code in the existing project I didn’t immediately understand, asking the assistant produced reasonable explanations. The AI’s Elixir knowledge is decent for explanation tasks.
Generating test code. ExUnit is well represented in training data. Given an existing function, the assistant could generate tests that were structurally sound.
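To give a sense of the shape, here’s a condensed version of the kind of test it produced reliably; `MyGame.Game`, `new/0`, `join/2`, and the four-player cap are hypothetical stand-ins for the project’s real modules:

```elixir
defmodule MyGame.GameTest do
  use ExUnit.Case, async: true

  alias MyGame.Game

  describe "join/2" do
    test "adds a player to an open game" do
      game = Game.new()

      assert {:ok, game} = Game.join(game, "alice")
      assert "alice" in game.players
    end

    test "rejects a player when the game is full" do
      # Assumes a four-player cap; adjust to the real rule.
      full =
        Enum.reduce(1..4, Game.new(), fn i, game ->
          {:ok, game} = Game.join(game, "player#{i}")
          game
        end)

      assert {:error, :game_full} = Game.join(full, "one_too_many")
    end
  end
end
```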
Refactoring within a file. Cmd+? on selected code with a refactoring instruction produced workable results for routine refactors.
Generating Ecto schemas. From a description, the assistant could produce reasonable Ecto schemas with associations and validations.
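A sketch of what that looked like, with hypothetical names (`MyGame.Games.Game`, a `winner` association, a `moves` has_many); output of this shape needed little correction:

```elixir
defmodule MyGame.Games.Game do
  use Ecto.Schema
  import Ecto.Changeset

  schema "games" do
    field :name, :string
    field :status, Ecto.Enum, values: [:waiting, :active, :finished]

    belongs_to :winner, MyGame.Accounts.User
    has_many :moves, MyGame.Games.Move

    timestamps()
  end

  def changeset(game, attrs) do
    game
    |> cast(attrs, [:name, :status, :winner_id])
    |> validate_required([:name, :status])
    |> validate_length(:name, min: 3, max: 40)
    |> foreign_key_constraint(:winner_id)
  end
end
```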
What Zed’s AI did less well
LiveView component logic. LiveView’s stateful components have specific patterns (mount/render lifecycle, event handlers, assigns). The assistant’s training on these is uneven. About 40% of suggestions had structural issues.
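For reference, the lifecycle the assistant kept fumbling is small. A minimal sketch of a live component, with a hypothetical `ScoreboardComponent` in an assumed `MyGameWeb` namespace and a `@scores` assign passed in by the parent:

```elixir
defmodule MyGameWeb.ScoreboardComponent do
  use MyGameWeb, :live_component

  # mount/1 runs once, when the component is first added to the page.
  @impl true
  def mount(socket) do
    {:ok, assign(socket, expanded: false)}
  end

  # Events from elements tagged phx-target={@myself} arrive here,
  # not in the parent LiveView.
  @impl true
  def handle_event("toggle", _params, socket) do
    {:noreply, assign(socket, expanded: !socket.assigns.expanded)}
  end

  @impl true
  def render(assigns) do
    ~H"""
    <div>
      <button phx-click="toggle" phx-target={@myself}>Scores</button>
      <ul :if={@expanded}>
        <li :for={{name, score} <- @scores}><%= name %>: <%= score %></li>
      </ul>
    </div>
    """
  end
end
```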
OTP patterns. GenServers, supervisors, supervision trees: the higher-level OTP patterns are niche enough that suggestions were often subtly off. They could serve as a starting point but were rarely correct as written.
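For context, the kind of code in question. A minimal sketch of a per-game session process, assuming a hypothetical unique `Registry` named `MyGame.GameRegistry` started in the application supervisor:

```elixir
defmodule MyGame.GameSession do
  use GenServer

  ## Client API

  def start_link(opts) do
    game_id = Keyword.fetch!(opts, :game_id)
    GenServer.start_link(__MODULE__, game_id, name: via(game_id))
  end

  def make_move(game_id, player, move) do
    GenServer.call(via(game_id), {:move, player, move})
  end

  # One process per game, looked up by id through the Registry.
  defp via(game_id), do: {:via, Registry, {MyGame.GameRegistry, game_id}}

  ## Server callbacks

  @impl true
  def init(game_id) do
    {:ok, %{game_id: game_id, moves: []}}
  end

  @impl true
  def handle_call({:move, player, move}, _from, state) do
    {:reply, :ok, %{state | moves: [{player, move} | state.moves]}}
  end
end
```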
Complex Ecto queries. Simple queries were fine. Multi-join queries with subqueries and CTEs had issues — sometimes syntactically wrong, sometimes semantically wrong.
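To mark the threshold: a join-plus-aggregate query like the sketch below (reusing the hypothetical schema names from earlier) was near the edge of what the assistant handled reliably, and anything involving `with_cte/3` was past it. Grouping by `u.id` while selecting `u.name` leans on a Postgres-specific rule:

```elixir
defmodule MyGame.Stats do
  import Ecto.Query

  alias MyGame.Games.Game
  alias MyGame.Repo

  # Win counts per player across finished games. Postgres permits
  # selecting u.name while grouping only by the primary key u.id.
  def leaderboard do
    Repo.all(
      from(g in Game,
        join: u in assoc(g, :winner),
        where: g.status == ^:finished,
        group_by: u.id,
        order_by: [desc: count(g.id)],
        select: %{player: u.name, wins: count(g.id)}
      )
    )
  end
end
```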
Pattern matching nuances. Elixir’s pattern matching is a key feature. The assistant occasionally produced patterns that were technically valid but missed obvious match cases or had subtle ordering issues.
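A condensed illustration of the ordering issue, with a hypothetical `describe/1`: a catch-all clause placed first silently shadows every clause after it, and the assistant occasionally produced exactly that shape:

```elixir
defmodule MyGame.Moves do
  # What the assistant sometimes produced: a catch-all first.
  # The clauses below it can never match, and the compiler only
  # warns, so it's easy to miss in review.
  #
  #   def describe(_move), do: "unknown move"
  #   def describe({:place, tile}), do: "placed #{tile}"
  #
  # Correct ordering: specific clauses first, catch-all last.
  def describe({:place, tile}), do: "placed #{tile}"
  def describe(:pass), do: "passed"
  def describe(_move), do: "unknown move"
end
```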
Comparing to my TypeScript experience
For a TypeScript project of similar scope, AI tools (Cursor, Cline) typically save me 40-50% of my time. For this Elixir project, the savings were more like 20-25%.
The gap reflects what I’d seen with Flutter/Dart — niche languages with thinner training data produce less reliable AI assistance. Elixir’s community is healthy but smaller than JavaScript’s, and it shows.
A specific friction
LiveView’s component model differs from React’s component model. Both have components, both have state, both have events — but the lifecycle and assigns patterns are different.
Zed’s assistant frequently described LiveView components in React terms. “This is like a controlled component in React.” Sometimes the analogy helped; sometimes it led the assistant to suggest patterns that don’t translate.
A specific example: I asked for a debounced input handler. The assistant suggested useEffect-style cleanup logic. LiveView doesn’t have useEffect; the equivalent is Process.send_after/3 and assigns updates. The first attempt was structurally wrong; correcting it took a few iterations.
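For the record, the working pattern ended up looking roughly like this. A sketch of a debounced-search LiveView, assuming a hypothetical `MyGameWeb.SearchLive` module and with `MyGame.Search.run/1` standing in for the real search call:

```elixir
defmodule MyGameWeb.SearchLive do
  use MyGameWeb, :live_view

  @debounce_ms 300

  def mount(_params, _session, socket) do
    {:ok, assign(socket, query: "", results: [], search_timer: nil)}
  end

  def handle_event("search", %{"q" => query}, socket) do
    # Cancel the pending timer so only the last keystroke triggers a search.
    if t = socket.assigns.search_timer, do: Process.cancel_timer(t)

    timer = Process.send_after(self(), {:run_search, query}, @debounce_ms)
    {:noreply, assign(socket, query: query, search_timer: timer)}
  end

  def handle_info({:run_search, query}, socket) do
    # MyGame.Search.run/1 is a stand-in for the real search call.
    {:noreply, assign(socket, results: MyGame.Search.run(query), search_timer: nil)}
  end

  def render(assigns) do
    ~H"""
    <form phx-change="search">
      <input type="text" name="q" value={@query} />
    </form>
    <ul>
      <li :for={r <- @results}><%= r %></li>
    </ul>
    """
  end
end
```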
For mainstream patterns where the assistant has clear training, suggestions are good. For Elixir-specific patterns where the assistant might confuse them with similar patterns from other languages, suggestions need careful review.
What I configured
Zed-specific config that helped:
```json
// ~/.config/zed/settings.json (excerpt)
{
  "languages": {
    "Elixir": {
      "language_servers": ["elixir-ls"],
      "format_on_save": "on",
      "tab_size": 2
    },
    "HEEX": {
      "language_servers": ["elixir-ls"],
      "format_on_save": "on"
    }
  },
  "assistant": {
    "default_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet"
    }
  }
}
```
The HEEX entry matters: Phoenix templates use HEEx syntax, and Zed needs to be told which language server handles it.
Productivity numbers
Estimated time: 7 weeks. Actual: 5 weeks. Saving: ~2 weeks.
Subscription cost: none; I used my existing Anthropic API key (BYOK). API spend during the project: ~$28.
The 2-week savings was meaningful. The split was roughly:
- 1 week saved on routine implementation (Ecto schemas, basic CRUD, tests)
- 0.5 week saved on debugging and research (assistant explanations were faster than searching)
- 0.5 week saved on documentation
The remaining work — LiveView components, OTP patterns, complex queries — wasn’t meaningfully accelerated. That work remained at human speed.
What I’d do differently
If I were starting again:
More aggressive use of pinned context. Zed lets you pin files to the assistant’s context. Pinning my reference LiveView modules at the start of each session would have improved suggestions.
A more comprehensive assistant prompt. I used a mostly-default system prompt. A custom prompt that explicitly described LiveView’s lifecycle and OTP patterns would have helped suggestions match the project’s conventions.
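Something like this is what I’d start with now; the phrasing is illustrative, not tested:

```
You are assisting on a Phoenix 1.7 / LiveView project.
- LiveView components use the mount/update/render lifecycle with assigns.
  Do not suggest React patterns (hooks, useEffect, controlled components).
- Game state lives in GenServer processes under a supervision tree.
- Prefer Process.send_after/3 over client-side timers for delayed work.
```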
Earlier acceptance that some tasks are manual. I tried to use the assistant for several LiveView component tasks early on. Most attempts wasted time. Manual implementation was faster.
Worth using Zed for Elixir?
Yes, especially compared to VS Code. The editor itself is meaningfully better. The AI assistant is comparable to other editors’ AI features (not better, not noticeably worse) for Elixir specifically.
For Elixir teams, my recommendation:
- Try Zed for two weeks
- The editor improvements alone may justify the switch
- The AI is bonus value; calibrate expectations to “useful for routine work” rather than “transformative”
For teams sticking with VS Code or other editors, the AI tooling experience won’t change much. Cursor on Elixir and Zed on Elixir offer similar AI capability; the difference is the editor around it.
What’s next for Elixir + AI
The interesting question is whether Elixir’s AI tooling story improves. Two factors:
Training data growth. As more Elixir code is written and shared publicly, the training data improves. This is happening but slowly.
Specialized models. A model fine-tuned on Elixir/Phoenix patterns would help. None exists publicly that I know of. The Elixir community might benefit from one.
For now, Elixir + AI is workable but not a flagship experience. The community is small enough that vendor investment in Elixir-specific features is limited.
A note on Zed itself
I started this project skeptical of switching editors. By the end, I was sold on Zed. Not for the AI — for the editor.
This was a useful re-frame. AI features are sometimes the headline, but the underlying editor’s quality matters too. A great editor with mediocre AI can be better than a mediocre editor with great AI for some workflows.
For Elixir work specifically — where the AI is mediocre across all editors due to language coverage — the editor’s quality determines the experience. Zed’s editor quality made this project pleasant to work in regardless of the AI’s ups and downs.