Claude Code from zero: installing the CLI and getting through the first useful session
Published 2026-05-11 by Owner
Most engineers who bounce off Claude Code do so in the first 20 minutes — either because install stalled on auth, or because the first prompt was bad enough that the tool looked useless. Both problems are fixable. This is the shortest path from zero to a session that actually produces something.
Installing and staying current
Claude Code is distributed as an npm package. The install command:
npm install -g @anthropic-ai/claude-code
That puts the claude binary in your npm global bin directory. To confirm it landed:
claude --version
npm vs Homebrew vs direct download. At time of writing, the official distribution channel is npm. There’s no Homebrew tap and no standalone binary download from Anthropic’s site. Some third-party taps have appeared; avoid them — they lag on updates and you lose any integrity guarantees. npm is the one that updates cleanly: npm update -g @anthropic-ai/claude-code whenever you want the latest.
If you’re on a machine where npm global installs go somewhere inconvenient (common on macOS systems without a Node version manager), the cleaner approach is to install Node via nvm or fnm first. Both put npm global bins on your PATH without root permissions, and global updates don’t require sudo.
Version pinning note. Claude Code moves fast. Features mentioned in guides from three months ago may have been replaced or renamed. Running claude --version before trusting any documentation, including this one, is a reasonable habit.
One practical detail: if claude isn’t found after install, the npm global bin directory probably isn’t on your PATH. Running npm prefix -g prints npm’s global prefix; global binaries live in its bin subdirectory (npm bin -g used to print this directly, but that subcommand was removed in npm 9). Add that directory to your shell rc file if it’s missing. On macOS with the system Node, this is a surprisingly common first stumble.
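If you want to see the mechanics, here is a sketch. The /usr/local prefix is a hypothetical stand-in; substitute whatever npm prefix -g prints on your machine:

```shell
# Put npm's global bin directory on PATH for the current shell.
# NPM_PREFIX is a hypothetical value; replace it with the output of: npm prefix -g
NPM_PREFIX="/usr/local"
export PATH="$NPM_PREFIX/bin:$PATH"

# To make it permanent, append the export line to your shell rc, e.g. ~/.zshrc.
printf '%s\n' "${PATH%%:*}"   # first PATH entry is now /usr/local/bin
```

After opening a new shell (or sourcing your rc file), claude --version should resolve.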
The auth flow on first run
Running claude in any directory starts an interactive session. On first run, it prompts you to authenticate.
There are two auth paths:
Path 1: Claude.ai subscription (Pro or Max plan). The CLI opens a browser URL and you complete an OAuth flow against claude.ai. Your session token gets written to a credentials file in your home directory — on macOS and Linux, that’s typically ~/.config/claude/ or a similar XDG-compliant path. The CLI will tell you the exact path if you look at the output during auth.
Path 2: Anthropic API key. If you have API access with billing enabled, you can set ANTHROPIC_API_KEY in your environment and the CLI uses that instead of the OAuth flow. This is what CI/CD usage and scripted invocations typically use. The API key path is also useful if you’re switching between multiple Anthropic accounts or want to track usage separately.
export ANTHROPIC_API_KEY=sk-ant-...
claude
The credentials file holds your session token, not your password. Losing it means you re-auth; it doesn’t expose your account in a serious way. That said, don’t commit ~/.config/claude/ to git, and if you’re running Claude Code in a shared development environment, check that the credentials file isn’t world-readable. On Linux, chmod 600 ~/.config/claude/credentials is the right posture if the file was created with permissive defaults.
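A minimal hardening sketch, assuming the XDG-style path mentioned above (the exact location can vary by platform and version, so check what the CLI reports during auth):

```shell
# Restrict the credentials directory and its files to the owner.
# CRED_DIR assumes the typical XDG location; adjust if your CLI reports another path.
CRED_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/claude"
if [ -d "$CRED_DIR" ]; then
  chmod 700 "$CRED_DIR"                          # owner-only directory
  find "$CRED_DIR" -type f -exec chmod 600 {} +  # owner-only files
fi
```

The if guard makes this safe to run even before the directory exists.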
The billing model differs between the two paths. With a Claude.ai subscription, usage counts against your plan’s included capacity. With an API key, usage is metered per token and charged to your Anthropic account. For casual personal use, the subscription path is usually cheaper at current prices. For team automation or high-volume scripting, the API key path gives you more control over cost attribution.
One thing that trips people up: the CLI requires an active network connection to start a session, even after the first auth. Claude Code is a thin client; it doesn’t run a model locally. Every prompt goes to Anthropic’s API. If you’re trying to use it on a flight, don’t expect it to work.
The minimum useful command
Open a terminal in a repository you know well. This part matters: don’t start in an unfamiliar codebase.
cd /path/to/your-project
claude
The first prompt determines whether the session is useful. The common mistake is asking something vague like “look at this project and improve it.” That’s not a prompt; it’s an invitation for the model to thrash.
A concrete example of a good first prompt:
I need to add input validation to the createUser function in src/api/users.ts.
The function currently accepts a plain object. I want it to reject requests
where email is missing or not a valid email format, and where name is an empty
string. Return a 400 with a message describing the first failing field.
Don't add a new library — we already use zod, check package.json.
What makes this prompt work:
- Specific file and function name
- Describes current state and desired state
- Specifies the error contract
- Mentions an existing dependency to avoid redundant installs
Claude Code will read the file, check package.json, and produce a diff. It may ask a clarifying question before writing. Both are correct behavior — the tool isn’t supposed to hallucinate your intent.
The prompt format that works consistently: current state → desired state → constraints. The middle part (desired state) is the part people usually include. The outer two are what most prompts skip, and skipping them is where the thrashing comes from. “Current state” tells the model what it’s looking at without making assumptions. “Constraints” (existing dependencies, error format, style conventions) prunes the solution space to what fits your codebase.
A bad version of the same prompt: “add validation to createUser.” This will produce something — maybe correct, maybe not — but the model has to guess the schema, the error response format, and whether to add a library. Each guess is a place where the output diverges from what you wanted. The extra 30 seconds writing a better prompt saves 5 minutes of editing the output.
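The three-part shape can be kept around as a literal template. A sketch, filled in with the createUser example from above (all concrete values are illustrative; swap in your own file and field names):

```shell
# Assemble a prompt in the current state -> desired state -> constraints shape.
# Every concrete value here is illustrative, taken from the example above.
PROMPT=$(cat <<'EOF'
Current state: createUser in src/api/users.ts accepts a plain object with no validation.
Desired state: reject a missing or invalid email and an empty name, returning a 400
whose message names the first failing field.
Constraints: use the zod dependency already in package.json; do not add libraries;
keep the existing error response format.
EOF
)
printf '%s\n' "$PROMPT"
```

Filling three labeled slots forces you to state the two parts that vague prompts skip.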
Five things first-time users get wrong
1. “Fix everything.” Variants: “clean up this codebase”, “make the tests pass”, “review all the issues.” Asking for broad open-ended tasks produces broad, often contradictory changes. The autonomous loop doesn’t know which of your 47 failing tests are in-progress features and which are genuine regressions. Start narrower than you think you need to.
2. Running it in an unfamiliar repository. Claude Code reads files, runs commands, and makes changes. If you don’t know what the test suite does, you can’t tell whether the changes it makes are correct. The tool amplifies what you bring: if you’d catch a bad change in code review, you’ll catch a bad change from Claude Code. If you wouldn’t, you won’t.
3. Forgetting about permission modes. By default, Claude Code asks before running terminal commands. This is intentional — an AI running rm -rf in your home directory without asking is worse than one that asks first. Some users, frustrated by the prompts, find flags that disable confirmations. That’s fine for trusted contexts. It’s not fine when you haven’t read what the session is about to do.
4. Treating it like a chat interface. Claude Code is a coding agent, not a chat window. Prompts like “what do you think about using Redis here?” waste the tool’s specific strengths. Use the web interface or the API directly for architecture discussions. Use the CLI when you want file reads, diffs, and command runs — i.e., when you want work done, not opinions.
5. Expecting the autonomous loop to be one-shot. “Do this whole feature” and then leaving it alone for 30 minutes is rarely a good idea until you’ve built enough trust with the tool to know its failure modes in your specific codebase. The reliable pattern is: prompt, review the plan or first diff, confirm or redirect, then let it continue. A few short feedback loops beat one long session you have to revert entirely.
This last one is the most consequential. Claude Code has a mode where it operates more autonomously without confirmation prompts. That mode is genuinely useful — after you’ve run enough sessions to know that the tool’s judgment in your codebase is trustworthy. Using it as the default posture on day one, before that trust is established, tends to produce a large diff that’s 80% correct and 20% wrong in ways that take longer to untangle than the original task would have taken to do by hand.
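The confirmation behavior in points 3 and 5 is controlled by flags. A sketch of the relevant invocations; the flag names below reflect the CLI at time of writing and may change, so verify against claude --help before relying on them:

```shell
claude                                  # default: asks before running commands or editing files
claude --permission-mode plan           # propose a plan first, before making any edits
claude --dangerously-skip-permissions   # no confirmation prompts; trusted, disposable environments only
```

The middle option is a reasonable intermediate step while you build the trust described above.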
When to leave the CLI for the IDE extension
Claude Code has IDE extensions for VS Code and JetBrains. The CLI and the extension run the same underlying agent, but the surfaces are different.
Use the CLI when:
- Working in a repo where the IDE isn’t configured (servers, containers, quick edits)
- Running automated or scripted sessions
- Doing a focused task that doesn’t require jumping between files visually
- Piping output somewhere or integrating with other CLI tools
Use the IDE extension when:
- You’re already in the IDE and want the agent alongside your open files
- The task involves reading a lot of code in context — the extension renders diffs inline, which is easier to review than text in a terminal
- You’re doing a multi-file refactor where being able to see affected files matters for sign-off
The extension doesn’t offer capabilities the CLI lacks, or vice versa. It’s purely about where your attention is. If you find yourself context-switching between terminal and editor to check changes, the extension removes that friction.
One thing the CLI does better: anything involving pipes, env vars, or scripts. claude --print "..." produces plain-text output you can redirect or process. The extension’s output lives in a panel, not stdout.
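A couple of sketches of that scripted use; these invoke the real CLI, so they assume an authenticated shell and will consume usage:

```shell
# Redirect a one-shot answer to a file.
claude --print "List the TODO comments under src/ with file and line" > todos.txt

# Pipe context in on stdin; a common pattern for summarizing diffs.
git diff | claude --print "Summarize this diff in two sentences"
```

Because the output is plain text on stdout, it composes with grep, tee, and whatever else your scripts already use.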
Starting with low-stakes tasks
The first few sessions should be on code you wrote, on a branch, with clean git state before you start. That combination means:
- You know when the output is correct
- Reversing a bad session is git checkout .
- Nothing in git diff is ambiguous about its source
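The revert path is worth seeing end-to-end. A scratch-repo sketch (the branch name is made up; the point is that a clean tree plus a branch makes one command a complete undo):

```shell
# Demonstrate the safety net in a throwaway repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "original" > app.txt
git add . && git commit -qm "baseline"   # clean state before the session
git checkout -q -b claude-experiment     # hypothetical branch name
echo "session went sideways" > app.txt   # stand-in for a bad diff
git checkout -- .                        # one command restores the baseline
cat app.txt                              # prints: original
```

Run a first session on a branch like this and the worst case is a deleted directory, not a polluted main.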
Good first tasks: adding a specific validation, writing a test for a function that already exists, converting a callback-style async function to async/await. These are bounded, verifiable, and produce diffs small enough to read in full.
Bad first tasks: architectural changes, anything involving database schema migrations, anything that touches auth or security boundaries. Not because Claude Code can’t help with those — it can — but because verifying the output requires enough context to be certain, and that context takes time to build.
Claude Code is at its best when the subject-matter expert is in the loop and the tool handles the mechanical execution. The sessions that go wrong tend to be the ones where the human in the loop has stopped reading the diffs.
One thing worth knowing early: the model will sometimes produce code that is syntactically correct and passes the tests but is wrong in a subtle semantic way, one that only shows up against a domain constraint it didn’t know about. This isn’t a flaw specific to Claude Code — it’s true of any AI coding tool. The mitigation is reading the output with the same skepticism you’d apply to a junior developer’s PR. Not paranoia, just normal review.
Once you’ve used it enough to know where it’s reliable in your setup — which file types, which kinds of tasks, which prompt shapes — expanding scope becomes a matter of calibrated trust rather than hoping for the best.