Tinker AI

Resuming Claude Code sessions without re-explaining the entire task

Published 2026-05-11 by Owner

The problem with multi-session work isn’t forgetting where you left off — it’s explaining it again. You finished work on a feature branch, closed the terminal, came back the next morning, and now you’re typing a two-paragraph summary of what the agent already knows because the conversation is gone.

Claude Code has session persistence built in. The conversation history lives on disk; you can resume the most recent session, or a specific earlier one, without any re-explanation. This guide covers how that works, when it’s reliable, and where it breaks down.

Where sessions live and how to work with them

Claude Code writes conversation history to a local directory in your home folder. On macOS and Linux the location is roughly ~/.claude/projects/, with subdirectories keyed to your working directory. Each session is stored as a JSONL file — one JSON object per turn. You don’t need to read these directly, but it’s useful to know they exist and that they persist until you explicitly clean them up or they age out.
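To make the layout concrete, here is a small sketch that lists session files newest-first, the way the resume picker orders them. This assumes the `~/.claude/projects/<project-dir>/<session>.jsonl` layout described above (the exact directory naming can vary by version); the demo runs against a throwaway directory so it works anywhere.

```python
import tempfile, time
from pathlib import Path

def list_sessions(projects_dir: Path) -> list[Path]:
    """Return session JSONL files under projects_dir, newest first by mtime."""
    return sorted(projects_dir.glob("*/*.jsonl"),
                  key=lambda p: p.stat().st_mtime, reverse=True)

# Demo on a throwaway directory; "-Users-me-myrepo" is a hypothetical
# project subdirectory name, not a guaranteed real one.
root = Path(tempfile.mkdtemp())
proj = root / "-Users-me-myrepo"
proj.mkdir()
(proj / "old-session.jsonl").write_text('{"role": "user", "content": "hi"}\n')
time.sleep(0.05)
(proj / "new-session.jsonl").write_text('{"role": "user", "content": "resume"}\n')

sessions = list_sessions(root)
print([p.name for p in sessions])  # ['new-session.jsonl', 'old-session.jsonl']
```

Sorting by modification time is also a quick way to spot stale sessions worth cleaning up.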

The two flags you need:

# Resume the most recent session for the current working directory
claude --continue

# List recent sessions so you can pick a specific one
claude --resume

--continue is the one you’ll use daily. It loads the most recent session for whatever directory you’re in and picks up the conversation as if you never stopped.

--resume shows an interactive picker: a short list of recent sessions with timestamps. Select one and the conversation loads from that point.

If you know the session ID (it's visible in the picker and in the filenames under ~/.claude/projects/), you can pass it directly:

claude --resume <session-id>

The practical workflow: claude --continue for the common case, claude --resume when you’ve been juggling multiple workstreams and need a session from a few days ago.

When resume actually works

The cases where resume is reliable:

Paused work mid-task. You started a refactor, got pulled into a meeting, came back an hour later. The code hasn't changed while you were gone; the agent's mental model of the task is still accurate. Run claude --continue, type "pick up where we left off", and it works exactly as expected.

Picking up after a good night’s sleep. Short-horizon tasks (a few hours of work, an overnight break) resume cleanly. The context is intact, the task scope hasn’t shifted, and the model’s understanding of the codebase is still current.

Tasks where the full history matters. Some tasks have a long trail of decisions: “we tried approach A, it failed because X, then we tried B, it’s working but has problem Y.” Resuming keeps that history visible rather than forcing you to reconstruct it from scratch.

When the model version hasn’t changed. Sessions from last week using the same Claude model resume smoothly. The behavior and capabilities are consistent.

When resume doesn’t work well

The failure modes are predictable once you know them:

The code changed under the agent’s feet. This is the most common problem. You resumed a session, but in the meantime you merged a PR, another developer pushed changes, or you manually edited files the agent worked on. The agent’s mental model of the code no longer matches what’s on disk. It will confidently reference state that doesn’t exist. The tell is when the agent says “as we discussed, the function looks like X” and X is what it was three commits ago. At that point, starting a fresh session and giving a brief re-summary is faster than debugging the mismatch.

Task scope has shifted. You started a session to add feature A. Now you want to add feature B that depends on A but is meaningfully different work. Resuming the old session adds noise rather than context. The conversation is full of reasoning about A that doesn’t apply to B. Start fresh with a summary of what A produced, not the full history of how you got there.

Model deprecated since the session. If a model version is retired or updated between sessions, resume may work mechanically but the model behavior can drift enough to be confusing. Earlier turns reflect a model that no longer exists. For long-running projects, this matters more than people expect.

After compaction. See below — this is its own case.

The compaction step on long sessions

Claude Code automatically compacts long conversations when the context window fills up. Compaction summarizes earlier turns and drops the raw exchange, replacing it with a condensed representation.

What this means in practice: after compaction, the session still loads, but the model’s access to earlier turns is filtered through a summary rather than the original. Specific details — the exact error message from turn 12, the precise file path discussed early on, an assumption you corrected three hours ago — may be gone. The summary preserves the broad strokes but loses precision.

How to tell compaction happened: Claude Code shows a notice in the conversation when compaction fires. The conversation history in the JSONL file will have a summary block at the point where earlier turns were collapsed. The raw turns before that point are no longer in the active context.

What gets kept: goals, major decisions, the current state of the task, any explicit instructions given. What gets dropped: intermediate reasoning, dead-ends explored, turn-by-turn dialogue, the specific wording of errors seen early on.
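The keep/drop trade-off can be illustrated with a toy version of the operation. To be clear, this is a conceptual sketch, not Claude Code's actual compaction algorithm (which is internal): it collapses everything except the last few turns into a single summary turn, which is exactly why a specific early instruction can vanish.

```python
def compact(turns: list[dict], keep_recent: int, summarize) -> list[dict]:
    """Collapse all but the last keep_recent turns into one summary turn.
    Illustrative only; the real compaction logic is internal to Claude Code."""
    if len(turns) <= keep_recent:
        return turns
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = {"role": "system",
               "content": "Summary of earlier turns: " + summarize(old)}
    return [summary] + recent

turns = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compacted = compact(turns, keep_recent=4,
                    summarize=lambda old: f"{len(old)} turns condensed")
print(len(compacted))           # 5: one summary turn + the 4 most recent
print(compacted[0]["content"])  # Summary of earlier turns: 6 turns condensed
```

Notice that whatever `summarize` fails to mention is simply gone from the active context; the recent turns survive verbatim, the old ones only as the summary.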

The scenario where compaction causes real problems: you said something like “don’t use the useCallback pattern — we established that’s not idiomatic in this codebase” in turn 8. Compaction fires at turn 45. The model summarizes “user prefers certain React patterns” but loses the specific prohibition. Then you resume the next day, and by turn 50 the model is back to suggesting useCallback. The original instruction is gone. You don’t remember saying it — it was yesterday. And so the conversation drifts.

The mitigation is the same regardless of whether you suspect compaction happened: at the start of a resumed session, restate the constraints that matter. Not the full history — just the specific instructions that are still in force. One or two sentences is usually enough.

If you resume after compaction and the model seems to be missing something important, paste the relevant detail back in. Compaction is not a failure mode; it's the tool managing context limits. But the behavior is different from a fresh-context session, and knowing to expect it changes how you use resume.

One habit worth building: the session-end memo

The best thing I’ve found for making session resume reliable is leaving a memo at the end of each session. Before closing the terminal, send one last message:

Summarize where we are: what's done, what's next, any blockers or open questions.
Keep it short — this is for me to re-read when I resume tomorrow.

The model produces a compact status summary. Paste that into a comment in the relevant file, a scratch note, or just leave it as the last message in the chat.

When you resume, that memo is the most recent context in the conversation. The model sees it; you see it. No re-explanation needed — just read the memo, confirm it’s still accurate, and continue.

A concrete example of what a good memo looks like:

Status as of this session:
- Finished: migrated auth middleware to use JWT (src/middleware/auth.ts)
- Finished: updated all route handlers that relied on session cookies
- Next: write tests for the new middleware — none exist yet
- Blocker: the refresh-token flow is stubbed out. The spec in docs/auth.md says
  it should use sliding window expiry but we haven't implemented that yet.
  Don't assume the current stub is correct.

That’s four bullet points. It takes the model thirty seconds to generate. The next morning you resume, read the memo, and know exactly where you are. The “Blocker” note in particular is worth including — it captures the constraint that’s most likely to be forgotten and most likely to cause problems if ignored.

This sounds obvious, but almost nobody does it. The habit of ending a session with a memo is more valuable than any flag or configuration. It’s the difference between resuming in thirty seconds and spending three minutes reconstructing context.

The memo also survives compaction. Because it’s the most recent message in the conversation, it gets preserved even when earlier turns are summarized away. It’s your anchor.

If you open a new session entirely because the context drifted too far, the memo becomes the starting prompt. Copy it in, add “continue from here,” and the new session inherits the key state without the accumulated drift.
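If you want to script that handoff rather than copy-paste, the memo is just the last line of the newest session JSONL. The sketch below fabricates a session file to stay self-contained; the field names ("role", "content") are assumptions for illustration, and the real Claude Code schema may differ.

```python
import json, tempfile
from pathlib import Path

def last_message(session_file: Path) -> dict:
    """Parse the final JSON object (turn) in a JSONL session file."""
    lines = [l for l in session_file.read_text().splitlines() if l.strip()]
    return json.loads(lines[-1])

# Fabricated session file for the demo; real field names may differ.
f = Path(tempfile.mkdtemp()) / "session.jsonl"
f.write_text('{"role": "user", "content": "fix the auth bug"}\n'
             '{"role": "assistant", "content": "Status: middleware done; tests next."}\n')

memo = last_message(f)["content"]
prompt = memo + "\n\nContinue from here."
print(prompt.splitlines()[0])  # Status: middleware done; tests next.
```

From there, the assembled prompt is what you paste into the fresh session.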

What resume is not

Session resume is not a backup or version control mechanism. The JSONL files on disk can be deleted, corrupted, or aged out. Don’t treat them as a permanent record. For anything that matters, your real record is the git history.

Session resume also doesn’t give the model fresh awareness of file changes between sessions. It loads the conversation; it doesn’t re-read the codebase. If files changed since the last session, the model’s understanding of those files is stale until it reads them again. Asking it to read the relevant files at the start of a resumed session is a cheap and reliable way to resync.
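One way to build that "what changed since we last spoke" message is to compare file timestamps against the session's end time (the mtime of the session JSONL works as a proxy). This is a plain-Python sketch under that assumption; in a real repo, `git diff --name-only` against the right commit does the same job.

```python
import tempfile, time
from pathlib import Path

def changed_since(root: Path, since: float) -> list[Path]:
    """Files under root modified after the given timestamp, e.g. the
    mtime of the previous session's JSONL file."""
    return sorted(p for p in root.rglob("*")
                  if p.is_file() and p.stat().st_mtime > since)

# Demo with a throwaway "repo" so the sketch runs anywhere.
repo = Path(tempfile.mkdtemp())
(repo / "stale.py").write_text("# unchanged since the last session\n")
time.sleep(0.05)
session_ended = time.time()  # stand-in for the session file's mtime
time.sleep(0.05)
(repo / "fresh.py").write_text("# edited after the session ended\n")

names = [p.name for p in changed_since(repo, session_ended)]
print(names)  # ['fresh.py']
```

The resulting list is exactly what you'd hand the resumed agent: "these files changed since last session, re-read them before continuing."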

The combination that works: --continue for the common case, a quick “here are the files that changed since we last spoke” message, and a session-end memo habit. That’s the whole workflow.

Anthropic has signaled that longer context windows and improved memory features are on the roadmap. When those land, some of these workarounds will become unnecessary. Until then, knowing how sessions are stored and where they break down is the practical edge over treating each Claude Code session as a clean start.