Tinker AI

SessionStart hooks: injecting context the model would otherwise miss

Published 2026-05-11 by Owner

CLAUDE.md is the right place for durable project conventions—things that are true whether you run a session today or three months from now. But a lot of useful context is transient: the current git branch, how many files are staged, whether there’s an open PR, which engineer is oncall today. None of that belongs in a committed markdown file, because the facts change daily and the file can’t update itself.

Without SessionStart hooks, the model starts each session essentially blind to the current state of the repository. It knows what you’ve told it in CLAUDE.md and what you type in your first message. To get branch awareness, it has to run git status; to get PR info, it has to call gh pr view. Those are tool calls, which cost context tokens and add a round-trip before the model can reason about your actual question. For a simple session they’re minor friction. For a complex debugging session that needs accurate context from the start, that friction compounds.

SessionStart hooks eliminate the blind spot. They’re the mechanism Claude Code provides for injecting exactly this kind of real-time context—once, at the start of the session, before the model takes any action.

SessionStart is the one hook that fires before the model does anything. The output it returns gets injected into the model's system prompt for that session, and the model sees it as ground truth before the first tool call. That makes it different from a user message typed at the start of a conversation: it isn't part of the user turn; it shapes the context the model reasons from before it even responds.

What the hook receives and what it can output

When SessionStart fires, very little has happened. Claude Code passes a JSON payload that includes the working directory and the session ID. There are no tool results, no conversation history, no files read yet. The hook is running before the model has taken a single action.

The hook can be any executable—a shell script, a Python script, a Go binary. Claude Code captures its stdout and treats that output as additionalContext. That string gets stitched into the system prompt. The model then has this context when it processes your first message.
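As a sketch of consuming that payload, a hook can read the JSON from stdin before printing its context. The field names here (`cwd`, `session_id`) are assumptions about the payload shape, and the parsing deliberately avoids a `jq` dependency:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: pull fields out of the JSON payload Claude Code
# pipes to a SessionStart hook on stdin. The field names (cwd, session_id)
# are assumptions about the payload shape; adjust to what your version sends.

# Crude, dependency-free extraction of a flat string field (use jq if you have it)
json_field() {
  printf '%s' "$1" | sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p"
}

# Read the payload only when stdin is piped (as Claude Code does), not a terminal
payload=""
if [ ! -t 0 ]; then
  payload=$(cat)
fi

echo "Session started in: $(json_field "$payload" cwd)"
echo "Session ID: $(json_field "$payload" session_id)"
```

Whatever the script echoes after parsing is what lands in the context string; the payload itself is only input, not output.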

A minimal SessionStart hook in .claude/settings.json:

{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "bash /path/to/project/scripts/session-context.sh"
          }
        ]
      }
    ]
  }
}

The matcher field is empty here. For SessionStart the matcher isn't compared against a tool name; with an empty matcher the hook fires on every session start. The event also supports matching on how the session began (a fresh start versus a resume, for example), if you want to narrow it.
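If you only want the injection on brand-new sessions rather than on every resume, the SessionStart matcher can reportedly filter on the session source. A sketch, assuming `startup` is a valid source value in your version of Claude Code:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "startup",
        "hooks": [
          {
            "type": "command",
            "command": "bash /path/to/project/scripts/session-context.sh"
          }
        ]
      }
    ]
  }
}
```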

Useful things to inject

Git state. The model has no idea what branch you’re on, what’s staged, or whether the working tree is clean. A few lines of git output at session start means the model won’t propose work that conflicts with an open PR or accidentally suggest rebasing when there’s unpushed work.

#!/usr/bin/env bash
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
UNCOMMITTED=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
PR_URL=$(gh pr view --json url -q .url 2>/dev/null || echo "none")

echo "Current branch: ${BRANCH}"
echo "Uncommitted changes: ${UNCOMMITTED} files"
echo "Open PR: ${PR_URL}"

This 8-line script tells the model things it would otherwise have to discover via tool calls, burning turns before it could start reasoning about your actual question.

Failing test count. If CI is red, a useful model should know that before suggesting more features. Capturing the suite's one-line summary—pytest -q 2>/dev/null | tail -n 1 prints something like 2 failed, 40 passed—gives the model that signal without dumping full output into context. It won't pretend the test suite is green.
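For a Python project, the same idea as a sketch. This assumes pytest is on PATH; on a large repo, running the whole suite at session start may be too slow, in which case a cached CI result is a better source:

```shell
# Capture pytest's one-line summary, e.g. "2 failed, 40 passed in 1.21s".
# Errors (including "pytest not installed") are discarded; without pipefail
# the pipeline still exits 0 and SUMMARY is simply empty.
SUMMARY=$(pytest -q 2>/dev/null | tail -n 1)
case "$SUMMARY" in
  *failed*) echo "Test suite status: ${SUMMARY}" ;;
esac
```

Printing the line only when it contains "failed" keeps the hook silent when the suite is green, which is exactly when the model doesn't need the signal.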

For a JavaScript/TypeScript project with vitest or jest, something like this works:

# grep -c prints the count even when it is 0 (it just exits non-zero),
# so fall through with `|| true`; `|| echo "0"` would emit a second line
FAIL_COUNT=$(bun run test --reporter=verbose 2>&1 | grep -c "FAIL" || true)
if [ "$FAIL_COUNT" -gt "0" ]; then
  echo "Failing tests: ${FAIL_COUNT} (run tests before adding new features)"
fi

This is a light check—count only, not full output. Full test output in a hook would be too large and would drown out everything else in the context string.

Sprint focus or project context. Many teams keep a lightweight FOCUS.md or a TODO.md that describes what’s in flight this week. Reading two or three lines from it in the hook means the model’s answers are naturally scoped to what’s actually in progress, not to the full project surface.

if [ -f FOCUS.md ]; then
  echo "Current sprint focus:"
  head -5 FOCUS.md
fi

Oncall rotation. If the repository is for a service with an oncall rotation, injecting the current oncall name and their area of expertise means incident-related sessions start with the right framing. This is particularly useful when multiple engineers might open a Claude Code session against the same repo.

A more complete script combining several of these:

#!/usr/bin/env bash
set -euo pipefail

BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
# `|| true` keeps `set -euo pipefail` from aborting outside a git repo;
# wc has already printed 0 by then, so no fallback value is needed
STAGED=$(git diff --cached --name-only 2>/dev/null | wc -l | tr -d ' ' || true)
MODIFIED=$(git diff --name-only 2>/dev/null | wc -l | tr -d ' ' || true)
PR_URL=$(gh pr view --json url -q .url 2>/dev/null || echo "none")

echo "=== Session context ==="
echo "Branch: ${BRANCH}"
echo "Staged files: ${STAGED}"
echo "Modified (unstaged): ${MODIFIED}"
echo "PR: ${PR_URL}"

if [ -f .sprint-focus ]; then
  echo ""
  echo "Sprint focus: $(cat .sprint-focus)"
fi

# Oncall from a simple rotation file, if present
if [ -f .oncall ]; then
  echo "Oncall: $(cat .oncall)"
fi

The output is short plain text. The model doesn’t need JSON or structured format—it reads prose context fine. Keep the total output under a few hundred tokens so it doesn’t crowd out actual system prompt content.
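One cheap guard, as a sketch: wrap the real context script and hard-cap what reaches the model. The 1,500-character limit is an arbitrary choice, and the script path is the illustrative one used earlier:

```shell
# Cap hook output so a misbehaving sub-command can't flood the system prompt.
MAX_CHARS=1500
context=$( { bash scripts/session-context.sh 2>/dev/null || true; } | head -c "$MAX_CHARS")
printf '%s\n' "$context"
```

Registering this wrapper as the hook command means even a surprise wall of output from one of the sub-commands stays bounded.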

What not to inject here

Anything that changes during the session. If a file gets modified mid-session, the injected context from SessionStart is already stale. The model won’t re-run the hook. For state that evolves during a session—like which files have been edited or what the current test output is—PostToolUse hooks are the right mechanism. They fire after each tool call and can update context based on what just happened.
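As a sketch of that alternative, a PostToolUse hook scoped to file edits might look like this in .claude/settings.json (the Edit|Write matcher targets the built-in edit tools; the script path is illustrative):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "bash /path/to/project/scripts/post-edit-context.sh"
          }
        ]
      }
    ]
  }
}
```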

Large static content. If the content is stable across sessions—architecture diagrams, API conventions, a list of deprecated patterns—it belongs in CLAUDE.md, not in a SessionStart hook. CLAUDE.md is always present; a hook adds the overhead of a subprocess invocation every session. Use the hook for what genuinely changes session-to-session.

Sensitive credentials or tokens. The hook output lands in the system prompt, which means it’s visible in conversation logs. Read what you need from environment variables or config files for the session, but don’t echo secrets into the context string. If you’re logging sessions, that log will contain whatever the hook outputs.

Long justifications. The point of the injection is to tell the model what's true, not why it matters. “Branch: feature/auth-refactor” is useful. A three-paragraph explanation of why the auth refactor is happening is not. Keep lines short and factual. The model is good at drawing inferences from compact state; it doesn’t need the narrative.

A common mistake

The most frequent problem with SessionStart hooks is confusing them with CLAUDE.md. People write hooks that output the same content that’s already in CLAUDE.md: the project description, the coding conventions, the stack. This doubles the context without adding signal, and it makes the hook harder to maintain because there’s now a second place to update when conventions change.

The right mental split: CLAUDE.md carries what’s always true. SessionStart carries what’s true right now.

If something in your hook output doesn’t vary from one day to the next, it probably belongs in CLAUDE.md instead.

Debugging the hook output

The simplest debugging approach is to run the hook script directly in a terminal and read the output. What the script prints to stdout is what the model receives. If the output looks right there, it will look right in the session.

Claude Code also surfaces hook activity in verbose mode. Running claude --verbose shows hook execution and the injected context, so you can verify what the model starts with.

If a hook command fails—non-zero exit code—Claude Code logs the failure and continues without injecting context. The session still starts; the model just doesn’t have the hook output. This is usually the right behavior for optional context like sprint focus, but if you rely on git state being present, test the script’s failure modes (no git repo, no gh installed, etc.) and add fallbacks.
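One way to exercise those failure modes, as a sketch: run the script from an empty temp directory so every git command sees no repository:

```shell
# Simulate "not a git repo": run the hook script from an empty temp dir.
# The path is the same illustrative one used in the settings example.
tmp=$(mktemp -d)
( cd "$tmp" && bash /path/to/project/scripts/session-context.sh ) \
  && echo "hook succeeded outside a repo" \
  || echo "hook failed outside a repo (exit $?)"
rm -rf "$tmp"
```

If the second branch fires, the script is relying on git state being present and needs the fallback pattern below.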

One pattern that handles missing tools gracefully:

# Safe fallback pattern
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "not a git repo")
PR_URL=$(command -v gh >/dev/null && gh pr view --json url -q .url 2>/dev/null || echo "gh not installed or no open PR")

The 2>/dev/null || echo "..." idiom means the hook always exits 0 and always produces output, even in environments where git or gh isn’t available. Avoid letting the hook silently fail on a developer machine that’s missing a dependency.

Where this fits in the hooks system

Claude Code has several hook events—SessionStart, UserPromptSubmit, PreToolUse, PostToolUse, Notification, and Stop among them. SessionStart is the only one that runs before the model has seen a user message or taken any action. The others run in response to user input, tool calls, or session end.

For context injection, SessionStart is almost always the right hook. PreToolUse can modify tool arguments before a call executes, but by then the model has already formed its initial understanding of the session. PostToolUse fires after tool results arrive and can append to context dynamically, but it runs after every tool call, making it heavier. SessionStart runs once and sets the stage.

The practical result: a SessionStart hook that takes two seconds to run and outputs 20 lines of git state gives the model accurate situational awareness that it would otherwise spend several tool-call turns building from scratch. Those turns are not free—they add latency, consume context window, and delay the model getting to your actual question.

Context injection done at session start is the cheapest version of the problem. It runs once, it’s scoped to real-time state, and it disappears when the session ends. That’s the right shape for transient information.