A custom MCP server in 60 lines: the minimum viable shape
Published 2026-05-11 by Owner
Most tools you’d ever want Claude Code to call already exist as MCP servers. But occasionally you hit a gap: your internal ticket system, a company-specific API, or a CLI tool that no one has wrapped yet. Building a custom server is the right answer in that case — and it’s smaller than you’d expect.
This is not a protocol deep-dive. It’s the minimum path from “I have a use case” to “Claude Code is calling my server reliably.” The full protocol surface fits in your head inside an afternoon. What takes time is picking the right scope, which is a judgment call before a single line of code gets written.
What MCP actually is at the wire level
MCP (Model Context Protocol) is JSON-RPC 2.0, run over either stdio or HTTP+SSE. Stdio is almost always right for local developer tooling: your server is a child process, Claude Code pipes messages to stdin and reads responses from stdout, and there’s no port, no auth, no network config to manage. HTTP+SSE exists for servers that need to be shared across machines or accessed by multiple clients — that’s a real case, but not the starting point.
Every server announces itself the same way. On startup, Claude Code sends an initialize request; your server replies with its name, version, and declared capabilities. After that, the only two interactions that matter for a tool server are:
- tools/list — Claude asks “what tools do you have?”
- tools/call — Claude asks “run this tool with these arguments”
That’s the whole surface for 95% of custom servers. No resources, no prompts, no streaming required unless you specifically want them. The spec has more — sampling, roots, progress notifications — but none of that is necessary to ship a useful tool server.
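To make that concrete, here is what the two exchanges look like on the wire — a sketch with illustrative ids and trimmed payloads (→ is client-to-server, ← is the response):

```
→ {"jsonrpc":"2.0","id":2,"method":"tools/list"}
← {"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"search_commits","description":"…","inputSchema":{…}}]}}

→ {"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"search_commits","arguments":{"query":"auth"}}}
← {"jsonrpc":"2.0","id":3,"result":{"content":[{"type":"text","text":"…"}]}}
```

With an SDK you never construct these frames by hand, but knowing the shape makes the logs legible when something goes wrong.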
The TypeScript SDK (@modelcontextprotocol/sdk) handles the JSON-RPC framing entirely: you declare tools and a handler function, and the SDK takes care of everything else. A Python SDK (mcp) exists with the same shape if you prefer that runtime; it uses decorators over your handler functions rather than explicit setRequestHandler calls, but the mental model is identical.
One thing to internalize before writing code: tools in MCP are described declaratively, not discovered at runtime. Claude reads your tool descriptions and schemas when it decides whether to call them. A vague description leads to missed calls or misuse; a precise one leads to correct invocation without needing prompting. The description field in your tool schema is load-bearing.
The example: git-log-search
Here’s a concrete problem. You want Claude Code to be able to search commit messages for a keyword — useful when you’re tracking down when something changed or who introduced a behavior. The built-in shell tool could do this with git log --grep, but you want a tighter interface: pass a query, get back structured results with commit hash, date, and message, not a wall of raw git output that the model has to parse.
More importantly, you want Claude to be able to call this tool on its own initiative, without you constructing the shell command every time. That’s where MCP earns its place: the tool becomes something the model knows about and can invoke autonomously, not something you manually trigger.
No existing MCP server does exactly this for your repo config. Two hundred lines of shell plumbing in a system prompt? Ugly. A 60-line MCP server? Clean.
// git-log-search-server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { execSync } from "child_process";

const server = new Server(
  { name: "git-log-search", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Declare available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_commits",
      description:
        "Search git commit messages for a keyword. Returns matching commits with hash, date, and subject.",
      inputSchema: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description: "Keyword or phrase to search in commit messages",
          },
          limit: {
            type: "number",
            description: "Max commits to return (default 20)",
          },
        },
        required: ["query"],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "search_commits") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  const { query, limit = 20 } = request.params.arguments as {
    query: string;
    limit?: number;
  };
  try {
    // Single-quote the query (escaping any embedded single quotes) so
    // shell metacharacters in model-supplied input stay inert.
    const safeQuery = `'${query.replace(/'/g, `'\\''`)}'`;
    const raw = execSync(
      `git log --grep=${safeQuery} --format="%H|%ad|%s" --date=short -n ${Number(limit) || 20}`,
      { encoding: "utf8" }
    );
    const commits = raw
      .trim()
      .split("\n")
      .filter(Boolean)
      .map((line) => {
        const [hash, date, ...subjectParts] = line.split("|");
        return { hash, date, subject: subjectParts.join("|") };
      });
    return {
      content: [
        {
          type: "text",
          text:
            commits.length === 0
              ? "No commits matched."
              : JSON.stringify(commits, null, 2),
        },
      ],
    };
  } catch (err) {
    return {
      content: [{ type: "text", text: `git error: ${String(err)}` }],
      isError: true,
    };
  }
});

// Start
const transport = new StdioServerTransport();
await server.connect(transport);
That’s just over 60 lines including imports, blank lines, and comments. The shape: one ListToolsRequestSchema handler that declares the schema, one CallToolRequestSchema handler that runs the logic. Add more tools by extending both handlers.
A few things worth noting about this implementation:
The inputSchema is standard JSON Schema. Whatever you put there, Claude Code will use it to construct the argument object it sends. Keep descriptions short but specific — they’re what the model reads when deciding how to call the tool. The required array matters: if query isn’t listed as required, the model might omit it and your handler will get undefined where it expects a string.
Error handling belongs inside the handler, not outside it. Return isError: true with a message rather than throwing, except for unknown tool names. Claude Code can relay error content back to the conversation; an unhandled exception crashes your server. The distinction matters because a graceful error (wrong query syntax, empty result) is useful information for the model — it can retry or tell you what went wrong. A crashed server just shows “connection lost” in the UI with no actionable information.
Stdout is the protocol channel. Never write to stdout for debugging — it’ll corrupt the JSON-RPC stream. This is the single most common mistake when writing a first server. Use process.stderr.write() or console.error() for any logs. Both are safe because Claude Code only reads stdout for protocol messages.
The execSync call here blocks the event loop, which is fine for a single-user local server where commands finish quickly. For anything with real latency (API calls, slow queries), use the async equivalents and await them inside the handler.
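A sketch of that async variant, with the log parsing split out so it can be tested without a repo (the function names here are mine, not SDK API):

```typescript
import { execFile } from "child_process";
import { promisify } from "util";

const execFileAsync = promisify(execFile);

// Parse "%H|%ad|%s"-formatted git log output into structured records.
export function parseLog(raw: string) {
  return raw
    .trim()
    .split("\n")
    .filter(Boolean)
    .map((line) => {
      const [hash, date, ...subjectParts] = line.split("|");
      return { hash, date, subject: subjectParts.join("|") };
    });
}

// Async search: the event loop stays free while git runs, and execFile
// passes arguments directly — no shell, so no quoting concerns.
export async function searchCommits(query: string, limit = 20) {
  const { stdout } = await execFileAsync("git", [
    "log",
    `--grep=${query}`,
    "--format=%H|%ad|%s",
    "--date=short",
    "-n",
    String(limit),
  ]);
  return parseLog(stdout);
}
```

The execFile route is also the cleaner answer to shell escaping, since the query never passes through a shell at all.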
Wiring it into .claude/settings.json
Claude Code reads MCP server configuration from .claude/settings.json in the project root (or the global equivalent in ~/.claude/settings.json). Add your server under the mcpServers key:
{
"mcpServers": {
"git-log-search": {
"command": "npx",
"args": ["tsx", "/path/to/git-log-search-server.ts"],
"env": {}
}
}
}
If you’ve compiled to JS, swap tsx for node. The command is whatever starts your server process; args is the argument array passed to it. Claude Code will start the process when it launches and kill it on exit.
The env key lets you inject environment variables into the server process — API keys, config paths, anything your server needs that shouldn’t be hardcoded. This is cleaner than sourcing them from a dotfile, and it keeps secrets out of your server source.
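For instance, a hypothetical server that talks to an internal ticket API might be wired up like this (the server name, path, and variable name are all placeholders):

```json
{
  "mcpServers": {
    "ticket-search": {
      "command": "node",
      "args": ["/abs/path/to/ticket-search-server.js"],
      "env": {
        "TICKET_API_TOKEN": "…"
      }
    }
  }
}
```

Inside the server, the value arrives as ordinary process.env.TICKET_API_TOKEN — nothing MCP-specific about reading it.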
After saving the config, restart Claude Code. The server should appear in the MCP panel in the UI, and Claude will list search_commits when you ask it about available tools. If the server doesn’t appear, the most common causes are: the command path is wrong, tsx isn’t installed, or there’s a syntax error in your server file that prevents startup.
Project-level settings in .claude/settings.json apply only when Claude Code opens that project directory. Global settings in ~/.claude/settings.json apply everywhere. For a server tied to a specific repo (like this git tool), project-level is correct. For a server you want available in every project, use the global file.
One gotcha: the path in args needs to be absolute or relative to the working directory Claude Code runs in. Relative paths that work in your terminal may not resolve the same way inside Claude Code’s process environment. Use absolute paths until you’re confident.
The iteration loop
The first run rarely works perfectly. Here’s the cycle that makes debugging fast:
Stderr is your friend. Any console.error() calls in your server appear in Claude Code’s MCP logs panel (or in the terminal if you started Claude Code in a terminal). Add logging around the entry point of every handler during development:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
console.error("[git-log-search] tool call:", JSON.stringify(request.params));
// ... handler body
});
Strip those logs before shipping, but during development they’re invaluable.
Restarts are manual but fast. When you edit the server file, Claude Code doesn’t auto-restart MCP servers. You have to restart Claude Code itself, or use the “Restart MCP Server” action in the MCP panel if it’s available. Keep a terminal open with a quick alias for this during active development.
Test the server standalone first. Before wiring into Claude Code, run the server in a terminal and send it raw JSON-RPC manually. The initialize exchange looks like this:
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"0.0.1"}}}
Pipe that into your server binary. If the server responds with its capabilities JSON, the transport is working. If it hangs or errors, the problem is in your startup code, not your tool logic.
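If you’d rather script that than paste JSON by hand, a small harness can do the round trip. A sketch — roundTrip is my name, not an SDK function, and the server path in the usage note is a placeholder:

```typescript
import { spawn } from "child_process";

// Spawn a stdio server, send one JSON-RPC message, and resolve with the
// first chunk the server writes back on stdout.
export function roundTrip(
  command: string,
  args: string[],
  message: object
): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn(command, args);
    proc.stderr.pipe(process.stderr); // surface server logs while testing
    proc.once("error", reject);
    proc.stdout.once("data", (chunk) => {
      proc.kill();
      resolve(chunk.toString().trim());
    });
    proc.stdin.write(JSON.stringify(message) + "\n");
  });
}
```

Calling roundTrip("npx", ["tsx", "git-log-search-server.ts"], …) with the initialize payload above should resolve with the server’s capabilities JSON; a hang or rejection points at your startup code.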
Schema validation errors. If Claude Code calls your tool but your handler throws because the arguments aren’t what you expected, the inputSchema you declared doesn’t match what you’re expecting in the handler. The schema is a contract — it should exactly describe the structure you’re unpacking.
When not to build
Three situations where building a custom server is the wrong call:
An existing server already covers it. Search the MCP registry and GitHub before starting. The git use case above is illustrative — real git MCP servers exist with more complete implementations. The cost of building and maintaining a custom server is real; don’t pay it when a published server will do.
You’d be reimplementing built-ins. Claude Code’s built-in tools include file reading, shell execution, and web fetching. If your “custom tool” is a thin wrapper around bash with one specific command, just use the shell tool and pass the command in the prompt. The MCP abstraction adds friction without adding capability.
You only need it once. If the task is “analyze this one CSV and summarize it,” paste the CSV content into the conversation. Building a server to ingest it is engineering work that amortizes over repeated use. A one-off doesn’t have repeated use by definition. Build servers for durable access patterns — tools you’ll invoke dozens of times across different sessions.
The right signal for “build a server” is: this is a system I access regularly, no existing MCP server covers it at the granularity I want, and the interface I’d design is stable enough to be worth maintaining.
What you get past 60 lines
The 60-line example is a complete, working server. Past that, common additions are:
- Multiple tools in one server — just extend both handler blocks. Five tools in one server is common; the startup overhead is paid once.
- Input validation beyond JSON Schema — add a Zod parse at the start of your handler if the schema alone isn’t strict enough.
- Stateful connections — if your server needs an authenticated session (a database connection, an API client), initialize it at startup and close on process exit. The MCP lifecycle lets you hook server.onerror and handle cleanup.
- HTTP transport — for servers shared across machines or accessed from multiple clients simultaneously, swap StdioServerTransport for SSEServerTransport. The handler code is identical; only the transport initialization changes.
The protocol stays small as you add features. That’s intentional in the design — the complexity budget belongs in your tool logic, not in protocol machinery.
A well-scoped MCP server is essentially a typed interface over something your codebase already knows how to do. The 60 lines are mostly boilerplate; the real work is in that one handler function. Start there, keep it focused, and you’ll spend more time using the tool than maintaining it.