MCP on Claude Code: from `mcp__` tool names to a working connection
Published 2026-05-11 by Owner
The promise of MCP sounds clean: instead of waiting for Anthropic to add GitHub support or database access to Claude Code, you point Claude at a server that exposes those capabilities, and they show up as callable tools. No SDK update required, no plugin format to learn. The reality is mostly that clean, with a few friction points that are worth knowing about before you spend an afternoon debugging a server that turns out to have a path typo.
MCP — Model Context Protocol — is an open protocol Anthropic published in late 2024. The core idea is language-agnostic: any process that speaks the protocol can expose a set of named tools to any client that understands it. Claude Code is one such client. The tools surface in Claude’s tool list exactly like native tools, callable by name, with the same structured call/response pattern. What MCP avoids is the alternative: waiting for tool support to be bundled into the agent itself, or writing glue code in every project that needs it.
The configuration format
MCP servers are configured in .claude/settings.json (project-level) or in ~/.claude/settings.json (global). Both use the same mcpServers key. A minimal project-level config looks like this:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```
Each key under mcpServers is the server name you choose. The command is the executable Claude Code will launch; args passes arguments to it. The server runs as a subprocess, communicates over stdio, and is restarted if it crashes.
That filesystem server from Anthropic’s reference implementations is a reasonable first test. It exposes tools for reading and writing files within a directory tree you whitelist on the command line. The directory argument is the last element in args above — if you omit it, the server starts but rejects most operations. It is easy to miss this and see no errors but also no useful behavior.
Multiple servers are supported in the same config block — just add more keys under mcpServers. Claude Code starts all of them at launch and merges their tool lists:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    },
    "fetch": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-fetch"]
    }
  }
}
```
For servers that need environment variables — database connection strings, API tokens — add an env key:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."
      }
    }
  }
}
```
Do not put tokens directly in a settings file you’re going to commit. The global ~/.claude/settings.json is the safer home for anything with credentials. Project-level settings work well for servers where the only configuration is a path.
What the filesystem server actually unlocks
With filesystem running, Claude Code can read any file under the whitelisted directory and write back to it — without those file paths being in the active conversation context. This matters for tasks that involve a lot of files: searching for a pattern across a hundred test fixtures, checking every config file in a directory, summarizing the shape of an unfamiliar codebase.
Without an MCP server, Claude works with files you explicitly paste or reference. The tool use loop still works, but you’re the one shuttling content in. With filesystem attached, you can say “read all the .env.example files in this monorepo and tell me which environment variables are documented inconsistently” and Claude will actually do it.
The fetch server is similarly useful: it exposes an HTTP GET tool, which means Claude can pull documentation, API responses, or RSS feeds on demand. The workflow that opens up is asking Claude to research something and synthesize it in one step, rather than copying content from a browser manually.
Neither server is spectacular on its own. The value shows up in compound tasks: filesystem plus fetch plus a schema-aware database server is a meaningfully different capability set than any one of them alone.
The postgres server is where compound capability starts to matter seriously. Once Claude can read your schema, query rows, and write files, tasks like “check whether any column named email is missing a uniqueness constraint and produce a migration file if so” become real. That’s not something I could have done quickly by hand, shuttling SQL results into the context window.
How tools are named: the mcp__ convention
Every tool from an MCP server is exposed under the name mcp__<servername>__<toolname> — two underscores between each segment. If the server is named filesystem and exposes a read_file tool, the tool Claude sees is mcp__filesystem__read_file. The server named github exposing create_pull_request becomes mcp__github__create_pull_request.
The double-underscore delimiter exists to avoid collisions. Tool names within a server are arbitrary strings defined by whoever wrote the server. Without a namespace, two servers that both expose a read tool would conflict. The mcp__<servername>__ prefix makes the namespace explicit and unique without requiring any coordination between server authors.
In practice, you see this naming mostly in two places: when Claude tells you what tool it’s calling in its output, and when you’re writing system prompts or skills that reference specific tools by name. If you’re just using MCP servers interactively, the naming scheme is background knowledge — Claude handles the dispatch.
One thing worth knowing: mcpServers key names are case-sensitive and flow through to the tool names. A server named fileSystem (camelCase) would produce mcp__fileSystem__read_file, not mcp__filesystem__read_file. Keep server names lowercase to avoid confusion.
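The convention is mechanical enough to express in a couple of lines, which also makes the case-sensitivity point concrete. A sketch (the helper name is mine, not part of any API):

```javascript
// Build the fully-qualified name Claude sees for an MCP tool.
// mcpToolName is a hypothetical helper; the naming convention is the only real part.
function mcpToolName(serverName, toolName) {
  return `mcp__${serverName}__${toolName}`;
}

console.log(mcpToolName("filesystem", "read_file"));       // mcp__filesystem__read_file
console.log(mcpToolName("github", "create_pull_request")); // mcp__github__create_pull_request
// Case flows through unchanged, which is why lowercase server names are safer:
console.log(mcpToolName("fileSystem", "read_file"));       // mcp__fileSystem__read_file
```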
Another place the naming matters: if you’re writing a Claude Code skill that needs to call a specific MCP tool, you reference it by full name in the skill’s instructions. Skills are just system-prompt text, so mcp__filesystem__read_file in a skill’s instructions tells Claude precisely what to call without any ambiguity about which tool is meant. The explicit namespace is what makes cross-server tool references in skills reliable.
When a server doesn’t show up
This is where the afternoon of debugging goes. The most common failure modes, in rough order of frequency:
Wrong working directory. If command is a relative path, or if the server binary depends on being run from a specific directory, Claude Code may start the process from a path where the binary doesn’t exist. Use absolute paths in command, or use npx with a fully-qualified package name. The clearest sign of this is a startup error you can only see in Claude Code’s own log.
Missing executable. npx -y @modelcontextprotocol/server-filesystem downloads and runs the package. Without -y, npx prompts for confirmation, which hangs the subprocess since there’s no tty. If the package isn’t on npm or the name is wrong, npx exits with an error and no tools appear. Test the command in your own terminal first.
Config syntax error. A trailing comma in settings.json (JSON doesn’t allow it), a misquoted string, or a missing closing brace will cause Claude Code to silently ignore the file. Run the file through a JSON validator — node -e "JSON.parse(require('fs').readFileSync('.claude/settings.json','utf8'))" — before blaming the server.
Server starts but exposes no tools. Some servers require arguments or environment variables before they return a tools list. The filesystem server with no directory argument is one example. Check the server’s own README for required configuration.
Claude Code’s startup log. When Claude Code starts, it launches all configured MCP servers and logs the result. On macOS, this appears in ~/.claude/logs/ or can be surfaced by running claude --debug. The log will show whether each server started, how many tools it registered, and any error output from the subprocess. If the server name appears with 0 tools, the server started but configuration is wrong. If the name doesn’t appear at all, the process failed to launch.
The useful habit: after adding a new server to settings.json, restart Claude Code and immediately ask it to list available MCP tools. If the expected tool names don’t appear, the startup log has the answer.
A fifth failure mode worth naming separately: permission mismatches on the server binary. If the command is a local script rather than npx, and the script isn’t executable, the subprocess exits immediately with a permission error. chmod +x on the script, or invoke it through an interpreter explicitly: "command": "node", "args": ["scripts/my-mcp-server.js"]. This is less common with npx-based servers but routine with home-grown ones.
Global vs. project config
The decision of where to put a server config comes down to whether you want it everywhere or per-project.
Global ~/.claude/settings.json makes sense for servers you use across every project: filesystem pointed at your home directory, a personal fetch server, anything involving your own credentials. These load regardless of which project directory Claude Code opens.
Project .claude/settings.json makes sense for servers that are project-specific: a Postgres server pointed at this project’s database URL, a filesystem server scoped to this repository’s path. Checking this file into git (without credentials) means the team gets the same server configuration without each person hand-configuring it.
When both files define an mcpServers block, they merge — you get all servers from both configs. If the same server name appears in both, the project-level config wins for that entry.
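The described behavior amounts to a shallow merge keyed on server name, with project entries winning on collision. Sketched in plain JavaScript — the object shapes mirror the settings files, but the merge itself is my illustration of the behavior, not Claude Code's actual code:

```javascript
// Global and project mcpServers blocks, as described above.
const globalServers = {
  filesystem: { command: "npx", args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me"] },
  fetch:      { command: "npx", args: ["-y", "@modelcontextprotocol/server-fetch"] },
};
const projectServers = {
  filesystem: { command: "npx", args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects/app"] },
};

// Later spread wins on key collision, so the project entry overrides the global one.
const merged = { ...globalServers, ...projectServers };

console.log(Object.keys(merged));       // [ 'filesystem', 'fetch' ]
console.log(merged.filesystem.args[2]); // /Users/me/projects/app
```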
The most common starting point is global filesystem and fetch for general use, plus a project-level database or API server when a specific project needs it. That setup covers most of what MCP is useful for without duplicating configuration across every project.
What to reach for next
The reference servers from Anthropic (filesystem, fetch, github, postgres, memory) cover the most common needs and are maintained with the protocol. They’re the right starting point before evaluating community servers.
When you do look beyond the reference set, the useful question is whether the server you’re considering exposes tools that have a clear call/response structure — read this file, create this issue, run this query. MCP works well for discrete operations. It works less well as a streaming pipe or for servers that need long-lived state between tool calls.
The protocol is young enough that the best server for a given integration may not exist yet. Writing one is more tractable than it looks: any process that reads JSON from stdin and writes JSON to stdout can implement MCP. The spec is public, and the reference implementations are readable if you need a working example to start from.
One area where MCP is underused right now: internal tooling. If your team has a script that queries your metrics system, wraps your deploy pipeline, or interfaces with an internal API, packaging it as an MCP server makes it accessible to Claude without anyone writing new prompt engineering to invoke it. The naming convention is an annoyance to type but makes these internal tools first-class citizens alongside the public servers.
The ground-level investment is low: get filesystem and fetch running globally, confirm the tool names appear when you ask, and pick one compound task to try. The evidence for whether MCP is worth maintaining will show up in the first session that would have taken thirty minutes of manual file copying and instead runs unattended.