Tinker AI

Three MCP servers worth installing: Postgres, Notion, and Linear

Published 2026-05-11 by Owner

The MCP (Model Context Protocol) ecosystem has grown fast enough that the signal-to-noise problem is real. Most servers are either toys or so narrowly scoped that the setup cost exceeds the value. A handful are genuinely useful for a working developer.

These three are in that set: Postgres, Notion, and Linear. Not because they’re the most impressive demos, but because they solve specific friction points that come up every week. Each has one task it makes meaningfully easier, and each has a gotcha that isn’t obvious from the README.

Postgres MCP: schema introspection and debugging queries

What it does. The @modelcontextprotocol/server-postgres package connects Claude Code to a running Postgres instance. The model can list tables, inspect schemas, run SELECT queries, and show you query plans.

Install:

npm install -g @modelcontextprotocol/server-postgres

Add it to your MCP configuration — claude_desktop_config.json for Claude Desktop, or a project-level .mcp.json for Claude Code:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://mcp_readonly:password@localhost:5432/mydb"
      ]
    }
  }
}

The task it makes easy. Schema introspection during debugging. Before this, the usual pattern was: run \d tablename in psql, copy the output, paste it into the chat, ask the question. With the MCP connected, you skip all of that. “Why is this query slow?” becomes a real back-and-forth where the model can inspect the actual schema, check index definitions, and run EXPLAIN — without you acting as a data relay.

This is most useful when the schema is large or unfamiliar. Onboarding to a new codebase with 80 tables is where this pays off most. You can ask “what tables reference user_id?” and get an accurate answer from the live schema rather than stale documentation.
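The “what references user_id?” question maps onto a standard information_schema query, which is roughly what the model runs under the hood. A sketch you can also paste into psql yourself — the user_id column name is the example from above, swap in your own:

```python
# Standard Postgres catalog query: list every foreign key whose
# referencing column is named user_id. Run it via psql, or let the
# model issue it through the MCP.
FK_QUERY = """
SELECT tc.table_name   AS referencing_table,
       kcu.column_name AS referencing_column
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON tc.constraint_name = kcu.constraint_name
 AND tc.table_schema    = kcu.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND kcu.column_name = 'user_id';
"""

print(FK_QUERY)
```

The same join pattern generalizes: filter on ccu.table_name instead if you want everything that references a particular table rather than a particular column.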

The gotcha: never give write access to a production database.

The connection string in your config is what the model uses. If it can UPDATE and DELETE, it will — eventually. Not maliciously, but because the model is trying to be helpful and “fix” something, and sometimes “fix” means a mutation it thought you wanted.

Use a read-only Postgres role. Create one specifically for the MCP:

CREATE ROLE mcp_readonly LOGIN PASSWORD 'your_password';
GRANT CONNECT ON DATABASE mydb TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly;

This is not optional for a production database. For a local development database it’s lower stakes, but the habit is worth building regardless. Point the MCP at a dev replica or a local copy, not at the instance your staging environment also writes to.

A secondary issue: large result sets bloat context. If a question causes the model to SELECT * from a table with 50,000 rows, your context window fills with data and subsequent responses degrade. The server imposes no automatic limit. Either constrain queries explicitly (“query with a LIMIT of 20”) or grant the read-only role access only to views that cap how much it can pull back.
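The arithmetic here is worth making concrete. Using the rough rule of thumb of ~4 characters per token (an approximation, not a measured figure), a full-table SELECT dwarfs any context window while a LIMIT keeps the same question cheap:

```python
def estimated_tokens(row_count: int, avg_row_chars: int) -> int:
    """Rough token cost of pasting a query result into context.

    Assumes ~4 characters per token, a common rule of thumb for
    English text; treat the output as an order-of-magnitude guide.
    """
    return (row_count * avg_row_chars) // 4

# SELECT * over 50,000 rows at ~200 chars per row:
print(estimated_tokens(50_000, 200))  # 2,500,000 tokens: far beyond any window

# The same question with LIMIT 20:
print(estimated_tokens(20, 200))      # 1,000 tokens: negligible
```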

Notion MCP: pulling design docs while coding

What it does. The official @notionhq/notion-mcp-server connects Claude Code to your Notion workspace via the Notion API. The model can read pages, search databases, and write back to pages.

Install:

npm install -g @notionhq/notion-mcp-server

You need a Notion integration token. Create one at notion.so/profile/integrations, set it to Internal, and share the relevant pages with it. Then add to your config:

{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": {
        "OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer ntn_your_token\", \"Notion-Version\": \"2022-06-28\"}"
      }
    }
  }
}

The task it makes easy. Cross-referencing design docs while implementing a feature. The scenario: you’re building something whose behavior is specified in a Notion page — API contract, data model, product spec, whatever. Without the MCP, you either keep a browser tab open and copy-paste relevant sections, or you trust that you remembered the spec correctly. With it, you tell the model “the spec is at page ID abc123, implement according to that” and it reads the spec directly.

This is particularly useful for the kind of work where the spec changes mid-implementation. The model re-reads the page on each turn rather than reasoning from a stale copy you pasted at the start of the conversation.

The gotcha: rate limits and large pages bloating context.

Notion’s API has rate limits — 3 requests per second per integration. In a long Claude Code session with many tool calls, you’ll hit this. The symptom is the model pausing or returning errors mid-session. The workaround is to be deliberate: fetch the page once at the start of a focused session rather than letting the model reach for it repeatedly throughout.
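If you ever script against the Notion API directly rather than going through the model, the same limit applies, and a minimal pacing helper keeps you under it. This is a generic sketch — the Pacer class is hypothetical, not part of any Notion SDK; only the ~3 requests/second average comes from Notion's documentation:

```python
import time

class Pacer:
    """Enforce a minimum interval between calls (e.g. ~3 req/s)."""

    def __init__(self, max_per_second: float):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0  # monotonic timestamp of the previous call

    def wait(self) -> None:
        # Sleep just long enough to keep calls at or below the target rate.
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

pacer = Pacer(max_per_second=3)
for _ in range(3):
    pacer.wait()  # spaces successive fetches at least ~333 ms apart
    # ...perform one Notion API request here (hypothetical call site)
```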

The larger problem is page size. Notion pages with embedded databases, nested toggles, and large tables can easily exceed 50,000 tokens when fetched as raw JSON. One large Notion page can consume a significant fraction of your context window and cost more than the rest of the session combined. Before pointing Claude Code at a Notion page, check its actual size. A doc with 2,000 words of prose is fine; a database with 300 rows of rich text cells is not.
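One way to gut-check a page before handing it to the model is to walk the block tree the API returns and count text. This sketch uses a deliberately simplified block shape ("text" plus nested "children") rather than Notion's real, much richer schema, and the same ~4-chars-per-token approximation as above:

```python
def estimate_page_tokens(blocks: list) -> int:
    """Rough token estimate for a tree of simplified Notion-style blocks.

    Each block is assumed to carry its text under "text" and nested
    blocks under "children" — a stand-in for the real block schema.
    """
    chars = 0
    stack = list(blocks)
    while stack:
        block = stack.pop()
        chars += len(block.get("text", ""))
        stack.extend(block.get("children", []))
    return chars // 4  # ~4 chars per token, a crude approximation

# A prose doc vs. a table-heavy page (both hypothetical):
prose = [{"text": "word " * 2000}]                    # ~2,000 words of prose
table = [{"text": "cell text " * 20} for _ in range(300 * 5)]  # 300 rows x 5 cells
print(estimate_page_tokens(prose))  # a few thousand tokens: fine
print(estimate_page_tokens(table))  # tens of thousands: will swamp the window
```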

The write access question matters here too. The official server supports writes. If you have write access configured and you ask the model to “update the status of this design doc,” it will. For documentation pages that represent shared team knowledge, that’s a meaningful risk. Scope the integration token to only the pages you actually need, and consider whether write access is necessary at all for your use case.

Linear MCP: issue context and status updates

What it does. The @linear/mcp-server connects Claude Code to your Linear workspace. The model can list your assigned issues, read issue details, add comments, and update issue status.

Install:

npm install -g @linear/mcp-server

Get an API key from Linear under Settings → API. Then:

{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "@linear/mcp-server"],
      "env": {
        "LINEAR_API_KEY": "lin_api_your_key_here"
      }
    }
  }
}

The task it makes easy. Pulling issue context at the start of a coding session. The scenario: your ticket has acceptance criteria, links to related issues, and comments from the design review. Normally you’d have the Linear tab open and mentally translate between “the ticket says X” and “what that means in the code.” With the MCP, you tell Claude Code “I’m working on issue ENG-4821, read it and let’s start” — and it reads the full issue including comments before the first code turn.

Marking issues done after committing is the other obvious use: “mark ENG-4821 as Done and add a comment with the PR link.” One prompt, no tab-switching.

The gotcha: this is a per-task helper, not a workflow replacement.

Linear MCP is useful when the model needs issue context to make better code decisions. It is not useful as a way to manage your Linear board through Claude. There’s a temptation — especially once you see how fluently it can query and update — to start using it for sprint planning, backlog grooming, and bulk status updates.

That path leads to problems. Linear’s state is shared with your team. A model that has write access and is given a vague instruction like “update the sprint” will make decisions about issue prioritization and status that you haven’t reviewed. Linear’s audit log will show the changes, but the damage to team trust and project state can be harder to undo than you’d expect.

Use it narrowly: read the current issue, update the current issue. Anything involving more than one issue at a time should go through the Linear UI where a human is making the choices.

Another practical issue: Linear’s API doesn’t return comments in a flat format — threads, reactions, and edit history come back as nested structures that consume more tokens than the text content warrants. For complex issues with long comment histories, the model may spend a significant chunk of context on threading metadata rather than actual content.
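If you preprocess issue data yourself before it reaches the model, flattening the thread to plain text avoids paying tokens for the metadata. A sketch against a simplified comment shape — "author", "body", nested "replies" are illustrative stand-ins, not the actual API field names:

```python
def flatten_comments(comments: list) -> str:
    """Collapse a nested comment thread into indented plain text,
    dropping reactions, edit history, and other metadata."""
    lines = []

    def walk(comment: dict, depth: int) -> None:
        lines.append(f"{'  ' * depth}{comment['author']}: {comment['body']}")
        for reply in comment.get("replies", []):
            walk(reply, depth + 1)

    for comment in comments:
        walk(comment, 0)
    return "\n".join(lines)

thread = [
    {"author": "ana", "body": "Spec says soft delete.",
     "replies": [{"author": "raj", "body": "Confirmed in design review."}]},
]
print(flatten_comments(thread))
```

The model gets the conversational content — who said what, in order — without the threading scaffolding.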

The anti-pattern to avoid

The underlying risk with all three of these servers is using them as a substitute for understanding what you’re querying.

If you connect the Postgres MCP but don’t know your own schema, the model will make plausible-looking queries that have subtle errors — wrong join conditions, missed indexes, implicit type coercions. The model doesn’t know your data distribution or your query patterns; it only sees the schema it fetches. You do.

If you point the model at a Notion page as a proxy for understanding the design yourself, you’ll miss the gaps in the doc. Specs have ambiguities that a careful reader would flag; the model tends to fill them in with assumptions.

MCP servers are useful when you already understand the system and want to skip the mechanical relay work — copy-pasting schema output, switching tabs for the spec, tracking down the issue title. They’re not useful as a way to hand off understanding to the model.

There’s a version of this that’s subtler: using the MCP to avoid reading documentation you should have read. If the model can query your Linear issues, you might stop reviewing your own board and just ask “what should I work on?” The model will give you an answer based on issue titles and priorities. That answer will miss the team context, the unwritten agreements, the things that are blocked for reasons not captured in the ticket. The MCP reads the data; it doesn’t read the room.

The better framing: MCP servers reduce friction for work you know how to do. They don’t replace knowing how to do it.

When to skip the MCP and just paste

Setup has a cost. Creating the read-only Postgres role, creating the Notion integration and scoping its page access, setting up the Linear API key and testing the connection — that’s 30-60 minutes the first time.

If you’re querying something once, paste is cheaper:

  • Running a one-off migration and need the schema? Copy from \d tablename in psql.
  • The design doc is a single Notion page you’ll reference once? Copy its text.
  • You need the acceptance criteria for one issue? Copy from Linear.

The MCP setup cost amortizes across many uses. For something you’ll do dozens of times over weeks, the setup is worth it. For a one-time task or a tool you’ll use once and then forget, paste the data and move on.
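The amortization is easy to make concrete. With illustrative numbers — say 45 minutes of setup against 2 minutes of relay work saved per session — the break-even point is a couple dozen uses:

```python
setup_minutes = 45   # one-time: role creation, token scoping, config (illustrative)
saved_per_use = 2    # minutes of manual copy-paste avoided per session (illustrative)

break_even_uses = -(-setup_minutes // saved_per_use)  # ceiling division
print(break_even_uses)  # 23 uses before the setup pays for itself
```

A few weeks of daily use clears that bar easily; a one-off task never does.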

The decision is straightforward: if you find yourself manually relaying the same kind of data into Claude Code more than a few times a week, set up the MCP. If you’ve done it once and don’t expect to repeat it, save yourself the auth configuration and context management overhead.

These three servers have held up well in regular use. Postgres is the most consistently useful because schema introspection comes up constantly during debugging. Notion is high-value but high-maintenance given the rate limits and page-size risks. Linear is narrowly useful but genuinely frictionless for the “start a task, mark it done” loop. All three are better when scoped tightly than when given broad access.

The MCP ecosystem will keep expanding. The evaluation framework won’t change much: does this server save me from doing something mechanical more than once a week, and can I scope its access to exactly what it needs? If yes on both, worth the setup. If the answer to either is no, a paste still works.