Tinker AI

The MCP ecosystem is growing fast. Cline’s marketplace has 25 servers; Anthropic’s official server list adds another 15; the broader npm ecosystem has dozens more. Most of them are useful. None of them have received the kind of supply chain scrutiny that other privileged software gets.

This is a problem waiting to happen.

What MCP servers actually do

An MCP server, mechanically, is:

  • An npm package (in most implementations)
  • Installed via npm install or npx
  • Run as a subprocess by the AI tool
  • Given access to a defined set of capabilities (filesystem, network, processes)
  • Expected to return data to, and accept commands from, the AI tool

The typical use case is “give the AI access to my database” or “let the AI control my git.” The server is the bridge between the AI tool’s sandbox and the external system.
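Concretely, the wiring is usually a few lines of JSON in the AI tool’s settings file. A sketch of the common shape — the exact schema and file location vary by tool, and the server name, version, and connection string here are illustrative:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@org/server-postgres@1.2.3"],
      "env": {
        "DATABASE_URL": "postgresql://mcp_reader:example@localhost:5432/app"
      }
    }
  }
}
```

Everything after this point — what the subprocess does with that connection string — is up to the package’s code.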

The risk surface: the npm package can do whatever its code does. The AI tool may try to sandbox the server, but the server runs in your environment. Sandboxes can be bypassed in various ways, and even when they hold, the server’s legitimate access (to your database, your git, your filesystem) is plenty for an attacker.

The threat model

The threats:

Malicious package author. Someone publishes a useful-looking MCP server, gains adoption, then ships a malicious update. Your AI tool auto-updates; suddenly the server is exfiltrating your database during normal queries.

Compromised package author. A legitimate author’s npm account gets compromised. The attacker pushes a malicious version. Same outcome.

Malicious dependency. An MCP server pulls in a bad transitive dependency — a typosquatted name, or a public package shadowing an internal one (dependency confusion). The dependency’s code runs whenever the server runs.

Compromised infrastructure. npm’s infrastructure (or PyPI, or whatever language ecosystem) is the trust root. Past compromises have happened; future ones are likely.

These aren’t hypotheticals. The npm and PyPI ecosystems have had repeated supply chain incidents. The kind of access an MCP server has — your database, your git credentials, potentially your shell — makes them a higher-value target than most npm packages.

Why this isn’t getting attention

A few reasons:

MCP is new. The protocol launched in late 2024. The ecosystem is still small enough that the obvious risks haven’t been demonstrated by real incidents. People are excited; the security questions feel like buzzkill.

The trust chain is implicit. When you install an MCP server from Cline’s marketplace, you’re trusting Cline’s review process, the package author, and the package’s dependencies. None of these are independently audited; the trust is granted in bulk.

The capabilities are powerful by design. MCP servers exist to give the AI tool access to things the tool can’t access by default. Restricting the access negates the value. You can’t really sandbox an MCP server while preserving its usefulness.

No precedent for AI tooling supply chain. Other privileged software (kernel modules, browser extensions, IDE plugins) has decades of history about how to think about these risks. MCP is too new for that.

What good practice would look like

For comparison: browser extension supply chain has matured over 15 years. Reasonable practices include:

  • Centralized review by the platform vendor
  • Reproducible builds
  • Code signing
  • Strict permission models with user consent at install time
  • Permission revocation tools
  • Mandatory disclosure of changes that expand permissions
  • Reputation/trust scoring based on age, downloads, audit history

MCP doesn’t have most of these. The closest is Cline’s marketplace review, which is a one-time check rather than an ongoing oversight model.

For a security-conscious team, the practices I’d want:

Pin specific versions. Don’t run npx -y @org/server-postgres; run npx -y @org/server-postgres@1.2.3. New versions don’t auto-install.
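A cheap guardrail is to refuse any launch command that doesn’t end in an exact version. A minimal sketch, assuming launch commands follow the npx pattern above (the command string is a made-up example):

```shell
#!/bin/sh
# Reject MCP launch commands that don't pin an exact x.y.z version.
cmd='npx -y @org/server-postgres@1.2.3'

if printf '%s\n' "$cmd" | grep -Eq '@[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "pinned: ok"
else
  echo "unpinned: refusing to run" >&2
  exit 1
fi
```

A wrapper like this in front of your MCP config catches the `@latest` habit before it becomes an auto-update channel.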

Audit on update. When a server has a new version, read the changelog. Diff the source if it’s a critical server. Update only after review.

Run in containers. MCP servers in Docker containers with restricted access. The container limits damage if the server is compromised.
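One way to do this, sketched as a Compose file — the image name and network layout are hypothetical, and the right restrictions depend on what the server legitimately needs:

```yaml
services:
  mcp-postgres:
    image: org/mcp-server-postgres:1.2.3   # hypothetical image, pinned tag
    read_only: true            # no writes to the container filesystem
    cap_drop: [ALL]            # no extra Linux capabilities
    security_opt:
      - no-new-privileges:true
    networks: [db-only]        # can reach the database, nothing else
    environment:
      DATABASE_URL: postgresql://mcp_reader:example@db:5432/app

networks:
  db-only:
    internal: true             # no outbound internet from this network
```

The `internal` network is the interesting line: a compromised server with no route to the internet has a much harder time exfiltrating anything.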

Use minimal-capability servers. A Postgres MCP that only needs SELECT access should run as a database user with only SELECT permission. Don’t grant broader permissions for convenience.
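For Postgres, that means a dedicated role with nothing beyond SELECT. A sketch — the role, database, and schema names are illustrative:

```sql
-- Dedicated, minimally-privileged role for the MCP server.
CREATE ROLE mcp_reader LOGIN PASSWORD 'use-a-generated-secret';
GRANT CONNECT ON DATABASE app TO mcp_reader;
GRANT USAGE ON SCHEMA public TO mcp_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_reader;

-- Cover tables created after this point, too.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO mcp_reader;
```

If the server is ever compromised, the blast radius is read access to one database, not write access to everything.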

Rotate credentials regularly. Whatever credentials the MCP server has access to (database passwords, API tokens), rotate them. A compromised server only has whatever credentials were live at the time; rotation limits the window in which they’re useful.
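Rotation can be mostly mechanical: generate a secret, apply it to the database, update the server’s config. A sketch of the generation step — the ALTER ROLE step is shown as a comment because it needs a live database, and the role name is illustrative:

```shell
#!/bin/sh
# Generate a fresh random secret for the MCP server's database role.
new_pw=$(head -c 24 /dev/urandom | base64)

# Apply it (requires a live database; role name is illustrative):
#   psql -c "ALTER ROLE mcp_reader PASSWORD '$new_pw'"
# ...then update the MCP settings file and restart the server.

echo "generated ${#new_pw}-char secret"
```

Put this on a schedule and the stolen-credential scenario decays on its own.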

Monitor for anomalies. If your MCP server suddenly starts making outbound network calls to unexpected destinations, that’s a signal. Logging the network activity helps.
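Even crude monitoring helps. Assuming you capture the server’s outbound connections to a log (for example, from the container’s network layer), flagging anything outside an allowlist is a one-liner — the destinations here are made up:

```shell
#!/bin/sh
# Flag outbound destinations that aren't on the expected allowlist.
allowlist='^(db\.internal|api\.github\.com):'

# Stand-in for a real connection log captured from the server's network.
conn_log='db.internal:5432
api.github.com:443
evil.example.net:443'

printf '%s\n' "$conn_log" | grep -Ev "$allowlist"
# → evil.example.net:443
```

Anything this prints is either a config gap or the signal you were hoping never to see.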

What tool vendors should do

Cline, Windsurf, and others publishing MCP marketplaces should:

Sign packages. Cryptographic signing prevents tampering between the marketplace and the user.

Verify reproducible builds. When the marketplace signs a package, it should be able to verify that the published package was actually built from the claimed source.

Continuous scanning. Once a package is in the marketplace, scan new versions for known-bad patterns (data exfiltration shapes, suspicious network calls).

Granular permissions. Cline 3.5’s per-server permissions are a step. More granularity is needed — file-by-file access, domain-by-domain network allowlists, etc.

Audit logs the user can review. What did the MCP server do during my session? Right now this is buried in tool logs; it should be a first-class view.

Mandatory disclosure for permission expansions. If a server update expands the permissions it needs, the user should approve before the update runs.

These are reasonable expectations for software with this much access. The tool vendors are well-positioned to implement them; the question is whether users will demand it before an incident forces the issue.

What users should do today

For now, the practical advice:

Be selective. Install MCP servers only when you actually need them. Don’t install “just in case.”

Read the source. For servers you install, read the code. MCP servers are usually small (a few hundred lines). The investment is real but bounded.

Pin versions. Always specify a version, never @latest. New versions deserve review.

Limit credentials. Use credentials that are scoped narrowly to what the server needs. No “and while we’re at it, here’s admin access” shortcuts.

Watch for unusual behavior. Outbound network calls from an MCP server you didn’t expect. New permissions requested on update. A change of maintainer on npm.

The incident I expect

The way I expect this to play out: a popular MCP server (Postgres, GitHub, or similar) gets compromised in a supply chain attack. Affected users have their database contents exfiltrated, or their GitHub tokens stolen, or their shell history captured.

The aftermath includes:

  • Tool vendors scrambling to add tighter permissions
  • Users distrusting MCP servers for several months
  • A security audit pass over the popular servers
  • Better tooling for the supply chain risks

This is how every other privileged software ecosystem has matured. The lesson is rarely learned without an incident.

What I’d hope happens instead

The optimistic version: tool vendors and the MCP community get serious about supply chain hygiene before an incident happens. Signing, scanning, granular permissions, audit logs — these are doable. The first vendor to ship them gets a competitive advantage in security-sensitive markets.

The pessimistic version: the ecosystem grows, the incident happens, the response is reactive and patchy.

Both are plausible. The question is whether the MCP-using community has the foresight to demand security maturity before the cost of not having it becomes obvious.

For now, treat MCP servers like any other privileged software. Don’t extend trust based on the marketing. Audit, pin, scope, and monitor. The security posture you’d apply to a kernel module or a browser extension is the right starting point.