
Cline with Supabase MCP server: schema-aware agent development

Published 2026-01-30 by Owner

Cline’s MCP support means it can integrate with Supabase via the Supabase MCP server. The agent gets schema awareness, query introspection, and (with care) the ability to test queries against your dev database. For Supabase-heavy projects, this is a meaningful productivity unlock.

Setup

Install the Supabase MCP server (community package, search for @supabase-community/mcp-server or similar — the canonical name shifts).

In Cline’s MCP config:

{
  "mcpServers": {
    "supabase-dev": {
      "command": "npx",
      "args": ["-y", "@supabase-community/mcp-server"],
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_SERVICE_ROLE_KEY": "${env:SUPABASE_SERVICE_ROLE_KEY}"
      }
    }
  }
}

Use the dev project’s service role key, not production. The service role key bypasses RLS — appropriate for development tooling, dangerous for production access.

What the server provides

The Supabase MCP exposes:

  • Schema browsing (tables, columns, types, indexes, foreign keys)
  • Read queries (SELECT against your tables)
  • RLS policy listing (read-only)
  • Function listing (Postgres functions, edge functions)
  • Migration history (when configured to read it)

Write operations are restricted by default. The server can be configured to allow writes; I keep them disabled.
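Schema information comes back to the agent as structured tool results. A minimal sketch of what a table description might look like — the shape and field names here are my assumption for illustration, not the server's documented contract:

```typescript
// Hypothetical shape of a schema-browsing tool result; field names are
// illustrative, not the Supabase MCP server's documented format.
interface ColumnInfo {
  name: string;
  type: string; // Postgres type, e.g. "uuid", "numeric"
  nullable: boolean;
}

interface TableInfo {
  name: string;
  columns: ColumnInfo[];
  foreignKeys: { column: string; references: string }[];
}

// Example of what the agent might see for an orders table.
const orders: TableInfo = {
  name: "orders",
  columns: [
    { name: "id", type: "uuid", nullable: false },
    { name: "user_id", type: "uuid", nullable: false },
    { name: "created_at", type: "timestamptz", nullable: false },
  ],
  foreignKeys: [{ column: "user_id", references: "auth.users.id" }],
};

console.log(orders.columns.map((c) => c.name).join(", "));
// → id, user_id, created_at
```

The point is that the agent sees real column names and relationships rather than guessing them from convention.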

A specific workflow

When I ask Cline to add a feature involving database access, the flow typically looks like this:

> add an endpoint that returns the user's most recent 10 orders, with
> total amount and item count for each order. Use TanStack Query on
> the client.

With the Supabase MCP server connected, Cline:

  1. Queries the schema to find the orders table
  2. Discovers the actual column names (which I might have guessed wrong)
  3. Sees the relationship to order_items table
  4. Generates a query that joins correctly
  5. Generates the TanStack Query hook for the client side
  6. Generates the endpoint correctly using the actual column names

Without the MCP, Cline would either ask me for the schema or guess column names (often wrongly). With the MCP, the code matches the actual database from the first attempt.
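The heart of step 4 is collapsing the orders/order_items join into per-order totals. A pure-TypeScript sketch of that aggregation — all table and column names (`order_id`, `price`, `quantity`) are assumed for illustration, and in practice the MCP-informed query usually pushes this aggregation into SQL:

```typescript
// Flattened row as it might come back from joining orders to
// order_items; column names here are assumed, not from a real schema.
interface JoinedRow {
  order_id: string;
  created_at: string; // ISO timestamp
  price: number;      // unit price of one line item
  quantity: number;
}

interface OrderSummary {
  orderId: string;
  createdAt: string;
  totalAmount: number;
  itemCount: number;
}

// Collapse joined rows into per-order summaries, newest first, capped at 10.
function summarizeOrders(rows: JoinedRow[], limit = 10): OrderSummary[] {
  const byOrder = new Map<string, OrderSummary>();
  for (const row of rows) {
    const existing = byOrder.get(row.order_id) ?? {
      orderId: row.order_id,
      createdAt: row.created_at,
      totalAmount: 0,
      itemCount: 0,
    };
    existing.totalAmount += row.price * row.quantity;
    existing.itemCount += row.quantity;
    byOrder.set(row.order_id, existing);
  }
  return [...byOrder.values()]
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt))
    .slice(0, limit);
}
```

On the client, Cline would typically wrap the fetch for this endpoint in a TanStack Query `useQuery` hook; the server side is where the schema awareness pays off.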

A specific failure I’ve seen

The MCP server has a way to fail subtly. If your dev database schema diverges from production, Cline will write code matching the dev schema. When deployed against production, the code may fail.

Mitigation: keep dev and production schema in sync. Use migrations rigorously. Test against a database that mirrors production’s schema before merging.
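A cheap guardrail along these lines is a schema diff in CI. A minimal sketch, assuming you can dump each environment's schema into a table-to-columns map (e.g. from a query against `information_schema.columns`):

```typescript
// Map of table name -> column names, e.g. built from a query against
// information_schema.columns in each environment.
type SchemaMap = Record<string, string[]>;

// Report tables/columns present in dev but missing in prod — the drift
// that makes dev-schema-informed code fail on deploy.
function schemaDrift(dev: SchemaMap, prod: SchemaMap): string[] {
  const drift: string[] = [];
  for (const [table, cols] of Object.entries(dev)) {
    if (!(table in prod)) {
      drift.push(`table missing in prod: ${table}`);
      continue;
    }
    const prodCols = new Set(prod[table]);
    for (const col of cols) {
      if (!prodCols.has(col)) {
        drift.push(`column missing in prod: ${table}.${col}`);
      }
    }
  }
  return drift;
}
```

Run it before merging; a non-empty result means the code Cline wrote against dev may not match what production will see.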

This isn’t an MCP-specific problem; it’s a “your environments are inconsistent” problem. The MCP just makes it visible.

Permissions setup

In Cline 3.5+, you can scope the MCP server’s permissions:

  • Network: allow only the Supabase project’s domain
  • Filesystem: deny (the server doesn’t need disk access)
  • Process: deny (the server doesn’t spawn subprocesses)

These restrictions limit what could go wrong if the server has a vulnerability or if Cline is asked to do something unexpected.

Reading vs writing

The Supabase MCP can be configured to allow write queries (INSERT, UPDATE, DELETE). I don’t enable this. The reasoning:

The agent’s ability to write to the database during code generation creates risk. Even with all the right intentions, “let me just check by inserting a test row” can have side effects you don’t see.

Better pattern: Cline reads the schema and example data; you (the human) execute writes manually if needed. The MCP for read-only access is enough for code generation; writes happen through your normal application code.

Cost implications

The MCP server adds tokens to each Cline turn. Schema browsing and example queries can add 1-3k tokens. Across a session, this is meaningful but bounded.

For a typical 30-minute Cline session on a Supabase-heavy project, the MCP overhead is around $0.30-0.50 in tokens. The productivity gain from correct schemas usually justifies this.
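The arithmetic behind that estimate is straightforward. Every number below is an illustrative assumption (actual pricing depends on your model and provider), chosen to show the order of magnitude:

```typescript
// Rough MCP overhead estimate; all figures are assumptions for
// illustration, not quoted prices.
const overheadTokensPerTurn = 2_000; // mid-range of the 1-3k figure
const turnsPerSession = 40;          // a busy 30-minute session
const inputPricePerToken = 5e-6;     // e.g. $5 per million input tokens

const sessionCost =
  overheadTokensPerTurn * turnsPerSession * inputPricePerToken;
console.log(`~$${sessionCost.toFixed(2)} of MCP overhead per session`);
// → ~$0.40 of MCP overhead per session
```

Halve the turn count or the per-token price and you land at the low end of the range; a chattier session pushes toward the high end.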

If you find MCP costs spiraling (the agent goes exploring), tighten your .clinerules:

Use the Supabase MCP for schema introspection only. Do not query
sample data unless the task specifically requires understanding actual
data shapes. Do not browse the full schema; query only tables relevant
to the current task.

This bounds the agent’s exploration without preventing the useful introspection.

Comparison: with vs. without MCP

For Supabase projects, my workflow before MCP support:

  • Either I’d paste the schema into chat manually
  • Or Cline would guess column names from convention
  • About 1 in 4 generated queries had wrong column references
  • I’d correct, regenerate, sometimes correct again

After MCP:

  • The schema loads automatically
  • Generated queries reference real columns
  • About 1 in 25 has issues (still occasional, but rare)
  • Correction cycles are faster

For a 4-week project, the difference is several hours of saved time on database-related code. Plus the codebase is more consistently correct (less likely to have a query with a hallucinated column that compiles but fails at runtime).

Edge functions

Supabase edge functions are Deno + TypeScript. They run separately from your main app. Cline’s edge function support via MCP is partial — it can list edge functions but not deeply analyze them.

For edge function development, I treat MCP as schema reference and let Cline focus on the function code itself. The schema awareness still helps for the database queries within edge functions.

What I’d recommend

For Supabase projects:

  1. Set up the Supabase MCP server with your dev project
  2. Use the service role key (in dev only)
  3. Enable read-only mode
  4. Add .clinerules guidance about responsible use of the MCP
  5. Update your dev schema to match production before doing significant work

The setup is 10-15 minutes. The productivity gain is real for the duration of the project.

For non-Supabase projects, the equivalent pattern works with whichever MCP server fits your stack — Postgres MCP for raw Postgres, MySQL MCP for MySQL, etc. The principle is the same: give the agent schema awareness through a controlled integration.

A note on data sensitivity

The MCP server reads from your database. Whatever the agent reads becomes part of its context, which goes to the model provider’s API. For development databases with synthetic or anonymized data, this is fine. For databases with real customer data (even in dev), be more careful.

The right pattern: use a sanitized dev database. Generated test data, anonymized fixtures, or empty schema are all fine. Real customer data going through the AI tool’s pipeline is a privacy concern.

This isn’t unique to Supabase MCP — it applies to any MCP that reads sensitive data. The general rule: don’t connect AI tools to databases containing real customer information unless your contract with the AI provider specifically supports it and your business permits it.