Handling secrets safely with Cline: never letting the model see what shouldn't leave your machine
Published 2026-03-07 by Owner
Cline is an autonomous agent. It reads files when it thinks it needs context. Without configuration, “files it needs context for” includes .env, secrets.yaml, AWS credential files, and anything else that lives in your repo or project directory. Once those bytes leave your machine via the model API, they’re not yours anymore.
Four patterns prevent secret leakage: three for everyday use and a fourth for occasional high-sensitivity work. None is perfect; combined they're solid.
Pattern 1: .clinerules with explicit exclusions
The first defense is telling Cline what not to read. In .clinerules:
```
# Do not read these files. They contain secrets that must not leave this machine.
- .env
- .env.local
- .env.production
- .env.development
- secrets.yaml
- credentials.json
- *.pem
- *.key
- aws-credentials/
- .aws/
- config/local.yml

If you need to know what env variables are used in the project, look at
.env.example or env.d.ts. These contain the variable names but not values.
If you need to know what secrets configuration looks like, do not read the
actual files; ask the user.
```
This goes at the project root. Cline reads .clinerules on session start and respects the exclusions.
The model is generally good at following these instructions. Not perfect — about 1 in 30 sessions, the model attempts to read an excluded file. Cline’s tool implementation rejects the read attempt before sending the file content to the model. So the protection has two layers: the rule (model behavior) and the tool implementation (system enforcement).
Pattern 2: keep secrets out of the project directory
The cleaner pattern: don’t have secrets in the project directory at all.
Instead of .env in the repo root, use:
- A password manager (1Password CLI, Bitwarden CLI) for personal secrets
- A vault (HashiCorp Vault, AWS Secrets Manager) for shared secrets
- Environment variables sourced from your shell config
Your shell loads the secrets into env vars when you cd into the project (using direnv or similar). Cline runs in that shell, so the env vars are available to commands the agent runs (npm start, pytest, etc.). But the secret values aren’t in any file Cline can read.
Setup with direnv:
```shell
# .envrc (committed to the repo; direnv evaluates it on cd into the project)
export DATABASE_URL="$(op read 'op://Project/DB/url')"
export STRIPE_SECRET_KEY="$(op read 'op://Project/Stripe/secret')"
```
The secrets live in 1Password. The .envrc file references them but doesn’t contain them. Cline can read .envrc (it’s not sensitive) but learns nothing about the actual secrets.
This is more setup than keeping a .env file but removes a category of risk entirely.
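The setup itself is a few commands. A sketch, assuming the 1Password CLI (`op`) is installed and the vault item paths match your layout (the paths and project directory below are illustrative):

```shell
# Sketch: wiring direnv + 1Password. direnv evaluates .envrc on cd
# and exports the results into your shell environment.
mkdir -p /tmp/demo-project && cd /tmp/demo-project

cat > .envrc <<'EOF'
# Values are fetched at load time; no secret is ever written to disk.
export DATABASE_URL="$(op read 'op://Project/DB/url')"
export STRIPE_SECRET_KEY="$(op read 'op://Project/Stripe/secret')"
EOF

# direnv refuses to evaluate any .envrc until you approve it explicitly:
#   direnv allow .
grep -c 'op read' .envrc    # 2: the file references secrets, never contains them
```

The `direnv allow` approval step is itself a small safety feature: a cloned repo can't silently execute an `.envrc` you haven't reviewed.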
Pattern 3: review what Cline actually read
Cline shows you which files it read in each session. Review this log periodically.
The session log includes:
- Files read with the `read_file` tool
- Files searched via grep
- Files listed in the project tree
- File contents included in the agent's context
Check this for surprises. If Cline read something it shouldn’t have, you want to know now, not after a leak.
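One way to make the audit concrete is to compare the session's read list against an inventory of secret-bearing files. A sketch (`list_secret_files` is a hypothetical helper; the filename patterns mirror the `.clinerules` exclusions above and should be extended for your project):

```shell
# List secret-bearing files under a directory. Anything printed here that
# also appears in the session's read list is a problem.
list_secret_files() {
  find "$1" -path '*/node_modules' -prune -o \
    \( -name '.env*' -o -name '*.pem' -o -name '*.key' \
       -o -name 'secrets.yaml' -o -name 'credentials.json' \) -print
}
```

Run `list_secret_files .` after a session and check whether any line shows up among the files Cline read.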
For sensitive projects, consider running Cline in a “limited” mode where the project directory is a clean checkout without local config files. The agent does its work; you copy the diffs to your real working directory.
Pattern 4 (occasional): use a sandbox
For especially sensitive work, run Cline in a sandbox:
- A separate user account with no access to your home directory
- A Docker container with the project mounted but no host access
- A VM with limited filesystem visibility
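For the container option, one possible shape of the invocation (the image, mount, and flags are illustrative, not a vetted hardening recipe; shown as a string so you can adapt it before running):

```shell
# Illustrative container invocation: the agent sees only the mounted
# project at /work, not your home directory or credential files.
SANDBOX_CMD='docker run --rm -it -v "$PWD":/work -w /work node:20-bookworm bash'
echo "$SANDBOX_CMD"
```

Mounting only the project directory is the whole point: `~/.aws`, shell history, and the rest of your home directory simply don't exist inside the container.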
This is heavy for everyday work but appropriate for code that touches highly sensitive systems (payments, healthcare, defense).
The tradeoff: the sandbox makes Cline less useful (the agent can’t run your real test suite, can’t access your databases) but limits the damage if something goes wrong.
What the model providers see
Beyond your own secrets, think about what Anthropic, OpenAI, or Google sees when Cline sends data:
- The model provider sees every prompt and completion
- They claim not to train on your data (verify this in your contract)
- They retain logs for some period (varies by provider, typically days)
- They have employees who could in principle access logs (most have policies and audit logs around this)
For most software development, this is acceptable. For highly regulated industries, the data flow may not pass compliance review. Use BYOK with on-prem models in those cases.
A specific incident I learned from
Early in my Cline use, I asked it to help debug a database connection issue. Cline read .env to understand the connection string. The string included a production database URL with credentials.
The credentials were rotated within a week (we rotate quarterly anyway). But the lesson: I’d given Cline (and Anthropic’s logs) my production credentials without thinking about it.
After this:
- I added `.env*` to the `.clinerules` exclusions
- I switched to direnv + 1Password for personal secrets
- I added an audit step after each significant Cline session: review what was read
The audit step is the discipline that’s hardest to maintain. It’s also the most important. The exclusions prevent obvious mistakes; the audit catches the non-obvious ones.
What Cline should add
A few things I’d want from the tool:
A “no secrets” mode. A flag that aggressively prevents reading anything that looks like a secret (matches a regex of common patterns). False positives would be acceptable for a security-conscious mode.
A diff highlighter for “this looks like a secret”. When a file Cline is about to read or include in context contains values matching common secret patterns (long random strings, JWT-shaped tokens, PEM headers), warn before sending.
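Until something like that ships, you can approximate the check yourself. A rough sketch (`check_for_secrets` is a hypothetical helper; the regexes are illustrative and will false-positive, which is acceptable for a security-conscious mode):

```shell
# Flag secret-shaped content before it leaves the machine: PEM headers,
# JWT-shaped tokens, and long unbroken high-entropy-looking strings.
check_for_secrets() {
  grep -nE \
    -e '-----BEGIN [A-Z ]*PRIVATE KEY-----' \
    -e 'eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+' \
    -e '[A-Za-z0-9+/=_-]{40,}' \
    "$1"
}

# Exit status 0 means something secret-shaped was found:
printf 'key=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.c2lnbmF0dXJlLXBhcnQ\n' > /tmp/sample
check_for_secrets /tmp/sample && echo "warn: possible secret"
```

A wrapper like this could gate a pre-commit hook or a manual review of files before pasting them into any agent's context.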
Clear audit logs. A daily summary of what files were read, what data left the machine, what tasks were performed. Currently this is buried in session history; a dedicated audit view would help.
These are reasonable feature requests. Until they ship, the manual patterns above are the right level of paranoia.
The principle
Treat Cline like a contractor you trust but who occasionally makes mistakes. Don’t give the contractor access to things the contractor doesn’t need. The cost of access scoping is low; the cost of a leak is high.
For most users, patterns 1 and 2 are enough. Pattern 3 catches the occasional misstep. Pattern 4 is for the rare cases where the work is sensitive enough that defense in depth is warranted.
The default — running Cline with no configuration in a project that has secrets in .env — is a small, ongoing risk. It hasn’t bitten me yet beyond the one production credential exposure. Don’t wait for it to bite you.