Tinker AI

Cline for Terraform and infrastructure-as-code: high leverage with one big caveat

Published 2026-03-29 by Owner

Terraform is the work I most often pair with Cline. The reasons are structural: Terraform is verbose, repetitive, well-documented in HashiCorp’s registry, and full of patterns the AI has seen many times. Generating the boilerplate for a new resource, an IAM policy, or a module wiring is exactly the kind of mechanical-but-tedious task AI excels at.

The caveat is that infrastructure mistakes have unusual blast radius. A wrong line in a Terraform config can delete a production database, expose a service to the public internet, or run up a four-figure cloud bill in an hour. The AI doesn’t know this; it generates plausible HCL the same way it generates plausible Python.

This is the workflow that lets you get the productivity gains without the production incidents.

The setup

Cline configuration in .cline/settings.json:

{
  "autoApprovedTools": [],
  "alwaysAllowReadOnly": true,
  "alwaysAllowExecute": false,
  "shellCommandTimeoutSeconds": 60
}

Specifically:

  • autoApprovedTools: [] — every shell command requires your explicit approval. Not negotiable for Terraform work.
  • alwaysAllowReadOnly: true — Cline can read files freely.
  • alwaysAllowExecute: false — Cline cannot run shell commands without approval.

This is more conservative than my normal Cline settings. For Terraform, the conservatism is the point.

A workflow that works

The pattern I’ve internalized for Terraform with Cline:

Phase 1: Generate

Ask Cline to write the Terraform changes. Be specific:

Add a Postgres RDS instance for the staging environment. It should match the 
production instance's configuration except: db.t4g.medium instead of db.t4g.large, 
no multi-AZ, no read replicas. The instance goes in the existing staging VPC 
(vpc_id is in modules/network/staging/outputs.tf as vpc_id).

Update the security groups so the staging app can connect to the new RDS 
instance on port 5432.

Don't apply anything. Just write the .tf changes.

The “don’t apply anything” is critical. With alwaysAllowExecute: false, Cline can’t run terraform commands without your approval anyway, but stating the boundary in the prompt makes the intent unambiguous.

Cline produces the diff. Review it.
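For a request like the one above, the generated change might look roughly like this — a sketch, with resource names, engine version, and storage size as illustrative assumptions, not what Cline will literally emit:

```hcl
# staging/rds.tf — sketch of a generated change (names and values illustrative)
resource "aws_db_instance" "staging_postgres" {
  identifier             = "staging-postgres"
  engine                 = "postgres"
  engine_version         = "16.3"
  instance_class         = "db.t4g.medium"   # smaller than production's db.t4g.large
  allocated_storage      = 50
  multi_az               = false             # explicitly disabled, per the prompt
  db_subnet_group_name   = aws_db_subnet_group.staging.name
  vpc_security_group_ids = [aws_security_group.staging_rds.id]
  skip_final_snapshot    = false             # keep a snapshot if this is ever destroyed
}
```

The review pass is exactly the fields above: instance class, multi-AZ, snapshot behavior, and which security groups it joins.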

Phase 2: Plan

Run terraform plan yourself. Don’t have Cline run it.

cd terraform/staging
terraform plan -out=staging.tfplan

Read the plan output. Look for:

  • Any resources marked # forces replacement — this is the most dangerous flag in Terraform. Replacing a database means destroying it and creating a new one. If a forced replacement appears on a stateful, data-bearing resource, stop.
  • Any unexpected destroy operations — Cline sometimes generates code that removes a resource you didn’t ask to remove. Especially common with for_each changes that shift indexing.
  • Any resources you didn’t expect to see — sometimes Cline pulls in dependencies or modules that have surprising effects.

If anything in the plan is unexpected, do not apply. Go back to Cline with the specific concern.

Phase 3: Apply (carefully)

If the plan looks right:

terraform apply staging.tfplan

Apply the saved plan, not a fresh one. Saved plans guarantee that what you applied is what you reviewed. A fresh terraform apply could pick up changes that occurred between plan and apply.
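A saved plan can also be re-read before applying, so the review and the apply reference the same artifact:

```shell
cd terraform/staging
terraform plan -out=staging.tfplan   # write the plan to a file
terraform show staging.tfplan        # human-readable re-review of exactly that plan
terraform apply staging.tfplan       # apply only what was reviewed
```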

Phase 4: Verify

After apply, ask Cline to help you verify the result:

Verify the new RDS instance is running and accessible from the staging app. Check:
- The instance status via aws rds describe-db-instances
- The security group rules via aws ec2 describe-security-groups
- That the staging app can connect (you can suggest a kubectl exec into the app pod 
  with a psql command, but I'll run it myself)

Cline produces verification commands. You run them. The asymmetry — Cline suggests, you execute — is the safety belt.
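The suggested commands tend to look like the following — flags are real AWS CLI, but the identifier and security group ID are placeholders:

```shell
# Instance status — expect "available"
aws rds describe-db-instances \
  --db-instance-identifier staging-postgres \
  --query 'DBInstances[0].DBInstanceStatus'

# Ingress rules on the RDS security group — expect port 5432 from the app's SG
aws ec2 describe-security-groups \
  --group-ids sg-0123example \
  --query 'SecurityGroups[0].IpPermissions'
```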

What Cline is good at

Where this workflow saves real time:

Boilerplate-heavy resources. AWS IAM policies, security group rules, RDS configurations — these have a lot of mandatory fields and Cline knows them all. Generating these by hand from docs is 30+ minutes; with Cline, it’s 5 minutes plus review.

Module wiring. When you have an existing module and want to add another instantiation with slightly different parameters, Cline produces the wiring cleanly.

Translation between providers. “Convert this AWS S3 bucket configuration to GCP Storage” — Cline handles this well, with the usual caveat of reading carefully.

Documentation generation. Writing comments and READMEs for terraform modules. Boring, valuable, AI-friendly.

Refactoring HCL. Splitting a monolithic .tf file into modules, renaming resources, restructuring for readability. Mechanical work that’s tedious by hand.
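One detail worth knowing for the renaming case: since Terraform 1.1, a moved block tells Terraform that a resource changed address rather than being destroyed and recreated, so the refactor doesn’t show up as a destroy in the plan. A minimal sketch (addresses illustrative):

```hcl
# Rename without destroy/recreate: declare that the old address moved.
moved {
  from = aws_db_instance.staging
  to   = module.database.aws_db_instance.staging_postgres
}
```

If Cline refactors resources into modules without emitting moved blocks, the plan will show destroys. That’s the thing to catch in review.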

What Cline is bad at

Where the workflow needs more care:

Understanding state. Terraform’s state lives outside the code, and the AI can’t see it — it doesn’t know what’s actually deployed. It can’t distinguish “I should add this resource” from “this resource exists and I’m reconfiguring it.” The plan output is your source of truth, not Cline’s analysis.

Provider quirks. Each provider has surprises. AWS’s eventual consistency on IAM changes. GCP’s IAM hierarchy quirks. Azure’s resource group lifecycle. Cline often produces code that’s “correct” but trips on these quirks.

Dependency chains. Terraform’s implicit dependency graph isn’t always obvious from HCL. Cline sometimes generates code that has subtle dependency issues — a security group rule that references a security group that’s being created in the same plan, with conflicting timing.
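The classic instance of this is two security groups that reference each other: inline ingress blocks create a dependency cycle, while standalone rule resources let both groups be created first and wired afterward. A sketch (names illustrative):

```hcl
# Inline "ingress" blocks referencing each other's group would create a cycle.
# Standalone rule resources break it: groups first, rules second.
resource "aws_security_group" "app" {
  name   = "staging-app"
  vpc_id = var.vpc_id
}

resource "aws_security_group" "rds" {
  name   = "staging-rds"
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "app_to_rds" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = aws_security_group.app.id
}
```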

Production-vs-staging differences. Cline doesn’t know which workspace it’s targeting unless you say. “Add this RDS instance” is dangerous unless you specify staging.

High-risk patterns to watch

Three patterns where I’ve seen Cline produce dangerous output:

1. The destroy-and-recreate

Cline generates code that, when planned, shows a destroy of a resource followed by a create. For most resources this is fine. For stateful, data-bearing resources (databases, persistent volumes), it’s data destruction.

Always read the plan for destroy markers (-/+ resource headers, # forces replacement annotations) on stateful resources.

2. The IAM expansion

Cline generates an IAM policy that’s broader than the user requested because the AI has seen many examples of broad policies in training data. “Allow this lambda to write to S3” can become “Allow this lambda Action: s3:* on Resource: *” if you’re not careful.

Always read IAM policies line by line. Verify each Action is necessary, each Resource is scoped.
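Concretely, the gap between what was asked and what a careless policy grants — bucket name is a placeholder:

```hcl
# Requested: "allow this lambda to write to S3" — and nothing more.
data "aws_iam_policy_document" "lambda_s3_write" {
  statement {
    actions   = ["s3:PutObject"]                  # not "s3:*"
    resources = ["arn:aws:s3:::my-app-uploads/*"] # not "*"
  }
}
```

One action, one bucket prefix. If the generated policy has more than that, each extra line needs a justification.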

3. The public-by-default

Cline sometimes generates security groups, S3 buckets, or storage configurations that are public by default. Sometimes it’s because the user’s prompt was ambiguous; sometimes it’s just training data bias.

Always check: is this thing reachable from the public internet? Should it be?
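For S3 specifically, the check can be made structural rather than manual: a public access block on the bucket makes “public by default” impossible regardless of what ACLs or bucket policies say. Bucket name is a placeholder:

```hcl
resource "aws_s3_bucket_public_access_block" "uploads" {
  bucket                  = aws_s3_bucket.uploads.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```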

The discipline that survives

The single discipline that prevents most of these issues: always plan, always read the plan, never apply on autopilot.

Cline can be told never to run terraform apply (via alwaysAllowExecute: false), but the discipline of reading plans is yours. The AI doesn’t develop a feel for “this plan looks suspicious.” That feeling comes from running enough Terraform that you know what your normal plans look like, and noticing when they don’t.

For new operators, reading plans carefully takes more time than the AI saves. For experienced operators, it’s a five-minute scan that prevents incidents.

What I’d never use Cline for

Even with the careful workflow:

Disaster recovery actions. Restoring from backup, switching to a DR region, rotating credentials in an emergency. The mental load of an incident is high; adding “let me explain to Cline what’s happening” makes it worse. Do these by hand.

One-off terraform imports. Importing existing infrastructure into Terraform is sensitive — the wrong import can disconnect the resource from its real state. Read HashiCorp’s docs, do it carefully, don’t delegate.

Cross-account changes. When the action affects multiple accounts (organization-level IAM, cross-account VPC peering), the blast radius is larger and the verification path is longer. Slow down.

Production database operations. Migrations, snapshots, parameter group changes. The cost of a wrong action here is data loss. AI is for the boilerplate, not the operations.

What’s actually different about IaC

The reason this guide reads more cautiously than the equivalent for application code:

In application code, an AI mistake produces a bug. The bug is caught in tests, in QA, in code review, or by the user. There’s a feedback loop with multiple safety nets.

In infrastructure code, an AI mistake produces a state change. The change has consequences immediately. The feedback loop is shorter and the safety nets are fewer.

This isn’t a reason to avoid AI for infrastructure work. It’s a reason to use it differently. The discipline above is what differentiates “AI saves me 50% of my Terraform time” from “AI plus terraform = incident review on Tuesday.”

The leverage is real. The leverage requires the discipline. Both, not either.