Cline 3.7 dropped this week with a long-awaited feature: a shared prompt library. Engineers can now save common prompts as templates with parameters, share them across the team, and invoke them quickly.
For teams standardizing on common AI workflows, this is a meaningful improvement.
What it does
Three pieces:
Personal prompt library. Save your own commonly-used prompts. Invoke with a slash command (e.g., /migrate-component runs the saved prompt for migrating a component pattern).
Team prompt library. Admins publish prompts to a shared registry. All team members get them. Centralizes the team’s accumulated knowledge about effective prompting.
Parameters. Prompts can take parameters: /migrate-component <ComponentName> substitutes the component name into the prompt, making one template reusable across many similar tasks.
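The parameter mechanism can be sketched as plain string substitution. This is an illustration of the concept, not Cline's actual implementation; the `expandPrompt` function and the `${Name}` placeholder syntax are assumptions based on the examples in this post.

```typescript
// Hypothetical sketch of slash-command parameter substitution.
// Placeholders use ${Name} syntax, matching the prompt examples below.
function expandPrompt(template: string, params: Record<string, string>): string {
  return template.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in params ? params[name] : match // leave unknown placeholders intact
  );
}

const template =
  "Migrate ${ComponentName} to the new component pattern. " +
  "Keep the public props unchanged.";

const expanded = expandPrompt(template, { ComponentName: "UserCard" });
// "Migrate UserCard to the new component pattern. Keep the public props unchanged."
```

Leaving unknown placeholders intact (rather than substituting an empty string) makes missing arguments visible in the expanded prompt instead of silently producing a broken instruction.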
A use case
For a team I work with, common tasks include:
Adding a new feature flag. The prompt is roughly: Add a feature flag named ${FlagName}. Default to off. Add it to the FeatureFlags type. Add a hook for accessing it. Update the documentation.
Previously each engineer typed this differently. With the prompt library:
/add-flag MyNewFeature
The prompt expands; Cline acts on it. Consistent results.
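For concreteness, this is the kind of TypeScript the expanded /add-flag prompt asks for. The names here (`FeatureFlags`, `useFeatureFlag`, `defaultFlags`) are hypothetical stand-ins for whatever the team's conventions actually are.

```typescript
// Hypothetical result of running /add-flag MyNewFeature.
// Type, hook, and flag names are illustrative, not from a real codebase.
interface FeatureFlags {
  myNewFeature: boolean; // added to the FeatureFlags type by the prompt
}

// Default to off, per the prompt's instructions.
const defaultFlags: FeatureFlags = {
  myNewFeature: false,
};

// Hook-style accessor; in a real React codebase this would read from context.
function useFeatureFlag(
  name: keyof FeatureFlags,
  flags: FeatureFlags = defaultFlags
): boolean {
  return flags[name];
}
```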
Generating a CRUD endpoint. Long prompt with conventions about routing, validation, response shapes. Saved as /crud-endpoint <ResourceName>.
Writing tests for a component. Test patterns specific to the team’s testing approach. Saved as /test-component <ComponentPath>.
These prompts are organizational knowledge captured in a reusable form.
Why it took so long
Prompt libraries seem like an obvious feature. Why wasn’t this in 1.0?
A few reasons:
Prompts evolve. Teams refine their prompts over months. Building a library before the prompts stabilize produces a library full of suboptimal templates.
Defining the right primitives. Should prompts have parameters? Conditionals? Loops? Cline kept the primitives simple — text with parameter substitution. More complex templating was rejected as scope creep.
Discovery problem. A library is only useful if engineers can find the right template. Cline’s UI for browsing the library is clean but not yet great. Search and tagging help.
The 3.7 release is the first version that’s good enough to ship without immediately wanting changes. Earlier iterations were experimental.
Comparison to alternatives
Cursor’s saved prompts. Cursor has had basic saved prompts for a while. Cline’s implementation is more team-oriented.
GitHub Copilot’s custom instructions. Roughly equivalent for personal config; team library is the differentiator.
Manual approaches. Many teams have a prompts.md in their repo. Engineers copy-paste. Works but adds friction.
For teams investing in standardized prompts, Cline 3.7’s first-class support is meaningfully better than the manual alternatives.
Best practices for prompt libraries
A few things I’ve learned about good prompt libraries:
Prompts should be specific. “Help me write a test” is too vague to template. “Write a Vitest test for ${functionName} covering happy path, error path, and at least one edge case, using our existing test patterns from ${referenceFile}” is template-worthy.
Parameters should have defaults. Make common cases easy. Override for variations.
Comment what each prompt does. When sharing across a team, others need to understand a prompt's purpose without running it.
Review and refine periodically. Prompts that worked 6 months ago may not match current code. Review the library every quarter.
Don’t over-template. Some tasks really are one-off. Forcing them into templates produces awkward parameters.
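Putting the first three practices together, a team's template definitions might look like the sketch below. The `PromptTemplate` shape and `resolveParams` helper are my own illustration; Cline's actual storage format may differ.

```typescript
// Hypothetical shape for a team prompt template with documented purpose
// and parameter defaults.
interface PromptTemplate {
  command: string;                  // slash command, e.g. "/test-component"
  description: string;              // what the prompt does, for browsing
  template: string;                 // body with ${param} placeholders
  defaults: Record<string, string>; // defaults make the common case easy
}

const testComponent: PromptTemplate = {
  command: "/test-component",
  description: "Write a Vitest test for a component using our patterns.",
  template:
    "Write a Vitest test for ${componentPath} covering happy path, " +
    "error path, and at least one edge case, following ${referenceFile}.",
  defaults: { referenceFile: "src/components/Button.test.tsx" },
};

// User-supplied arguments override the defaults.
function resolveParams(
  t: PromptTemplate,
  args: Record<string, string>
): Record<string, string> {
  return { ...t.defaults, ...args };
}
```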
What’s still missing
A few capabilities I’d want:
Prompt analytics. Which prompts get used most? Which produce results engineers are satisfied with? Without analytics, the team can’t optimize the library.
Per-project prompts. Some prompts make sense in one repo and not others. Currently it’s all-or-nothing per team.
Prompt versioning. When a prompt changes, the change history is gone. Versioning would help teams understand evolution.
Conditional logic. “If working in TypeScript, do X; if Python, do Y.” Right now you have separate prompts. Conditional templating would be useful.
These are reasonable next-cycle items.
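Until conditional templating exists, the workaround described above (separate prompts per language) can be made less manual with a small team-side dispatcher that picks the right prompt from the file extension. This is a sketch of the idea, not a Cline feature; the command names are hypothetical.

```typescript
// Hypothetical dispatcher: map a file extension to the matching saved prompt,
// standing in for the conditional logic the library doesn't support yet.
const promptByExtension: Record<string, string> = {
  ".ts": "/add-tests-typescript",
  ".tsx": "/add-tests-typescript",
  ".py": "/add-tests-python",
};

function pickPrompt(filePath: string): string | undefined {
  const dot = filePath.lastIndexOf(".");
  if (dot === -1) return undefined; // no extension, no matching prompt
  return promptByExtension[filePath.slice(dot)];
}
```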
Should you adopt it?
For Cline-using teams of any meaningful size: yes. The marginal cost is low (a few hours to seed an initial library); the benefit compounds as the library grows.
For solo Cline users: maybe. The personal library is useful even alone. The team features don’t apply, but the personal templates still have value.
For non-Cline users: this is a feature worth asking for. Bring it up with your tool’s vendor; the pattern is general enough to apply across tools.
A broader observation
Capturing organizational knowledge in shared prompts is a new skill. Teams are figuring out:
- Which tasks deserve templates
- How specific to make the templates
- How to keep templates current
- Who owns each template
The teams that do this well develop a prompt library that grows in parallel with their codebase; both encode team knowledge.
This is interesting because it’s a new artifact in software engineering. Codebases have always had configuration files, scripts, docs. Now they have prompt libraries. The discipline of maintaining them is something teams are inventing.
Worth watching
The prompt library feature is the kind of thing that seems small but compounds. A team using it well over a year has accumulated patterns; a team that doesn’t is starting fresh on each task.
For Cline’s roadmap, this release suggests they’re prioritizing team-level features. Previous releases were individual-focused. Shifting to team features matches the broader market trend (tools moving upmarket from individual to team to enterprise).
For users, this is good news. The features that benefit teams indirectly benefit individuals too — better team tooling means better individual workflows.
Update path
Standard install:
```
# VS Code: Update Cline extension
# Other editors: see your editor's plugin manager
```
The 3.7 features are opt-in. Existing Cline configs continue working. New features become available as users explore them.
For teams wanting to standardize: build a small initial library (5-10 prompts), test for a week, expand based on what works. The library is most valuable when it captures real team patterns rather than theoretical ones.