Tinker AI

The “prompt engineering” hype overstates one thing and understates another. The hype overstates the value of magic phrases (“you are an expert in…”) that have mostly stopped working. The hype understates the underlying skill: clear thinking expressed in clear words.

This skill compounds with practice. It’s also harder to teach than the magic phrases were.

What the skill actually is

A good prompt for code does several things at once:

States the goal precisely. Not “improve this code” but “make this function handle the case where input is undefined by returning null.”

Mentions constraints. What shouldn’t change. What patterns to follow. What libraries are forbidden.

References the relevant context. Which files matter. Which existing patterns to mirror.

Anticipates failure modes. “If this introduces type errors, prefer narrowing over assertions.”

Specifies the output shape. “Return only the modified function, not the surrounding code.”

These aren’t magic phrases. They’re a specific kind of clarity that’s helpful regardless of which AI tool you’re using.

A specific example

A vague prompt:

add validation to the form

A clear prompt:

add validation to the user signup form. Specifically:

  • email must match a basic email pattern
  • password must be at least 12 characters
  • on validation failure, set the form’s error state and prevent submission
  • use react-hook-form’s setError pattern, matching how the LoginForm handles errors

The clear prompt is longer. It also produces better output on the first attempt across every AI tool I’ve tested.
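For concreteness, here is roughly the logic the clear prompt asks for, sketched as plain TypeScript. The react-hook-form wiring (`setError`, the form’s error state) is omitted to keep the sketch self-contained; the function name, the error messages, and the email regex are illustrative, not taken from any real codebase.

```typescript
// Illustrative sketch of the validation rules the clear prompt specifies.
// A real implementation would report these via react-hook-form's setError,
// matching the LoginForm pattern the prompt references.

const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // "basic email pattern"

interface SignupErrors {
  email?: string;
  password?: string;
}

function validateSignup(email: string, password: string): SignupErrors {
  const errors: SignupErrors = {};
  if (!EMAIL_PATTERN.test(email)) {
    errors.email = "Enter a valid email address";
  }
  if (password.length < 12) {
    errors.password = "Password must be at least 12 characters";
  }
  return errors; // non-empty → set error state and prevent submission
}
```

Notice that every line of the sketch traces back to a bullet in the prompt. That is the point: a vague prompt leaves all of these decisions to the model.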

Where this skill comes from

Notably, the skill of writing clear prompts is largely the same skill as writing clear PRs, clear bug reports, and clear documentation. People who write clear PRs tend to write clear prompts.

This is the interesting part: the skill isn’t AI-specific. It’s the broader skill of articulating intent in words. AI tools made it visibly important, but the skill predates them.

People who’ve been writing for a long time — engineers who’ve written many PRs, maintained doc-heavy codebases, published blog posts — have an advantage with AI tools. People who haven’t are at a disadvantage that’s hard to address quickly.

What it’s not

A few things the skill isn’t:

Knowing magic phrases. “Step by step,” “you are an expert” — these mostly don’t help on modern models.

Length. Long prompts aren’t better than short prompts. Specific prompts are better than vague prompts. Length is incidental.

Tool-specific tricks. Cursor-specific syntax, Cline-specific commands. These help marginally; the underlying skill is tool-agnostic.

Memorized templates. “Always include X, Y, Z.” Templates work for narrow patterns; the skill of varying prompts to match varying tasks is bigger.

The marketing for AI tools sometimes sells these surface tricks as “prompt engineering.” They’re a small part of the actual skill.

How to develop it

A few things that help:

Write your prompts and re-read them. Before sending, ask: “if I read this without my context, would I understand what’s wanted?”

Notice when prompts fail. When the AI’s output is wrong, the prompt was probably ambiguous. Refine the prompt; try again.

Steal from your own PRs. Your PR descriptions are practice for prompts. Same skill.

Read others’ prompts. When teammates share AI conversations, notice their prompt styles. Learn from prompts that work.

Iterate on patterns. When a prompt format works for a kind of task, save it. Reuse with adjustments. Build a personal library.

The development is slow. Months, not weeks. The compound benefit shows up over time.

A pattern that helps

For unfamiliar tasks, before writing the prompt:

  1. Spend 30 seconds writing what I want, in my own words
  2. Spend 30 seconds writing what I want the AI not to do
  3. Spend 30 seconds noting which existing code is the closest analog

Then convert these notes to a prompt. The notes become the prompt’s structure: goal, constraints, references.

For familiar tasks (the same shape I’ve prompted many times), the prompt is muscle memory. For unfamiliar tasks, the 90-second pre-prompt thinking pays off.

The compounding benefit

Engineers who write good prompts:

  • Get better first-attempt output
  • Iterate less
  • Spend less time correcting AI mistakes
  • Have less frustration with AI tools

The cumulative effect across a year is meaningful. An engineer who writes clear prompts for a year is genuinely more productive than one who writes vague prompts for the same period.

This is the version of “AI productivity” that’s real. Not “the AI made me 50% faster.” More like “I learned to use the AI well, and now my baseline output is higher.”

What this means for hiring

Engineers who write well — clear PRs, clear docs, clear bug reports — are now more valuable than they were pre-AI. The skill they had translates directly to AI tooling effectiveness.

Engineers who haven’t developed this skill are at a disadvantage. Not insurmountable, but real. Developing the skill takes sustained, deliberate practice; most engineers don’t work on it at all.

For hiring, “writes well” is a more important signal than it used to be. For self-development, deliberate practice on writing pays off across many activities, AI tooling included.

Closing

The skill of writing good prompts is the skill of writing clearly. It’s not new. It’s not magic. It’s not specific to AI tools.

It’s also the skill that distinguishes high-productivity AI users from low-productivity ones. Engineers who treat AI as “type something, get code” stay at the entry level. Engineers who treat AI as “express my intent precisely so the tool can help me well” reach a higher productivity ceiling.

The investment is in the underlying skill of clear writing. The AI productivity is one of many returns on that investment.

For engineers wanting to get more out of AI tools: read your prompts before sending. Notice the unclear ones. Make them clearer. Repeat for a year.