A pattern I’ve noticed over the past year: engineers building custom coding agents for their specific workflows. Not Cursor, not Cline, not the off-the-shelf tools — their own scripts that automate parts of their work using LLMs as components.
The pattern is increasingly common. Worth understanding.
What these agents look like
Examples of personal agents I’ve seen or heard about:
A “PR description writer” script. Takes a git diff, runs it through a model with the engineer’s preferred style guide, outputs a draft PR description. Saves the engineer 10 minutes per PR.
A “release notes generator” script. Reads commits since the last tag, generates release notes in the team’s format. Eliminates a manual chore.
A “codebase question answerer” script. Takes natural language questions about the codebase, returns answers with file references. Faster than searching manually.
A “ticket-to-task-list converter.” Takes a Linear or Jira ticket, breaks it into a list of implementation tasks. Useful for kicking off complex work.
A “code review prep script.” For each open PR, generates a summary of what changed and likely issues. Reviewer reads the summary before opening the PR.
Each is a small tool, built by one person, serving their specific workflow.
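The release notes generator above can be sketched in a few lines. This is a hedged sketch, not the author's actual script: the helper names and the prompt format are my own placeholders, and the git commands assume a repo with at least one tag.

```python
import subprocess


def commits_since_last_tag():
    """Collect one-line commit messages since the most recent tag (hypothetical helper)."""
    last_tag = subprocess.check_output(
        ["git", "describe", "--tags", "--abbrev=0"]
    ).decode().strip()
    log = subprocess.check_output(
        ["git", "log", f"{last_tag}..HEAD", "--oneline"]
    ).decode()
    return log.splitlines()


def format_release_notes_prompt(commits):
    """Build the prompt the model would receive; the format here is a placeholder."""
    bullet_list = "\n".join(f"- {line}" for line in commits)
    return (
        "Write release notes in our team's format for these commits:\n"
        f"{bullet_list}"
    )
```

From there it's one model call, exactly like the PR description example below.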
What’s making this possible
Several developments converge:
Cheap APIs. Anthropic, OpenAI, and Google all offer affordable APIs. A personal agent costs cents to dollars per day, well within personal budgets.
Mature SDKs. Python and TypeScript SDKs are well-documented. Building a working LLM-powered script takes hours, not weeks.
Better local models. For sensitive workflows, Ollama + a strong open model gives you a personal agent that doesn’t leave your machine.
Better orchestration libraries. LangChain, LlamaIndex, DSPy — frameworks that handle the plumbing so you can focus on logic. Mixed quality but good enough to start.
Coding-specific MCP servers. Tools like the GitHub MCP, Postgres MCP, etc. give agents structured access to common services. Less custom integration work.
The barrier to “I’ll build my own tool for X” has dropped dramatically.
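To make the local-model point concrete: Ollama exposes a local HTTP endpoint, so a sensitive workflow never leaves your machine. A minimal sketch — the model name is an example, and the helper names are my own:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model, prompt):
    """Build the request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(model, prompt):
    """Send the prompt to a locally running Ollama server; nothing leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swap `ask_local_model("llama3", ...)` for a hosted API call and the rest of a personal agent stays the same.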
A specific example
A friend’s PR description writer:
import subprocess

import anthropic


def get_diff():
    return subprocess.check_output(["git", "diff", "main..HEAD"]).decode()


def get_commits():
    return subprocess.check_output(
        ["git", "log", "main..HEAD", "--oneline"]
    ).decode()


def generate_pr_description(diff, commits):
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"""Write a PR description for these changes.

Format:

## What
[2-3 sentences]

## Why
[2-3 sentences]

## How
[bullet list of approach]

## Testing
[what tests cover this; what manual testing was done]

Do not include "Closes #X" lines; I'll add those manually.

Commits:
{commits}

Diff (truncated to 50,000 characters):
{diff[:50000]}
""",
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    diff = get_diff()
    commits = get_commits()
    description = generate_pr_description(diff, commits)
    print(description)
About 40 lines. Built in an afternoon. Used multiple times per day. Saves real time.
Why these aren’t replaced by Cursor or Cline
Cursor and Cline are general-purpose. They optimize for a wide range of tasks. They make tradeoffs that benefit the median user.
Personal agents are specific. They optimize for one task. They make tradeoffs that benefit the specific user.
For tasks like “write a PR description in my preferred style,” a personal agent does this better than a general tool because:
- It knows your style
- It always follows your format
- It doesn’t get distracted by other features
- It runs from a single command
The personal agent is a specialization of the same underlying capability.
The skill required
Building a personal agent requires:
- Comfort with one of Python or TypeScript
- Understanding of the LLM API basics (request/response, system prompts, JSON mode)
- The ability to define your own workflow precisely
- A few hours of focused time
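On the API-basics point: the request/response loop is quick to pick up, and the one recurring gotcha is parsing structured output, since models sometimes wrap JSON in a code fence even when asked not to. A hedged helper — the fence-stripping heuristic is my own, not part of any SDK:

```python
import json


def parse_model_json(text):
    """Parse JSON from a model reply, tolerating an optional ```json fence."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        lines = cleaned.splitlines()
        # Drop the closing fence if present, then the opening fence line
        if lines[-1].strip() == "```":
            lines = lines[:-1]
        cleaned = "\n".join(lines[1:])
    return json.loads(cleaned)
```

Ten lines like this, kept in a personal utils file, remove a whole class of flaky failures.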
This is approachable for engineers but not for everyone. Non-engineers can use the off-the-shelf tools; building custom is engineering work.
The compound effect: engineers can have specialized tools that fit their work. Non-engineers can’t. The AI productivity gap between engineers and non-engineers may widen as personal agents proliferate.
What’s not yet standard
A few things that would make personal agents easier:
Better starting templates. A “CLI tool that uses Claude to do X” template that’s just-add-prompt would help.
Reusable components. Shell command runners, file readers, Git integrators — reusable pieces would speed up new agents.
Better debugging. When an agent produces wrong output, debugging is currently informal. Better tracing and replay would help.
Better cost management. Tracking and capping costs per agent. Currently this is per-API-key, not per-agent.
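Until per-agent cost tracking is standard, a per-agent accumulator is easy to roll yourself. A sketch — the rates below are illustrative placeholders, not real prices; check your provider's pricing page:

```python
PLACEHOLDER_RATES = {  # illustrative numbers only -- not real provider pricing
    "input_per_1k_tokens": 0.003,
    "output_per_1k_tokens": 0.015,
}


class CostTracker:
    """Accumulate per-agent spend and enforce a daily cap (a sketch, not a real library)."""

    def __init__(self, daily_cap_usd):
        self.daily_cap_usd = daily_cap_usd
        self.spent_usd = 0.0

    def record(self, input_tokens, output_tokens):
        """Add one call's cost; raise once the daily cap is exceeded."""
        cost = (
            input_tokens / 1000 * PLACEHOLDER_RATES["input_per_1k_tokens"]
            + output_tokens / 1000 * PLACEHOLDER_RATES["output_per_1k_tokens"]
        )
        self.spent_usd += cost
        if self.spent_usd > self.daily_cap_usd:
            raise RuntimeError(f"daily cap ${self.daily_cap_usd} exceeded")
        return cost
```

Token counts come back in every API response, so wiring this in is one line per call.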
These are addressable; some open-source projects are working on them. The ecosystem is maturing.
Examples I’d build
If I were starting personal agents from scratch, the first ones I’d build:
- A daily standup writer. Looks at my git activity, calendar, and recent Slack messages. Drafts a standup update. I edit and send.
- A meeting prep tool. Looks at the meeting on my calendar and the related repos/docs. Produces a “things to think about before this meeting” list.
- A weekend project namer. I describe my weekend project; it generates names that match my usual style.
- A learning-from-bug tool. When I fix a bug, I describe it and the fix; the tool produces a personal note for “things to watch for in similar code.”
These are small. They fit my specific patterns. They wouldn’t be useful as general tools because they’re so personal.
What this implies
A few thoughts on the trajectory:
Customization is the next frontier. General tools have hit a maturity plateau. The next productivity gains come from customizing tools to your work.
Engineering teams will have shared agents. What I’m describing for individuals will scale to teams. “Our team’s PR writer,” “Our team’s release notes tool.” These will become standard.
The off-the-shelf tools may add agent-building features. Cursor or Cline could let users build custom agents within the tool. Some are starting to. The line between “use the tool” and “extend the tool” may blur.
The skill of building personal agents will be valuable. Engineers who can build their own tools will compound their productivity. Engineers who can’t will be limited to what’s available off the shelf.
A starting suggestion
For engineers curious about personal agents:
- Pick one task in your daily workflow that’s tedious
- Build a 50-line script that uses an LLM to automate it
- Use it for a week; refine it
- Decide if it’s worth keeping
The first agent takes longer than expected. The second takes much less. By the third, you have a pattern.
The investment is bounded. The payoff is ongoing tools that fit you exactly.
Closing
Personal coding agents are a real category. They sit alongside the off-the-shelf tools, not in competition with them. They serve workflows the general tools don’t.
Engineers who build them have a productivity advantage that compounds. Engineers who don’t are limited to what tools the market provides.
If you’ve never built one, the threshold is lower than you think. The first one might surprise you with how much it changes your workflow.