Tinker AI

Outcome

API shipped in 3 weeks vs estimated 6; Cline contributed ~55% of code; the schema-first workflow plays to Cline's strengths

5 min read

Built a GraphQL API server over three weeks using Cline. The API: a customer feedback collection and routing system. About 4500 lines of TypeScript with Apollo Server, Drizzle ORM, and Postgres.

The schema-first development pattern (define the GraphQL schema first, then derive types and resolvers from it) is a particularly good fit for AI tools. Cline contributed about 55% of the code, the highest ratio I’ve achieved on a real project.

The setup

Stack:

  • Apollo Server 4
  • TypeScript with strict mode
  • GraphQL Code Generator for type generation
  • Drizzle ORM with Postgres
  • Pothos schema builder (type-safe GraphQL schema)
  • Vitest for testing
  • Cline 3.5 with Claude 3.5 Sonnet

The repo structure was conventional:

src/
├── schema/
│   ├── types/        # Pothos type builders
│   ├── queries/      # Query resolvers
│   └── mutations/    # Mutation resolvers
├── repositories/     # Database access
├── services/         # Business logic
└── tests/

I had one fully implemented entity (Customer) as a reference pattern. Cline scaled it up.

Why schema-first works for AI

The pattern that emerged: I’d add a type to the schema, then ask Cline to scaffold everything else.

Example flow:

> add a Feedback type to the schema with these fields: id, content, customer_id,
> created_at, status (enum: pending|reviewed|archived). Then create the Pothos
> type builder, the queries (list, get), the mutations (create, update_status),
> and the repository. Follow the pattern from Customer.

Cline produced:

  • The Pothos type builder
  • The list and get queries with pagination and filtering
  • The create and update_status mutations with validation
  • The repository with the necessary methods
  • Tests for each layer

About 80% of the code was usable as-is. The other 20% needed refinement for domain-specific concerns (the archived state had specific business rules that weren’t in my prompt).
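To make the scaffold concrete, here is a dependency-free sketch of the shapes involved. The interface and method names mirror the prompt above but are assumptions about the project's actual code, not its real Pothos/Drizzle output:

```typescript
// Hypothetical sketch of the Feedback entity Cline scaffolds.
type FeedbackStatus = "pending" | "reviewed" | "archived";

interface Feedback {
  id: string;
  content: string;
  customerId: string;
  createdAt: Date;
  status: FeedbackStatus;
}

// Repository surface mirroring the Customer reference pattern (names assumed).
interface FeedbackRepository {
  list(filter?: { status?: FeedbackStatus }, cursor?: string): Promise<Feedback[]>;
  get(id: string): Promise<Feedback | null>;
  create(input: Pick<Feedback, "content" | "customerId">): Promise<Feedback>;
  updateStatus(id: string, status: FeedbackStatus): Promise<Feedback>;
}

const VALID_STATUSES = ["pending", "reviewed", "archived"] as const;

// Narrowing helper the create/update mutations can use for input validation.
function isFeedbackStatus(value: string): value is FeedbackStatus {
  return (VALID_STATUSES as readonly string[]).includes(value);
}
```

The value of the reference entity is exactly this: every layer has a known shape, so the model fills in names and fields rather than inventing structure.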

This is the speedup. Each entity took ~30 minutes including review and refinement. Manual implementation would have been 90+ minutes per entity.

Cline’s specific strengths here

Pattern matching across the file types. Cline understood the relationship between the type definition, the resolver, the repository, and the tests. When I asked for a new entity, it produced consistent code across all layers.

TypeScript strict mode compliance. With strict mode on, suggestions had to typecheck. Cline’s first attempts almost always typechecked. The 20% needing refinement was about domain logic, not types.

GraphQL-specific patterns. Apollo + Pothos has specific patterns (DataLoader for N+1 prevention, scalar types, custom directives). Cline handled these well — they’re well represented in its training data.

Vitest test patterns. Tests followed the existing patterns. Mock data factories, the test setup, the assertion style — all consistent with what was already there.

Where I had to push back

A few areas where Cline’s defaults didn’t match my project:

N+1 query patterns. Cline sometimes generated resolvers that would trigger N+1 queries. I had a DataLoader pattern in the project; Cline didn’t always use it. Adding to .clinerules: “all multi-record fetches go through DataLoader; check for existing DataLoader in src/loaders/ before adding new ones.”
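The batching idea the rule enforces can be sketched without the `dataloader` package itself. This hand-rolled loader is illustrative only (the project presumably uses the real library in src/loaders/); it shows why resolvers that call `load()` instead of querying directly collapse N fetches in one tick into one batch:

```typescript
// Minimal DataLoader-style batcher: all load() calls made in the same
// microtask tick are collected and handed to batchFn as a single batch.
class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Defer the flush so sibling resolvers can enqueue their keys first.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One round-trip (e.g. one SQL "WHERE id IN (...)") for the whole batch.
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}
```

A resolver for `feedback.customer` would call `customerLoader.load(feedback.customerId)`; a list of 50 feedback items then produces one customer query instead of 50.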

Authorization. Field-level authorization wasn’t part of Cline’s defaults. I had a pattern for it (a custom scalar that checked user permissions). Cline missed it on early entities; explicit rules helped.

Error handling in mutations. Cline’s generated mutations threw errors that didn’t match our error envelope. We use a Result-typed pattern. Adding to .clinerules: “mutations return Result<T, ApiError>; never throw directly.”

These are the kinds of project-specific patterns that need codification in rules. Once codified, Cline followed them.

A specific session

The most productive session: adding a “search” feature across multiple entities.

The prompt:

> Add a unified search query that searches across Customer, Feedback, and
> Comment. Search by full-text on relevant fields. Return a SearchResult
> union. Implement using the existing PostgreSQL full-text infrastructure
> (see lib/search.ts).

In Plan mode, Cline produced a plan covering:

  • The SearchResult union type
  • The search query
  • Three search methods (one per entity)
  • The integration with the existing search infrastructure
  • Tests for each method

I reviewed the plan, accepted it, and switched to Act mode. Cline executed in about 25 minutes. The result was clean code that worked on the first try.
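The shape of that feature maps naturally to a discriminated union in TypeScript. This is an illustrative sketch with made-up entity fields, not the project's actual types; it shows the fan-out-and-merge structure the search resolver would have:

```typescript
// Illustrative mapping of the SearchResult GraphQL union to TypeScript.
type SearchResult =
  | { __typename: "Customer"; id: string; name: string }
  | { __typename: "Feedback"; id: string; content: string }
  | { __typename: "Comment"; id: string; body: string };

// The resolver fans out to one full-text search per entity and merges
// the results. Each searcher would wrap the shared Postgres full-text
// infrastructure (lib/search.ts in the real project).
async function search(
  term: string,
  searchers: Array<(t: string) => Promise<SearchResult[]>>
): Promise<SearchResult[]> {
  const batches = await Promise.all(searchers.map((fn) => fn(term)));
  return batches.reduce<SearchResult[]>((acc, b) => acc.concat(b), []);
}
```

The `__typename` discriminant is what lets the GraphQL layer resolve the union member for each result, and what lets TypeScript narrow the type on the client side.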

Estimated time without AI: 4-5 hours. With Cline: 35 minutes including review. About 8x speedup on this specific task.

What didn’t work as well

Custom directives. Apollo supports custom directives. We used a few. Cline had trouble understanding the directive lifecycle and produced code that compiled but didn’t work at runtime. I rewrote these by hand.

Subscription handling. The project had a small WebSocket subscription surface. Cline’s subscription code was structurally fine but didn’t handle reconnection logic well. Manual rework.

Deeply nested queries. When a resolver needed to fetch through 4-5 levels of relationships, Cline’s first attempts were either inefficient (multiple queries) or correct but hard to read. Iteration helped; a manual rewrite was sometimes faster.

Productivity numbers

  • Estimated time without AI: 6 weeks
  • Actual time: 3 weeks
  • Cline API spend: $48
  • Cline contribution to lines of code: ~55%

The 55% number is the highest I’ve achieved on a real project. Three factors:

  1. The schema-first pattern (well-defined inputs and outputs at each step)
  2. A clean reference implementation Cline could pattern-match against
  3. Strict TypeScript catching mistakes early

For projects with similar shape (schema-first development, well-defined types, clear patterns), this productivity ratio seems achievable.

Recommendations for similar projects

Use Pothos or a similar type-safe schema builder. Type safety in the schema flows through all the resolvers. The model gets free correctness checking.

Have a fully-implemented reference entity. One entity that demonstrates all the patterns (queries, mutations, types, tests, error handling). Pin this in chat sessions when generating new entities.

Codify authorization patterns explicitly. Custom auth patterns aren’t in the model’s defaults. Document them in .clinerules.

Use GraphQL Code Generator. Generated TypeScript types from the schema mean the model can reason about the types. Without codegen, the model guesses; with codegen, it knows.
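A typical `codegen.ts` for this setup is small; the globs below are placeholders for wherever the schema and generated output actually live:

```typescript
import type { CodegenConfig } from "@graphql-codegen/cli";

// Generate TypeScript types (and resolver signatures) from the schema.
// Paths are illustrative; adjust to your repo layout.
const config: CodegenConfig = {
  schema: "src/schema/**/*.graphql",
  generates: {
    "src/generated/types.ts": {
      plugins: ["typescript", "typescript-resolvers"],
    },
  },
};

export default config;
```

With Pothos specifically, much of this is redundant (the builder is already type-safe end to end), but for schema files authored in SDL, codegen is what closes the loop between the contract and the TypeScript the model writes against.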

Run tests aggressively. Strict TypeScript plus comprehensive tests catch the mistakes the model makes. With both, the agent loop closes reliably.

What I learned

The big lesson: AI productivity varies massively by project shape. Schema-first GraphQL was the highest productivity setting I’ve experienced. Other projects (legacy refactors, embedded firmware, niche language work) have been much lower.

If you can structure a project to be schema-first, the AI productivity gains are real and large. If you can’t, the gains are smaller but still meaningful.

For new projects, this is an argument for picking schema-first stacks where possible. GraphQL with code generation, gRPC, OpenAPI-first development — these all have similar characteristics. The constraint of “define the contract first” plays to AI strengths.

Worth doing again

Yes, I’d run this exact playbook again. The schema-first GraphQL setup with Cline is one of the best AI-tool fits I’ve experienced. For teams building backend APIs with GraphQL, the playbook is replicable: pick a type-safe schema builder, build one reference entity carefully, then let Cline scale up.

The cost-vs-benefit math is overwhelmingly favorable. The setup time pays for itself within the first few entities. By the end of the project, the AI tooling investment had compounded into 3 weeks saved on a 6-week estimate.