
Outcome

MVP shipped on time and landed its first paying customers; the codebase carries technical debt that slower development would have avoided


I built and launched an MVP for a side project in two weeks using Cursor heavily. The product: an analytics dashboard for newsletter writers. The result: 12 paying customers within a month of launch, generating ~$500/month MRR. The codebase has more technical debt than I’d like.

This case study is honest about both. Speed is the constraint when shipping an MVP. AI tools genuinely help with speed. The cost of speed is technical debt; AI tools don’t change this fundamental tradeoff.

The project

What I built:

  • Web app for analyzing Substack/Beehiiv/Ghost newsletter performance
  • Pulls data from each platform’s API
  • Shows engagement metrics over time
  • Suggests optimization actions
  • Stripe for payments
  • Magic link auth (Lucia + Postgres)

Stack:

  • Next.js 15 with App Router
  • Drizzle ORM with Postgres (Supabase)
  • Tailwind v4 + shadcn components
  • TanStack Query for data fetching
  • Stripe for payments
  • Vercel for hosting

About 6,500 lines of TypeScript, roughly 70% of it written with Cursor.

Day-by-day

A rough log of how the two weeks went:

Day 1. Set up the project. Cursor scaffolded the Next.js app, set up Drizzle, configured Tailwind, integrated shadcn. About 4 hours; would have taken 6 manually.

Day 2. Authentication flow: Lucia + magic links. Cursor wrote the auth pages, the magic link sender, and the session middleware. About 6 hours; would have taken 10. A test user could sign up by end of day.
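
For flavor, here’s a minimal sketch of what the verification half of that flow can look like with Lucia v3 and the Drizzle adapter. The `@/db` module, the `magicLinkTokens` table, and the single-use token scheme are illustrative assumptions, not the project’s actual code:

```ts
// Sketch of the verification half of a magic-link flow, assuming Lucia v3
// with the Drizzle/Postgres adapter. The `@/db` module and the
// `magicLinkTokens` table are hypothetical names.
import { Lucia } from "lucia";
import { DrizzlePostgreSQLAdapter } from "@lucia-auth/adapter-drizzle";
import { eq } from "drizzle-orm";
import { db, sessions, users, magicLinkTokens } from "@/db";

const lucia = new Lucia(new DrizzlePostgreSQLAdapter(db, sessions, users));

// Called from the verify route when the user clicks the emailed link.
export async function verifyMagicLink(token: string) {
  const [row] = await db
    .select()
    .from(magicLinkTokens)
    .where(eq(magicLinkTokens.token, token));

  if (!row || row.expiresAt < new Date()) return null; // unknown or expired

  // Tokens are single-use: delete before creating the session.
  await db.delete(magicLinkTokens).where(eq(magicLinkTokens.token, token));

  const session = await lucia.createSession(row.userId, {});
  return lucia.createSessionCookie(session.id); // set this cookie on the response
}
```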

Day 3. Substack integration. Cursor wrote the API client, the data fetching, the database schema. Some adjustments needed for Substack’s actual API quirks. About 8 hours.
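
The client shape was roughly the following. Substack has no official public API, so the endpoint URL and field names here are hypothetical placeholders rather than the real integration:

```ts
// Sketch of a platform API client; endpoint and field names are hypothetical.
export interface NewsletterStats {
  postId: string;
  title: string;
  opens: number;
  clicks: number;
  publishedAt: string;
}

export async function fetchSubstackStats(
  apiKey: string,
  publicationId: string,
): Promise<NewsletterStats[]> {
  const res = await fetch(
    `https://substack.example/v1/publications/${publicationId}/posts`,
    { headers: { Authorization: `Bearer ${apiKey}` } },
  );
  if (!res.ok) throw new Error(`Substack API error: ${res.status}`);
  const body = await res.json();

  // Normalize the raw payload into our own shape; in practice the raw
  // fields didn't quite match the docs, which is where the manual work went.
  return body.posts.map((p: any) => ({
    postId: String(p.id),
    title: p.title,
    opens: p.open_count ?? 0,
    clicks: p.click_count ?? 0,
    publishedAt: p.published_at,
  }));
}
```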

Day 4. Substack analytics page. Cursor wrote the queries, the chart components (Recharts), the page layout. About 6 hours.
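
A chart component in this setup looks roughly like the sketch below; the data shape and styling are illustrative, not lifted from the project:

```tsx
"use client";
// Sketch of an engagement chart with Recharts.
import {
  ResponsiveContainer,
  LineChart,
  Line,
  XAxis,
  YAxis,
  Tooltip,
  Legend,
} from "recharts";

type Point = { date: string; opens: number; clicks: number };

export function EngagementChart({ data }: { data: Point[] }) {
  return (
    <ResponsiveContainer width="100%" height={320}>
      <LineChart data={data}>
        <XAxis dataKey="date" />
        <YAxis />
        <Tooltip />
        <Legend />
        <Line type="monotone" dataKey="opens" stroke="#6366f1" dot={false} />
        <Line type="monotone" dataKey="clicks" stroke="#10b981" dot={false} />
      </LineChart>
    </ResponsiveContainer>
  );
}
```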

Day 5. Beehiiv integration. Followed the Substack pattern; Cursor replicated it in ~4 hours total, faster than the Substack build because the pattern was now clear.

Day 6. Ghost integration. Same pattern. ~3 hours. The pattern was now muscle memory.
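
The “pattern” these three days refer to is essentially one interface with a per-platform implementation. A sketch, with illustrative names:

```ts
// The shape the three integrations converged on; names are illustrative.
export interface NewsletterStats {
  postId: string;
  title: string;
  opens: number;
  clicks: number;
  publishedAt: string;
}

export interface NewsletterProvider {
  name: "substack" | "beehiiv" | "ghost";
  // Each platform client normalizes its own payload into NewsletterStats,
  // so the dashboard and sync jobs never branch on platform.
  fetchStats(publicationId: string): Promise<NewsletterStats[]>;
}
```

Once the interface existed, each new platform was mostly one fetchStats implementation.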

Day 7. Beta launch. Initial users. First bug reports.

Days 8-9. Bug fixing and performance. Cursor was useful for ~half of these; the other half required investigation that it didn’t help with.

Day 10. Stripe integration. Cursor handled the boilerplate; I had to write the webhook handlers carefully. ~6 hours.
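
The careful part was signature verification on the raw request body. A sketch of the route in a Next.js App Router handler; the event switch is abridged and the env var names are the conventional ones, not necessarily mine:

```ts
// Sketch of a Stripe webhook route with signature verification.
import Stripe from "stripe";
import { NextResponse } from "next/server";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const payload = await req.text(); // raw body is required for verification
  const signature = req.headers.get("stripe-signature")!;

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      payload,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
  } catch {
    return NextResponse.json({ error: "invalid signature" }, { status: 400 });
  }

  switch (event.type) {
    case "customer.subscription.updated":
    case "customer.subscription.deleted":
      // ...update the user's subscription row here
      break;
  }
  return NextResponse.json({ received: true });
}
```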

Day 11. Subscription gating. Cursor wrote the middleware that enforces paid features. ~3 hours.
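
In outline, the gating looks like the sketch below. Resolving the plan from a cookie is a simplification; the cookie, plan, and route names are assumptions:

```ts
// middleware.ts: sketch of gating paid routes.
import { NextResponse, type NextRequest } from "next/server";

const PAID_PREFIXES = ["/dashboard/insights", "/dashboard/exports"];

export function middleware(req: NextRequest) {
  const isPaidRoute = PAID_PREFIXES.some((p) =>
    req.nextUrl.pathname.startsWith(p),
  );
  if (!isPaidRoute) return NextResponse.next();

  // The real app would resolve the plan from the session; a "plan" cookie
  // set at login stands in for that here.
  const plan = req.cookies.get("plan")?.value;
  if (plan !== "pro") {
    return NextResponse.redirect(new URL("/upgrade", req.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ["/dashboard/:path*"] };
```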

Day 12. Polish and deploy. Bug fixes, copy refinement, error states. Cursor helped with ~half. ~5 hours.

Day 13. Marketing site copy and SEO. Manual.

Day 14. Launch. Posted on Reddit and Hacker News.

Total: about 70 hours of focused work over 14 days.

Where Cursor genuinely accelerated

Specific things that went much faster:

API integrations. Each newsletter platform’s integration was ~80% pattern matching after the first one. Cursor produced reasonable scaffolds; I refined for platform-specific quirks.

UI components. shadcn provides components; I assembled them into pages. Cursor knew the shadcn patterns well.

Database operations. Drizzle queries, basic CRUD, repository methods. Cursor handled efficiently.
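
Representative of the kind of query Cursor got right on the first try; the schema and names are illustrative:

```ts
// Typical Drizzle schema + query pair.
import { pgTable, text, integer, timestamp } from "drizzle-orm/pg-core";
import { eq, desc } from "drizzle-orm";
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

export const posts = pgTable("posts", {
  id: text("id").primaryKey(),
  userId: text("user_id").notNull(),
  opens: integer("opens").notNull().default(0),
  publishedAt: timestamp("published_at").notNull(),
});

const db = drizzle(postgres(process.env.DATABASE_URL!));

// Latest posts for a user's dashboard, newest first.
export async function recentPosts(userId: string, limit = 20) {
  return db
    .select()
    .from(posts)
    .where(eq(posts.userId, userId))
    .orderBy(desc(posts.publishedAt))
    .limit(limit);
}
```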

Form handling. React Hook Form patterns. Standard.
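
The standard shape, here paired with zod for validation (an assumption; field names are illustrative):

```tsx
"use client";
// Standard React Hook Form + zod pattern.
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";

const schema = z.object({
  apiKey: z.string().min(1, "API key is required"),
  publicationId: z.string().min(1, "Publication ID is required"),
});
type FormValues = z.infer<typeof schema>;

export function ConnectForm({ onSubmit }: { onSubmit: (v: FormValues) => void }) {
  const { register, handleSubmit, formState } = useForm<FormValues>({
    resolver: zodResolver(schema),
  });
  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input {...register("apiKey")} placeholder="API key" />
      {formState.errors.apiKey && <p>{formState.errors.apiKey.message}</p>}
      <input {...register("publicationId")} placeholder="Publication ID" />
      <button type="submit">Connect</button>
    </form>
  );
}
```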

Type definitions. TypeScript interfaces from API responses. Trivial for Cursor.

For these, the productivity multiplier was real and large — 2-3x faster than manual.

Where Cursor didn’t help

A few specific things where I worked at human speed:

Substack’s API quirks. The actual API returns slightly different data than docs suggest. Cursor’s first attempts didn’t match reality. I had to investigate, then guide.

The Stripe webhook race condition. A subtle race when subscription updates happened during user interactions. Cursor’s first fix didn’t address the root cause. Manual debugging.
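
The eventual fix was specific to my code, but the general guard for this class of race is to make the update conditional on event recency, so a stale webhook can never overwrite a newer one. A sketch with hypothetical table and module names:

```ts
// Conditional subscription update keyed on Stripe's event.created timestamp.
import { pgTable, text, bigint } from "drizzle-orm/pg-core";
import { and, eq, lt } from "drizzle-orm";
import { db } from "@/db"; // hypothetical Drizzle instance

export const subscriptions = pgTable("subscriptions", {
  stripeId: text("stripe_id").primaryKey(),
  status: text("status").notNull(),
  lastEventAt: bigint("last_event_at", { mode: "number" }).notNull(),
});

export async function applySubscriptionEvent(
  stripeId: string,
  status: string,
  eventCreated: number, // Stripe's event.created, unix seconds
) {
  // Only apply if this event is newer than the last one we processed,
  // which makes delivery order irrelevant.
  await db
    .update(subscriptions)
    .set({ status, lastEventAt: eventCreated })
    .where(
      and(
        eq(subscriptions.stripeId, stripeId),
        lt(subscriptions.lastEventAt, eventCreated),
      ),
    );
}
```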

Performance optimization. The dashboard was slow for users with 10k+ subscribers. Cursor suggested optimizations; some helped, some didn’t. Manual profiling and tuning.

The magic link delivery issue. Some emails went to spam. Cursor’s suggestions for SPF/DKIM/DMARC were generic. I had to follow specific provider documentation.

The bug in the chart legend. A chart legend was rendering at the wrong position. Cursor’s proposed fixes were shots in the dark; I had to trace through the Recharts source.

For these, AI tooling didn’t accelerate the work. Investigation, debugging, and domain-specific tuning remained at human speed.

The technical debt

After two weeks, the codebase has issues:

Test coverage is thin. I wrote tests for the auth flow and Stripe webhook (the security-critical parts). Other parts have minimal tests. The cost is real bugs in production.

Some files are too long. Pages with 600+ lines because I added features in-place rather than refactoring. Maintenance gets harder.

Inconsistent error handling. Different parts of the codebase handle errors differently. Cursor produced what made sense at each moment; the patterns drifted.

Duplicated logic. Some helper functions exist in 2-3 places because Cursor produced similar functions in different files instead of identifying duplication.

No proper logging. I have console.log everywhere; structured logging is on the to-do list. Production debugging is harder than it should be.
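
The fix is small once prioritized; with pino, for instance, the swap looks roughly like this (log fields are illustrative):

```ts
// Minimal structured logger with pino.
import pino from "pino";

export const logger = pino({ level: process.env.LOG_LEVEL ?? "info" });

// Instead of: console.log("synced", userId, count)
logger.info({ userId: "u_123", count: 42 }, "newsletter sync complete");
```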

Some queries don’t have indexes. I added indexes when bottlenecks appeared, not preemptively.
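
With Drizzle, the reactive fix is at least cheap: declare the index in the schema and push a migration. An illustrative example:

```ts
// Composite index added after the per-user time-series query became a
// bottleneck; table and column names are illustrative.
import { pgTable, text, timestamp, index } from "drizzle-orm/pg-core";

export const engagementEvents = pgTable(
  "engagement_events",
  {
    id: text("id").primaryKey(),
    userId: text("user_id").notNull(),
    occurredAt: timestamp("occurred_at").notNull(),
  },
  (t) => ({
    userTimeIdx: index("engagement_user_time_idx").on(t.userId, t.occurredAt),
  }),
);
```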

This is normal MVP technical debt; it would exist with or without AI tools. AI may have made some of it worse: Cursor’s tendency to add code without refactoring contributes to the file length and the duplication.

Would I do it the same again?

Yes. The MVP shipped. It got customers. It’s making money. The technical debt is manageable.

The alternative — slower, more careful development — would have meant the MVP launching later, possibly missing the moment, possibly never launching at all. A polished codebase with no users is worse than a debt-laden codebase with paying users.

For startup MVP work specifically, AI tools are well-suited. The tradeoff (speed for cleanliness) matches startup priorities (speed for survival). Mature engineering practices that emphasize cleanliness over speed produce better codebases but maybe not better outcomes.

The honest comparison

If I had built this MVP without Cursor:

  • Estimated time: 4 weeks instead of 2
  • Codebase quality: probably better
  • Customer outcomes: same product, two weeks later, maybe missing first wave
  • Technical debt: less, but still substantial (MVP work is fast no matter what)

The 2-week version is a better business outcome. The 4-week version is a better engineering artifact. For MVPs, business beats engineering.

What I’d do differently

A few things in retrospect:

More tests, even fewer features. I should have written more tests for the business logic. The “I’ll add tests later” plan didn’t fully materialize.

Refactor at week 1.5. I should have spent half a day at the midpoint refactoring duplication. The compound effect of “leaving it for later” was painful.

Better .cursorrules. I started with sparse rules. By day 5, my rules were comprehensive. Earlier comprehensive rules would have produced more consistent code.
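
I won’t reproduce the real file, but the rules that paid off were of this kind (an illustrative sketch, not the actual contents):

```
# .cursorrules (illustrative sketch)
- New platform integrations implement the existing NewsletterProvider interface.
- All database access goes through Drizzle; no raw SQL strings.
- Route errors through the shared error helper; never swallow exceptions.
- Before writing a new helper, check lib/ for an existing one.
```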

Stripe webhook handling earlier. I left Stripe for late in the build. The webhook complexity ate into days I’d planned for polish.

More careful with auth. Auth code is security-critical. I should have written it more carefully and tested more thoroughly. Cursor’s defaults were fine; I should have reviewed harder.

Costs

  • Cursor Pro subscription: $20 (one month)
  • Vercel hobby tier: free
  • Supabase free tier: free
  • Stripe: only fees on transactions
  • Domain: $12

Total out-of-pocket cost for the MVP: $32, plus my time.

Revenue after first month: $500 MRR.

The economics of solo MVP work with AI tooling are favorable when the product finds an audience. When it doesn’t, the cost is mostly time.

What this teaches

For others considering AI-tooled startup work:

Speed matters more than you think for MVPs. The 2-week vs 4-week difference can be the difference between catching a moment and missing it.

Technical debt is recoverable. I can fix the issues over the next few months while maintaining customer commitments. Lost momentum is harder to recover.

AI tools fit MVP work well. The work shape (speed-prioritized, pattern-heavy, evolving) matches AI strengths.

Don’t expect magic. AI didn’t write the product. I designed it; AI did the typing. The thinking remained mine.

Plan for the recovery. Budget time after launch to fix debt. The MVP got me to revenue; the next month is making the codebase sustainable.

For solo founders or small teams shipping MVPs, AI tooling is a real advantage. It’s not a substitute for product judgment, but it accelerates the implementation enough to make solo MVP work more feasible than it was a few years ago.

The newsletter analytics MVP I built wouldn’t have existed without AI tooling. I wouldn’t have committed two weeks of evenings if the scope had been four weeks of work. The tool made the project tractable; the project became a small business; the small business will fund more projects.

That’s the kind of compounding the marketing promises and the daily work occasionally delivers. Worth it for the right kind of project.