Outcome

Migration completed in 80 hours over 4 weeks vs estimated 200 hours; new endpoints type-safe via drf-spectacular

A small SaaS I work on (timesheet tracking, ~12k MAU) had been running on Django class-based views for five years. The codebase had grown organically: 80 endpoints, half of them returning rendered HTML, half returning JSON, none of them type-safe at the API boundary. Adding new client integrations meant hand-writing schemas every time.

The migration to Django REST Framework was overdue. Original estimate: 200 hours of focused work over two months. Actual: 80 hours over four weeks. Cursor was a meaningful part of the difference.

The codebase

Some context for the work:

  • Django 4.2, Python 3.11
  • 80 endpoints split between views.py (still using View class subclasses) and api_views.py (a half-finished DRF migration from 2022)
  • 60% test coverage on views, 30% on API logic
  • PostgreSQL, ~50 tables, mostly unchanged for two years
  • Frontend: a SPA that hits the API plus some server-rendered admin pages

The plan

I spent two days planning before opening Cursor. The plan:

  1. Audit existing endpoints; categorize by complexity
  2. Set up DRF infrastructure (auth, permissions, throttling, pagination)
  3. Migrate simple endpoints first; build muscle memory
  4. Tackle complex endpoints (custom logic, reports, multi-step writes)
  5. Move admin endpoints last (lowest priority)
  6. Ship incrementally; both old and new endpoints active during migration

The categorization mattered. About 40 endpoints were “simple CRUD on one model.” 25 were “moderate complexity.” 15 were “custom logic that doesn’t fit the standard pattern.”

Setting up Cursor for the project

Before starting migrations, I set up the project to give Cursor good context:

.cursorrules:

This is a Django 4.2 / DRF 3.14 project. Migrating from class-based views
to DRF.

Patterns to follow:

- All new endpoints in api_views/v2/ as DRF ViewSets (preferred) or APIView
- Serializers in serializers/v2/, one file per resource
- Permissions in permissions.py; reuse existing classes
- For pagination, use the project's CustomCursorPagination
- For filtering, use django-filter via DjangoFilterBackend
- Tests: APITestCase, one test class per ViewSet, cover happy path and 4xx
- Use drf-spectacular @extend_schema decorators on every action
- Keep ViewSets thin; complex logic goes to services/ folder
- For backwards compat, both v1 and v2 endpoints active during migration

Avoid:
- Function-based views (use ViewSets)
- ModelSerializer.Meta.fields = '__all__' (always explicit)
- Direct ORM access in views (use service layer)
- Custom permission classes inline (define in permissions.py)

I also pinned three reference files in chat:

  • api_views/v2/users.py (the cleanest existing v2 endpoint)
  • serializers/v2/users.py (matching serializer)
  • tests/api/v2/test_users.py (matching tests)

These three files served as the canonical pattern Cursor would follow.
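
For flavor, here is the shape of that pattern, condensed, with invented field names and guessed import paths (not the real files):

# serializers/v2/users.py (condensed sketch)
from rest_framework import serializers

from myapp.models import User  # import path is a guess

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        # Explicit field list, per the rules file -- never '__all__'
        fields = ["id", "email", "display_name", "created_at"]
        read_only_fields = ["id", "created_at"]

# api_views/v2/users.py (condensed sketch)
from drf_spectacular.utils import extend_schema, extend_schema_view
from rest_framework import viewsets
from rest_framework.permissions import IsAuthenticated

from myapp.pagination import CustomCursorPagination  # project class named in the rules
from myapp.serializers.v2.users import UserSerializer
from myapp.services import users as user_service  # service layer, per the rules

@extend_schema_view(
    list=extend_schema(summary="List users"),
    retrieve=extend_schema(summary="Retrieve a user"),
)
class UserViewSet(viewsets.ModelViewSet):
    serializer_class = UserSerializer
    permission_classes = [IsAuthenticated]
    pagination_class = CustomCursorPagination

    def get_queryset(self):
        # ORM access stays behind the service layer, not in the view
        return user_service.visible_to(self.request.user)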

Week 1: simple endpoints

Goal: migrate 30 of the 40 simple CRUD endpoints. Pattern was nearly identical for each: read the existing view, write the DRF ViewSet, write the serializer, write tests, run tests.
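
For context, a typical "before" looked something like this (a condensed sketch with invented names, not actual project code):

# views.py (v1, condensed sketch)
import json

from django.http import JsonResponse
from django.views import View

from myapp.models import Project  # hypothetical model

class ProjectListView(View):
    def get(self, request):
        projects = Project.objects.filter(owner=request.user)
        data = [{"id": p.id, "name": p.name} for p in projects]
        return JsonResponse({"results": data})

    def post(self, request):
        payload = json.loads(request.body)
        # No schema validation at the boundary -- the core problem the migration fixes
        project = Project.objects.create(owner=request.user, name=payload["name"])
        return JsonResponse({"id": project.id, "name": project.name}, status=201)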

The Cursor flow:

  1. I’d open the existing views.py for an endpoint
  2. Cmd+L to open chat
  3. “Migrate this endpoint to a DRF ViewSet following the patterns in @users.py”
  4. Cursor produced the ViewSet, serializer, and tests
  5. I’d review, run tests, fix anything that didn’t match
  6. Commit

Average time per simple endpoint: 12-18 minutes. Compared to my pre-AI estimate of 45 minutes per endpoint, this was a meaningful speedup. Across 30 endpoints, that's a saving of 13-15 hours: roughly 22 hours of estimated work done in about 7.

What Cursor got wrong for simple endpoints:

  • Sometimes used ModelSerializer.Meta.fields = '__all__' despite the rules forbidding it; about 10% of cases
  • Forgot to add permission_classes for authenticated endpoints; about 15% of cases
  • Generated tests that hit the URL without verifying response shape; about 25% of cases (tests passed but didn’t check the right things)

Each of these took 30-60 seconds to fix once noticed; I learned to scan for all three before accepting a diff.
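
The third issue was the sneakiest, so I standardized on tests that assert the response shape, not just the status code. A sketch (factory, URL, and field names are invented):

# tests/api/v2/test_projects.py (sketch)
from rest_framework import status
from rest_framework.test import APITestCase

from myapp.tests.factories import ProjectFactory, UserFactory  # hypothetical factories

class ProjectViewSetTests(APITestCase):
    def setUp(self):
        self.user = UserFactory()
        self.project = ProjectFactory(owner=self.user)
        self.client.force_authenticate(self.user)

    def test_retrieve_returns_expected_shape(self):
        resp = self.client.get(f"/api/v2/projects/{self.project.id}/")
        self.assertEqual(resp.status_code, status.HTTP_200_OK)
        # Verify the shape, not just that "something" came back
        self.assertEqual(set(resp.json()), {"id", "name", "owner", "created_at"})

    def test_unauthenticated_request_is_rejected(self):
        self.client.force_authenticate(None)  # clear credentials
        resp = self.client.get(f"/api/v2/projects/{self.project.id}/")
        # 403 under session auth; token auth would return 401
        self.assertEqual(resp.status_code, status.HTTP_403_FORBIDDEN)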

Week 2: moderate complexity

Goal: migrate the 25 moderate-complexity endpoints. These had business logic that didn’t fit ModelViewSet’s defaults — custom permission rules, multi-step writes, conditional responses.

Cursor’s value here was different. The basic structure was easy to generate, but the business logic required me to specify it carefully. The flow:

  1. I’d write a docstring describing the endpoint’s behavior (see the sketch after this list)
  2. Ask Cursor to generate the ViewSet matching the docstring
  3. Review the implementation against my mental model
  4. Iterate as needed
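
The docstring was the spec. Something like this, with the endpoint and its rules invented for illustration:

# api_views/v2/timesheets.py (sketch; queryset and permissions elided)
from rest_framework import viewsets
from rest_framework.decorators import action
from rest_framework.response import Response

from myapp.serializers.v2.timesheets import TimesheetSerializer  # hypothetical
from myapp.services import timesheets as timesheet_service  # hypothetical

class TimesheetViewSet(viewsets.ModelViewSet):
    """
    approve (POST /api/v2/timesheets/{id}/approve/):
    - Caller must have the 'manager' role on the timesheet's project.
    - Only a 'submitted' timesheet can be approved; anything else is a 409.
    - On success: mark approved, record the approver, enqueue a notification.
    - Return the updated timesheet via the standard serializer.
    """
    serializer_class = TimesheetSerializer

    @action(detail=True, methods=["post"])
    def approve(self, request, pk=None):
        timesheet = self.get_object()
        # The multi-step write lives in the service layer, per the rules file
        updated = timesheet_service.approve(timesheet, approver=request.user)
        return Response(self.get_serializer(updated).data)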

Average time per moderate endpoint: 35-50 minutes. Still faster than manual, but the speedup was smaller because I was spending time specifying behavior, not just writing code.

What Cursor got wrong for moderate endpoints:

  • Misinterpreted permission rules ~30% of the time when rules involved multiple conditions
  • Sometimes flattened complex serialization into a generic response shape, losing nuance
  • Occasionally introduced N+1 queries that my v1 code didn’t have

The N+1 issue is worth highlighting. I caught most of these with assertNumQueries in tests. Without those tests, the migration would have been a perf regression.
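
The guard looks like this (a sketch; the factories, URL, and exact count are illustrative):

from rest_framework.test import APITestCase

from myapp.tests.factories import ProjectFactory, UserFactory  # hypothetical

class ProjectListQueryTests(APITestCase):
    def test_list_query_count_is_constant(self):
        user = UserFactory()
        ProjectFactory.create_batch(20, owner=user)
        self.client.force_authenticate(user)
        # Pin the query count so an accidental N+1 fails loudly; the
        # exact number depends on auth and pagination, 3 is illustrative.
        with self.assertNumQueries(3):
            self.client.get("/api/v2/projects/")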

Week 3: custom-logic endpoints

The 15 hardest endpoints had logic that didn’t fit DRF’s normal patterns. Reports that aggregate data across many tables. Multi-step writes that touch external services. Endpoints with custom auth flows.

For these, Cursor was a research tool more than a code generator. The pattern:

  1. I’d describe the endpoint and the migration challenge
  2. Ask Cursor for approach options (often 2-3 patterns to consider)
  3. Pick one, sometimes a hybrid
  4. Write the implementation myself with Cursor assisting on parts (Cmd+K for specific functions)

Average time per hard endpoint: 90-120 minutes. The “Cursor writes it” pattern didn’t work for these; the value was in faster research and faster implementation of the parts I’d already designed.
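
Structurally, most of these landed on a thin APIView delegating to a service function, per the rules file. A condensed sketch with invented names:

from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

from myapp.services import reports as report_service  # hypothetical module

class WeeklyHoursReportView(APIView):
    """Cross-table aggregation report; doesn't fit the ModelViewSet mold."""
    permission_classes = [IsAuthenticated]

    def get(self, request):
        # The multi-table aggregation lives in the service layer;
        # the view handles only HTTP concerns.
        report = report_service.weekly_hours(user=request.user)
        return Response(report)  # service returns a JSON-serializable dict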

What Cursor got wrong for hard endpoints:

  • Several times, suggested patterns that worked but were over-engineered for our use
  • Once, suggested a third-party library that would have meant adding a dependency for a five-line custom solution
  • Frequently chose abstractions that made the code “more reusable” but that I knew wouldn’t be reused

Across the 15 endpoints, I rejected Cursor’s first suggestion about half the time. The interesting half was the one I accepted: cases where Cursor’s suggestion was technically better than what I’d planned, and I learned something. The boring half was Cursor overcomplicating.

Week 4: testing, docs, and cleanup

Goal: ensure 80% coverage on new endpoints, generate API docs, deprecate v1 endpoints with sunset headers, update client SDK to use v2.

This was mostly mechanical. Cursor wrote tests for the few endpoints that didn’t have full coverage. I wrote the deprecation logic by hand because it touched response middleware. Cursor generated the OpenAPI schema annotations, and the SDK update was largely automated by drf-spectacular’s tooling.
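
The deprecation middleware amounted to something like this (a sketch; the sunset date and path prefix are illustrative):

# middleware.py (sketch) -- stamp every v1 response with deprecation headers
class V1DeprecationMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        if request.path.startswith("/api/v1/"):
            response["Deprecation"] = "true"
            # RFC 8594 Sunset header; the date here is made up
            response["Sunset"] = "Sat, 01 Mar 2025 00:00:00 GMT"
            response["Link"] = '</api/v2/>; rel="successor-version"'
        return response

Schema export itself is one command (python manage.py spectacular --file schema.yaml), and the resulting OpenAPI file drove the SDK update.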

Time: ~20 hours, mostly review and verification.

What worked

Pinning reference files. The three pinned files were the best decision I made. Cursor’s defaults for “DRF ViewSet” don’t match my project’s conventions; the pinned files showed it what we actually do.

Clear .cursorrules. Every “do” and “avoid” rule I wrote saved me time on at least one endpoint. The rules about fields = '__all__' and permission_classes paid for themselves repeatedly.

Good test coverage early. The tests caught Cursor’s mistakes before they reached production. Without solid test patterns, the migration would have shipped bugs.

Incremental shipping. Both v1 and v2 active during the migration meant I could ship endpoint-by-endpoint. No big-bang merge. Each PR was reviewable.
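
The routing for that is a few lines (import paths invented):

# urls.py (sketch) -- both API versions mounted side by side during migration
from django.urls import include, path
from rest_framework.routers import DefaultRouter

from myapp.api_views.v2.users import UserViewSet  # hypothetical import

v2_router = DefaultRouter()
v2_router.register(r"users", UserViewSet, basename="v2-users")

urlpatterns = [
    path("api/v1/", include("myapp.api_urls_v1")),  # legacy views, untouched
    path("api/v2/", include(v2_router.urls)),
]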

What didn’t work

Asking Cursor for design decisions. When I asked Cursor to “design the API for X feature,” the suggestions were generic. The good designs came from me thinking; Cursor was useful once I had a design.

Letting Cursor batch endpoints. I tried “migrate these 5 endpoints in one session” once. The output was lower quality than doing them one at a time. Each endpoint deserved focused attention; batching introduced sloppy patterns.

Using BugBot on migration PRs. BugBot flagged a lot of “potential issues” that were actually just differences between v1 and v2 (intentional behavior changes). The noise was higher than usual; I disabled BugBot mid-week.

Numbers

  • Original estimate: 200 hours
  • Actual: 80 hours
  • Cursor cost during migration: $24 in API credits (Cursor Pro subscription was already active)
  • Lines changed: ~12,000 added, ~8,000 deleted
  • Endpoints migrated: 80
  • Net new tests: 240
  • Bugs caught in review (would have shipped without): 6
  • Bugs caught after merge: 2

What I’d do differently

A few things I’d change next time:

Start with .cursorrules earlier. I added rules incrementally as I noticed mistakes. Starting with a comprehensive rules file would have saved me the iteration on early endpoints.

Spend more time on the planning phase. Two days of planning paid off; three would have paid off more. The shape of the migration (ViewSet vs APIView, when to use the service layer, how to test) was right, but I invented some patterns as I went that I’d standardize up front next time.

Get a second pair of eyes on the hard endpoints earlier. I waited until week 4 to ask a colleague to review the hardest 15 endpoints. They caught two architectural issues. Earlier review would have meant rework on fewer endpoints.

Was Cursor essential?

Honestly, no — I could have done this migration without Cursor. The plan would have been the same. The careful endpoint-by-endpoint approach would have been the same.

What Cursor gave me was speed on the mechanical parts. For the 30 simple endpoints, it cut implementation time by roughly two-thirds (45 minutes down to 12-18). For the 25 moderate endpoints, it shaved 30-40% off. For the 15 hard endpoints, the speedup was smaller (maybe 15%).

Across all 80 endpoints, the project came in about 60% under the original estimate: the difference between 80 hours and 200. The per-category speedups don’t account for all of that on their own; a conservative estimate and a plan that held up made up the rest.

The plan and the test discipline were what made the migration successful. Cursor was a force multiplier on the typing speed, not a replacement for the thinking.