Tinker AI

Cursor for writing Cypress tests in a monorepo: scoping context for E2E

Published 2026-01-27 by Owner

End-to-end tests in Cypress have specific context needs that differ from unit tests. The page’s selectors, the test’s expected user flow, the application’s actual state — these are spread across multiple files. Cursor’s defaults sometimes miss the connections, producing tests that compile but don’t test what they should.

A few configuration patterns produce more reliable Cypress tests in Cursor.

The pattern

Cypress tests typically live in cypress/e2e/. The structure I use:

cypress/
  e2e/
    flows/           # User journey tests
    pages/           # Page object classes
    fixtures/        # Test data
    support/
      commands.ts    # Custom Cypress commands
      e2e.ts         # Setup

Each test depends on:

  • The page object (selectors, page actions)
  • The custom commands (login, setup state, etc.)
  • The fixture data
  • The actual page being tested

Cursor needs to see all of these to write good tests.
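The page-object dependency is the most important one. A minimal sketch of what such a file might contain — the `UserProfilePage` class and its selectors are illustrative assumptions, and the recording `cy` stub exists only so the sketch runs outside Cypress (in a real suite, `cy` is the global provided by the Cypress runtime):

```typescript
// Minimal recording stub standing in for the Cypress `cy` global.
type Chainable = {
  type: (text: string) => Chainable;
  click: () => Chainable;
};
const calls: string[] = [];
const chain: Chainable = {
  type: (text: string) => { calls.push(`type:${text}`); return chain; },
  click: () => { calls.push("click"); return chain; },
};
const cy = {
  visit: (url: string) => { calls.push(`visit:${url}`); },
  get: (selector: string) => { calls.push(`get:${selector}`); return chain; },
};

// Hypothetical page object, as it might live in cypress/pages/UserProfilePage.ts.
// Selector names are illustrative, not taken from a real project.
class UserProfilePage {
  visit() { cy.visit("/profile"); }
  bioInput(): Chainable { return cy.get('[data-testid="profile-bio"]'); }
  saveButton(): Chainable { return cy.get('[data-testid="profile-save"]'); }
  updateBio(text: string) {
    this.bioInput().type(text);
    this.saveButton().click();
  }
}
```

When a file like this is in context, Cursor has concrete method names (`updateBio`, `bioInput`) and selectors to reuse instead of guessing.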

Pinning the page object

The biggest improvement: pin the relevant page object file when generating new tests.

For example, when writing a test about the user profile page:

> /add cypress/pages/UserProfilePage.ts
> /add cypress/support/commands.ts
> /add cypress/fixtures/users.json
> write a Cypress test that verifies a user can update their profile
> bio. They should see the updated bio after saving.

With these files in context, Cursor’s generated test:

  • Uses the UserProfilePage class with its actual methods
  • Calls the right custom commands (cy.loginAs(...) from commands.ts)
  • Uses fixture data (real users from users.json) rather than inventing data
  • Knows the actual selectors

Without these pinned files, Cursor invents page methods, custom commands, and fixture shapes. The result might compile but reference things that don’t exist.
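A sketch of the shape a well-grounded test takes when the page object, commands, and fixtures are in context. The mocha-style globals and `cy` below are minimal recording stubs so the sketch runs outside Cypress; the page-object methods, fixture fields, and selectors are illustrative assumptions:

```typescript
// Recording stubs for the test runner globals and `cy`.
const log: string[] = [];
const describe = (_name: string, body: () => void) => body();
const it = (_name: string, body: () => void) => body();
const beforeEach = (body: () => void) => body();

type Chainable = {
  clear: () => Chainable;
  type: (text: string) => Chainable;
  click: () => Chainable;
  should: (assertion: string, value?: string) => Chainable;
};
const chain: Chainable = {
  clear: () => chain,
  type: (text: string) => { log.push(`type:${text}`); return chain; },
  click: () => { log.push("click"); return chain; },
  should: (assertion: string, value?: string) => { log.push(`should:${assertion}:${value}`); return chain; },
};
const cy = {
  loginAs: (role: string) => log.push(`loginAs:${role}`),
  visit: (url: string) => log.push(`visit:${url}`),
  get: (selector: string) => { log.push(`get:${selector}`); return chain; },
};

// Condensed stand-ins for cypress/fixtures/users.json and
// cypress/pages/UserProfilePage.ts.
const users = { member: { bio: "Updated bio text" } };
const profilePage = {
  visit: () => cy.visit("/profile"),
  bioInput: () => cy.get('[data-testid="profile-bio"]'),
  saveButton: () => cy.get('[data-testid="profile-save"]'),
  savedBio: () => cy.get('[data-testid="profile-bio-display"]'),
};

describe("user profile", () => {
  beforeEach(() => {
    cy.loginAs("member");   // custom command from commands.ts, not a hand-rolled login
    profilePage.visit();    // navigation through the page object
  });

  it("shows the updated bio after saving", () => {
    profilePage.bioInput().clear().type(users.member.bio);
    profilePage.saveButton().click();
    profilePage.savedBio().should("contain", users.member.bio);
  });
});
```

The distinguishing features are exactly the four bullets above: real page-object methods, the project's custom command, fixture-backed data, and no raw selectors in the test body.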

A specific .cursorrules section

For the Cypress tests, a focused rules section:

# Cypress tests

End-to-end tests in this project follow the page object pattern:

1. Each route has a corresponding class in cypress/pages/
2. Tests interact through the page object, never direct cy.get('selector')
3. Custom commands handle authentication, state setup, navigation
4. Fixtures provide reusable test data

Test structure:
- describe blocks for the feature being tested
- it blocks for specific scenarios
- beforeEach for setup (loginAs, navigate to page, set up state)
- One assertion focus per test (don't test multiple unrelated things in one it)

Selectors:
- Use data-testid attributes whenever possible
- Avoid CSS selectors that depend on styling
- If a selector requires text matching, use cy.contains()

Custom commands to use:
- cy.loginAs(role) — log in as a user with the given role
- cy.seedDatabase(fixture) — set up DB state from a fixture
- cy.navigateTo(page) — navigate using the app's router

When writing a new test:
- Look at cypress/pages/ for the relevant page object
- Look at cypress/support/commands.ts for available commands
- Look at cypress/fixtures/ for available test data
- If a needed page object or command doesn't exist, create it before
  writing the test

This is about 30 lines. The cost is a few minutes to write. The payoff is Cursor producing tests that fit the existing patterns.
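For reference, the three commands named in the rules might be registered in cypress/support/commands.ts along these lines. This is a sketch, not a real implementation: the `Cypress` and `cy` stand-ins below only record activity so it runs outside a browser, and the test-only endpoints are assumptions about the app:

```typescript
// Recording stand-ins for the Cypress command registry and `cy`.
type CommandFn = (...args: any[]) => void;
const registry = new Map<string, CommandFn>();
const Cypress = {
  Commands: { add: (name: string, fn: CommandFn) => { registry.set(name, fn); } },
};
const requests: string[] = [];
const cy = {
  request: (opts: { method: string; url: string; body?: object }) => {
    requests.push(`${opts.method} ${opts.url}`);
  },
  visit: (url: string) => { requests.push(`VISIT ${url}`); },
};

Cypress.Commands.add("loginAs", (role: string) => {
  // Assumption: the app exposes a test-only login endpoint.
  cy.request({ method: "POST", url: "/test/login", body: { role } });
});

Cypress.Commands.add("seedDatabase", (fixture: string) => {
  // Assumption: a test-only seeding endpoint that accepts a fixture name.
  cy.request({ method: "POST", url: "/test/seed", body: { fixture } });
});

Cypress.Commands.add("navigateTo", (page: string) => {
  cy.visit(`/${page}`);
});
```

Naming the commands in the rules and keeping commands.ts pinned means Cursor calls these rather than reinventing login or seeding inline in each test.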

What Cursor does well for Cypress

After this configuration:

Writing a new test that follows existing patterns. Speed gain ~2-3x. Tests are structurally correct on first attempt.

Adding custom commands. When a test needs a new command, Cursor produces one that fits commands.ts conventions.

Generating fixtures. From a description, Cursor produces test data with realistic shapes.

Refactoring tests. Restructuring existing tests, extracting shared commands, and similar cleanup are all handled well.
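On the fixture point, "realistic shapes" means data that looks like what the app actually stores. A sketch of what a generated users fixture might contain — every field name here is an illustrative assumption, shown as a TypeScript object rather than the JSON file it would live in:

```typescript
// Hypothetical shape of cypress/fixtures/users.json content.
const usersFixture = {
  admin: { id: "u-001", email: "admin@example.com", role: "admin", bio: "Site administrator" },
  member: { id: "u-002", email: "member@example.com", role: "member", bio: "Longtime member" },
  guest: { id: "u-003", email: "guest@example.com", role: "guest", bio: "" },
};
```

In a real suite, tests would load this via `cy.fixture('users')` or import it directly; the value of generation is getting plausible ids, emails, and role values without typing them by hand.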

What Cursor still struggles with

A few areas:

Flaky test debugging. Why is this test failing intermittently? Cursor’s hypotheses are sometimes right but often miss the actual cause (timing issues, state leakage between tests). Manual investigation is faster.

Testing complex async flows. Tests for features with delayed state updates, polling, websockets. Cursor’s defaults can be racy. Review carefully.

Multi-tab / multi-window testing. Cypress’s support for these is limited; the patterns are non-obvious. Cursor isn’t helpful here.

Visual regression testing. Tools like Percy or Chromatic have specific patterns. Cursor’s familiarity is uneven.

For these, manual care is needed regardless of AI tooling.
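For the async case in particular, the thing to check in generated tests is how they wait. A sketch of the non-racy pattern: alias the network call with `cy.intercept` and wait on the alias, instead of the fixed `cy.wait(ms)` a default generation sometimes emits. The recording `cy` stub exists only so the sketch runs outside Cypress; the route and selector names are illustrative:

```typescript
// Recording stub for the three `cy` calls this pattern uses.
const log: string[] = [];
const cy = {
  intercept: (_method: string, _url: string) => ({
    as: (alias: string) => { log.push(`alias:${alias}`); },
  }),
  wait: (target: string | number) => { log.push(`wait:${target}`); },
  get: (_selector: string) => ({
    should: (assertion: string, _value?: string) => { log.push(`should:${assertion}`); },
  }),
};

// Racy version to reject in review: cy.wait(2000) hopes the response
// has landed within two seconds.
// Deterministic version: wait for the actual response before asserting.
cy.intercept("GET", "/api/profile").as("profileLoad");
cy.wait("@profileLoad");
cy.get('[data-testid="profile-bio"]').should("contain", "Updated bio");
```

If a generated test for a polling or delayed-update feature contains a bare numeric `cy.wait`, that is the review flag.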

A specific success

A test pattern that worked well with Cursor:

I needed tests for a checkout flow with multiple variations (logged-in, guest, with various payment methods). The test matrix was 12 cases.

I wrote one test by hand carefully, with all the page objects and commands set up. Then I asked Cursor to write the other 11 cases following the same pattern.

Cursor produced 11 tests in about 3 minutes. Nine of the 11 worked on the first try; two needed minor adjustments (specific assertions about edge cases).

For comparison, writing those 11 tests manually would have taken ~90 minutes. With Cursor it was 8 minutes, including review.

The pattern: when the test structure is clear and you need many variations, Cursor scales the variations efficiently. The first careful test sets the template; subsequent tests follow.
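A matrix like that can be enumerated so each variation becomes one `it` block following the hand-written template. The dimensions below (auth state, payment method, shipping option) are illustrative assumptions about what the checkout matrix contained; the point is the shape, not the specific axes:

```typescript
// Enumerate the test matrix as data, then generate one case per combination.
type CheckoutCase = { auth: string; payment: string; shipping: string };

const authStates = ["logged-in", "guest"];
const paymentMethods = ["card", "paypal", "apple-pay"];
const shippingOptions = ["standard", "express"];

const cases: CheckoutCase[] = authStates.flatMap((auth) =>
  paymentMethods.flatMap((payment) =>
    shippingOptions.map((shipping) => ({ auth, payment, shipping }))
  )
);
// 2 auth states × 3 payment methods × 2 shipping options = 12 cases.
```

In the spec file, iterating `cases` with one `it` block per entry keeps all 12 variations on the single template the first careful test established.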

Anti-pattern: generating tests against generic patterns

The pattern that doesn’t work: asking Cursor to “write Cypress tests for the user feature” without the page objects and commands pinned.

What you get: tests that use generic Cypress patterns (cy.get('input[name=email]').type(...)) rather than your project’s patterns. The tests pass but don’t fit; merging them creates inconsistency in your test suite.

Always pin the relevant page objects and commands. The 30 seconds of pinning saves you the rework.

Cypress in monorepos

For monorepos with multiple frontend apps, each app might have its own Cypress setup. The .cursorrules can be per-package:

# apps/web/.cursor/rules/cypress.mdc

For tests in apps/web/cypress/, follow the patterns in:
- apps/web/cypress/pages/
- apps/web/cypress/support/commands.ts
- apps/web/cypress/fixtures/

The web app uses different test infrastructure from admin/. Do not
mix patterns across apps.

This scoping prevents Cursor from importing patterns from one app to another. The per-app conventions stay intact.

Worth the configuration time

For projects with substantial Cypress test suites, the configuration time pays for itself. Writing the rules takes about 30 minutes; the first day of test work using them saves more time than that.

For projects with thin test coverage, the configuration may be premature. Skip it; use Cursor casually for tests; come back to formal configuration when the test suite grows.

A meta observation

E2E tests are an interesting AI tooling case. They’re verbose enough that AI assistance helps. They’re picky enough about correctness that AI assistance fails when context is wrong.

The right balance: invest in the test infrastructure (page objects, custom commands, fixtures) so the patterns are clear. Then AI assistance produces tests that fit the infrastructure. Without the infrastructure, AI tests are brittle and idiosyncratic.

This is true beyond Cypress. Playwright tests, Selenium tests, integration tests in general — they all benefit from a clear pattern that AI can match against. The pattern is the foundation; AI is the multiplier.

Closing

For Cypress + Cursor specifically, the configuration above produces tests that fit your project’s conventions reliably. Pin the page objects, write the rules, write a careful first test as a template, scale from there.

The result is test code that’s indistinguishable from manually written tests. That’s the goal — AI assistance that adds to the project consistently rather than producing AI-shaped artifacts that need cleanup.