Outcome

Dashboard rebuild shipped 9 working days ahead of the original plan; Cascade handled most UI wiring but needed manual review on metrics logic

The project was a B2B analytics dashboard that had outgrown its first version. The old app was a mix of server-rendered tables, client-side filters, and chart components that had been copied between pages. The product team wanted a cleaner information architecture and faster iteration on customer-specific views.

I used Windsurf as the primary editor for the rebuild. The target stack was Next.js, TypeScript, TanStack Query, and a small internal component library. The work was not algorithmically hard. It was a lot of connected UI: tables, filters, URL state, empty states, export buttons, and chart panels that needed to feel consistent.

That shape fit Cascade better than I expected.

The starting point

The old dashboard had:

  • 14 top-level pages
  • 33 table variants
  • 9 chart components
  • 3 different date filter implementations
  • no shared loading or empty-state components
  • inconsistent URL query parameters across pages

The rebuild goal was not “make it prettier.” The goal was to make the dashboard cheaper to extend. Every new customer request had been turning into another special case.

The new structure used:

  • app/(dashboard) route groups
  • shared filter state helpers
  • a single table shell with typed column definitions
  • chart cards driven by metric definitions (see the sketch after this list)
  • consistent export behavior across every report page
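
To make the metric-definition piece concrete, here is a minimal sketch of what an entry in src/metrics/accounts.ts might have looked like. The MetricDefinition shape and field names are my assumptions for illustration, not the project's actual code.

// A sketch of a metric definition module. The shape and field names
// are illustrative assumptions, not the real definitions.
export interface MetricDefinition {
  key: string;          // stable identifier used by chart cards and tables
  label: string;        // display label, owned here rather than in the UI
  sourceField: string;  // the column the metric reads from
  format: "number" | "percent" | "currency";
  description: string;  // the business definition, kept next to the metric
}

export const accountMetrics: Record<string, MetricDefinition> = {
  activeAccounts: {
    key: "activeAccounts",
    label: "Active accounts",
    sourceField: "lastBillableActivityAt",
    format: "number",
    description: "Accounts with billable activity in the selected date range.",
  },
};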

How I set up Windsurf

I kept the setup deliberately constrained.

Windsurf rules for this dashboard rebuild:

- Use the existing component library. Do not introduce new UI primitives unless asked.
- Keep metric definitions separate from rendering components.
- URL query params are the source of truth for filters.
- Do not invent chart labels or business definitions.
- If a metric name is ambiguous, stop and ask.
- Add tests for filter parsing and export payload generation.

That last part mattered. Cascade is fast enough that it can produce a lot of plausible UI before you notice the wrong assumption. The rule forcing it to stop on ambiguous metrics saved me more time than any syntax completion did.

The workflow

I worked page by page, not layer by layer.

For each dashboard page:

  1. I wrote the route name, visible sections, filters, and data dependencies in a short note.
  2. I added the nearest existing page to Cascade context.
  3. I asked Cascade to build the first version using existing components.
  4. I reviewed the diff for state shape, query keys, and metric labels.
  5. I ran the local page and corrected layout or data issues manually.

A typical prompt looked like this:

Build the /accounts/retention dashboard page.

Use the same page shell and filter bar pattern as /accounts/overview.
Filters:
- date range
- segment
- plan

Sections:
- retention summary cards
- cohort retention chart
- accounts at risk table

Do not define the retention formula in the component.
Use the metric definitions from src/metrics/accounts.ts.

Cascade usually produced a usable first pass. The page would compile. The rough layout would be right. The mistakes were mostly in the layer between product meaning and UI wiring.

Where Cascade was genuinely useful

Repeated page assembly. Once two dashboard pages existed, Cascade was very good at creating the next one in the same style. It followed folder structure, component naming, and loading-state conventions without much prompting.

URL state plumbing. The old app had filter state scattered across components. Cascade handled the boring migration to query-param-backed state well. This is exactly the kind of repetitive work that AI tools compress.
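
For a sense of the pattern, here is a minimal sketch of a query-param-backed filter helper. The FilterState shape, param keys, and function names are assumptions; the real shared helpers differed in detail.

// A sketch of query-param-backed filter state. The URL is the source
// of truth, so there is no component-local filter state to migrate.
export interface FilterState {
  dateFrom: string | null; // ISO date string, null when the param is absent
  dateTo: string | null;
  segment: string | null;
  plan: string | null;
}

export function parseFilters(params: URLSearchParams): FilterState {
  return {
    dateFrom: params.get("dateFrom"),
    dateTo: params.get("dateTo"),
    segment: params.get("segment"),
    plan: params.get("plan"),
  };
}

export function serializeFilters(state: FilterState): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(state)) {
    if (value !== null && value !== "") params.set(key, value);
  }
  return params.toString();
}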

Table configuration. Column definitions, sort keys, row actions, and empty states are tedious. Cascade handled these with fewer mistakes than I expected, as long as I gave it an adjacent example.
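
The typed column definitions looked roughly like this. The Column<Row> shape here is a stand-in for the internal table shell's actual API, which I am not reproducing exactly.

// A sketch of typed column definitions for the single table shell.
// The Column<Row> shape is an assumption, not the real internal API.
interface Column<Row> {
  key: keyof Row & string;       // typed against the row shape
  header: string;
  sortable?: boolean;
  render?: (row: Row) => string; // optional cell formatting
}

interface AccountRow {
  name: string;
  plan: string;
  lastBillableActivityAt: string;
}

const atRiskColumns: Column<AccountRow>[] = [
  { key: "name", header: "Account", sortable: true },
  { key: "plan", header: "Plan" },
  {
    key: "lastBillableActivityAt",
    header: "Last billable activity",
    sortable: true,
    render: (row) => new Date(row.lastBillableActivityAt).toLocaleDateString(),
  },
];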

Test scaffolding. For the parsing helpers and export payload builders, Cascade generated good tests after I wrote the first two cases by hand. The generated cases were not clever, but they covered enough happy-path and missing-param behavior to catch regressions.
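
The generated cases looked roughly like this, assuming Vitest and the parseFilters helper sketched above; the real suite used the project's own fixtures.

// A sketch of the happy-path and missing-param tests, assuming Vitest.
import { describe, expect, it } from "vitest";
import { parseFilters } from "./filters"; // hypothetical module path

describe("parseFilters", () => {
  it("reads every known param from the URL", () => {
    const params = new URLSearchParams(
      "dateFrom=2024-01-01&dateTo=2024-01-31&segment=enterprise&plan=pro"
    );
    expect(parseFilters(params)).toEqual({
      dateFrom: "2024-01-01",
      dateTo: "2024-01-31",
      segment: "enterprise",
      plan: "pro",
    });
  });

  it("returns null for missing params instead of guessing defaults", () => {
    const params = new URLSearchParams("segment=enterprise");
    expect(parseFilters(params).dateFrom).toBeNull();
    expect(parseFilters(params).plan).toBeNull();
  });
});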

Where it struggled

Metric semantics. A chart titled “active accounts” can mean at least five things. Cascade does not know which one your business uses. Twice it chose the wrong source field because the name looked obvious, and both times the result would have been a believable but wrong chart.

After that, I changed the rule: metric definitions live outside UI components, and Cascade should not invent them. That fixed most of the issue.

Responsive density. Dashboard pages need to be dense without becoming cramped. Cascade tended to add too much vertical spacing, especially around cards and section headings. I tightened the layouts manually.

Error states. The first version of most pages had loading and empty states, but weak error states. For internal tools this often matters less than for customer-facing apps, but this dashboard was customer-facing. I added explicit error copy and retry actions by hand.

Chart accessibility. The generated chart panels looked fine but lacked useful text summaries. I added short metric summaries and table fallbacks for the pages used in account review meetings.
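
The table fallbacks were simple, roughly this shape. The component name, props, and the sr-only utility class are assumptions for the sketch.

// A sketch of a chart card with a text summary and a visually hidden
// table fallback. Names and props are illustrative assumptions.
function CohortRetentionCard(props: {
  summary: string;                              // one-sentence metric summary
  rows: { cohort: string; retained: string }[]; // same data the chart plots
}) {
  return (
    <section aria-label="Cohort retention">
      <p>{props.summary}</p>
      <table className="sr-only">
        <thead>
          <tr><th>Cohort</th><th>Retained</th></tr>
        </thead>
        <tbody>
          {props.rows.map((row) => (
            <tr key={row.cohort}>
              <td>{row.cohort}</td>
              <td>{row.retained}</td>
            </tr>
          ))}
        </tbody>
      </table>
    </section>
  );
}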

The numbers

The original estimate was six weeks. Actual rebuild time was just under four and a half weeks, with the first customer-facing release landing 9 working days ahead of plan.

  • Pages rebuilt: 14
  • Shared dashboard components created: 11
  • Old table variants: reduced from 33 to 7
  • Date filter implementations: reduced from 3 to 1
  • Cascade-generated first-pass page code: roughly 55%
  • Manual rewrites after review: roughly 20%
  • Tests added: 84
  • Production rollbacks after launch: 0

The “55%” number is approximate. I did not count generated lines; I tracked accepted first-pass work by page. The value was not that Cascade wrote half the app. The value was that it reduced the amount of repetitive assembly between decisions.

The most expensive mistake

The worst mistake was on the churn-risk page. Cascade connected a table to lastSeenAt when the metric needed lastBillableActivityAt. The page looked correct. The data shape matched. Tests passed because the fixture had both fields.

The mistake only showed up when a customer success manager compared the output to a spreadsheet they trusted.

The fix was simple. The lesson was not. For analytics dashboards, the hardest bugs are not compile errors. They are definitions that look reasonable but answer the wrong question.

After that, every metric definition got a short comment:

// Billable activity excludes logins, settings changes, and support-only account access.

Cascade respected those comments better than it respected field names alone.

What I would repeat

I would use Windsurf again for this kind of rebuild.

The useful pattern was:

  • one strong exemplar page
  • strict component reuse
  • metric definitions outside rendering code
  • page-by-page work
  • manual review focused on business meaning, not syntax

I would not let Cascade define the analytics model. I would not ask it to “improve” the dashboard broadly. The work that paid off was narrower: assemble this page using this pattern, wire these filters, use these definitions, add these tests.

Verdict

Windsurf was a good fit for a Next.js dashboard rebuild because the project had a high ratio of structured repetition to novel architecture. Cascade kept momentum high through the repetitive parts.

The boundary is product meaning. If the dashboard answers business questions, the human still has to own the definitions. Cascade can wire the chart. It should not decide what the chart means.