Tinker AI
2026-03-13

Cursor hit 1.0 this week. The marquee feature isn’t a new model or a faster autocomplete — it’s observability. Cursor 1.0 includes team-level analytics on AI usage: which engineers used Cursor most, on which kinds of tasks, with what model selections, and at what cost.

For teams trying to evaluate whether Cursor is delivering its promised productivity, this is the first time the data is in the tool itself.

What’s in the dashboard

The new Team Analytics tab (visible to admins on Business and Enterprise plans) includes:

  • Active engineer count and trend over time
  • Request counts by engineer, broken down by feature (Tab, Cmd+K, chat, Composer)
  • Model selection patterns (which models engineers actually pick)
  • Cost attribution per engineer (for plans with overage charges)
  • Aggregate metrics on accept/reject rates for AI suggestions

The data is team-level by default; per-engineer detail requires the engineer's opt-in, a deliberate privacy guard.
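Cursor hasn't published an export schema alongside 1.0, so purely as a sketch of what teams might do with this kind of data outside the dashboard, here's a hypothetical per-engineer usage table and a team-level rollup (field names and numbers are illustrative, not Cursor's):

# Hypothetical per-engineer usage records; the fields mirror the dashboard
# bullets above, but the names and numbers are illustrative, not Cursor's schema.
import pandas as pd

usage = pd.DataFrame([
    {"engineer": "a", "feature": "tab",      "requests": 420, "accepted": 310, "cost_usd": 3.10},
    {"engineer": "a", "feature": "composer", "requests": 35,  "accepted": 20,  "cost_usd": 6.80},
    {"engineer": "b", "feature": "chat",     "requests": 60,  "accepted": 41,  "cost_usd": 1.20},
    {"engineer": "c", "feature": "tab",      "requests": 3,   "accepted": 2,   "cost_usd": 0.02},
])

# Team-level rollup by feature -- roughly what an admin sees without individual
# opt-in: request volume, aggregate accept rate, and cost per feature.
team = usage.groupby("feature")[["requests", "accepted", "cost_usd"]].sum()
team["accept_rate"] = team["accepted"] / team["requests"]
print(team)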

Why this matters

Most teams adopting AI coding tools have struggled to answer “is this actually helping?” The metrics they had:

  • Subscription cost (easy)
  • Self-reported productivity (unreliable)
  • Engineering output (confounded by other factors)

What was missing: actual usage data. How often are engineers using the tool? On what kinds of work? Are some engineers heavy users while others basically don’t use it?

Cursor’s analytics start to fill this gap. They don’t directly measure productivity (the tool can’t know if your output is good), but they measure usage in detail. From usage patterns, teams can infer where the tool is fitting in and where it isn’t.

The analytics that surprise teams

Three patterns I’ve seen surface in early access reports:

Most teams have a long tail of low-usage engineers. The pitch for AI tools assumes most engineers will use them most of the time. The reality, in many teams: 30-40% of seats see fewer than 5 sessions per week. Sometimes the seat is simply inactive (the engineer left and it was never reclaimed). Sometimes the engineer just doesn't use AI tools much.

For teams paying per-seat, this is an opportunity to reduce subscription waste.
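A sketch of how a team might quantify that tail, assuming weekly session counts per seat are available (the numbers are made up; the 5-sessions-per-week threshold is the one above):

# Hypothetical weekly session counts per seat; real figures would come from the
# dashboard or an export. Illustrative only.
weekly_sessions = {"a": 42, "b": 18, "c": 3, "d": 0, "e": 25, "f": 1, "g": 12, "h": 30, "i": 2, "j": 16}

SEAT_PRICE_PER_MONTH = 20   # illustrative per-seat price
LOW_USAGE_THRESHOLD = 5     # sessions/week, matching the pattern above

low_usage = sorted(eng for eng, n in weekly_sessions.items() if n < LOW_USAGE_THRESHOLD)
share = len(low_usage) / len(weekly_sessions)
reclaimable = len(low_usage) * SEAT_PRICE_PER_MONTH * 12

print(f"{share:.0%} of seats under {LOW_USAGE_THRESHOLD} sessions/week: {low_usage}")
print(f"potentially reclaimable spend: ${reclaimable:,}/year")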

Heavy users are doing different work than average users. The top usage tier tends to be doing greenfield work or test writing; the bottom tier is often on legacy code or pure code review. This matches the qualitative experience, but seeing it in the data helps teams decide which kinds of work to prioritize for AI assistance.

Composer/agent usage is more concentrated than expected. Most teams find that Composer or agent-mode usage is dominated by 2-3 power users. The rest of the team uses chat or Cmd+K. This affects how to think about training: the agent features need handholding to drive broader adoption.
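The concentration is easy to check with the same kind of data; the Composer request counts per engineer below are hypothetical:

# Hypothetical Composer/agent request counts per engineer (illustrative only).
composer_requests = {"a": 240, "b": 190, "c": 8, "d": 5, "e": 3, "f": 2, "g": 0}

ranked = sorted(composer_requests.values(), reverse=True)
total = sum(ranked)
top_3_share = sum(ranked[:3]) / total if total else 0.0
print(f"top 3 engineers account for {top_3_share:.0%} of Composer requests")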

What’s not in the dashboard

A few things teams asked for that aren’t in 1.0:

Quality metrics. Whether AI suggestions led to working code or had to be rewritten. The accept/reject rate is a partial signal but doesn’t catch the case where code was accepted and then revised.

Time savings estimates. No “you saved X hours this week” claims. Cursor was clear that they don’t have data to support such claims; they’re not making them.

Per-task cost breakdown. Cost is shown per engineer, not per task. For teams that want to understand “this complex task cost X dollars,” the dashboard doesn’t help.

Comparison to non-AI baselines. No “before/after” view. The dashboard shows current usage, not productivity deltas vs. pre-AI workflow.

These are reasonable omissions for 1.0. Quality and time metrics are hard to measure honestly, and Cursor is being conservative about what it claims.

Privacy considerations

The individual-engineer detail requires opt-in. Without opt-in, admins see aggregates but not personal usage.

This is the right call. Granular per-engineer dashboards create surveillance dynamics that AI tools don't need. The opt-in model lets engineers who want to share their usage patterns (for performance reviews, for self-reflection) do so without making that the default.

I’d watch for whether the opt-in stays opt-in. There’s pressure on tools to provide manager-visible productivity data; resisting that pressure preserves the engineer-trust relationship that makes AI tools useful.

Implications for the market

The observability story is a wedge for enterprise sales. For organizations that have been hesitant to adopt Cursor because they couldn’t justify the spend, the dashboard provides justification. “We’re paying $20/seat/month × 100 seats = $24k/year. Here’s the actual usage. Here’s the work it’s been on.”
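The arithmetic, spelled out, with the earlier low-usage tail folded in to estimate an effective cost per actively used seat (illustrative numbers, not Cursor pricing guidance):

# Worked version of the cost-justification math above (illustrative numbers).
seats = 100
price_per_seat_month = 20
annual_spend = seats * price_per_seat_month * 12        # $24,000/year

low_usage_share = 0.35                                   # midpoint of the 30-40% tail
active_seats = round(seats * (1 - low_usage_share))      # ~65 seats actually used
cost_per_active_seat = annual_spend / active_seats

print(f"annual spend: ${annual_spend:,}")
print(f"effective cost per actively used seat: ${cost_per_active_seat:,.0f}/year")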

GitHub Copilot’s analytics are roughly comparable, scoped to GitHub’s audit log model. Windsurf’s analytics are weaker. The Cursor 1.0 release pushes Cursor closer to feature parity with Copilot Enterprise on the manageability dimension.

For teams already on Cursor, this is a free upgrade. For teams on Copilot, the question is whether Cursor’s analytics + product features beat Copilot’s analytics + GitHub integration. The answer probably depends on the team.

The 1.0 framing

Cursor explicitly chose this release to be “1.0.” The framing matters. They’ve been shipping Cursor for 2+ years; calling this 1.0 is an editorial choice that says “we’re now ready for serious enterprise adoption.”

The features that come with that framing: SOC 2, the analytics, more enterprise admin controls, a more careful release cadence. These aren’t features that excite individual developers, but they’re the ones that make a tool procurable by organizations that take governance seriously.

This is the maturity move. Whether it pays off depends on whether the enterprise market wants what Cursor is selling — and the analytics dashboard is part of the answer.

What I’ll be watching

The 1.0 release introduces the dashboard. The interesting question is what teams do with the data over the next several months. If teams use the data to expand AI adoption (broaden seat coverage, train low-usage engineers), Cursor wins. If teams use the data to cut spend (reduce seats, downgrade plans), Cursor’s revenue model bends.

Both are plausible. The honest version is probably mixed — some teams expand, some cut, average is roughly flat. The dashboard makes the decision data-driven rather than vibes-driven, which is good for everyone except tool vendors who were quietly benefiting from the lack of data.