#opinion
31 items tagged #opinion.
AI editors like Cursor are great inside the editor. For email drafting, browser-side research, and quick model comparisons, a general AI aggregator does what coding-native tools weren't built for. Here's how I split the two.
Two years ago, the context window race mattered. Now every model worth using has 200k+ tokens, and the actual bottleneck has moved somewhere else. Where it moved is more interesting than where it was.
AI coding tools handle the kinds of grinding-out-code tasks that used to teach junior engineers their craft. Senior engineers are split on whether this is good. Both sides have a point.
Aider and Cline are open source. Cursor and Copilot aren't. The choice between them comes down to four tradeoffs that aren't usually framed clearly.
Vendors publish '40% productivity gains'. Internal teams report similar numbers. The reality, when measured carefully, is more modest and more interesting.
Switching from VSCode to Cursor took an afternoon. Switching back, after a year of accumulated workflow, is a different story. The lock-in isn't where most people are looking.
AI tools handle code-level tasks reasonably well and struggle with architecture-level decisions. The reasons are structural, not just a matter of bigger models.
Engineers love to claim AI made them 10x faster. They mean something specific by it, and that specific thing isn't what the claim sounds like.
If candidates can use AI to solve coding interview questions in 5 minutes, the interview was probably testing the wrong thing. The harder question is what to test instead.
Reviewing AI output one chunk at a time feels slower than letting it produce a feature and reviewing the diff at the end. Across many sessions, the reverse turns out to be true.
Tools advertise 'AI debugs your code automatically.' In practice, autonomous debugging fails more often than it succeeds, and the failure mode is expensive.
Every team using AI tools has someone who's said this. The complaint is sometimes accurate, sometimes a misdiagnosis. Telling the difference matters because the fixes are completely different.
AI tools generate plausible names instantly. The names are almost always worse than what I'd come up with after thirty seconds of thought.
When AI agents produce most of a PR's code, what is review actually for? The answer is shifting in important ways.
Agent loops are designed for autonomous problem-solving. For debugging, the autonomy is precisely what hurts. Investigation requires staying in the loop.