#ai-coding
20 items tagged #ai-coding.
Most AI-assisted coding mistakes aren't bad prompts — they're suggestions that looked right and weren't. Five concrete checks that catch the failures before they ship.
AI-generated code is generally not copyrightable—but once you accept a suggestion, you own it. Here's what that means legally, operationally, and when something breaks.
AI review tools catch real bugs consistently but miss architecture, intent, and taste. Here's how to use the pair-review workflow without letting AI comments become background noise.
How to structure commits, branches, and attribution when an AI agent shares your keyboard — and the one habit that prevents merge-conflict headaches.
Most juniors in 2026 arrive with AI tools already in hand. The risk is that they never build the foundations those tools are hiding. Here's a curriculum that uses AI without letting it become a crutch.
Strong type systems give AI a faster feedback loop than unit tests. Here's why TypeScript strict, Rust, and Haskell make AI more reliable—and where looser languages let mistakes slip through.
AI handles the mechanical parts of commit messages well. The part it misses is explaining why a change happened — and that gap matters more than most people expect.
Three threat axes in AI coding tools—log exfiltration, tool-call leaks, and supply-chain poisoning—and the mitigations that actually reduce risk.
A breakdown of 2026 token prices across Claude, GPT-5, and open-source models — and where autonomous coding sessions actually spend the money.
Long sessions, paste-heavy work, and verbose tool output push context windows to their limits. Here are three compression strategies, what fidelity each one sacrifices, and a workflow that sidesteps the problem entirely.
AI handles the mechanical steps of debugging well. Root cause analysis is the step it skips. Here's how to force it not to.
AI coding tools hallucinate in four distinct patterns. Knowing which kind you're looking at determines whether the toolchain catches it or a human must.
Most AI-in-CI integrations create noise faster than they create signal. PR triage works. Auto-review mostly doesn't. Here's where the tradeoffs land in practice.
Most developers default to the most expensive model for every task. A four-axis framework—cost, intelligence, speed, context—shows when that's right and when it's 10x overpay.
Three pair-programming patterns for working with AI—junior dev, rubber duck, second senior—and the one to avoid: letting the model lead.
AI coding tools are fast in both directions. The problem is that fast exploration and fast shipping require completely different operating modes—and conflating them is how spike code ends up in production.
AI tools thrive on greenfield work. On legacy code with custom DSLs, undocumented invariants, and decade-old conventions, the instinct to modernize becomes a liability.
When AI handles tests and implementation together, it can satisfy itself without testing real behavior. Here's how to assign the work to get actual coverage.
Most days one AI coding tool is enough. This is about the narrower case where running three in parallel — each doing the thing it was built for — actually earns its cognitive overhead.