AI Hasn’t Made Developers Faster, It’s Made Their Review Queues Longer!

A developer uses Copilot to write 30 lines of code in 10 minutes, but then spends 45 minutes reviewing it – checking for bugs, edge cases, and code that doesn’t match team standards.
The time saved during writing is completely eaten up during validation. This pattern repeats across teams trying to adopt AI at scale.
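The arithmetic behind that example is worth making explicit. A minimal sketch: only the 10-minute write and 45-minute review come from the example above; the manual-baseline numbers are assumptions chosen for illustration, not measured data.

```python
# Back-of-envelope comparison of the example above.
# Assumed baseline (hypothetical): ~30 min to hand-write 30 lines,
# plus a lighter 15-min review of familiar, self-authored code.
manual_write_min = 30
manual_review_min = 15

# From the example: Copilot writes the 30 lines in 10 minutes,
# but reviewing unfamiliar AI-authored code takes 45 minutes.
ai_write_min = 10
ai_review_min = 45

manual_total = manual_write_min + manual_review_min  # 45 min
ai_total = ai_write_min + ai_review_min              # 55 min

print(f"manual: {manual_total} min, with AI: {ai_total} min")
# Under these assumptions, the 20 minutes saved writing are more than
# consumed by the extra 30 minutes of validation.
```

The exact baseline is debatable, but the shape of the result is the point: any writing-speed gain is bounded by how much longer validation takes afterward.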
At the Pragmatic Summit, Laura Tacho (CTO at DX) shared some interesting research on AI in coding:
Almost 93% of developers use AI assistants every month, and about 27% of production code now comes from AI. Yet, despite all this, overall productivity has barely budged – staying around a 10% boost since AI tools arrived.
AI adoption is everywhere…
The numbers are clear:
- 92.6% of developers use AI coding assistants monthly
- 75% use them weekly
- 26.9% of production code contains AI-authored segments
84% of developers use AI tools, according to Stack Overflow’s 2025 survey. Adoption is now standard, and these figures have likely only grown since.
…Yet work isn’t moving any quicker
The gap between adoption and productivity appears first as a trust problem.
46% of developers don’t fully trust the output, and that skepticism has a reason: reviewing AI-generated code frequently requires more effort than reviewing human-written code.
The DX AI Measurement Framework (published by vendor DX but structured as an industry standard) identifies this directly:
Code generated by AI may be less intuitive for human developers to understand, potentially creating bottlenecks when issues arise or modifications are needed.
This is why productivity hasn’t jumped. Developers might write code faster with AI, but they end up spending the same time checking, fixing, and making sense of what AI produces. In the end, the overall development cycle doesn’t get any shorter.
Sonar’s research confirms the pattern at scale: 42% of committed code now includes AI assistance, yet 96% of developers say they don’t fully trust AI-generated code. And this is exactly what we see: output is everywhere, but the confidence in it is not.
Why has productivity stalled?
That 10% productivity bump comes down to a workflow mismatch.
Teams started using AI to write code faster, but didn’t adjust how they review, test, or integrate it. In other words, writing got quicker, but everything that comes after stayed just as slow.
The DX research notes a broader context relevant here: most organizations see their biggest bottlenecks not in code generation, but:
In the outer loop, or in human factors like collaboration, alignment, and the ability to do deep, focused work.
AI addresses one specific problem, and that’s code-writing speed. But, as we can see, the overall development cycle has other constraints.
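One way to see why the ceiling is so low: when only one phase of a cycle is accelerated, the overall gain is capped by the phases that aren’t, the same reasoning as Amdahl’s law. A sketch with assumed numbers (the 20% phase split and 3x speedup are hypothetical, not figures from the research):

```python
# Amdahl's-law-style estimate: if code-writing is only a fraction of the
# development cycle, even a large writing speedup barely moves the total.
def overall_speedup(writing_fraction: float, writing_speedup: float) -> float:
    """Overall cycle speedup when only the writing phase is accelerated."""
    return 1 / ((1 - writing_fraction) + writing_fraction / writing_speedup)

# Assumed: writing is ~20% of the cycle, and AI makes it 3x faster.
s = overall_speedup(0.20, 3.0)
print(f"{s:.2f}x")  # ~1.15x overall, in the neighborhood of the ~10% bump
```

Under these assumptions, tripling writing speed yields only about a 15% cycle-level gain, which is why optimizing review, testing, and integration matters more than writing even faster.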
Teams that actually see productivity gains from AI usually do two things: they figure out exactly where AI adds value, and they tweak their workflows to make the most of it. Teams that just deploy AI without changing how they work? They get adoption, but no real boost in productivity.
The 10% productivity ceiling sticks because the time spent validating AI-written code cancels out the speed gains. Most teams focus on writing faster, but few have optimized for faster validation.
It’s an obvious obstacle, but maybe also an opportunity.


