‘Tokenmaxxing’ is making developers less productive than they think

Tim Fernholz | 11:42 AM PDT · April 17, 2026

There’s an old saw in management: What you measure matters. And, typically, you get more of whatever you’re measuring.
Software engineers have debated productivity metrics for decades, starting with lines of code. But as the new generation of AI coding agents delivers more code than ever, what their managers ought to be measuring is less clear.
Enormous token budgets — essentially, the amount of AI processing power a developer is authorized to consume — have become a badge of honor among Silicon Valley developers, but that’s a very weird way to think about productivity. Measuring an input to the process makes little sense when you presumably care more about the output. It might make sense if you’re trying to encourage more AI adoption (or selling tokens), but not if you’re trying to become more efficient.
Consider the evidence from a new class of companies operating in the “developer productivity insight” space. They’re finding that developers using tools like Claude Code, Cursor, and Codex generate a lot more accepted code than they did before. But they also find that engineers have to return to revise that accepted code far more often than before, undercutting claims of increased productivity.
Alex Circei, the CEO and founder of Waydev , is building an intelligence layer to track these dynamics; his firm works with 50 different customers that employ more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never met him before.)
He says that engineering managers are seeing code acceptance rates of 80% to 90% — meaning the share of AI-generated code that developers approve and keep — but they’re missing the churn that happens when engineers revise that code in the following weeks, which drives the real-world acceptance rate down to between 10% and 30% of generated code.
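The gap Circei describes is simple arithmetic. A hypothetical sketch (the helper and figures below are illustrative, not Waydev’s data or methodology):

```python
def real_world_acceptance(generated_lines, initial_acceptance, churned_fraction):
    """Share of AI-generated code that survives after later revisions.

    initial_acceptance: fraction of generated code developers approve up front.
    churned_fraction: fraction of that accepted code later rewritten or deleted.
    """
    accepted = generated_lines * initial_acceptance
    surviving = accepted * (1 - churned_fraction)
    return surviving / generated_lines

# An 85% up-front acceptance rate with 75% of that code later churned
# leaves roughly 21% of generated code in place.
print(round(real_world_acceptance(10_000, 0.85, 0.75), 2))  # 0.21
```

The headline metric (85%) and the surviving share (21%) can differ by a factor of four, which is the discrepancy Circei says managers are missing.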
The rise of AI coding tools led Waydev, founded in 2017 to provide developer analytics, to rework its platform entirely over the last six months. Now, the company is releasing new tools that track the metadata generated by AI agents, offering analytics on the quality and cost of their code and giving engineering managers more insight into both AI adoption and efficacy.
While analytics companies have an incentive to highlight the problems they find, the evidence is mounting that large organizations are still figuring out how to use AI tools efficiently. Major companies are noticing — Atlassian acquired DX, another engineering intelligence startup, for $1 billion last year, to help its customers understand the return on investment on coding agents.
The data from across the industry tells a consistent story: More code is being written, but a disproportionate amount of it isn’t sticking.
GitClear , another company in this space, published a report in January that found AI tools increased productivity, but also that its data showed “regular AI users averaged 9.4x higher code churn than their non-AI counterparts” — more than double the productivity gains the tools provided.
Faros AI, an engineering analytics platform, drew on two years of customer data for its March 2026 report . The finding: code churn — lines of code deleted versus lines added — had increased 861% under high AI adoption.
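A churn metric of this kind can be computed from ordinary commit statistics. A rough sketch with made-up numbers (not Faros AI’s actual methodology):

```python
def churn_ratio(lines_deleted, lines_added):
    """Code churn: deletions relative to additions over a reporting period."""
    return lines_deleted / lines_added if lines_added else 0.0

# Hypothetical figures chosen to reproduce the reported 861% increase.
baseline = churn_ratio(lines_deleted=500, lines_added=10_000)    # 0.05
high_ai  = churn_ratio(lines_deleted=4_805, lines_added=10_000)  # 0.4805

print(round(high_ai / baseline - 1, 2))  # 8.61, i.e. an 861% increase
```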
Jellyfish, which bills itself as an intelligence platform for AI-integrated engineering, collected data on 7,548 engineers in the first quarter of 2026. The firm found that the engineers with the largest token budgets produced the most pull requests (proposed changes to a shared codebase), but the productivity improvement didn’t scale. They achieved two times the throughput at 10 times the cost of tokens. In other words, the tools are generating volume, not value.
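The Jellyfish finding — twice the throughput for ten times the token spend — implies a steep drop in cost efficiency. A back-of-the-envelope sketch with illustrative numbers (not Jellyfish’s data):

```python
def cost_per_pr(pull_requests, token_cost):
    """Token spend per merged pull request: lower is more efficient."""
    return token_cost / pull_requests

modest = cost_per_pr(pull_requests=50, token_cost=1_000)    # baseline budget
maxed  = cost_per_pr(pull_requests=100, token_cost=10_000)  # 2x output, 10x spend

print(maxed / modest)  # 5.0: each pull request costs five times as much
```

Doubling output at ten times the spend means every unit of work costs five times more — the arithmetic behind “volume, not value.”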
