Anthropic launches code review tool to check flood of AI-generated code

Rebecca Bellan · 12:41 PM PDT · March 9, 2026

When it comes to coding, peer feedback is crucial for catching bugs early, maintaining consistency across a codebase, and improving overall software quality.
The rise of “vibe coding” — using AI tools that take instructions given in plain language and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.
Anthropic’s solution is an AI reviewer designed to catch bugs before they make it into the software’s codebase. The new product, called Code Review, launched Monday in Claude Code.
“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” Cat Wu, Anthropic’s head of product, told TechCrunch.
Pull requests are a mechanism that developers use to submit code changes for review before those changes make it into the software. Wu said Claude Code has dramatically increased code output, which has swelled the queue of pull requests awaiting review and created a bottleneck for shipping code.
Anthropic’s launch of Code Review — arriving first to Claude for Teams and Claude for Enterprise customers in research preview — comes at a pivotal moment for the company.
On Monday, Anthropic filed two lawsuits against the Department of Defense in response to the agency’s designation of Anthropic as a supply chain risk. The dispute will likely see Anthropic leaning more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company.
“This product is very much targeted towards our larger scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of [pull requests] that it’s helping produce,” Wu said.
She added that developer leads can turn on Code Review to run by default for every engineer on the team. Once enabled, it integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code that explain potential issues and suggest fixes.
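Anthropic has not published how Code Review's GitHub integration works under the hood, but the general shape of a bot that leaves inline feedback on a pull request is well established. The sketch below is purely illustrative: it assumes a bot authenticating with a GITHUB_TOKEN and uses GitHub's public REST API to attach a comment to a specific line of a pull request. The function name and wiring here are invented for illustration, not Anthropic's.

```python
import os
import requests

# Illustrative sketch only; Code Review's actual mechanics are not public.
# This shows how any reviewer bot can post an inline comment on a pull
# request via GitHub's public REST API.
GITHUB_API = "https://api.github.com"
TOKEN = os.environ["GITHUB_TOKEN"]  # token with pull-request write access

def post_inline_comment(owner, repo, pr_number, commit_sha, path, line, body):
    """Attach a review comment to one line of a pull request diff."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    payload = {
        "body": body,             # e.g. the model's explanation and suggested fix
        "commit_id": commit_sha,  # head commit of the pull request
        "path": path,             # file the comment applies to
        "line": line,             # line number in the diff
        "side": "RIGHT",          # comment on the new version of the code
    }
    resp = requests.post(
        url,
        json=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
    return resp.json()
```

Running a reviewer "by default for every engineer" typically just means triggering logic like this from a webhook or CI job on every new pull request.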
The focus is on catching logic errors rather than style issues, Wu said.
“This is really important because a lot of developers have seen AI automated feedback before, and they get annoyed when it’s not immediately actionable,” Wu said. “We decided we’re going to focus purely on logic errors. This way we’re catching the highest priority things to fix.”
The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system will label the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to preexisting code or historical bugs.
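That description implies a simple taxonomy for the tool's findings. As a hypothetical illustration only (the names and fields below are invented, not Anthropic's), the output of such a reviewer might be modeled like this:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical data model inferred from the severity scheme described
# above; the identifiers here are illustrative, not Anthropic's.
class Severity(Enum):
    RED = "highest severity"                      # likely bug, fix before merge
    YELLOW = "potential problem worth reviewing"  # plausible issue, needs human judgment
    PURPLE = "preexisting code or historical bug" # predates this pull request

@dataclass
class Finding:
    path: str            # file where the issue appears
    line: int            # location in the diff
    severity: Severity   # color-coded priority label
    issue: str           # what the model thinks is wrong
    rationale: str       # step-by-step reasoning for why it may be a problem
    suggested_fix: str   # how it could potentially be fixed
```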