When it comes to coding, close feedback is important for catching bugs early, maintaining consistency across a codebase, and improving overall code quality.
The emergence of “vibe coding” — using AI tools that take instructions given in plain language and rapidly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.
Anthropic’s solution is an AI reviewer designed to catch bugs that humans might miss. The new product, called Code Review, launched Monday in Claude Code.
“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” Cat Wu, Anthropic’s head of product, told TechCrunch.
Pull requests are a mechanism that developers use to submit code changes for review before those changes make it into the software. Wu said Claude Code has dramatically increased code output, making pull request reviews a bottleneck to shipping.
“Code Review is our reply to that,” Wu said.
Anthropic’s launch of Code Review — arriving first for Claude for Teams and Claude for Enterprise customers in research preview — comes at a pivotal moment for the company.
On Monday, Anthropic filed two lawsuits against the Department of Defense in response to the agency’s designation of Anthropic as a supply chain risk. The dispute will likely see Anthropic leaning more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company.
“This product is very much targeted towards our larger-scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer volume of [pull requests] that it’s helping produce,” Wu said.
She added that developer leads can turn on Code Review to run by default for every engineer on the team. Once enabled, it integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code explaining potential issues and suggested fixes.
The focus is on fixing logical errors over style, Wu said.
“This is really important because a lot of developers have seen AI automated feedback before, and they get annoyed when it’s not immediately actionable,” Wu said. “We decided we’re going to focus purely on logic errors. That way we’re catching the highest-priority things to fix.”
The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system will label the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to pre-existing code or historical bugs.
Wu said it does this all quickly and efficiently by relying on multiple agents running in parallel, with each agent examining the codebase from a different perspective or dimension. A final agent aggregates and ranks the findings, removing duplicates and prioritizing what’s most important.
The tool provides a light security analysis, and engineering leads can customize additional checks based on internal best practices. Wu said Anthropic’s more recently launched Claude Code Security provides a deeper security analysis.
The multi-agent architecture does mean this can be a resource-intensive product, Wu said. Like other AI services, pricing is token-based, and the cost varies depending on code complexity — though Wu estimated each review would cost $15 to $25 on average. She added that it’s a premium experience, and a necessary one as AI tools generate more and more code.
“[Code Review] is something that’s coming from an insane amount of market pull,” Wu said. “As engineers create with Claude Code, they’re seeing the friction to creating a new feature [decrease], and they’re seeing a much higher demand for code review. So we’re hopeful that with this, we’ll enable enterprises to build faster than they ever could before, and with far fewer bugs than they ever had before.”