Anthropic launches Claude Code Review to inspect pull requests with AI agents


Anthropic has introduced a Code Review beta built into Claude Code for Teams and Enterprise plan users. The tool spawns teams of AI agents that analyze completed blocks of code in pull requests, surfacing bugs and other potential issues before human reviewers step in.

In internal use, the system raised the share of pull requests receiving substantive review comments from 16% to 54%, substantially increasing the number of issues caught before merge. Large pull requests (over 1,000 changed lines) produced findings 84% of the time, while very small ones (under 50 lines) did so 31% of the time.

Engineers largely agreed with the tool’s output: fewer than 1% of findings were marked incorrect. Examples surfaced during testing include a one-line change that would have broken authentication and a type mismatch that was silently wiping an encryption key cache.
