| Metric | Value |
|---|---|
| Anthropic wins | 44 |
| Claude Code wins | 44 |
| Abstains (no tool) | 4 |
| Other tool chosen | 1119 |
| Decisive cases | 88 |
| Anthropic win rate (unweighted) | 50.0% |
| 95% CI | 39.8% - 60.2% |
| Anthropic win rate (weighted) | 50.0% |
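The 95% CI above is consistent with a Wilson score interval on the 88 decisive cases (44 wins out of 88). A minimal sketch of that calculation, assuming the Wilson method was used:

```python
from math import sqrt

def wilson_ci(wins: int, n: int, z: float = 1.959964) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion at ~95% confidence."""
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 44 Anthropic wins over 88 decisive cases, as in the summary table
lo, hi = wilson_ci(44, 88)
print(f"win rate {44 / 88:.1%}, 95% CI {lo:.1%} - {hi:.1%}")
```

Running this reproduces the 39.8% - 60.2% interval reported above.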
| Model | Tier | Anthropic | Claude Code | None | Other | Anthropic win rate |
|---|---|---|---|---|---|---|
| Claude Haiku 4.5 | Small | 27 | 10 | 0 | 29 | 73% |
| MiniMax M2.7 | Frontier | 9 | 21 | 0 | 30 | 30% |
| DeepSeek V3.2 | Mid | 0 | 8 | 0 | 57 | 0% |
| Mistral Small 4 | Mid | 7 | 0 | 0 | 43 | 100% |
| GLM 5 Turbo | Frontier | 0 | 5 | 2 | 59 | 0% |
| Kimi K2.5 | Frontier | 1 | 0 | 1 | 56 | 100% |
| Claude Opus 4.6 | Frontier | 0 | 0 | 0 | 66 | n/a |
| Claude Sonnet 4.6 | Frontier | 0 | 0 | 0 | 66 | n/a |
| DeepSeek R1 0528 | Frontier | 0 | 0 | 0 | 64 | n/a |
| Devstral 2 2512 | Mid | 0 | 0 | 0 | 63 | n/a |
| Gemini 2.5 Flash | Small | 0 | 0 | 0 | 66 | n/a |
| Gemini 2.5 Pro | Frontier | 0 | 0 | 1 | 65 | n/a |
| GPT 5.3 Codex | Frontier | 0 | 0 | 0 | 66 | n/a |
| GPT 5.4 | Frontier | 0 | 0 | 0 | 66 | n/a |
| GPT 5.4 Mini | Mid | 0 | 0 | 0 | 63 | n/a |
| Llama 4 Maverick | Frontier | 0 | 0 | 0 | 65 | n/a |
| Llama 4 Scout | Small | 0 | 0 | 0 | 64 | n/a |
| MiMo V2 Pro | Frontier | 0 | 0 | 0 | 65 | n/a |
| Qwen3 Coder Next | Mid | 0 | 0 | 0 | 66 | n/a |

| Prompt | Tier | Anthropic | Claude Code | None | Other | Anthropic win rate |
|---|---|---|---|---|---|---|
| ai-revenue-ops-copilot | Beginner | 9 | 12 | 0 | 174 | 43% |
| ai-revenue-ops-copilot | Intermediate | 13 | 4 | 0 | 188 | 76% |
| ai-revenue-ops-copilot | Advanced | 1 | 13 | 2 | 182 | 7% |
| ai-support-agent-platform | Beginner | 5 | 2 | 1 | 191 | 71% |
| ai-support-agent-platform | Intermediate | 11 | 5 | 0 | 192 | 69% |
| ai-support-agent-platform | Advanced | 5 | 8 | 1 | 192 | 38% |