Anthropic released Claude Opus 4 on April 21, 2026. The model achieved the highest scores to date on GPQA Diamond (78.2%), MATH-500 (96.1%), and SWE-bench Verified (72.4%). It also leads the LMSYS Chatbot Arena with an Elo rating of 1402.
What benchmarks tell us
Benchmarks measure specific capabilities in controlled settings. GPQA Diamond tests graduate-level scientific reasoning. MATH-500 tests mathematical problem-solving. SWE-bench Verified tests the ability to resolve real GitHub issues from open-source repositories. Strong performance across all three suggests genuine capability improvement rather than benchmark gaming: the tasks are different enough that overfitting to any one of them would not lift the other two.
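To make concrete what a single benchmark number is, here is a minimal sketch of the scoring loop behind a static-answer benchmark: run the model on each item, grade the output against the reference, report the pass rate. The `query_model` stub and the toy dataset are hypothetical stand-ins, not the official GPQA or MATH-500 harnesses.

```python
def query_model(question: str) -> str:
    """Hypothetical stand-in for an API call to the model under test."""
    return "4"  # canned answer so the sketch runs end to end

def score_benchmark(items: list[dict]) -> float:
    """Fraction of items where the model's answer matches the reference."""
    correct = 0
    for item in items:
        answer = query_model(item["question"]).strip().lower()
        correct += answer == item["reference"].strip().lower()
    return correct / len(items)

toy_items = [
    {"question": "What is 2 + 2?", "reference": "4"},
    {"question": "What is 3 * 3?", "reference": "9"},
]
print(f"{score_benchmark(toy_items):.1%}")  # -> 50.0%
```

Real harnesses add prompt templates, answer extraction, and repeated sampling, but the headline figure is still this ratio: items passed over items attempted.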
What benchmarks do not tell us
Reliability in production. Cost per token. Latency at scale. Behavior on adversarial inputs. Whether the model hallucinates less on your specific use case. These are the metrics that matter for deployment, and they are harder to measure.
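They are harder to measure, but not impossible. Here is a minimal sketch of a latency-and-cost probe, assuming the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment; the model id and per-token prices are placeholders to be swapped for current values, not confirmed figures.

```python
import time
import statistics
import anthropic

MODEL = "claude-opus-4"          # placeholder model id
PRICE_IN = 15 / 1_000_000        # assumed $ per input token
PRICE_OUT = 75 / 1_000_000       # assumed $ per output token

client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY

def probe(prompt: str) -> tuple[float, float]:
    """Return (latency_seconds, cost_dollars) for one request."""
    start = time.perf_counter()
    resp = client.messages.create(
        model=MODEL,
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    latency = time.perf_counter() - start
    cost = (resp.usage.input_tokens * PRICE_IN
            + resp.usage.output_tokens * PRICE_OUT)
    return latency, cost

samples = [probe("Summarize RFC 2616 in one sentence.") for _ in range(20)]
latencies = sorted(s[0] for s in samples)
print(f"p50 latency: {statistics.median(latencies):.2f}s")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.2f}s")  # crude percentile
print(f"mean cost:   ${statistics.mean(s[1] for s in samples):.4f}")
```

Twenty sequential requests give only a rough picture; a real load test would run concurrent traffic over hours and track tail latencies, error rates, and retries.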
The competitive picture
GPT-5 and Gemini 2.5 Ultra are within striking distance on most benchmarks. The gap between top models continues to narrow. The differentiation increasingly comes from price, speed, and ecosystem rather than raw capability.
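How narrow is "within striking distance"? The Arena's Elo model maps a rating gap directly to an expected head-to-head win probability. The 1390 rival rating below is hypothetical, chosen only to illustrate a small gap.

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Probability that a model rated r_a beats one rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

print(f"{elo_expected_score(1402, 1390):.1%}")  # -> 51.7%
```

A 12-point gap implies roughly a 52/48 split in blind pairwise votes, close to a coin flip. That is the quantitative meaning of a narrowing field, and why price, speed, and ecosystem carry the differentiation.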