sharkdp/hyperfine
A command-line benchmarking tool
Results by Model
| # | Model | Score | Cost | Calls |
|---|---|---|---|---|
| 1 | Claude Opus 4.7 (Anthropic) | 54.3% | $1.55 | 41 |
| 2 | GPT 5.4 (OpenAI) | 48.8% | $0.25 | 9 |
| 3 | Gemini 3.1 Pro (Google) | 43.6% | $1.29 | 55 |
| 4 | Claude Haiku 4.5 (Anthropic) | 24.1% | $0.58 | 85 |
| 5 | Claude Sonnet 4.6 (Anthropic) | 19.2% | $31.21 | 546 |
| 6 | Gemini 3 Flash (Google) | 15.5% | $0.27 | 68 |
| 7 | Claude Opus 4.6 (Anthropic) | 11.7% | $13.58 | 265 |
| 8 | GPT 5.4 mini (OpenAI) | 7.2% | $0.05 | 10 |
| 9 | GPT 5 mini (OpenAI) | 0.7% | $0.02 | 9 |

Score: percentage of hidden behavioral tests passed. Cost: total API cost in USD for this task instance. Calls: number of LLM API calls for this task instance.