# riquito/tuc

*When cut doesn't cut it*

## Results by Model
| # | Model | Provider | Score | Cost | Calls |
|---|---|---|---|---|---|
| 1 | Claude Opus 4.7 | Anthropic | 92.7% | $8.58 | 165 |
| 2 | Claude Opus 4.6 | Anthropic | 90.5% | $16.11 | 317 |
| 3 | Claude Sonnet 4.6 | Anthropic | 88.1% | $16.19 | 506 |
| 4 | Claude Haiku 4.5 | Anthropic | 58.9% | $0.94 | 182 |
| 5 | GPT 5.4 | OpenAI | 53.8% | $0.30 | 10 |
| 6 | GPT 5 mini | OpenAI | 37.0% | $0.02 | 14 |
| 7 | Gemini 3.1 Pro | Google | 17.1% | $0.94 | 87 |
| 8 | Gemini 3 Flash | Google | 1.5% | $0.28 | 82 |
| 9 | GPT 5.4 mini | OpenAI | 0.0% | $0.04 | 22 |

**Score**: percentage of hidden behavioral tests passed. **Cost**: total API cost in USD for this task instance. **Calls**: number of LLM API calls for this task instance.