API (Beta)

Programmatic access to AI model benchmark data. Free tier available.

Free
$0
100 requests/day
  • All public endpoints
  • JSON responses
  • Requires a visible "Data by BenchGecko" attribution link
Pro
$29/mo
10,000 requests/day
  • All public endpoints
  • CSV + JSON export
  • Historical data access
  • No attribution required
Enterprise
Custom
Unlimited
  • Everything in Pro
  • Webhook notifications
  • Custom benchmarks
  • Priority support

Free Tier Attribution

Free API access requires a visible "Data by BenchGecko" link on any page displaying our data.

<!-- Add this wherever you display BenchGecko data -->
<a href="https://benchgecko.ai">Data by BenchGecko</a>

Endpoints

GET /api/v1/models

List all models with scores and pricing

Parameters
provider (string): Filter by provider slug (e.g. "anthropic")
open_source (boolean): Filter to open-source models only
sort (string): Sort by avg_score, pricing_input, release_date, or context_window
limit (number): Results per page (default 50, max 200)
Response
{
  "data": [
    {
      "slug": "claude-opus-4",
      "name": "Claude Opus 4",
      "provider": "Anthropic",
      "avg_score": 84.2,
      "scores": {
        "mmlu-pro": 85.7,
        "gpqa-diamond": 74.9,
        "humaneval-plus": 91.2
      },
      "pricing": { "input": 15.00, "output": 75.00 },
      "context_window": 200000,
      "release_date": "2025-08-01"
    }
  ],
  "meta": { "total": 14, "page": 1 }
}
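A minimal client-side sketch for this endpoint. It assumes the API is served from https://benchgecko.ai (the docs link to that domain but do not state the API host), builds a query string from the documented parameters, and parses a payload shaped like the sample response above.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL: the docs link to benchgecko.ai but do not
# state the API host, so adjust BASE_URL for your deployment.
BASE_URL = "https://benchgecko.ai/api/v1"

def models_url(provider=None, open_source=None, sort=None, limit=None):
    """Build a GET /api/v1/models URL from the documented query parameters."""
    params = {}
    if provider is not None:
        params["provider"] = provider
    if open_source is not None:
        params["open_source"] = "true" if open_source else "false"
    if sort is not None:
        params["sort"] = sort
    if limit is not None:
        params["limit"] = limit
    query = urlencode(params)
    return f"{BASE_URL}/models?{query}" if query else f"{BASE_URL}/models"

print(models_url(provider="anthropic", sort="avg_score", limit=10))

# Parsing a payload shaped like the sample response above:
payload = json.loads(
    '{"data": [{"slug": "claude-opus-4", "avg_score": 84.2}],'
    ' "meta": {"total": 14, "page": 1}}'
)
for model in payload["data"]:
    print(f'{model["slug"]}: {model["avg_score"]}')
```

Unset parameters are simply omitted from the query string, so the same helper covers filtered and unfiltered listings.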
GET /api/v1/models/:slug

Get detailed data for a single model

Response
{
  "slug": "claude-opus-4",
  "name": "Claude Opus 4",
  "provider": { "name": "Anthropic", "slug": "anthropic" },
  "scores": [
    { "benchmark": "MMLU-Pro", "score": 85.7, "category": "knowledge" },
    { "benchmark": "GPQA Diamond", "score": 74.9, "category": "reasoning" }
  ],
  "pricing": { "input": 15.00, "output": 75.00 },
  "context_window": 200000,
  "max_output_tokens": 32000,
  "is_open_source": false,
  "release_date": "2025-08-01"
}
GET /api/v1/benchmarks

List all benchmarks with top-scoring models

Parameters
category (string): Filter by category: coding, reasoning, math, or knowledge
Response
{
  "data": [
    {
      "slug": "swe-bench-verified",
      "name": "SWE-bench Verified",
      "category": "coding",
      "top_models": [
        { "name": "Claude Opus 4", "score": 72.5 },
        { "name": "Claude Sonnet 4", "score": 70.3 }
      ]
    }
  ]
}
GET /api/v1/compare

Compare two or more models head-to-head

Parameters
models (string): Comma-separated model slugs (e.g. "claude-opus-4,gpt-4-1,o3")
Response
{
  "models": ["claude-opus-4", "gpt-4-1", "o3"],
  "benchmarks": [
    {
      "name": "MMLU-Pro",
      "scores": { "claude-opus-4": 85.7, "gpt-4-1": 83.1, "o3": 87.2 }
    }
  ]
}
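A sketch of building the compare request and reading its response, assuming the same hypothetical https://benchgecko.ai host as above. Note that the comma in the models parameter is percent-encoded as %2C by standard URL encoding, which servers decode transparently.

```python
from urllib.parse import urlencode

BASE_URL = "https://benchgecko.ai/api/v1"  # hypothetical host; adjust as needed

def compare_url(slugs):
    """Build a GET /api/v1/compare URL; `models` is a comma-separated slug list."""
    return f"{BASE_URL}/compare?" + urlencode({"models": ",".join(slugs)})

print(compare_url(["claude-opus-4", "gpt-4-1", "o3"]))

# Finding the per-benchmark leader in a response shaped like the sample above:
response = {
    "models": ["claude-opus-4", "gpt-4-1", "o3"],
    "benchmarks": [
        {"name": "MMLU-Pro",
         "scores": {"claude-opus-4": 85.7, "gpt-4-1": 83.1, "o3": 87.2}},
    ],
}
for bench in response["benchmarks"]:
    winner = max(bench["scores"], key=bench["scores"].get)
    print(f'{bench["name"]}: {winner} leads at {bench["scores"][winner]}')
```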

Rate Limits

Free: 100 requests/day, 10 requests/minute

Pro: 10,000 requests/day, 100 requests/minute

Enterprise: Unlimited

Rate limit headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
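A sketch of honoring these headers on the client side. It assumes X-RateLimit-Reset carries a Unix timestamp, which the docs do not specify, so verify the header's format against live responses before relying on it.

```python
import time

def wait_if_limited(headers):
    """Sleep until the rate-limit window resets when no requests remain.

    Assumes X-RateLimit-Reset is a Unix timestamp; the docs do not
    specify the header's format, so treat this as a sketch.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0  # budget left, no need to wait
    reset_at = int(headers.get("X-RateLimit-Reset", 0))
    delay = max(0, reset_at - int(time.time()))
    time.sleep(delay)
    return delay

print(wait_if_limited({"X-RateLimit-Limit": "100", "X-RateLimit-Remaining": "42"}))  # prints 0
```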