LLM Gateway

Compare routing, outputs, and cost in one place.

Run the same prompt through auto-routing and direct model targets, inspect the response quality, and see whether the gateway actually saved you money.

  • Use one prompt to compare `auto`, OpenAI, Anthropic, and Groq responses side by side.
  • See provider, model, route reason, latency, tokens, and cost without leaving the app.
  • Track how much the routed result saved compared with more expensive alternatives.
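The fan-out described above can be sketched as a small helper that turns one prompt into a batch of requests, one per target. This is a minimal sketch: the request shape and the model identifiers below are assumptions for illustration, not the gateway's documented schema.

```typescript
// Hypothetical request shape for the app's /chat route; the real
// payload fields may differ from these assumed names.
interface ChatRequest {
  model: string; // "auto" or an assumed direct provider/model id
  messages: { role: string; content: string }[];
}

// Build one request per selected target so a single prompt fans out
// to auto-routing and each direct model in the same run.
function buildComparisonRequests(prompt: string, targets: string[]): ChatRequest[] {
  return targets.map((model) => ({
    model,
    messages: [{ role: "user", content: prompt }],
  }));
}

// Example fan-out: `auto` plus three direct targets (identifiers assumed).
const requests = buildComparisonRequests("Summarize this release note.", [
  "auto",
  "openai/gpt-4o-mini",
  "anthropic/claude-3-5-haiku",
  "groq/llama-3.1-8b-instant",
]);
```

Each request then goes to the same `/chat` route; only the `model` field changes between runs.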

Prompt Setup

Use your real gateway key. The page calls your own `/chat` route.
Start with `auto` plus a few direct models, or use a preset to compare specific tradeoffs.
Standard mode returns full usage, cost, and savings data.
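The per-run data that standard mode returns can be sketched as a result record plus a formatter for the summary line shown in the results table. The field names here are assumptions, not the gateway's documented response schema.

```typescript
// Hypothetical shape of one standard-mode result; field names assumed.
interface ChatResult {
  provider: string;
  model: string;
  routeReason: string;
  latencyMs: number;
  usage: { promptTokens: number; completionTokens: number };
  cost: number; // USD
}

// Render the provider, model, route reason, latency, tokens, and cost
// as a single summary line for one run.
function summarize(r: ChatResult): string {
  const tokens = r.usage.promptTokens + r.usage.completionTokens;
  return `${r.provider}/${r.model} · ${r.routeReason} · ${r.latencyMs} ms · ${tokens} tok · $${r.cost.toFixed(6)}`;
}
```

The real field names come from your gateway's response payload; adjust the interface to match before wiring this into the results view.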

Savings Summary

Quantify why routing is useful, not just that it exists.
  • Auto route cost: $0.000000 (run a comparison to compute routed spend).
  • Most expensive: $0.000000 (the priciest selected direct-model result).
  • Savings: $0.000000 (`auto` compared against the highest selected direct-model cost).
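The savings figure above reduces to one subtraction: the priciest selected direct-model cost minus the `auto` run's cost. A minimal sketch, assuming per-run costs are tagged with their target name (the record shape is hypothetical):

```typescript
// One cost entry per completed run; `target` is "auto" or a direct model.
interface RunCost {
  target: string;
  cost: number; // USD
}

// Savings = most expensive direct result minus the auto-routed cost,
// floored at zero so a pricier auto run never reports negative savings.
function savings(runs: RunCost[]): number {
  const auto = runs.find((r) => r.target === "auto");
  const direct = runs.filter((r) => r.target !== "auto");
  if (!auto || direct.length === 0) return 0; // nothing to compare yet
  const mostExpensive = Math.max(...direct.map((r) => r.cost));
  return Math.max(0, mostExpensive - auto.cost);
}
```

Flooring at zero is a presentation choice; you could instead surface a negative number when `auto` picked a pricier model than the cheapest direct option.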

Comparison Results

Each run shows the actual chosen provider, model, route reason, and measured latency.
No runs yet. Choose some targets and run the comparison.