One router for all your AI APIs.
Bring your own OpenAI, Anthropic, Gemini, and Mistral keys. SpendexAI sits in front, scores each request, and routes it to the best-fit model for cost, latency, and quality.
BYOK is free. You connect your own provider keys and pay providers directly.
How it works
Built for developers shipping AI products fast.
Connect your provider keys
Connect your OpenAI, Anthropic, Gemini, and Mistral keys. SpendexAI stores the routing config and uses your credentials when forwarding requests.
We score every request
The router looks at prompt size, task type, reasoning depth, latency sensitivity, streaming, and tool usage to decide which provider and model should handle it.
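As a rough illustration, the signals above could feed a tier classifier along these lines (a simplified sketch with made-up thresholds; SpendexAI's actual scoring logic is not published):

```python
def classify_request(prompt: str,
                     needs_tools: bool = False,
                     needs_reasoning: bool = False,
                     latency_sensitive: bool = False) -> str:
    """Toy tier classifier: maps request signals to a routing tier.

    All weights and thresholds here are illustrative assumptions,
    not SpendexAI's real values.
    """
    score = 0
    if len(prompt) > 4000:      # large prompts tend to need stronger models
        score += 2
    if needs_tools:             # tool / function calling adds difficulty
        score += 1
    if needs_reasoning:         # multi-step reasoning needs a stronger model
        score += 2
    if latency_sensitive:       # bias toward fast, cheap models
        score -= 1

    if score >= 3:
        return "complex"
    if score >= 1:
        return "medium"
    return "simple"
```

A short chat message with no special requirements lands in the simple tier; a tool-using, reasoning-heavy request lands in the complex tier.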
We route with guardrails
Simple traffic goes to cheaper models, harder traffic goes to stronger ones, and provider failures can fall through to backups. You keep control by pinning a model any time.
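A minimal sketch of that selection logic, assuming per-tier fallback chains built from the models in the pricing table below (the chain ordering and provider-health mechanism are hypothetical):

```python
# Hypothetical fallback chains per tier; ordering is an assumption.
FALLBACK_CHAINS = {
    "simple":  ["gpt-5-nano", "gemini-2.5-flash-lite", "mistral-small"],
    "medium":  ["gpt-5-mini", "gemini-2.5-flash", "claude-sonnet-4.6"],
    "complex": ["gpt-5", "claude-opus-4.6", "gemini-2.5-pro"],
}

PROVIDER_OF = {
    "gpt-5-nano": "openai", "gpt-5-mini": "openai", "gpt-5": "openai",
    "gemini-2.5-flash-lite": "google", "gemini-2.5-flash": "google",
    "gemini-2.5-pro": "google",
    "mistral-small": "mistral",
    "claude-sonnet-4.6": "anthropic", "claude-opus-4.6": "anthropic",
}

def pick_model(tier, healthy_providers, pinned=None):
    """Return the pinned model if one is set; otherwise the first model
    in the tier's chain whose provider is currently healthy."""
    if pinned is not None:
        return pinned               # pinning always overrides smart routing
    for model in FALLBACK_CHAINS[tier]:
        if PROVIDER_OF[model] in healthy_providers:
            return model
    raise RuntimeError(f"no healthy provider for tier {tier!r}")
```

If OpenAI is down, a simple-tier request falls through to the next connected provider in the chain; a pinned model is returned unconditionally.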
BYOK across 9 providers
For now, SpendexAI is a router only. You bring the provider accounts and API keys, and we route traffic across them with one endpoint.
| Model | Provider | Price (input/1M) | Tier |
|---|---|---|---|
| GPT-5-nano | OpenAI | $0.05 | Simple |
| Claude Haiku 4.5 | Anthropic | $1.00 | Simple |
| Gemini 2.5 Flash Lite | Google | $0.10 | Simple |
| Mistral Small | Mistral | $0.10 | Simple |
| GPT-5-mini | OpenAI | $0.25 | Medium |
| Claude Sonnet 4.6 | Anthropic | $3.00 | Medium |
| Gemini 2.5 Flash | Google | $0.30 | Medium |
| Mistral Large | Mistral | $0.50 | Medium |
| GPT-5 | OpenAI | $1.25 | Complex |
| Claude Opus 4.6 | Anthropic | $5.00 | Complex |
| Gemini 2.5 Pro | Google | $1.25 | Complex |
BYOK live now. Credits coming soon.
Today you connect your own provider keys. SpendexAI handles request classification, routing, retries, and failover. Spendex Credits are the next billing layer.
Savings proof
Most teams overspend because every request gets sent to the same default model. Routing fixes that without forcing you to rebuild your stack.
ROI calculator
Estimate your savings from automatic model routing.
Estimated monthly savings
$3,500
Based on a 70% reduction
Estimated annual savings
$42,000
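The arithmetic behind those figures: a $3,500 monthly saving at a 70% reduction implies a baseline spend of $5,000 per month (an inferred example figure, not a quoted one):

```python
baseline_monthly_spend = 5_000   # implied by $3,500 savings at 70%
reduction = 0.70                 # 70% cost reduction from routing

monthly_savings = baseline_monthly_spend * reduction   # $3,500
annual_savings = monthly_savings * 12                  # $42,000
```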
Same traffic, better routing
What next
The next layers on top of the BYOK router.
1. Hosted provider vault: Securely connect and rotate provider credentials, with workspace-level access control and audit logs.
2. Spend controls: Budget alerts, usage limits, and policy rules on top of the router once the BYOK core is fully stable.
3. AI Orchestrator: You prompt once. SpendexAI sources the right agents, coordinates execution, and settles payments between them. Clean output delivered. E-commerce, first.
Join early access →
FAQ
Questions that block signups. Answered.
How is SpendexAI different from OpenRouter?
OpenRouter is a strong option if you want a hosted model access layer across many providers. SpendexAI is different: it is built BYOK-first. You connect your own OpenAI, Anthropic, Google, Mistral, and other provider accounts, keep direct billing with those providers, and use SpendexAI as the routing, retry, and failover layer in front.
How is SpendexAI different from open-source routers like LiteLLM or Portkey Gateway?
Tools like LiteLLM and Portkey can be a good fit if you want to assemble, host, and operate your own routing stack. SpendexAI is for teams that want the outcome without the operational overhead: one endpoint, automatic routing, fallback handling, and a simpler path to production.
Why use SpendexAI instead of building routing in-house?
Because most in-house routers start simple, then grow into provider-specific logic, retries, failover rules, and model mapping spread across the codebase. SpendexAI keeps that logic in one place behind one OpenAI-compatible endpoint.
Do I lose control over model choice?
No. If you need a specific provider or model, you can pin it directly. Smart routing only applies when you choose automatic routing.
What changes in my code?
Usually just two things: the base URL and the API key. Your existing OpenAI-compatible SDK flow stays nearly the same.
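Assuming an OpenAI-compatible chat completions endpoint, the swap could look like the sketch below. The base URL and key values are placeholders, not published addresses, and the request is built with the standard library only:

```python
import json
from urllib.request import Request

# Placeholder values; substitute your real SpendexAI endpoint and key.
SPENDEX_BASE_URL = "https://api.spendexai.example/v1"
SPENDEX_API_KEY = "spx-..."

def chat_request(prompt: str, model: str = "auto") -> Request:
    """Build an OpenAI-compatible chat completion request aimed at the
    router instead of a single provider. Compared with a direct OpenAI
    call, only the base URL and the API key change."""
    body = json.dumps({
        "model": model,  # "auto" lets the router choose; a model name pins it
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        f"{SPENDEX_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {SPENDEX_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

An OpenAI SDK user would make the same two changes by passing the router's URL as `base_url` and the router key as `api_key` when constructing the client; the rest of the call sites stay untouched.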
Why BYOK first?
Because many teams want better routing without giving up direct provider relationships, billing visibility, and account-level control. BYOK lets you keep ownership of the underlying provider accounts while SpendexAI handles the routing layer.
Who pays the providers in BYOK mode?
You do. In BYOK mode, usage is billed directly to your OpenAI, Anthropic, Google, Mistral, and other connected provider accounts. SpendexAI sits in front as the intelligence and reliability layer.
Is SpendexAI only about cost savings?
No. Cost savings matter, but the bigger value is operational: better model selection, cleaner multi-provider architecture, retries, failover, and one consistent interface for your app.
When should I use SpendexAI instead of OpenRouter?
Use SpendexAI when you want BYOK, direct provider billing, and routing on top of your own accounts. If you want a hosted aggregation layer where the platform sits between you and the providers commercially, OpenRouter may be a better fit.
When should I use an open-source router instead of SpendexAI?
Use an open-source router if your team wants full infrastructure ownership and is comfortable maintaining the routing stack itself. Use SpendexAI if you want the same class of routing outcome with less engineering overhead.
Can I still force one provider only?
Yes. If you only want OpenAI, or only Anthropic, you can keep routing constrained to the providers you connect and choose.
What happens if a provider goes down?
SpendexAI can retry or reroute to another suitable connected provider or model. Your application still talks to one endpoint.
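The failover contract can be sketched as a simple try-next loop; `send` stands in for whatever transport actually calls the provider, and the error handling shown is an assumption about shape, not the real implementation:

```python
def call_with_failover(prompt, chain, send):
    """Try each model in order; on a provider failure, fall through to
    the next connected model. The caller only ever sees one interface."""
    last_err = None
    for model in chain:
        try:
            return send(model, prompt)
        except ConnectionError as err:   # provider outage or transient failure
            last_err = err
    raise RuntimeError("all providers in the chain failed") from last_err
```

If the first model's provider is down, the caller still gets a response from the next one without changing anything on its side.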
Can I switch back instantly?
Yes. Remove the SpendexAI base URL and point your SDK back to your original provider. There is no lock-in.
Do you store prompts?
Prompt content is not stored for product analytics. Routing decisions are based on request signals and metadata, not on warehousing your prompts.