Model Providers
Supported LLM providers and models for OpenClaw
OpenClaw supports a wide range of LLM providers, from subscription-based services like Anthropic and OpenAI to local models you run entirely on your machine. Choose the provider that best fits your needs for privacy, cost, and performance.
Recommended: Anthropic Pro/Max (100/200) + Opus 4.6
Supports ChatGPT and Codex models.
Ollama is the fastest way to run OpenClaw on Mac and Linux. With Ollama 0.17+, a single command installs and configures everything:
ollama launch openclaw --model kimi-k2.5:cloud
Ollama supports local models (full privacy, no API costs) and cloud models (e.g. kimi-k2.5, minimax-m2.5, glm-5) with full context. OpenClaw integrates via the native Ollama API for streaming and tool calling. Use baseUrl: "http://host:11434" (not /v1). See Ollama + OpenClaw tutorial and docs.openclaw.ai/providers/ollama.
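As an illustrative sketch of the Ollama setup described above (the `providers` block and field names are assumptions for illustration, not a guaranteed OpenClaw schema; only `baseUrl` is confirmed by the text):

```json
{
  "agent": {
    "model": "ollama/kimi-k2.5:cloud"
  },
  "providers": {
    "ollama": {
      "baseUrl": "http://host:11434"
    }
  }
}
```

Note that `baseUrl` points at the native Ollama API root, not an OpenAI-compatible `/v1` path, so OpenClaw can use Ollama's streaming and tool-calling support.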
Run models entirely on your machine for complete privacy:
Configure local models in your Gateway configuration. Requires sufficient hardware (GPU recommended for best performance). Ollama (above) is the recommended path for local setup. For high-throughput self-hosting, see vLLM.
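For a self-hosted vLLM deployment, a Gateway configuration might point at vLLM's OpenAI-compatible server. This is a hedged sketch: the `providers` block, the `vllm` provider name, and the model name are assumptions for illustration, while `http://localhost:8000/v1` is vLLM's default serving endpoint:

```json
{
  "agent": {
    "model": "vllm/my-local-model"
  },
  "providers": {
    "vllm": {
      "baseUrl": "http://localhost:8000/v1"
    }
  }
}
```

Unlike the native Ollama integration, vLLM speaks the OpenAI-compatible protocol, hence the `/v1` suffix here.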
The following are rough guidelines for running local models; actual needs depend on model size and inference server (Ollama, vLLM, etc.):
For lighter setups, use Ollama (optimized for consumer hardware), cloud-backed models via Ollama, or API providers.
For subscription services like Claude Pro/Max and ChatGPT Plus:
For pay-per-use or API-based access:
Configure automatic failover for reliability:
{
"agent": {
"model": "anthropic/claude-opus-4-6",
"fallback": [
"anthropic/claude-sonnet",
"openai/gpt-4"
]
}
}
If the primary model fails or is unavailable, OpenClaw automatically tries fallback models in order.
Rotate between multiple authentication profiles:
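A hypothetical sketch of what profile rotation could look like (the `auth` block, `profiles` list, and `rotation` field are all illustrative assumptions, not a documented schema):

```json
{
  "auth": {
    "profiles": ["work-anthropic", "personal-anthropic"],
    "rotation": "round-robin"
  }
}
```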
Running an always-on agent with a single premium model for every task can get expensive. Many users adopt tiered model routing:
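One way to express tiers is to give different agents different models, so routine tasks never hit the premium model. The multi-agent layout and agent names below are assumptions for illustration, not a documented schema:

```json
{
  "agents": {
    "planner": { "model": "anthropic/claude-opus-4-6" },
    "coder": { "model": "anthropic/claude-sonnet" },
    "summarizer": { "model": "ollama/kimi-k2.5:cloud" }
  }
}
```

The idea: reserve Opus-class models for planning and hard reasoning, route everyday coding to a mid-tier model, and push bulk or background work to a cheap (or local) model.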
Costs vary with usage (from tens to hundreds of dollars per month). For example setups and cost management, see Example Setups & Model Routing.
If you prefer automatic per-request routing—each query sent to the cheapest model that can handle it, without assigning tiers manually—third-party plugins can do that. For example, ClawRouter (BlockRunAI) runs routing locally, supports 41+ models through one wallet, and uses pay-per-request (USDC on Base). Set your model to blockrun/auto after installing the plugin. See the Skills list for discovery; we don’t endorse or guarantee third-party tools.
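After installing such a plugin, selecting it is just a model switch, e.g.:

```json
{
  "agent": {
    "model": "blockrun/auto"
  }
}
```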
OpenClaw tracks model usage:
{
"agent": {
"model": "anthropic/claude-opus-4-6"
}
}
{
"agent": {
"model": "anthropic/claude-opus-4-6",
"fallback": ["anthropic/claude-sonnet"]
}
}