Ollama with OpenClaw
Local and cloud models — full privacy or low-cost cloud
Ollama is the fastest way to run OpenClaw on Mac and Linux. With Ollama 0.17+, a single command installs OpenClaw (if needed), configures the model, and starts the gateway. Ollama supports both local models (full privacy, no API costs) and cloud models (kimi-k2.5, minimax-m2.5, glm-5) with full context. OpenClaw integrates via the native Ollama API for streaming and tool calling.
ollama launch openclaw --model kimi-k2.5:cloud
Requires Ollama 0.17+ and Node.js. Full step-by-step: Ollama + OpenClaw tutorial.
When configuring Ollama manually, set baseUrl: "http://host:11434" with no /v1 suffix; OpenClaw talks to Ollama's native API, not the OpenAI-compatible endpoint. The default host is localhost. For a remote Ollama instance, use that machine's IP address or hostname.
{
  "agent": {
    "model": "ollama/llama3.2",
    "provider": "ollama",
    "baseUrl": "http://127.0.0.1:11434"
  }
}
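For a remote Ollama instance, the same config shape applies with the host swapped in. The hostname below is a placeholder; use your server's actual IP or hostname:

```json
{
  "agent": {
    "model": "ollama/llama3.2",
    "provider": "ollama",
    "baseUrl": "http://ollama-box.local:11434"
  }
}
```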
Local models — Run entirely on your machine. Examples: llama3.2, mistral, codellama. Full privacy, no API costs. Requires sufficient RAM/VRAM (typically 8GB+ for 7B, 16GB+ for 13B).
Cloud models — Ollama 0.17+ can run cloud-backed models with full context: kimi-k2.5:cloud, minimax-m2.5:cloud, glm-5:cloud. These use cloud APIs but integrate through Ollama—same config flow as local.
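The native-API integration described above can be sketched in Python. This is an illustrative sketch, not OpenClaw's actual client code: build_chat_request and collect_stream_text are hypothetical helpers, while the endpoint shape (POST /api/chat, NDJSON streaming with a final "done": true chunk) follows Ollama's native API.

```python
import json

OLLAMA_URL = "http://127.0.0.1:11434"  # default local host, matching the config above

def build_chat_request(model: str, prompt: str) -> dict:
    """Body for POST {OLLAMA_URL}/api/chat, Ollama's native chat endpoint.
    With "stream": True the server returns one JSON object per line (NDJSON)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }

def collect_stream_text(ndjson_lines):
    """Concatenate the content deltas of a streamed /api/chat response.
    Each line is a JSON object; the final chunk carries "done": true."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Simulated stream (the real one arrives over HTTP from a running Ollama server)
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo"}, "done": true}',
]
print(collect_stream_text(sample))  # Hello
```

In Ollama's API, tool calling adds a tools array to the same request body, and switching between local and cloud models (llama3.2 vs. kimi-k2.5:cloud) is just a different model string, which is why the config flow is identical.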