OpenClaw + Ollama Setup
The simplest and fastest way to get OpenClaw running—one command, no manual config
OpenClaw is a personal AI assistant that clears your inbox, sends emails, manages your calendar, and completes tasks via messaging apps like WhatsApp, Telegram, Slack, Discord, or iMessage. With Ollama 0.17+, you can set it up with a single command—Ollama installs OpenClaw (if needed), configures the model, and starts the gateway automatically. Everything runs on your own hardware.
Run `ollama launch openclaw` and follow the prompts. This guide covers the details.
Open a terminal and run:
```shell
ollama launch openclaw --model kimi-k2.5:cloud
```
Other models can be used; see `ollama launch openclaw` for recommended options.
If OpenClaw is not already installed, Ollama detects this and prompts you to install it. Accept the prompt, and Ollama installs and configures OpenClaw via npm, including the gateway daemon and model selection.
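If you want to know in advance whether the installation prompt will appear, a quick pre-flight check is to look for the `openclaw` binary on your `PATH` (a minimal sketch; the binary name matches the CLI used throughout this guide):

```shell
# Pre-flight check: is the OpenClaw CLI already on PATH?
# If not, `ollama launch openclaw` offers to install it via npm.
if command -v openclaw >/dev/null 2>&1; then
  echo "openclaw already installed at $(command -v openclaw)"
else
  echo "openclaw not installed yet; ollama launch will install it"
fi
```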
OpenClaw opens in the terminal. You can start chatting immediately. If you selected an Ollama cloud model, the web search plugin is installed automatically so OpenClaw can fetch up-to-date information. Local models work without additional plugins.
Connect OpenClaw to WhatsApp, Telegram, Slack, Discord, iMessage, or other chat platforms:
```shell
openclaw configure --section channels
```
After configuring, choose Finished to save your settings. See Channel setup for detailed guides per platform.
OpenClaw works best with at least 64k context length. Ollama's cloud models provide full context for the best agent experience.
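For local models, one way to guarantee the 64k context window is to build a model variant with a larger `num_ctx` via an Ollama Modelfile. A minimal sketch, assuming the `qwen3-coder` model from the table below (the derived model name is illustrative):

```shell
# Build a 64k-context variant of a local model for OpenClaw.
# num_ctx is Ollama's context-length parameter; 65536 tokens = 64k.
cat > Modelfile <<'EOF'
FROM qwen3-coder
PARAMETER num_ctx 65536
EOF

# Then create the variant and point OpenClaw at it (requires Ollama):
#   ollama create qwen3-coder-64k -f Modelfile
#   ollama launch openclaw --model qwen3-coder-64k
echo "Wrote Modelfile"
```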
| Model | Description |
|---|---|
| kimi-k2.5:cloud | Multimodal reasoning with subagents |
| minimax-m2.5:cloud | Fast, efficient coding and real-world productivity |
| glm-5:cloud | Reasoning and code generation |
| Model | VRAM | Description |
|---|---|---|
| glm-4.7-flash | ~25 GB | Reasoning and code generation |
| qwen3-coder | ~25 GB | Efficient all-purpose assistant |
More models at ollama.com/search.
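Before choosing a local model, it can help to compare the ~25 GB VRAM figures above against what your GPU actually has. A quick sketch for NVIDIA GPUs (other platforms have their own tools):

```shell
# Report total GPU memory so you can compare it against the
# ~25 GB the local models above require.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.total --format=csv,noheader
else
  echo "nvidia-smi not found; check GPU memory with your platform's tools"
fi
```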
Change the model or config without starting the gateway and TUI:
```shell
ollama launch openclaw --config
```
Use a specific model directly:
```shell
ollama launch openclaw --model minimax-m2.5:cloud
```
If the gateway is already running, it restarts automatically to pick up the new model.
To stop the gateway manually:

```shell
openclaw gateway stop
```
OpenClaw can read files and execute actions when tools are enabled. Run it in an isolated environment and be aware of the risks of giving OpenClaw access to your system. See the OpenClaw security documentation for details.
OpenClaw integrates with Ollama's native API (/api/chat), which supports streaming and tool calling. Do not use the OpenAI-compatible URL (http://host:11434/v1) with OpenClaw; it breaks tool calling. Instead, set baseUrl: "http://host:11434" (no /v1 suffix). For the full provider configuration, see docs.openclaw.ai/providers/ollama.
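As a sketch, the relevant provider entry might look like the following. Only the baseUrl value (no /v1 suffix) is specified in this guide; the surrounding key layout and file location are assumptions, so confirm them against docs.openclaw.ai/providers/ollama. Replace `host` with the machine running Ollama (for example `localhost`):

```json
{
  "ollama": {
    "baseUrl": "http://host:11434"
  }
}
```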