OpenClaw + Ollama Setup

The simplest and fastest way to get OpenClaw running—one command, no manual config

OpenClaw is a personal AI assistant that clears your inbox, sends emails, manages your calendar, and completes tasks via messaging apps like WhatsApp, Telegram, Slack, Discord, or iMessage. With Ollama 0.17+, you can set it up with a single command—Ollama installs OpenClaw (if needed), configures the model, and starts the gateway automatically. Everything runs on your own hardware.

Fast path: If you already have Ollama and Node.js, run ollama launch openclaw and follow the prompts. This guide covers the details.

What You Need

- Ollama 0.17 or later
- Node.js (OpenClaw installs via npm)

Step 1: Run the Command

Open a terminal and run:

One command
ollama launch openclaw --model kimi-k2.5:cloud

Other models can be used—see ollama launch openclaw for recommended options.

Step 2: Let Ollama Install OpenClaw

If OpenClaw is not already installed, Ollama detects this and prompts you to install it. Accept the prompt and Ollama installs and configures OpenClaw via npm, including the gateway daemon and model selection.
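If you prefer to install OpenClaw yourself before running the launch command, a manual sketch (the npm package name "openclaw" is an assumption here; check the official docs for the exact name):

```shell
# Manual install sketch. The package name "openclaw" is an assumption;
# ollama launch openclaw performs this step for you automatically.
npm install -g openclaw

# Confirm the CLI is on your PATH (--version is assumed to be supported)
openclaw --version
```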

Step 3: Start Chatting

OpenClaw opens in the terminal. You can start chatting immediately. If you selected an Ollama cloud model, the web search plugin is installed automatically so OpenClaw can fetch up-to-date information. Local models work without additional plugins.

Connect Messaging Apps

Connect OpenClaw to WhatsApp, Telegram, Slack, Discord, iMessage, or other chat platforms:

Configure channels
openclaw configure --section channels

After configuring, choose Finished to save your settings. See Channel setup for detailed guides per platform.

Recommended Models

OpenClaw works best with at least 64k context length. Ollama's cloud models provide full context for the best agent experience.

Cloud Models

Model               Description
kimi-k2.5:cloud     Multimodal reasoning with subagents
minimax-m2.5:cloud  Fast, efficient coding and real-world productivity
glm-5:cloud         Reasoning and code generation

Local Models (GPU VRAM required)

Model          VRAM    Description
glm-4.7-flash  ~25 GB  Reasoning and code generation
qwen3-coder    ~25 GB  Efficient all-purpose assistant

More models at ollama.com/search.
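Since OpenClaw works best with at least 64k context, it can be worth checking a local model's context window before committing to it. ollama show prints model metadata, including context length (the exact output format varies by Ollama version):

```shell
# Inspect model metadata; look for the "context length" field.
ollama show qwen3-coder
```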

Configure Without Launching

Change the model or config without starting the gateway and TUI:

Config only
ollama launch openclaw --config

Use a specific model directly:

Specific model
ollama launch openclaw --model minimax-m2.5:cloud

If the gateway is already running, it restarts automatically to pick up the new model.

Stop the Gateway

Stop gateway
openclaw gateway stop

Running Securely

OpenClaw can read files and execute actions when tools are enabled. Run it in an isolated environment and be aware of the risks of giving OpenClaw access to your system. See the OpenClaw security documentation for details.
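One way to isolate OpenClaw is to run it in a throwaway container with only a single working directory mounted. This is a hypothetical sketch, not an officially supported setup: the image and npm package name are assumptions, so check the OpenClaw security documentation for recommended configurations.

```shell
# Sketch: run OpenClaw in a disposable container.
# --network host lets the container reach Ollama on localhost:11434;
# the bind mount limits OpenClaw's file access to one directory.
# Image (node:22) and package name (openclaw) are assumptions.
docker run --rm -it \
  --network host \
  -v "$PWD/openclaw-work:/work" \
  -w /work \
  node:22 \
  bash -c "npm install -g openclaw && openclaw"
```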

Technical Notes (Ollama API)

OpenClaw integrates with Ollama's native API (/api/chat), which supports streaming and tool calling. Do not use the /v1 OpenAI-compatible URL (http://host:11434/v1) with OpenClaw—it breaks tool calling. Use baseUrl: "http://host:11434" (no /v1). For full provider config, see docs.openclaw.ai/providers/ollama.
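As a concrete illustration, a provider entry pointing at the native API might look like the sketch below. The key names here are illustrative, so consult docs.openclaw.ai/providers/ollama for the exact schema; the important part is that the base URL ends at the port, with no /v1 suffix:

```json
{
  "provider": "ollama",
  "baseUrl": "http://localhost:11434",
  "model": "kimi-k2.5:cloud"
}
```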

Related

Official Documentation