OpenClaw is built around a Gateway-centric architecture: a single long-running process manages all channel connections and provides a unified control plane. Concentrating everything in one process gives a single place to secure, monitor, and restart, which keeps the design reliable and simple to operate.

Core Components

1. The Gateway

The Gateway (openclaw gateway) is the heart of OpenClaw. It's a single process that:

  • Manages all channel connections (WhatsApp, Telegram, Discord, etc.)
  • Provides a WebSocket control plane on port 18789 (default)
  • Handles session management and routing
  • Serves the Control UI/Dashboard
  • Manages the Canvas host for visual interfaces
  • Coordinates agent communication

Key Principle: One Gateway per host is recommended. It's the only process allowed to own the WhatsApp Web session, ensuring stability and preventing conflicts.
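
Because only one Gateway should run per host, it can help to probe the default control-plane port before starting another instance. A minimal sketch (the port is the documented default; the helper itself is illustrative):

```ts
// check-gateway.ts - probe the default Gateway control-plane port (127.0.0.1:18789)
// before launching a second instance on the same host.
import net from "node:net";

function gatewayIsListening(port = 18789, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ port, host });
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("error", () => resolve(false));
  });
}

gatewayIsListening().then((running) => {
  console.log(running
    ? "A Gateway already owns this host - reuse it instead of starting another."
    : "No Gateway detected on 127.0.0.1:18789 - safe to start one.");
});
```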

Network Model

OpenClaw uses a loopback-first approach for security:

  • Default: Gateway WebSocket runs on ws://127.0.0.1:18789 (localhost only)
  • Remote Access: Use SSH tunnels, Tailscale, or configure --bind tailnet with authentication tokens
  • Canvas Host: HTTP file server on port 18793 (default) serving /__openclaw__/canvas/ for node WebViews
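
As a quick illustration of the Canvas host, a node WebView simply fetches files over HTTP from the documented prefix. Only the port and path prefix below come from the defaults above; the index.html asset name is a placeholder:

```ts
// fetch-canvas.ts - retrieve a file from the Canvas host (HTTP, port 18793).
// The /__openclaw__/canvas/ prefix is the documented mount point; "index.html"
// is just an illustrative asset name.
fetch("http://127.0.0.1:18793/__openclaw__/canvas/index.html")
  .then((res) => console.log(res.status, res.headers.get("content-type")));
```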

Architecture Diagram

WhatsApp / Telegram / Discord / iMessage (+ plugins)
        │
        ▼
  ┌───────────────────────────┐
  │          Gateway          │  ws://127.0.0.1:18789 (loopback-only)
  │     (single source)       │
  │                           │  http://<gateway-host>:18793
  │                           │    /__openclaw__/canvas/ (Canvas host)
  └───────────┬───────────────┘
              │
              ├─ Pi agent (RPC)
              ├─ CLI (openclaw …)
              ├─ Chat UI (SwiftUI)
              ├─ macOS app (OpenClaw.app)
              ├─ iOS node via Gateway WS + pairing
              └─ Android node via Gateway WS + pairing

Agent Loop

The agent loop is the core processing cycle:

  1. Message Received - Channel receives a message
  2. Session Routing - Message is routed to appropriate session (main, group, or isolated)
  3. Context Loading - Agent loads relevant context, memories, and tools
  4. LLM Processing - Request sent to LLM provider (Claude, GPT, etc.)
  5. Tool Execution - Agent executes tools as needed (browser, file system, etc.)
  6. Response Streaming - Response streamed back to channel
  7. Memory Update - Conversation and context saved to workspace

This loop runs continuously, maintaining context and state across conversations.
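
The skeleton below mirrors that cycle in TypeScript. It is a sketch only: every helper stands in for the numbered step next to it and is not OpenClaw's actual API.

```ts
// agent-loop.ts - illustrative skeleton of the cycle above; every helper is a
// stand-in for the corresponding numbered step, not OpenClaw's real internals.

interface InboundMessage { channel: string; chatId: string; text: string; }

const history: string[] = [];                                      // 2. session state (stubbed)

const loadContext = () => history.slice(-20).join("\n");           // 3. context loading (stubbed)

async function* callModel(context: string, text: string) {         // 4. LLM processing (stubbed)
  yield "thinking... ";                                            //    a real provider streams tokens
  yield `you said: ${text} (with ${context.length} chars of context)`;
}

async function handleMessage(msg: InboundMessage) {                 // 1. message received
  let reply = "";
  for await (const chunk of callModel(loadContext(), msg.text)) {
    reply += chunk;                                                 // 5. tool calls would run here
    process.stdout.write(chunk);                                    // 6. response streaming
  }
  history.push(msg.text, reply);                                    // 7. memory update
  process.stdout.write("\n");
}

handleMessage({ channel: "telegram", chatId: "42", text: "hello" });
```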

Session Model

Session Types

  • Main Session - Direct messages collapse into a shared main session by default
  • Group Sessions - Each group chat gets an isolated session
  • Isolated Sessions - Can be created for specific routing or security needs
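
The default routing can be pictured as a small keying function: DMs collapse into one key, each group gets its own, and explicit isolation overrides both. A sketch (field and key names are illustrative):

```ts
// session-key.ts - sketch of how a message could map to a session key under the
// defaults described above; field and key names are illustrative.
interface Incoming { channel: string; chatId: string; isGroup: boolean; isolateAs?: string; }

function sessionKey(msg: Incoming): string {
  if (msg.isolateAs) return `isolated:${msg.isolateAs}`;            // explicit isolated session
  if (msg.isGroup)   return `group:${msg.channel}:${msg.chatId}`;   // one session per group chat
  return "main";                                                    // DMs collapse into the main session
}

console.log(sessionKey({ channel: "whatsapp", chatId: "team-chat", isGroup: true }));
// -> "group:whatsapp:team-chat"
```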

Session Features

  • Activation Modes - Control when the agent responds (always, mention-only, etc.)
  • Queue Modes - Handle concurrent requests
  • Session Isolation - Groups can run in Docker sandboxes
  • Context Management - Automatic context loading and pruning
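
Taken together, these features amount to a per-session policy. The interface below is a hypothetical shape for such a policy, named after the bullets above rather than taken from OpenClaw's real configuration schema:

```ts
// session-policy.ts - hypothetical per-session policy mirroring the feature list;
// names and allowed values are illustrative, not OpenClaw's real schema.
type ActivationMode = "always" | "mention-only";
type QueueMode = "serial" | "concurrent";

interface SessionPolicy {
  activation: ActivationMode;   // when the agent responds
  queue: QueueMode;             // how concurrent requests are handled
  dockerSandbox: boolean;       // run group sessions in an isolated container
  maxContextMessages: number;   // prune context beyond this many messages
}

const groupDefaults: SessionPolicy = {
  activation: "mention-only",
  queue: "serial",
  dockerSandbox: true,
  maxContextMessages: 50,
};

console.log(groupDefaults);
```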

Workspace Structure

OpenClaw stores everything as files and folders in your workspace (~/clawd by default):

  • Configuration: ~/.clawdbot/moltbot.json (path maintained for backward compatibility with Clawdbot/Moltbot)
  • Credentials: ~/.clawdbot/credentials/
  • Workspace Root: ~/clawd/
  • Prompt Files: AGENTS.md, SOUL.md, TOOLS.md
  • Skills: ~/clawd/skills/<skill>/SKILL.md
  • Memory Files: Daily notes in Markdown format
  • Sessions: Session state and history
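
Because the layout is plain files, it is easy to inspect programmatically. A minimal sketch using the default paths above (the helper itself is illustrative):

```ts
// workspace-paths.ts - resolve the default OpenClaw locations and list skills.
// Paths match the defaults documented above; the script itself is illustrative.
import { homedir } from "node:os";
import { join } from "node:path";
import { existsSync, readdirSync } from "node:fs";

const config    = join(homedir(), ".clawdbot", "moltbot.json");
const workspace = join(homedir(), "clawd");
const skillsDir = join(workspace, "skills");

console.log("config exists:", existsSync(config));
for (const prompt of ["AGENTS.md", "SOUL.md", "TOOLS.md"]) {
  console.log(prompt, existsSync(join(workspace, prompt)) ? "found" : "missing");
}
if (existsSync(skillsDir)) {
  console.log("skills:", readdirSync(skillsDir).join(", "));
}
```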

This file-based approach means you can:

  • Edit configurations directly
  • Search memories with tools like Raycast or Obsidian
  • Version control your workspace
  • Backup everything easily

Multi-Agent Routing

OpenClaw supports routing messages to different agents based on:

  • Channel - Different channels can use different agents
  • Account/Peer - Route specific contacts to dedicated agents
  • Group - Each group can have its own agent workspace
  • Workspace Isolation - Each agent has its own workspace and sessions

This allows you to run multiple specialized agents, each with their own context, skills, and configuration.
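
One way to picture this is a routing table evaluated top to bottom: the first matching rule selects an agent and its workspace. This is a sketch of the idea, not OpenClaw's configuration format:

```ts
// agent-routing.ts - sketch of first-match routing to agent workspaces;
// the rule shape and agent names are illustrative, not OpenClaw's config format.
interface Envelope { channel: string; peer: string; group?: string; }
interface Route { match: (m: Envelope) => boolean; agent: string; workspace: string; }

const routes: Route[] = [
  { match: (m) => m.group === "family",    agent: "home-agent", workspace: "~/clawd-home" },
  { match: (m) => m.channel === "discord", agent: "dev-agent",  workspace: "~/clawd-dev" },
  { match: () => true,                     agent: "default",    workspace: "~/clawd" },
];

const pick = (m: Envelope) => routes.find((r) => r.match(m))!;
console.log(pick({ channel: "discord", peer: "alice" }).agent); // -> "dev-agent"
```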

Streaming & Chunking

OpenClaw streams output incrementally rather than waiting for a complete reply:

  • Block Streaming - Streams responses in chunks for faster perceived performance
  • Telegram Draft Streaming - Shows typing indicators and draft messages in real time
  • Tool Streaming - Streams tool execution results as they become available
  • Markdown Formatting - Proper formatting across all channels
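
The idea behind block streaming can be shown with a small chunker that flushes buffered tokens at paragraph boundaries instead of waiting for the full reply. This is illustrative; OpenClaw's actual chunking rules may differ:

```ts
// block-stream.ts - flush a token stream to the channel at paragraph boundaries
// instead of waiting for the full reply (illustrative chunking rule).
async function* blockStream(tokens: AsyncIterable<string>): AsyncIterable<string> {
  let buffer = "";
  for await (const token of tokens) {
    buffer += token;
    const cut = buffer.lastIndexOf("\n\n");      // flush on completed paragraphs
    if (cut !== -1) {
      yield buffer.slice(0, cut + 2);
      buffer = buffer.slice(cut + 2);
    }
  }
  if (buffer) yield buffer;                       // flush the trailing partial block
}

// Demo: fake token source producing two paragraphs.
async function* fakeTokens() { yield "First block."; yield "\n\n"; yield "Second block."; }
for await (const block of blockStream(fakeTokens())) console.log("SEND:", JSON.stringify(block));
```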

Protocol & Communication

WebSocket Protocol

The Gateway exposes a WebSocket API for:

  • Real-time message delivery
  • Session management
  • Node pairing and communication
  • Control UI updates
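
A control-plane client is just a WebSocket connection exchanging JSON frames. The sketch below connects and logs incoming events; it assumes the ws npm package, and the frame shape is hypothetical since the wire format is not specified here:

```ts
// control-plane-client.ts - connect to the Gateway WebSocket and log events.
// Assumes the `ws` package; the frame shape below is hypothetical since the
// wire format is not specified in this document.
import WebSocket from "ws";

const gw = new WebSocket("ws://127.0.0.1:18789");

gw.on("open", () => console.log("connected to Gateway control plane"));
gw.on("message", (data) => {
  const frame = JSON.parse(data.toString());     // e.g. { type, payload } - hypothetical
  console.log("event:", frame.type ?? "unknown", frame);
});
gw.on("close", () => console.log("gateway connection closed"));
```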

RPC Mode

Agents communicate via RPC (Remote Procedure Call) for:

  • Tool invocation
  • Context queries
  • Memory updates
  • Session management
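
In spirit these are request/response pairs matched by an id, similar to JSON-RPC. The helper below sketches that pattern over a WebSocket; the method name and frame fields are hypothetical, not OpenClaw's actual RPC schema:

```ts
// rpc-call.ts - request/response over a WebSocket, matched by id (JSON-RPC style).
// Method names and frame fields are hypothetical, not OpenClaw's actual schema.
import WebSocket from "ws";

let nextId = 1;
function rpcCall(ws: WebSocket, method: string, params: unknown): Promise<unknown> {
  const id = nextId++;
  return new Promise((resolve, reject) => {
    const onMessage = (data: WebSocket.RawData) => {
      const frame = JSON.parse(data.toString());
      if (frame.id !== id) return;               // not our reply; keep waiting
      ws.off("message", onMessage);
      frame.error ? reject(new Error(frame.error)) : resolve(frame.result);
    };
    ws.on("message", onMessage);
    ws.send(JSON.stringify({ id, method, params }));
  });
}

// Usage sketch, reusing the `gw` socket from the previous example:
// const result = await rpcCall(gw, "tool.invoke", { name: "browser", args: { url: "https://example.com" } });
```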

Learn More