Context System

How OpenClaw manages conversation context

Context is the information the agent uses to understand and respond to your messages. OpenClaw automatically loads relevant context, manages token usage, and compacts context when needed to stay within model limits.

What is Context?

Context includes:

  • Conversation History - Previous messages in the session
  • Memories - Relevant information from your workspace
  • System Prompts - Agent instructions and configuration
  • Tool Definitions - Available tools and their descriptions
  • Skills - Active skills and their context

System prompt order matters. LLMs weight the first (and last) tokens in the context window most heavily, so your agent's identity files (SOUL.md, AGENTS.md) should go first in the system prompt. Putting operational instructions or long memory dumps before the soul dilutes it and can hurt behavior. For more on why identity matters and how to design it, see Soul & Agent Identity.

Context Loading

OpenClaw automatically loads context:

Automatic Loading

  • Session Context - Loads conversation history
  • Memory Search - Finds relevant memories
  • Tool Loading - Includes available tools
  • Skill Context - Loads active skill information

Relevance

The agent loads only relevant context:

  • Searches memories for related information
  • Includes recent conversation history
  • Loads tools that might be needed
  • Keeps context focused and efficient
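The assembly step above can be sketched roughly as follows. This is an illustrative sketch only: the keyword overlap stands in for OpenClaw's semantic memory search, and the function name and parameters are assumptions, not the actual API.

```python
def assemble_context(query: str, memories: list[str], history: list[str],
                     recent_turns: int = 5) -> list[str]:
    """Pick memories that overlap the query, then append recent history."""
    words = set(query.lower().split())
    relevant = [m for m in memories if words & set(m.lower().split())]
    return relevant + history[-recent_turns:]

memories = ["user prefers dark mode", "project deadline is friday"]
# Only the deadline memory is relevant to this query.
ctx = assemble_context("deadline status", memories, ["hi", "hello"])
```

The point of the sketch is the shape of the result: a small set of relevant excerpts plus recent turns, rather than everything the agent knows.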

Token Management

LLMs have context limits measured in tokens:

  • Token Limits - Each model has a maximum context size
  • Token Counting - OpenClaw tracks token usage
  • Automatic Management - Stays within limits automatically
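As a rough mental model, the budgeting step can be sketched like this. The 4-characters-per-token estimate and the 80% threshold are illustrative assumptions, not OpenClaw's real tokenizer or limits.

```python
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English text."""
    return len(text) // 4

def needs_compaction(messages: list[str], limit: int, threshold: float = 0.8) -> bool:
    """Flag when estimated usage crosses a fraction of the model's context limit."""
    used = sum(estimate_tokens(m) for m in messages)
    return used >= limit * threshold

history = ["hello world" * 100] * 50  # ~13,750 estimated tokens
over_small_limit = needs_compaction(history, 16_000)    # near the limit
over_large_limit = needs_compaction(history, 200_000)   # plenty of headroom
```

Tracking usage against a threshold below the hard limit leaves room for the model's response tokens.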

Bootstrap file limits and truncation

Workspace files (AGENTS.md, SOUL.md, TOOLS.md, etc.) are loaded into context at the start of each turn, subject to a per-file character limit (default 20,000 characters). If a file exceeds the limit, OpenClaw keeps the first ~70% and the last ~20% and drops the middle. No error or warning is shown, so your agent may be running on incomplete instructions. If a rule in AGENTS.md seems ignored, it may be in the truncated middle.

To check, type /context list in chat; files marked TRUNCATED were cut. Put your most important rules at the top of each file. To allow longer files, increase bootstrapMaxChars (per file) and bootstrapTotalMaxChars (all bootstrap files combined) in config.
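The keep-head/keep-tail behavior can be sketched as below. The function name, marker string, and exact slicing are assumptions for illustration; only the ratios and the 20,000-character default come from the description above.

```python
def truncate_bootstrap(text: str, max_chars: int = 20_000) -> str:
    """Keep ~70% from the head and ~20% from the tail; drop the middle."""
    if len(text) <= max_chars:
        return text
    head = int(max_chars * 0.7)
    tail = int(max_chars * 0.2)
    return text[:head] + "\n[...TRUNCATED...]\n" + text[-tail:]

# A 30,000-char file gets cut: rules near the top and bottom survive,
# but anything in the middle disappears silently.
doc = "RULE-TOP\n" + ("x" * 30_000) + "\nRULE-BOTTOM"
cut = truncate_bootstrap(doc)
```

This is why rule placement matters: content at the top of a bootstrap file is always retained, while mid-file content is the first to go.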

Token Usage

Tokens are used for:

  • System prompts
  • Conversation history
  • Memories
  • Tool definitions
  • Your messages
  • Agent responses

Context Compaction

When context approaches token limits, OpenClaw compacts it:

How Compaction Works

  • Summarization - Old context summarized
  • Pruning - Less relevant information removed
  • Preservation - Important information kept
  • Automatic - Happens seamlessly

What Gets Compacted

  • Old conversation history
  • Less relevant memories
  • Redundant information

What's Preserved

  • Recent conversation
  • Important memories
  • System prompts
  • Active tool definitions
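Putting these pieces together, compaction can be sketched as: summarize everything older than a recency window and keep recent turns verbatim. This is an illustrative sketch; in the real system the summary would be produced by the model itself, which is stubbed out here with a placeholder string.

```python
def compact(messages: list[str], keep_recent: int = 4) -> list[str]:
    """Replace old turns with a single summary entry; keep recent turns verbatim."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Placeholder: a real summarizer would condense `old` via the model.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent

history = [f"msg {i}" for i in range(10)]
compacted = compact(history)
```

The compacted list is much shorter, yet the most recent turns survive untouched, matching the preservation rules above.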

Session Context

Each session maintains its own context:

  • Main Session - Shared context across DMs
  • Group Sessions - Isolated context per group
  • Isolated Sessions - Separate context for specific needs

Context Isolation

Different sessions don't share context:

  • Groups have separate context
  • Isolated sessions are independent
  • Main session shares across DMs
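The isolation rules above can be sketched as a map from a session key to its own message list. The key scheme and class shape here are assumptions for illustration, not OpenClaw's internal data model.

```python
from collections import defaultdict

class SessionStore:
    """Each session key gets its own context; all DMs share one 'main' key."""
    def __init__(self):
        self.contexts = defaultdict(list)

    def key_for(self, chat_type: str, chat_id: str) -> str:
        # Direct messages share the main session; groups are isolated per id.
        return "main" if chat_type == "dm" else f"group:{chat_id}"

    def append(self, chat_type: str, chat_id: str, message: str):
        self.contexts[self.key_for(chat_type, chat_id)].append(message)

store = SessionStore()
store.append("dm", "alice", "hi")
store.append("dm", "bob", "hello")       # lands in the same shared main context
store.append("group", "dev", "standup")  # isolated per-group context
```

Because group contexts are keyed separately, nothing said in one group leaks into another or into DMs.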

Optimizing Context

For Better Performance

  • Keep Conversations Focused - Easier context management
  • Use Clear Messages - Better context understanding
  • Let Compaction Work - Trust automatic management

For Lower Token Cost

Most token spend is input (system prompt, history, tool definitions). If the agent re-reads a huge AGENTS.md or SOUL.md every turn, costs climb. Keep core identity and critical rules in SOUL.md and AGENTS.md; move long reference material into the memory folder and let the agent fetch what it needs via semantic search. That way each turn only injects relevant excerpts instead of the full manual.

For Better Context

  • Build Memories - More memories = better context
  • Use Skills - Skills add relevant context
  • Maintain Workspace - Organized workspace helps

Context Best Practices

  • Trust Automatic Management - Compaction works well
  • Build Good Memories - Quality memories improve context
  • Use Clear Messages - Better context understanding
  • Let It Learn - Context improves over time
  • Monitor Token Usage - Check usage if needed

See also