This tutorial shows you how to use OpenClaw's browser automation capabilities to navigate websites, fill forms, extract data, and automate web tasks. Estimated time: 25-30 minutes.

What You'll Learn

By the end of this tutorial, you'll know how to:

  • Navigate websites - Automatically browse and interact with web pages
  • Fill forms - Automate form completion and submission
  • Extract data - Scrape and parse information from websites
  • Use Canvas - Leverage the visual Canvas workspace for complex interactions
  • Build automation workflows - Create reusable web automation patterns

Prerequisites

Before starting:

  • OpenClaw installed and running - Complete the Getting Started Tutorial first
  • Browser control enabled - Browser automation requires Chrome/Chromium
  • Basic web knowledge - Understanding of HTML and CSS selectors is helpful but not required
💡 Note: Browser automation runs in a dedicated Chrome/Chromium instance. Ensure you have sufficient system resources and Chrome installed.

Step 1: Enable Browser Control

Browser control is enabled by default in most installations. Verify that it's working:

Test Browser Control
openclaw agent --message "Navigate to https://example.com and tell me what you see"

If browser control isn't working, check the Troubleshooting Guide.

Browser Configuration

Configure browser settings in your OpenClaw config:

Browser Configuration
{
  "browser": {
    "enabled": true,
    "headless": false,
    "timeout": 30000
  }
}

Step 2: Basic Web Navigation

Let's start with basic web navigation. Ask OpenClaw to visit a website:

Navigate to Website
Navigate to https://news.ycombinator.com and summarize the top 5 stories

OpenClaw will:

  1. Open the browser
  2. Navigate to the URL
  3. Read the page content
  4. Extract and summarize information
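
The read-and-extract steps above can be sketched manually in Python using only the standard library. This is an illustration of the idea, not OpenClaw's internals; the `SAMPLE_HTML` string stands in for a fetched page, and the `titleline` class name is hypothetical:

```python
from html.parser import HTMLParser

# Stand-in for HTML the browser would fetch; a real run downloads the page.
SAMPLE_HTML = """
<html><body>
<a class="titleline" href="https://a.example">Story one</a>
<a class="titleline" href="https://b.example">Story two</a>
</body></html>
"""

class TitleExtractor(HTMLParser):
    """Collect the text of links marked with a given CSS class."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted_class = wanted_class
        self.titles = []
        self._capture = False

    def handle_starttag(self, tag, attrs):
        # Start capturing when we enter a matching <a> element.
        if tag == "a" and dict(attrs).get("class") == self.wanted_class:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.titles.append(data.strip())
            self._capture = False

parser = TitleExtractor("titleline")
parser.feed(SAMPLE_HTML)
print(parser.titles)  # ['Story one', 'Story two']
```

The agent performs the equivalent of this extraction for you, so you describe *what* to extract rather than writing a parser.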

Interactive Navigation

You can guide OpenClaw through websites:

  • "Click on the 'About' link"
  • "Scroll down to find the pricing section"
  • "Take a screenshot of the page"
  • "Find all links on this page"

Step 3: Form Filling

OpenClaw can automatically fill and submit forms. Here's how:

Simple Form Example

Ask OpenClaw to fill a form:

Fill Contact Form
Go to https://example.com/contact and fill out the contact form with:
- Name: John Doe
- Email: john@example.com
- Message: Hello, I'm interested in your services

Complex Forms

For complex forms, provide detailed instructions:

  • "Fill the registration form with my details"
  • "Select 'Premium' plan from the dropdown"
  • "Check the terms and conditions checkbox"
  • "Submit the form"

OpenClaw will identify form fields and fill them appropriately.
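
Conceptually, submitting a form boils down to encoding field values and POSTing them to the form's action URL. A minimal sketch of that step, assuming a standard URL-encoded form (the URL and field names here are hypothetical; the request is built but never sent):

```python
import urllib.request
from urllib.parse import urlencode

# Hypothetical field names; a real form defines its own.
fields = {
    "name": "John Doe",
    "email": "john@example.com",
    "message": "Hello, I'm interested in your services",
}

# Encode the fields the way a browser would for a standard form POST.
body = urlencode(fields).encode("utf-8")

# Build (but do not send) the request the submission would issue.
request = urllib.request.Request(
    "https://example.com/contact",
    data=body,
    method="POST",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(request.get_method())  # POST
```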

Step 4: Data Extraction

Extract data from websites for analysis or storage:

Extract Text Content

Extract Article Content
Go to https://example.com/article and extract:
- The article title
- The author name
- The main content
- All links in the article

Extract Structured Data

Extract data into structured formats:

Extract Product Data
Go to https://example.com/products and extract all products with:
- Product name
- Price
- Description
- Image URL
Save this as a JSON file
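
The "save as JSON" step is straightforward once the fields are extracted. A sketch with made-up product records standing in for scraped data:

```python
import json

# Made-up records standing in for fields scraped from the page.
products = [
    {"name": "Widget", "price": 19.99,
     "description": "A basic widget", "image_url": "https://example.com/w.png"},
    {"name": "Gadget", "price": 42.50,
     "description": "A deluxe gadget", "image_url": "https://example.com/g.png"},
]

# Write the records to disk as pretty-printed JSON.
with open("products.json", "w", encoding="utf-8") as f:
    json.dump(products, f, indent=2)

# Round-trip to confirm the file is valid JSON.
with open("products.json", encoding="utf-8") as f:
    loaded = json.load(f)
print(len(loaded))  # 2
```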

Scrape Multiple Pages

Automate scraping across multiple pages:

Scrape Multiple Pages
Visit https://example.com/products and:
1. Extract all product links from page 1
2. Visit each product page
3. Extract product details
4. Save all data to products.json
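
The crawl above is essentially a fetch loop with a politeness delay. A sketch of that pattern, using a stubbed `fetch` so it runs without network access:

```python
import time

# Stubbed pages keyed by URL; a real crawler would perform HTTP requests.
PAGES = {
    "/products": ["/products/1", "/products/2"],
    "/products/1": {"name": "Widget", "price": 19.99},
    "/products/2": {"name": "Gadget", "price": 42.50},
}

def fetch(url):
    return PAGES[url]

def crawl(listing_url, delay=0.0):
    """Visit the listing, then each product page, collecting details."""
    results = []
    for link in fetch(listing_url):
        results.append(fetch(link))
        time.sleep(delay)  # be polite to the server between requests
    return results

data = crawl("/products")
print(data[0]["name"])  # Widget
```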

Step 5: Using Canvas

Canvas provides a visual workspace for complex browser interactions:

What is Canvas?

Canvas is OpenClaw's visual interface that allows the agent to:

  • Render interactive UIs
  • Display visual information
  • Create custom interfaces
  • Show browser screenshots

Access Canvas

Canvas is accessible via:

  • Control UI at http://localhost:18789
  • macOS menu bar app
  • iOS/Android companion apps

Canvas with Browser

When using browser automation, Canvas can:

  • Show browser screenshots
  • Display extracted data visually
  • Render custom visualizations
  • Provide interactive feedback

Step 6: Advanced Patterns

Price Monitoring

Monitor prices on e-commerce sites:

Price Monitoring
Every day at 9am:
1. Visit https://example.com/product/123
2. Extract the current price
3. Compare with previous price
4. Send me a notification if price dropped
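
The comparison step in that workflow can be expressed as a tiny function: keep the last seen price and flag a drop. The message format below is illustrative:

```python
def check_price(current, previous):
    """Return a notification message if the price dropped, else None."""
    if previous is not None and current < previous:
        return f"Price dropped from {previous:.2f} to {current:.2f}"
    return None

# First run: no previous price recorded, so nothing to report.
print(check_price(99.00, None))   # None
# Later run: the price fell, so a notification fires.
print(check_price(89.00, 99.00))  # Price dropped from 99.00 to 89.00
```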

News Aggregation

Aggregate news from multiple sources:

News Aggregation
Visit these news sites:
- https://news.ycombinator.com
- https://www.reddit.com/r/programming
Extract top 5 stories from each
Create a summary document
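
The merge-and-summarize step amounts to grouping stories by source into one document. A sketch with made-up headlines standing in for scraped stories:

```python
# Made-up headlines standing in for scraped stories.
sources = {
    "news.ycombinator.com": ["HN story A", "HN story B"],
    "reddit.com/r/programming": ["Reddit story A"],
}

# Build a simple per-source summary document.
lines = ["# Daily News Summary", ""]
for site, stories in sources.items():
    lines.append(f"## {site}")
    lines.extend(f"- {title}" for title in stories)
    lines.append("")

summary = "\n".join(lines)
print(summary.splitlines()[0])  # # Daily News Summary
```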

Research Automation

Automate research tasks:

Research Task
Research "OpenClaw AI assistant" and:
1. Search Google for recent articles
2. Visit top 5 results
3. Extract key information
4. Create a research summary document

Step 7: Best Practices

Respect Website Terms

  • Always respect robots.txt
  • Don't overload servers with requests
  • Add delays between requests when scraping
  • Follow website terms of service
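
Python's standard library can check robots.txt rules directly, which is useful when you script around the agent. A sketch with an inline policy (the rules shown are illustrative; normally you fetch robots.txt from the site root):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content; normally fetched from the site root.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Check individual URLs against the policy before fetching them.
print(rp.can_fetch("*", "https://example.com/products"))      # True
print(rp.can_fetch("*", "https://example.com/private/data"))  # False
```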

Error Handling

  • Handle page load failures gracefully
  • Verify elements exist before interacting
  • Use timeouts for slow-loading pages
  • Log errors for debugging
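
Graceful handling usually means a bounded retry with a backoff between attempts. A sketch of that pattern, using a stand-in loader that times out once before succeeding:

```python
import time

def load_with_retry(load, attempts=3, backoff=0.0):
    """Call load(), retrying up to `attempts` times on timeout."""
    last_error = None
    for attempt in range(attempts):
        try:
            return load()
        except TimeoutError as exc:
            last_error = exc
            time.sleep(backoff * (attempt + 1))  # wait longer each retry
    raise last_error

calls = {"n": 0}

def flaky_load():
    # Simulated page load: times out on the first call only.
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("page load timed out")
    return "<html>ok</html>"

print(load_with_retry(flaky_load))  # <html>ok</html>
```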

Performance

  • Use headless mode for faster execution
  • Cache frequently accessed data
  • Optimize selectors for faster element finding
  • Close browser tabs when done
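
"Cache frequently accessed data" can be as simple as memoizing the fetch function; Python's `functools.lru_cache` does this in one line. The fetch below is a stub that counts calls to show the cache working:

```python
from functools import lru_cache

fetch_count = 0

@lru_cache(maxsize=128)
def fetch_page(url):
    """Stub fetch; a real version would perform an HTTP request."""
    global fetch_count
    fetch_count += 1
    return f"<html>content of {url}</html>"

fetch_page("https://example.com/pricing")
fetch_page("https://example.com/pricing")  # served from cache, no new fetch
print(fetch_count)  # 1
```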

Troubleshooting

Browser Not Starting

  • Check Chrome/Chromium is installed
  • Verify browser permissions
  • Check Gateway logs: openclaw gateway logs
  • See Troubleshooting Guide

Elements Not Found

  • Wait for page to fully load
  • Check element selectors are correct
  • Use browser dev tools to verify selectors
  • Try more specific selectors

Slow Performance

  • Enable headless mode
  • Reduce screenshot frequency
  • Optimize page interactions
  • Use faster selectors

Next Steps

Now that you understand browser automation, explore these related topics:

🕷️ Web Scraping Use Case

Complete web scraping workflow

View Tutorial →

🌐 Browser Reference

Complete browser features guide

View Reference →

🤖 Automation Tutorial

Advanced automation patterns

View Tutorial →

💡 Pro Tip

Combine browser automation with other OpenClaw features for powerful workflows:

  • Use memory to remember website preferences
  • Combine with automation for scheduled scraping
  • Integrate with skills for specialized web tasks
  • Use Canvas to visualize extracted data