Unbrowse: Skip the Browser, Call the API

Capture a site’s real API once—then your agent uses it forever at API speed

Your OpenClaw agent needs to check a price, place a trade, or submit a form. The usual way is browser automation: fire up Chrome, wait for the page to load, find the right elements, click, scrape. That’s 10–45 seconds per action, fails a fair chunk of the time, and eats 500MB+ of memory running headless Chrome. Here’s the thing: the website is usually just hitting its own APIs under the hood. Same data, clean JSON, a couple hundred milliseconds. Unbrowse is a plugin that captures those APIs once and turns them into tools your agent can call directly. The first time, you (or the agent) browse the site; after that it’s all API. Same outcome, way faster and way more reliable.

Unbrowse is from getFoundry, MIT licensed, and installs as an OpenClaw plugin. Below we walk through what it does, when it helps, and how to get started.

In a nutshell: You browse a site once while Unbrowse records the network traffic. It figures out the real API endpoints and auth, then generates a skill your agent can use. From then on the agent talks to the API, not the DOM. No Chrome, no 10-second waits—just HTTP. Your agent gets faster the more sites you capture.

Why the browser is usually overkill

Almost everything you do on the web is an API call with a button in front of it. Check odds on a prediction market? The page already ran GET /api/markets/election. Place a trade, submit a form on LinkedIn, send a Slack message, book a flight? Mostly POST requests. The browser is just a pretty layer on top. Your agent doesn’t need the pretty layer—it needs the data and the actions. With browser automation you’re still doing the full round-trip: load HTML, run JS, scrape the DOM, click the thing that fires the request. Unbrowse skips that: capture the real requests once, then call them. You get sub-second responses instead of 10–45 seconds, fewer timeouts and DOM breakages, and no headless Chrome. For workflows with lots of web steps, that’s the difference between “this feels broken” and “this actually works.”
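For instance, the “check odds” case above boils down to one JSON request. A minimal TypeScript sketch, where the host and endpoint path are made-up placeholders rather than any real market’s API:

```typescript
// Hypothetical endpoint: a prediction-market page typically loads its
// odds with a plain JSON request like this under the hood. The host and
// path here are placeholders, not a real site's API.
const BASE = "https://example-market.com";

// Build the same URL the page itself requests -- no browser involved.
function marketUrl(slug: string): string {
  return `${BASE}/api/markets/${encodeURIComponent(slug)}`;
}

// Fetch the odds directly: structured JSON in a couple hundred ms,
// versus a full page load, render, and scrape.
async function fetchOdds(slug: string): Promise<unknown> {
  const res = await fetch(marketUrl(slug), {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Swap the browser round-trip for this one call and a 10–45 second action becomes a sub-second one.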

How it works (capture, extract, generate)

Unbrowse cares about what the site does on the network, not what it paints on the screen:

  1. Capture – You (or your agent) browse the site once. Unbrowse sits in the middle and intercepts traffic via Chrome DevTools Protocol—every XHR, fetch, WebSocket, auth header, cookie. It’s all recorded.
  2. Extract – That traffic gets analyzed. Real API endpoints are identified. Auth pops out (Bearer tokens, cookies, API keys). Parameters are inferred. Endpoints get grouped by resource so you get a clear picture of the API.
  3. Generate – Unbrowse spits out an API skill: documented endpoints, a TypeScript client, auth config. Your agent gets to call these APIs directly. One browse session, and you’ve got permanent API access. No need to open the browser again for that site.
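To make step 3 concrete, a generated skill might look roughly like this. This is purely a hypothetical shape; Unbrowse’s actual generated client may differ:

```typescript
// Hypothetical sketch of a generated skill. Unbrowse's real output
// shape isn't shown here; the point is the idea: each captured
// endpoint becomes a typed method with the recorded auth baked in.
interface CapturedAuth {
  headers: Record<string, string>; // e.g. { Authorization: "Bearer ..." }
}

class MarketSkill {
  constructor(private base: string, private auth: CapturedAuth) {}

  // Captured as GET /api/markets/:slug during the browse session.
  endpointFor(slug: string): string {
    return `${this.base}/api/markets/${encodeURIComponent(slug)}`;
  }

  async getMarket(slug: string): Promise<unknown> {
    const res = await fetch(this.endpointFor(slug), {
      headers: { Accept: "application/json", ...this.auth.headers },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  }
}
```

An agent holding this skill never touches the DOM; it calls `getMarket` like any other tool.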

So you stop scraping the DOM. You call the same APIs the site uses. Data comes back as JSON; actions are plain HTTP. After the first capture, the browser stays out of it.

Browser automation vs Unbrowse

Here’s the rough picture (getFoundry’s benchmarks and what most people see):

| Factor | Browser automation | Unbrowse |
| --- | --- | --- |
| Speed | 10–45 s per action | ~200 ms (direct API) |
| Reliability | ~70–85% (DOM breaks, timeouts) | 95%+ (HTTP calls) |
| Resources | Headless Chrome (500MB+) | Plain HTTP |
| Data | Scraped from DOM | Structured JSON |

Browser automation still makes sense when you’re exploring once, the site has no clear API, or you actually need to see the page (e.g. screenshots). Unbrowse shines when the site does have internal APIs and you’re doing the same kind of thing over and over—checking prices, placing trades, submitting forms. Capture once, then run at API speed.

Example workflows that benefit

Unbrowse is a fit when your agent is doing repeated web actions that map cleanly to APIs:

  • Prediction markets / trading – Check odds, place trades, see positions. The UI is usually a thin layer over REST or similar. Capture the market API once; after that the agent can query and trade in milliseconds.
  • Forms and submissions – Job applications, contact forms, booking flows. Submit a form on the site once while Unbrowse records; the agent can replay the same POSTs with different data.
  • Dashboards and SPAs – Internal tools, analytics, admin panels. React and similar apps fetch everything from APIs. Capture those calls and the agent can pull data or trigger actions without loading the UI.
  • Monitoring and data pulls – “Check this page every hour” style workflows. One capture gives you the endpoint; the agent hits it on a schedule. No browser, no DOM, just HTTP.
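As a sketch of the forms case: once a submission has been captured, replaying it is just rebuilding the same POST with new data. Everything here (the URL, the field names, the session cookie) is a placeholder standing in for whatever the capture actually recorded:

```typescript
// Hypothetical replay of a captured form submission. The path, field
// names, and cookie are placeholders -- a real capture records the
// site's actual values.
interface Application {
  name: string;
  email: string;
  role: string;
}

// Rebuild the captured POST with a fresh payload.
function buildSubmission(app: Application): RequestInit & { url: string } {
  return {
    url: "https://example-jobs.com/api/applications",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Cookie: "session=<captured-session-cookie>",
    },
    body: JSON.stringify(app),
  };
}

// Replay the same request shape with different data on each run.
async function submit(app: Application): Promise<number> {
  const { url, ...init } = buildSubmission(app);
  const res = await fetch(url, init);
  return res.status;
}
```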

If a human could do it by clicking around once and then repeating the same requests, Unbrowse can usually turn that into a fast, reliable skill.

How it fits with OpenClaw

OpenClaw gives your agent a toolkit: file system, shell, browser, messaging, scheduling, memory. Unbrowse adds another kind: captured website APIs. You browse a site once; Unbrowse turns that site’s API into a skill. From then on the agent just calls the API. So the more sites you capture, the faster and smarter your agent gets—every site becomes a reusable tool.

Those captured APIs can be packaged as skills and shared. One agent figures out Polymarket’s API; now any agent can trade at API speed without ever opening a browser. getFoundry is working on a marketplace where agents can share and trade these skills (including micropayments so agents can buy capabilities for themselves). Skills compound; the ecosystem gets smarter as more people use it.

Install and run

Unbrowse is an OpenClaw plugin. Get OpenClaw installed first (see Installation), then:

Install the Unbrowse plugin:

```shell
openclaw plugins install @getfoundry/unbrowse-openclaw
```

Both OpenClaw and Unbrowse are MIT licensed. For the latest install steps, plugin commands, and how to capture your first site, check the Unbrowse repo and getFoundry’s docs (linked below).

Unbrowse vs built-in browser: when to use which

  • Reach for Unbrowse when you’re hitting the same site or workflow a lot and the site has internal APIs (SPAs, dashboards, prediction markets, booking flows). Capture once, then run at API speed. Best for trading, form submissions, monitoring, and repeated data pulls.
  • Stick with OpenClaw’s built-in browser when you’re exploring once, the site has no obvious API, or you actually need to drive the UI (screenshots, one-off flows, or “click around and see what’s there”). See the Browser & Canvas guide.

Common questions

Do I have to capture every site separately? Yes. Each site (or each distinct app on a domain) gets captured once. After that the agent uses the generated skill for that site. So you invest a one-time browse per site; from then on it’s API calls.

What about login and auth? Unbrowse records auth as part of the capture—Bearer tokens, cookies, API keys that the site sends. When it generates the skill, it includes auth config so the agent can authenticate when it calls the API. You may need to re-capture if auth expires or the site changes how it works.
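One way to picture the recorded auth (the field names here are illustrative, not Unbrowse’s actual schema):

```typescript
// Hypothetical auth record as a capture might store it. Field names
// are illustrative placeholders, not Unbrowse's real config format.
interface SkillAuth {
  kind: "bearer" | "cookie" | "apiKey";
  value: string;
  expiresAt?: number; // epoch ms, if the capture could infer an expiry
}

// Turn the recorded credential into request headers.
function authHeaders(auth: SkillAuth): Record<string, string> {
  switch (auth.kind) {
    case "bearer":
      return { Authorization: `Bearer ${auth.value}` };
    case "cookie":
      return { Cookie: auth.value };
    case "apiKey":
      return { "X-Api-Key": auth.value };
  }
}

// A stale credential is the signal that it's time to re-capture.
function needsRecapture(auth: SkillAuth, now = Date.now()): boolean {
  return auth.expiresAt !== undefined && auth.expiresAt <= now;
}
```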

What if the site changes its API? If the site updates endpoints or request format, your captured skill might break. Then you capture again and regenerate. That’s still usually less pain than maintaining brittle DOM scrapers that break on every UI change.

Does this work with any website? It works best with sites that use clear internal APIs (XHR, fetch, GraphQL). Plain server-rendered HTML with no JS might not expose much to capture. SPAs, dashboards, and modern web apps are the sweet spot.

Source and links

Unbrowse is built by getFoundry. The idea and the numbers we quoted come from getFoundry’s post on X. Both projects are open source (MIT).

This site (openclaw-ai.online) is an independent resource and isn’t run by getFoundry or the OpenClaw project. For the latest install flow, plugin API, and the skills marketplace, head to getFoundry and the Unbrowse repo.

Bottom line: Unbrowse turns “browse once, then use the API forever” into a real workflow. Your agent stops waiting on the browser and starts calling the same endpoints the site uses. For repeated web actions, that’s often the difference between “slow and brittle” and “fast and reliable.”

See also

  • Browser & Canvas – Built-in browser automation and Canvas
  • Skills – How skills extend your agent
  • Automation – Scheduling and workflows
  • Installation – Install OpenClaw first