Unbrowse: Skip the Browser, Call the API
Capture a site’s real API once—then your agent uses it forever at API speed
Your OpenClaw agent needs to check a price, place a trade, or submit a form. The usual way is browser automation: fire up Chrome, wait for the page to load, find the right elements, click, scrape. That’s 10–45 seconds per action, fails a fair chunk of the time, and gobbles 500MB+ in a headless browser. Here’s the thing: the website is usually just hitting its own APIs under the hood. Same data, clean JSON, a couple hundred milliseconds. Unbrowse is a plugin that grabs those APIs once and turns them into tools your agent can call directly. First time you (or the agent) browse the site; after that it’s all API. Same outcome, way faster and way more reliable.
Unbrowse is from getFoundry, MIT licensed, and installs as an OpenClaw plugin. Below we walk through what it does, when it helps, and how to get started.
In a nutshell: You browse a site once while Unbrowse records the network traffic. It figures out the real API endpoints and auth, then generates a skill your agent can use. From then on the agent talks to the API, not the DOM. No Chrome, no 10-second waits—just HTTP. Your agent gets faster the more sites you capture.
Almost everything you do on the web is an API call with a button in front of it. Check odds on a prediction market? The page already ran GET /api/markets/election. Place a trade, submit a form on LinkedIn, send a Slack message, book a flight? Mostly POST requests. The browser is just a pretty layer on top. Your agent doesn’t need the pretty layer—it needs the data and the actions. With browser automation you’re still doing the full round-trip: load HTML, run JS, scrape the DOM, click the thing that fires the request. Unbrowse skips that: capture the real requests once, then call them. You get sub-second responses instead of 10–45 seconds, fewer timeouts and DOM breakages, and no headless Chrome. For workflows with lots of web steps, that’s the difference between “this feels broken” and “this actually works.”
Unbrowse cares about what the site does on the network, not what it paints on the screen. So you stop scraping the DOM and call the same APIs the site uses: data comes back as JSON, actions are plain HTTP, and after the first capture the browser stays out of it.
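To make the idea concrete, here's a minimal sketch of replaying a captured request directly, with no browser in the loop. The endpoint URL and the `CapturedRequest` shape are illustrative assumptions, not Unbrowse's actual capture format:

```python
import json
import urllib.request
from dataclasses import dataclass, field

@dataclass
class CapturedRequest:
    """One API call recorded during the initial browse (hypothetical shape).

    The point: store enough of the request to replay it later as plain
    HTTP, instead of loading the page and scraping the DOM.
    """
    method: str
    url: str
    headers: dict = field(default_factory=dict)

    def replay(self, timeout: float = 5.0) -> dict:
        req = urllib.request.Request(self.url, headers=self.headers,
                                     method=self.method)
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)  # structured JSON, no DOM scraping

# The page already ran this call under the hood when a human checked the
# odds; after capture, the agent can run the same call directly.
markets = CapturedRequest("GET", "https://example.com/api/markets/election",
                          headers={"Accept": "application/json"})
```

A single `replay()` is one HTTP round-trip, which is where the sub-second responses come from.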
Here’s the rough picture (getFoundry’s benchmarks and what most people see):
| Factor | Browser automation | Unbrowse |
|---|---|---|
| Speed | 10–45 s per action | ~200 ms (direct API) |
| Reliability | ~70–85% (DOM breaks, timeouts) | 95%+ (HTTP calls) |
| Resources | Headless Chrome (500MB+) | Plain HTTP |
| Data | Scraped from DOM | Structured JSON |
Browser automation still makes sense when you’re exploring once, the site has no clear API, or you actually need to see the page (e.g. screenshots). Unbrowse shines when the site does have internal APIs and you’re doing the same kind of thing over and over—checking prices, placing trades, submitting forms. Capture once, then run at API speed.
Unbrowse is a fit when your agent is doing repeated web actions that map cleanly to APIs: checking prices, placing trades, submitting forms, polling dashboards. If a human could do it by clicking around once and then repeating the same requests, Unbrowse can usually turn that into a fast, reliable skill.
OpenClaw gives your agent a toolkit: file system, shell, browser, messaging, scheduling, memory. Unbrowse adds another kind: captured website APIs. You browse a site once; Unbrowse turns that site’s API into a skill. From then on the agent just calls the API. So the more sites you capture, the faster and smarter your agent gets—every site becomes a reusable tool.

Those captured APIs can be packaged as skills and shared. One agent figures out Polymarket’s API; now any agent can trade at API speed without ever opening a browser. getFoundry is working on a marketplace where agents can share and trade these skills (including micropayments so agents can buy capabilities for themselves). Skills compound; the ecosystem gets smarter as more people use it.
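A rough sketch of what "a site becomes a reusable tool" could look like. The skill dictionary below (name, base URL, endpoint templates) is a hypothetical shape invented for illustration; Unbrowse's real skill format lives in its repo:

```python
import string

# Hypothetical generated skill: named endpoints the agent can fill in
# and call. Everything here (names, paths, base_url) is illustrative.
SKILL = {
    "name": "polymarket",
    "base_url": "https://example.com",
    "endpoints": {
        "get_market": {"method": "GET", "path": "/api/markets/${market_id}"},
        "place_trade": {"method": "POST", "path": "/api/trades"},
    },
}

def endpoint_url(skill: dict, name: str, **params: str) -> str:
    """Resolve an endpoint template into a concrete URL."""
    ep = skill["endpoints"][name]
    path = string.Template(ep["path"]).substitute(params)
    return skill["base_url"] + path
```

With a shape like this, `endpoint_url(SKILL, "get_market", market_id="election")` resolves to a concrete URL, and sharing the skill is just sharing the dictionary.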
Unbrowse is an OpenClaw plugin. Get OpenClaw installed first (see Installation), then:
```shell
openclaw plugins install @getfoundry/unbrowse-openclaw
```
Both OpenClaw and Unbrowse are MIT licensed. For the latest install steps, plugin commands, and how to capture your first site, check the Unbrowse repo and getFoundry’s docs (linked below).
Do I have to capture every site separately? Yes. Each site (or each distinct app on a domain) gets captured once. After that the agent uses the generated skill for that site. So you invest a one-time browse per site; from then on it’s API calls.
What about login and auth? Unbrowse records auth as part of the capture—Bearer tokens, cookies, API keys that the site sends. When it generates the skill, it includes auth config so the agent can authenticate when it calls the API. You may need to re-capture if auth expires or the site changes how it works.
What if the site changes its API? If the site updates endpoints or request format, your captured skill might break. Then you capture again and regenerate. That’s still usually less pain than maintaining brittle DOM scrapers that break on every UI change.
Does this work with any website? It works best with sites that use clear internal APIs (XHR, fetch, GraphQL). Plain server-rendered HTML with no JS might not expose much to capture. SPAs, dashboards, and modern web apps are the sweet spot.
Unbrowse is built by getFoundry. The idea and the numbers we quoted come from getFoundry’s post on X. Both projects are open source (MIT):
This site (openclaw-ai.online) is an independent resource and isn’t run by getFoundry or the OpenClaw project. For the latest install flow, plugin API, and the skills marketplace, head to getFoundry and the Unbrowse repo.
Bottom line: Unbrowse turns “browse once, then use the API forever” into a real workflow. Your agent stops waiting on the browser and starts calling the same endpoints the site uses. For repeated web actions, that’s often the difference between “slow and brittle” and “fast and reliable.”