Scrape any webpage to markdown using your browser session
```bash
npx claudepluginhub pepijnsenders/browse-cli
```
Traditional web scrapers get blocked by Cloudflare, CAPTCHAs, and bot detection. browse sidesteps all of that by using your actual Chrome browser - the same session where you're already logged in and verified as human.
<p align="center"> <img src="assets/demo.gif" alt="browse-cli demo" width="600"> </p>

## How it works

```
Chrome Browser (with your logins)
        │
        ▼
Browse Extension ←──→ WebSocket Daemon (port 9222)
        │                      │
        ▼                      ▼
  Page Content            browse CLI
        │                      │
        └──────────┬───────────┘
                   ▼
            Markdown Output
```
The Browse extension connects your authenticated browser sessions to the CLI via a local WebSocket daemon. This lets you scrape any page you can see in your browser - including sites that require login.
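To confirm the daemon is actually up after `browse init`, check that something is listening on the WebSocket port from the diagram above:

```bash
lsof -i :9222   # the daemon process should show up here once `browse init` has run
```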
## Installation

### Homebrew

```bash
brew tap pepijnsenders/tap
brew install browse
```

### npm

```bash
npm install -g @pep/browse-cli
```

### From source

```bash
git clone https://github.com/PepijnSenders/browse-cli
cd browse-cli
bun install
bun run build
npm link
```
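Whichever install route you choose, verify the CLI is on your `PATH`:

```bash
browse --version
```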
## Chrome extension setup

1. Open chrome://extensions in Chrome
2. Enable "Developer mode"
3. Click "Load unpacked" and select the extension/ folder from this package

To find the extension folder after npm install:

```bash
npm root -g   # Shows global node_modules path
# Extension is at: <path>/browse-cli/extension
```
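As a shortcut, the two steps can be collapsed into one (a sketch; the `browse-cli` directory name follows the comment above and may differ depending on how the package was installed):

```bash
# List the extension folder directly; adjust the directory name if yours differs
ls "$(npm root -g)/browse-cli/extension"
```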
Then start the daemon and run a first scrape:

```bash
browse init
browse https://example.com
```
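The markdown goes to stdout, so it redirects like any other shell command:

```bash
browse https://example.com > example.md   # save the scraped page to a file
```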
## Usage

```bash
# Basic usage - outputs markdown
browse <url>

# Output JSON with metadata (url, title, content)
browse <url> --json

# Output pruned HTML instead of markdown
browse <url> --html

# Wait longer for dynamic content (default: 2000ms)
browse <url> --wait 5000

# Scroll for infinite-scroll pages
browse <url> --scroll 3
```
### Commands

| Command | Description |
|---|---|
| `browse <url>` | Scrape URL and output markdown |
| `browse init` | Start the WebSocket daemon |
| `browse stop` | Stop the daemon |
| `browse --help` | Show help |
| `browse --version` | Show version |
### Options

| Option | Description | Default |
|---|---|---|
| `--json` | Output JSON with url, title, and content | - |
| `--html` | Output pruned HTML instead of markdown | - |
| `--wait <ms>` | Wait time after page load | 2000 |
| `--scroll <n>` | Number of scroll iterations | 0 |
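The flags should compose as in most CLIs (an assumption; the table lists them individually), e.g. giving a heavy page extra time and a few scrolls before emitting JSON:

```bash
# Assumes flags can be combined in one invocation
browse https://news.ycombinator.com --wait 4000 --scroll 3 --json
```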
## Examples

```bash
browse https://techcrunch.com/2024/01/15/some-article

# Twitter/X
browse https://x.com/elonmusk

# LinkedIn
browse https://linkedin.com/in/satyanadella

# Scroll 5 times to load more content
browse https://news.ycombinator.com --scroll 5

# JSON with metadata
browse https://example.com --json | jq .
# Output:
# {
#   "url": "https://example.com",
#   "title": "Example Domain",
#   "content": "# Example Domain\n\nThis domain is for..."
# }
```
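Because `--json` wraps the markdown in a `content` field, `jq -r` can pull it back out, e.g. to save just the page body:

```bash
browse https://example.com --json | jq -r '.content' > example.md
```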
## Claude Code plugin

This package includes a Claude Code skill for natural-language web scraping.

```
# In Claude Code, run:
/plugin marketplace add PepijnSenders/browse-cli
/plugin install browse@PepijnSenders-browse-cli
```
Once installed, Claude can scrape pages naturally:

```
You: Get the content from https://news.ycombinator.com
Claude: [runs: browse https://news.ycombinator.com]

You: Scrape this Twitter profile and scroll to get more tweets
Claude: [runs: browse https://x.com/openai --scroll 3]
```
## Troubleshooting

Start the daemon:

```bash
browse init
```
Some pages load content dynamically. Try:

```bash
browse <url> --wait 5000   # Wait longer
browse <url> --scroll 2    # Scroll to trigger lazy loading
```
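If neither flag alone is enough, combining them (assuming flags compose, as above) gives slow pages both extra time and scroll-triggered loads:

```bash
# Assumes --wait and --scroll can be combined
browse <url> --wait 5000 --scroll 3
```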
Log into the website in your Chrome browser first. The CLI uses your existing browser session.
Make sure port 9222 is available:

```bash
lsof -i :9222   # Check if something else is using the port
browse stop     # Stop any existing daemon
browse init     # Start fresh
```
## Why a real browser?

Traditional headless browsers and scrapers fail on modern websites because: