# browse

Scrape any webpage to markdown using your browser session.

Traditional web scrapers get blocked by Cloudflare, CAPTCHAs, and bot detection. browse sidesteps all of that by using your actual Chrome browser - the same session where you're already logged in and verified as human.

## How It Works

```
Chrome Browser (with your logins)
              │
              ▼
      Browse Extension ←──→ WebSocket Daemon (port 9222)
              │                    │
              ▼                    ▼
        Page Content          browse CLI
              │                    │
              └─────────┬──────────┘
                        │
                        ▼
                Markdown Output
```
The Browse extension connects your authenticated browser sessions to the CLI via a local WebSocket daemon. This lets you scrape any page you can see in your browser - including sites that require login.
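
In day-to-day use, that architecture boils down to three commands: start the daemon once, scrape as many pages as you like, then stop it when you're done.

```bash
browse init                 # start the WebSocket daemon on port 9222
browse https://example.com  # the extension loads the page in your Chrome session
browse stop                 # shut the daemon down when you're finished
```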

## Installation

### Homebrew (recommended)

```bash
brew tap pepijnsenders/tap
brew install browse
```

### npm

```bash
npm install -g @pep/browse-cli
```

### From source

```bash
git clone https://github.com/PepijnSenders/browse-cli
cd browse-cli
bun install
bun run build
npm link
```
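
Whichever install method you use, you can verify the CLI is on your PATH:

```bash
browse --version
```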

## Quick Start

### 1. Install the Chrome Extension

- Open `chrome://extensions` in Chrome
- Enable "Developer mode" (top right)
- Click "Load unpacked"
- Select the `extension/` folder from this package

To find the extension folder after npm install:

```bash
npm root -g  # Shows global node_modules path
# Extension is at: <path>/browse-cli/extension
```

### 2. Start the Daemon

```bash
browse init
```

### 3. Scrape Any Page

```bash
browse https://example.com
```

## Usage

```bash
# Basic usage - outputs markdown
browse <url>

# Output JSON with metadata (url, title, content)
browse <url> --json

# Output pruned HTML instead of markdown
browse <url> --html

# Wait longer for dynamic content (default: 2000ms)
browse <url> --wait 5000

# Scroll for infinite-scroll pages
browse <url> --scroll 3
```

## Commands

| Command | Description |
|---|---|
| `browse <url>` | Scrape URL and output markdown |
| `browse init` | Start the WebSocket daemon |
| `browse stop` | Stop the daemon |
| `browse --help` | Show help |
| `browse --version` | Show version |

## Options

| Option | Description | Default |
|---|---|---|
| `--json` | Output JSON with url, title, and content | - |
| `--html` | Output pruned HTML instead of markdown | - |
| `--wait <ms>` | Wait time after page load | 2000 |
| `--scroll <n>` | Number of scroll iterations | 0 |
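
These options should compose like typical CLI flags, so a slow, infinite-scroll page with structured output might call for something like:

```bash
# Assumes flags combine: wait 5s after load, scroll twice, emit JSON
browse https://example.com --json --wait 5000 --scroll 2
```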

## Examples

### News Articles

```bash
browse https://techcrunch.com/2024/01/15/some-article
```

### Social Media (requires login in browser)

```bash
# Twitter/X
browse https://x.com/elonmusk

# LinkedIn
browse https://linkedin.com/in/satyanadella
```

### Infinite Scroll Pages

```bash
# Scroll 5 times to load more content
browse https://news.ycombinator.com --scroll 5
```

### Get Structured Output

```bash
# JSON with metadata
browse https://example.com --json | jq .

# Output:
# {
#   "url": "https://example.com",
#   "title": "Example Domain",
#   "content": "# Example Domain\n\nThis domain is for..."
# }
```
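
Because `--json` emits a single object per page, it composes well with `jq` in shell loops. A rough sketch for batch scraping (the URL list and filename slug logic are placeholders, not part of the CLI):

```bash
#!/usr/bin/env bash
# Scrape a handful of pages and save each one's markdown to its own file.
for url in https://example.com https://example.org; do
  slug=$(echo "$url" | sed -E 's|https?://||; s|[/.]|-|g')  # crude filename
  browse "$url" --json | jq -r '.content' > "$slug.md"
done
```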

## Claude Code Integration

This package includes a Claude Code skill for natural language web scraping.

### Install the Plugin

```
# In Claude Code, run:
/plugin marketplace add PepijnSenders/browse-cli
/plugin install browse@PepijnSenders-browse-cli
```

### Usage

Once installed, Claude can scrape pages naturally:

```
You: Get the content from https://news.ycombinator.com
Claude: [runs: browse https://news.ycombinator.com]

You: Scrape this Twitter profile and scroll to get more tweets
Claude: [runs: browse https://x.com/openai --scroll 3]
```

## Troubleshooting

### "Connection refused" or "Daemon not running"

Start the daemon:

```bash
browse init
```

### "Extension not connected"

- Make sure the Browse extension is installed in Chrome
- Check that the extension icon shows "Connected" when clicked
- Refresh the page you want to scrape

### "No content returned"

Some pages load content dynamically. Try:

```bash
browse <url> --wait 5000   # Wait longer
browse <url> --scroll 2    # Scroll to trigger lazy loading
```

### "Login required"

Log into the website in your Chrome browser first. The CLI uses your existing browser session.

### Extension not finding the daemon

Make sure port 9222 is available:

```bash
lsof -i :9222   # Check if something else is using the port
browse stop     # Stop any existing daemon
browse init     # Start fresh
```
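
If the port stays wedged by a stale process, a blunter reset is possible (this assumes nothing else on your machine legitimately uses port 9222):

```bash
# Kill whatever holds port 9222, then restart the daemon
browse stop 2>/dev/null
lsof -ti :9222 | xargs kill 2>/dev/null
browse init
```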

## Why Not Just Use Puppeteer/Playwright?

Traditional headless browsers and scrapers fail on modern websites because: