From brightdata-pack
Scrape structured data from JavaScript-rendered pages, SPAs, or interactive sites using Bright Data Scraping Browser with Playwright or Puppeteer.
```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin brightdata-pack
```
Use Bright Data's Scraping Browser to scrape JavaScript-rendered pages. The Scraping Browser works like a regular Playwright/Puppeteer browser but routes through Bright Data's proxy infrastructure with built-in CAPTCHA solving, fingerprint management, and automatic retries.
Related skills in this pack:
- Integrates Bright Data APIs for production web scraping, SERP results, structured extraction, and browser automation with best practices, CLI setup, and auth patterns.
- Provides reference architecture and project layout for Bright Data scraping in Node.js/TypeScript, including proxies, clients, browsers, scrapers, pipelines, and storage.
- Extracts structured data from websites using Python browser automation with openbrowser-ai. Handles JS-rendered content, pagination, multi-page scraping, and data processing with pandas.
```bash
brightdata-install-auth setup
npm install playwright
npx playwright install chromium
```
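The examples below read credentials from the environment via `dotenv`. Assuming the three variables used in the code are all that is required, a `.env` file would look like this (placeholder values):

```
BRIGHTDATA_CUSTOMER_ID=your_customer_id
BRIGHTDATA_ZONE=your_scraping_browser_zone
BRIGHTDATA_ZONE_PASSWORD=your_zone_password
```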
```ts
// scraping-browser.ts
import { chromium } from 'playwright';
import 'dotenv/config';

const { BRIGHTDATA_CUSTOMER_ID, BRIGHTDATA_ZONE, BRIGHTDATA_ZONE_PASSWORD } = process.env;
const AUTH = `brd-customer-${BRIGHTDATA_CUSTOMER_ID}-zone-${BRIGHTDATA_ZONE}:${BRIGHTDATA_ZONE_PASSWORD}`;
const BROWSER_WS = `wss://${AUTH}@brd.superproxy.io:9222`;

async function scrapeWithBrowser(url: string) {
  console.log('Connecting to Scraping Browser...');
  const browser = await chromium.connectOverCDP(BROWSER_WS);
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'domcontentloaded', timeout: 60000 });

    // Wait for dynamic content to load
    await page.waitForSelector('body', { timeout: 30000 });

    // Extract structured data
    const data = await page.evaluate(() => ({
      title: document.title,
      metaDescription: document.querySelector('meta[name="description"]')?.getAttribute('content') || '',
      h1: document.querySelector('h1')?.textContent?.trim() || '',
      links: Array.from(document.querySelectorAll('a[href]')).slice(0, 20).map(a => ({
        text: a.textContent?.trim(),
        href: a.getAttribute('href'),
      })),
    }));

    console.log('Scraped data:', JSON.stringify(data, null, 2));
    return data;
  } finally {
    await browser.close();
  }
}

scrapeWithBrowser('https://example.com').catch(console.error);
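Connection failures are often just a missing or empty credential. A small helper (hypothetical, not part of any Bright Data SDK) can validate the three environment variables and build the WebSocket endpoint before connecting, so misconfiguration fails fast with a clear message:

```typescript
// build-endpoint.ts — hypothetical helper, not part of any Bright Data SDK
interface BrightDataEnv {
  BRIGHTDATA_CUSTOMER_ID?: string;
  BRIGHTDATA_ZONE?: string;
  BRIGHTDATA_ZONE_PASSWORD?: string;
}

// Builds the Scraping Browser WebSocket endpoint, throwing early
// if any required variable is missing or empty.
function buildBrowserEndpoint(env: BrightDataEnv): string {
  const { BRIGHTDATA_CUSTOMER_ID, BRIGHTDATA_ZONE, BRIGHTDATA_ZONE_PASSWORD } = env;
  for (const [name, value] of Object.entries({
    BRIGHTDATA_CUSTOMER_ID,
    BRIGHTDATA_ZONE,
    BRIGHTDATA_ZONE_PASSWORD,
  })) {
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
  }
  const auth = `brd-customer-${BRIGHTDATA_CUSTOMER_ID}-zone-${BRIGHTDATA_ZONE}:${BRIGHTDATA_ZONE_PASSWORD}`;
  return `wss://${auth}@brd.superproxy.io:9222`;
}
```

Calling `buildBrowserEndpoint(process.env)` at startup replaces the inline `AUTH` template string and turns an opaque "WebSocket connection failed" into a named missing variable.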
```ts
// scrape-products.ts — real-world example
import { chromium, Page } from 'playwright';
import 'dotenv/config';

interface Product {
  name: string;
  price: string;
  rating: string;
  url: string;
}

const AUTH = `brd-customer-${process.env.BRIGHTDATA_CUSTOMER_ID}-zone-${process.env.BRIGHTDATA_ZONE}:${process.env.BRIGHTDATA_ZONE_PASSWORD}`;

async function scrapeProducts(searchUrl: string): Promise<Product[]> {
  const browser = await chromium.connectOverCDP(`wss://${AUTH}@brd.superproxy.io:9222`);
  const page = await browser.newPage();
  try {
    await page.goto(searchUrl, { waitUntil: 'networkidle', timeout: 90000 });

    // Scroll to trigger lazy-loaded content
    await autoScroll(page);

    const products = await page.evaluate(() => {
      return Array.from(document.querySelectorAll('[data-testid="product-card"]')).map(card => ({
        name: card.querySelector('.product-title')?.textContent?.trim() || '',
        price: card.querySelector('.price')?.textContent?.trim() || '',
        rating: card.querySelector('.rating')?.textContent?.trim() || '',
        url: card.querySelector('a')?.getAttribute('href') || '',
      }));
    });
    return products;
  } finally {
    await browser.close();
  }
}

async function autoScroll(page: Page): Promise<void> {
  await page.evaluate(async () => {
    await new Promise<void>((resolve) => {
      let totalHeight = 0;
      const distance = 300;
      const timer = setInterval(() => {
        window.scrollBy(0, distance);
        totalHeight += distance;
        if (totalHeight >= document.body.scrollHeight) {
          clearInterval(timer);
          resolve();
        }
      }, 200);
    });
  });
}
```
```ts
// scraping-browser-puppeteer.ts
import puppeteer from 'puppeteer-core';
import 'dotenv/config';

const AUTH = `brd-customer-${process.env.BRIGHTDATA_CUSTOMER_ID}-zone-${process.env.BRIGHTDATA_ZONE}:${process.env.BRIGHTDATA_ZONE_PASSWORD}`;

async function scrapeWithPuppeteer(url: string) {
  const browser = await puppeteer.connect({
    browserWSEndpoint: `wss://${AUTH}@brd.superproxy.io:9222`,
  });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'domcontentloaded', timeout: 60000 });
    const title = await page.title();
    console.log('Page title:', title);
  } finally {
    await browser.close();
  }
}
```
| Error | Cause | Solution |
|---|---|---|
| WebSocket connection failed | Wrong zone or credentials | Verify the Scraping Browser zone is active and the credentials are correct |
| Timeout 60000ms exceeded | Slow page load | Increase the timeout; use `domcontentloaded` instead of `networkidle` |
| Target closed | Browser disconnected | Implement retry logic; browser sessions are ephemeral |
| Navigation failed | Site blocked the request | Scraping Browser handles this; increase the timeout |
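Because sessions are ephemeral, "Target closed" is best handled by re-running the whole scrape attempt. A generic retry wrapper with exponential backoff (a minimal sketch; the attempt count and delays are arbitrary defaults) is enough for most cases:

```typescript
// Retries an async operation with exponential backoff.
// Each attempt should open (and close) its own browser connection,
// since a disconnected Scraping Browser session cannot be reused.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Backoff doubles each attempt: 1s, 2s, 4s, ...
        const delay = baseDelayMs * 2 ** (attempt - 1);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}

// Usage: withRetry(() => scrapeProducts('https://example.com/search?q=laptops'))
```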
For SERP API scraping, see brightdata-core-workflow-b.