From tinyfish
Search the web for quick answers, fetch URLs as clean markdown, run browser agents to navigate/fill forms/extract data, or control headless browsers via CLI. Ideal for web research, scraping, and automation.
npx claudepluginhub tinyfish-io/tinyfish-cookbook --plugin tinyfish

This skill uses the workspace's default tool permissions.
The complete web toolkit — four tools, one CLI. Start with the lightest tool that can do the job and escalate only when needed.
Automates browser tasks via CLI: navigate pages, extract data, fill forms, click buttons, take screenshots. Supports stealth remote sessions with CAPTCHA solving for protected sites.
Automates browser interactions via CLI using agent-browser: navigate, click, fill forms, snapshot pages, scrape content, manage sessions for AI agent workflows.
Searches the web, scrapes URLs, crawls sites, and interacts with dynamic pages using Firecrawl CLI for clean markdown output.
Before making any TinyFish call, always run BOTH checks:
1. CLI installed?
which tinyfish && tinyfish --version || echo "TINYFISH_CLI_NOT_INSTALLED"
If not installed, stop and tell the user:
Install the TinyFish CLI:
npm install -g @tiny-fish/cli
2. Authenticated?
tinyfish auth status
If not authenticated, stop and tell the user:
You need a TinyFish API key. Get one at: https://agent.tinyfish.ai/api-keys
Then authenticate:
tinyfish auth login
Do NOT proceed until both checks pass.
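Scripted workflows can run both preflight checks before every call. A minimal sketch in Python: only the TINYFISH_CLI_NOT_INSTALLED string comes from the check above; the TINYFISH_NOT_AUTHENTICATED label and the "non-zero exit means not logged in" assumption are ours.

```python
import shutil
import subprocess

def tinyfish_ready() -> str:
    """Run both preflight checks; return "OK" or a failure label."""
    # Check 1: CLI installed?
    if shutil.which("tinyfish") is None:
        return "TINYFISH_CLI_NOT_INSTALLED"
    # Check 2: authenticated? We assume a non-zero exit code from
    # `tinyfish auth status` means the user is not logged in.
    status = subprocess.run(["tinyfish", "auth", "status"],
                            capture_output=True)
    if status.returncode != 0:
        return "TINYFISH_NOT_AUTHENTICATED"
    return "OK"

print(tinyfish_ready())
```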
search → fetch → agent → browser (lightest → heaviest)
| Tool | When to use | Speed | Cost |
|---|---|---|---|
| search | You need to find URLs or get a quick answer about a topic | Fastest | Lowest |
| fetch | You have URLs and need their clean content (articles, docs, product pages) | Fast | Low |
| agent | You need to interact with a page — click, fill forms, navigate, extract structured data from dynamic sites | Slower | Higher |
| browser | Agent isn't enough — you need raw programmatic browser control via CDP | Slowest | Highest |
Research: search → fetch
Search for a topic, then fetch the best results to read their full content.
# 1. Find URLs
tinyfish search query "best React state management libraries 2026"
# 2. Read the top results
tinyfish fetch content get --format markdown "https://result1.com" "https://result2.com"
Deep extraction: search → agent
Search to find the right site, then use agent to interact with it and extract structured data.
# 1. Find the site
tinyfish search query "Nike running shoes official store"
# 2. Automate extraction on it
tinyfish agent run --url "https://nike.com/running" \
"Extract all running shoes as JSON: [{\"name\": str, \"price\": str, \"colors\": [str]}]"
Escalation: fetch → agent
Try fetch first. If the page is dynamic/JS-heavy and fetch returns empty or incomplete content, escalate to agent.
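That escalation can be scripted. A hedged sketch wrapping the CLI with Python's subprocess; the "empty stdout means escalate" heuristic is our assumption, not an official signal from the tool.

```python
import subprocess

def fetch_or_escalate(url: str, goal: str) -> str:
    """Try the cheap fetch first; fall back to agent only if fetch
    fails or returns no content (a heuristic, not an official signal)."""
    fetched = subprocess.run(
        ["tinyfish", "fetch", "content", "get", "--format", "markdown", url],
        capture_output=True, text=True,
    )
    if fetched.returncode == 0 and fetched.stdout.strip():
        return fetched.stdout          # fetch was enough
    agent = subprocess.run(            # escalate to the heavier tool
        ["tinyfish", "agent", "run", "--url", url, goal],
        capture_output=True, text=True,
    )
    return agent.stdout
```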
Full control: agent → browser
If agent can't handle a complex multi-step workflow, spin up a raw browser session and automate it yourself via CDP.
tinyfish search query
Web search. Returns ranked results with titles, URLs, and snippets.
tinyfish search query "<query>" [--location <hint>] [--language <hint>] [--pretty]
--location and --language for geo-targeted results
--pretty for human-readable output
tinyfish search query "best pho in Ho Chi Minh City" --location "Vietnam" --language "en"
tinyfish fetch content get
Fetch clean, extracted content from one or more URLs. Strips ads, nav, boilerplate — returns just the content.
tinyfish fetch content get <urls...> [--format markdown|html|json] [--links] [--image-links] [--pretty]
--format markdown (default) — clean readable text
--format json — structured document tree
--links — include all extracted links from the page
--image-links — include extracted image URLs
Output fields: url, final_url, title, language, author, published_date, text, latency_ms
# Fetch one page as markdown
tinyfish fetch content get --format markdown "https://example.com/article"
# Fetch multiple pages with links
tinyfish fetch content get --links "https://site-a.com" "https://site-b.com" "https://site-c.com"
tinyfish agent run
Run a browser automation using a natural language goal. The agent opens a real browser, navigates, clicks, fills forms, and extracts data.
tinyfish agent run --url <url> "<goal>" [--sync] [--async] [--pretty]
| Flag | Purpose |
|---|---|
| --url <url> | Target URL (bare hostnames get https:// auto-prepended) |
| --sync | Wait for the full result without streaming steps |
| --async | Submit and return immediately |
| --pretty | Human-readable output |
Output: by default, the command streams data: {...} SSE lines. The final result is the event where type == "COMPLETE" and status == "COMPLETED"; the extracted data is in its resultJson field. Read the raw output directly — no script-side parsing is needed.
Always specify the JSON structure you want in the goal:
tinyfish agent run --url "https://example.com/products" \
"Extract all products as JSON array: [{\"name\": str, \"price\": str, \"url\": str}]"
tinyfish agent run --url "https://example.com/search" \
"Search for 'wireless headphones', filter under $50, extract top 5 as JSON: [{\"name\": str, \"price\": str, \"rating\": str}]"
Parallel extraction — when hitting multiple independent sites, make separate calls. Do NOT combine into one goal.
Good — parallel calls (run simultaneously):
tinyfish agent run --url "https://pizzahut.com" \
"Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]"
tinyfish agent run --url "https://dominos.com" \
"Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]"
Bad — single combined call:
# Don't do this — less reliable and slower
tinyfish agent run --url "https://pizzahut.com" \
"Extract prices from Pizza Hut and also go to Dominos..."
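The two "good" parallel calls above can be launched concurrently from a script. A sketch using a thread pool, with the site URLs and goal taken from the example; one independent run per site, never one combined goal.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

GOAL = 'Extract pizza prices as JSON: [{"name": str, "price": str}]'
SITES = ["https://pizzahut.com", "https://dominos.com"]

def run_agent(url: str) -> str:
    """One independent agent run per site."""
    result = subprocess.run(
        ["tinyfish", "agent", "run", "--url", url, GOAL],
        capture_output=True, text=True,
    )
    return result.stdout

def run_all(urls):
    # Separate, simultaneous calls — not one combined goal.
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return list(pool.map(run_agent, urls))
```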
Managing runs:
tinyfish agent run list [--status PENDING|RUNNING|COMPLETED|FAILED|CANCELLED] [--limit N]
tinyfish agent run get <run_id>
tinyfish agent run cancel <run_id>
Batch operations — submit many runs from a CSV file (url,goal columns):
tinyfish agent batch run --input runs.csv
tinyfish agent batch list
tinyfish agent batch get <batch_id>
tinyfish agent batch cancel <batch_id>
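A minimal runs.csv for the batch command might look like this — the column names come from the note above; the rows are illustrative:

```csv
url,goal
https://pizzahut.com,"Extract pizza prices as JSON: [{""name"": str, ""price"": str}]"
https://dominos.com,"Extract pizza prices as JSON: [{""name"": str, ""price"": str}]"
```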
tinyfish browser session create
Spin up a remote browser instance. Returns a CDP WebSocket URL for programmatic control.
tinyfish browser session create [--url <url>] [--pretty]
--url optionally navigates to a page after creation
Returns session_id, cdp_url (WebSocket), and base_url
Use cdp_url with Playwright, Puppeteer, or any CDP client
tinyfish browser session create --url "https://example.com"
# Returns: { session_id, cdp_url: "wss://...", base_url: "https://..." }
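The JSON shape shown above can be consumed directly. A sketch that parses a canned response — the field names come from this doc, but the values are made up — and notes where a CDP client would attach:

```python
import json

# Canned stand-in for `tinyfish browser session create` output;
# field names from the doc, values illustrative.
raw = ('{"session_id": "sess_abc123", '
       '"cdp_url": "wss://browser.example/cdp/abc123", '
       '"base_url": "https://browser.example"}')

session = json.loads(raw)

# Hand cdp_url to any CDP client, e.g. Playwright (not run here):
#   browser = playwright.chromium.connect_over_cdp(session["cdp_url"])
print(session["cdp_url"])
```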
--pretty for human-readable output. Default is JSON.
--debug on the root command or set TINYFISH_DEBUG=1 to log HTTP requests to stderr.