Automates browser tasks via screenshots using Midscene and headless Puppeteer. For navigation, scraping, form filling, UI testing, screenshots, workflows, and Chrome CDP connections.
Install: npx claudepluginhub web-infra-dev/midscene-skills
> **CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:**
- Never run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
- Run only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
- Allow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex `act` commands may need even longer.
- Always report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.
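The rules above boil down to a strict one-command-at-a-time loop: run, read the output, then decide. A minimal sketch of that loop (the `run_midscene` stub is this sketch's own and only echoes the command it would run, so no browser is launched):

```shell
# Sketch of the synchronous screenshot-analyze-act loop.
# run_midscene stands in for a real, foreground npx invocation.
run_midscene() {
  echo "would run: npx -y @midscene/web@1 $*"
}

run_midscene connect --url https://example.com             # step 1: open the page
run_midscene take_screenshot                               # step 2: read the image before acting
run_midscene act --prompt "click the blue Submit button"   # step 3: act on what was seen
```

Each call returns before the next one starts; never append `&` or chain steps with `&&` into one shot.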
Automate web browsing using npx -y @midscene/web@1. By default, launches a headless Chrome via Puppeteer that persists across CLI calls — no session loss between commands. Also supports CDP mode and Bridge mode to connect to an existing Chrome browser. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots.
What `act` can do: inside a single `act` call in the browser, Midscene can click, right-click, double-click, hover, type or clear text, press keys, scroll, drag, long-press, and continue through multi-step page flows based on what is currently visible. When touch input is enabled, it can also handle swipe- or pinch-style interactions on touch-oriented pages.
This skill has three modes. Choose based on the user's intent:
| Mode | When to use | How it works |
|---|---|---|
| Puppeteer (default) | User wants to browse a URL, scrape data, test UI — no need for their own browser | Launches a new headless Chrome, isolated from user's browser |
| CDP mode | User says "connect to my Chrome", "control my browser", "CDP", "remote debugging", or wants to operate their existing browser. Also use when the task implicitly requires login state (e.g., "check my orders", "open my dashboard", "look at my account") | Connects to user's Chrome via DevTools Protocol. Requires remote debugging enabled (chrome://inspect > "Allow remote debugging"). No extension needed |
| Bridge mode | User explicitly mentions "bridge", "extension", or has Midscene Chrome Extension installed and prefers to use it | Connects to user's Chrome via the Midscene Chrome Extension |
CDP vs Bridge: Both control the user's real Chrome with login sessions preserved. CDP only needs a Chrome setting toggle; Bridge needs a Chrome Extension installed. If the user doesn't specify, prefer CDP mode as it has fewer prerequisites.
Before using CDP or Bridge mode, run a quick precheck to verify the target is reachable. This avoids long timeouts when the user hasn't enabled remote debugging or installed the extension.
# CDP precheck (port 9222, 2s timeout) — returns "101" if available
curl -s --max-time 2 -o /dev/null -w "%{http_code}" -H "Upgrade: websocket" -H "Connection: Upgrade" -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" http://127.0.0.1:9222/devtools/browser
# Bridge precheck (port 3766, 2s timeout) — returns "200" or "400" if extension is listening
curl -s --max-time 2 -o /dev/null -w "%{http_code}" "http://127.0.0.1:3766/socket.io/?EIO=4&transport=polling"
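Both prechecks share the same curl shape, so they can be wrapped in a tiny helper that is easy to branch on (the `probe` name is this sketch's own; curl reports `000` when the endpoint is unreachable):

```shell
# Return the HTTP status code for an endpoint ("000" = unreachable).
# Extra curl arguments (e.g. the WebSocket upgrade headers for the CDP
# check) can be passed after the URL.
probe() {
  url=$1; shift
  curl -s --max-time 2 -o /dev/null -w "%{http_code}" "$@" "$url"
}

# Bridge check:
# probe "http://127.0.0.1:3766/socket.io/?EIO=4&transport=polling"
```

Note the quoted URL: an unquoted `&` would background the curl command in the shell.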
How to use precheck results:
- 101 → CDP mode is available; use `--cdp`.
- 200 or 400 → the Bridge extension is listening; use `--bridge`.

Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a .env file in the current working directory (Midscene loads .env automatically):
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
Example: Gemini (Gemini-3-Flash)
MIDSCENE_MODEL_API_KEY="your-google-api-key"
MIDSCENE_MODEL_NAME="gemini-3-flash"
MIDSCENE_MODEL_BASE_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
MIDSCENE_MODEL_FAMILY="gemini"
Example: Qwen 3.5
MIDSCENE_MODEL_API_KEY="your-aliyun-api-key"
MIDSCENE_MODEL_NAME="qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
MIDSCENE_MODEL_FAMILY="qwen3.5"
MIDSCENE_MODEL_REASONING_ENABLED="false"
# If using OpenRouter, set:
# MIDSCENE_MODEL_API_KEY="your-openrouter-api-key"
# MIDSCENE_MODEL_NAME="qwen/qwen3.5-plus"
# MIDSCENE_MODEL_BASE_URL="https://openrouter.ai/api/v1"
Example: Doubao Seed 2.0 Lite
MIDSCENE_MODEL_API_KEY="your-doubao-api-key"
MIDSCENE_MODEL_NAME="doubao-seed-2-0-lite"
MIDSCENE_MODEL_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
MIDSCENE_MODEL_FAMILY="doubao-seed"
Commonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.
If the model is not configured, ask the user to set it up. See Model Configuration for supported providers.
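A quick preflight check can catch a missing configuration before any browser command runs. A sketch (the `check_midscene_env` name is this sketch's own; the four variables are the ones listed above):

```shell
# Report which required Midscene model variables are unset or empty.
check_midscene_env() {
  missing=0
  for v in MIDSCENE_MODEL_API_KEY MIDSCENE_MODEL_NAME \
           MIDSCENE_MODEL_BASE_URL MIDSCENE_MODEL_FAMILY; do
    eval "val=\${$v:-}"            # indirect lookup of the variable named in $v
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}
```

If it prints anything, ask the user to complete the model setup before continuing.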
Use CDP mode to control the user's existing Chrome browser. The default CDP endpoint is ws://127.0.0.1:9222/devtools/browser (port 9222 is Chrome's standard remote debugging port). If the user specifies a different port, replace 9222 accordingly.
Add --cdp <ws-endpoint> to every command:
npx -y @midscene/web@1 connect --cdp ws://127.0.0.1:9222/devtools/browser --url https://example.com
npx -y @midscene/web@1 act --cdp ws://127.0.0.1:9222/devtools/browser --prompt "click the button"
npx -y @midscene/web@1 take_screenshot --cdp ws://127.0.0.1:9222/devtools/browser
npx -y @midscene/web@1 disconnect --cdp ws://127.0.0.1:9222/devtools/browser
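Since the same `--cdp <ws-endpoint>` must appear on every command, a small wrapper avoids repeating it (the `MIDSCENE_CDP` variable and `midscene_cdp` name are conventions of this sketch, not of the CLI; the wrapper echoes the command instead of executing it):

```shell
MIDSCENE_CDP="ws://127.0.0.1:9222/devtools/browser"

# Prefix any subcommand with the shared CDP endpoint.
midscene_cdp() {
  cmd=$1; shift
  echo "npx -y @midscene/web@1 $cmd --cdp $MIDSCENE_CDP $*"
  # To execute for real, replace the echo above with:
  # npx -y @midscene/web@1 "$cmd" --cdp "$MIDSCENE_CDP" "$@"
}

midscene_cdp take_screenshot
```

Adjust `MIDSCENE_CDP` once if the user runs Chrome on a non-default port.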
- `disconnect` releases the connection but does NOT close the browser. There is no `close` command in CDP mode.
- `connect --url` navigates the existing active tab instead of opening a new tab.
- `connect` without `--url` attaches to the current active tab without navigating.
- To enable remote debugging, open chrome://inspect in Chrome and turn on "Allow remote debugging".

Use Bridge mode when the user explicitly mentions "bridge", "extension", or has the Midscene Chrome Extension installed. Add --bridge to every command:
npx -y @midscene/web@1 --bridge connect --url https://example.com
npx -y @midscene/web@1 --bridge act --prompt "click the button"
npx -y @midscene/web@1 --bridge take_screenshot
npx -y @midscene/web@1 --bridge disconnect
`disconnect` only closes the CLI-side bridge connection, not the browser or tabs.

In Puppeteer (default) mode, start by connecting to a URL:

npx -y @midscene/web@1 connect --url https://example.com
npx -y @midscene/web@1 take_screenshot
After taking a screenshot, read the saved image file to understand the current page state before deciding the next action.
Use act to interact with the page and get the result. It autonomously handles all UI interactions internally — clicking, typing, scrolling, hovering, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe what you want to do and the desired effect in natural language:
# specific instructions
npx -y @midscene/web@1 act --prompt "click the Login button and fill in the email field with 'user@example.com'"
npx -y @midscene/web@1 act --prompt "scroll down and click the Submit button"
# or target-driven instructions
npx -y @midscene/web@1 act --prompt "click the country dropdown and select Japan"
When the user provides a screenshot, icon, logo, or reference image and wants an exact visual match, prefer tap --locate instead of a generic act --prompt. Pass --locate as JSON. The prompt describes the target, images supplies named reference images, and convertHttpImage2Base64: true is useful when the image URL may not be directly accessible to the model.
npx -y @midscene/web@1 tap --locate '{
"prompt": "tap the area contains the image",
"images": [
{
"name": "target image",
"url": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png"
}
],
"convertHttpImage2Base64": true
}'
The same locate JSON shape also works for other commands that accept a locate parameter.
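Because the shape is reusable, the payload can live in a shell variable and be passed to any command that accepts `--locate`. A sketch (the command is echoed rather than executed, to stay offline; the prompt text is this sketch's own):

```shell
# One locate payload, reusable across tap and other --locate commands.
LOCATE='{
  "prompt": "the element matching the reference image",
  "images": [
    { "name": "target image", "url": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" }
  ],
  "convertHttpImage2Base64": true
}'

echo npx -y @midscene/web@1 tap --locate "$LOCATE"
```

Single quotes around the JSON keep the inner double quotes intact for the CLI.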
Disconnect from the page but keep the browser running:
npx -y @midscene/web@1 disconnect
Close the browser completely when finished (Puppeteer mode only):
npx -y @midscene/web@1 close
The generated HTML report is recommended for human reading first. It includes step-by-step execution details and replay videos for each operation, which makes it much easier to understand what happened and troubleshoot problems.
If another skill or tool needs to consume the report, first convert it with report-tool from the same platform CLI package. Prefer Markdown for LLM-based workflows. Use JSON when the report needs to be processed programmatically.
npx -y @midscene/web@1 report-tool --action to-markdown --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-markdown
npx -y @midscene/web@1 report-tool --action split --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-data
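The `...` in the paths above is the run-specific report directory. Assuming the layout shown (one subdirectory per run under `midscene_run/report`), the newest run can be found by modification time; the `latest_report` helper is this sketch's own:

```shell
# Print the most recently modified run directory under the given report root.
latest_report() {
  ls -td "$1"/*/ 2>/dev/null | head -n 1
}

# Usage sketch:
# latest_report ./midscene_run/report
```

Pass its output as the `--htmlPath` directory when converting the latest report.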
The browser persists across CLI calls via a background Chrome process. Follow this pattern:
- Run `connect --url` before any interaction.
- Use `act` to perform the desired action, with specific or target-driven instructions.
- Be descriptive: instead of "the button", say "the blue Submit button in the contact form".
- Prefer visual descriptions over selectors: "the red Buy Now button" instead of "#buy-btn".
- Run `close` to shut down the browser and free resources when finished.
- Combine consecutive operations into one `act` command: when performing consecutive operations within the same page, combine them into one `act` prompt instead of splitting them into separate commands. For example, "fill in the email and password fields, then click the Login button" should be a single `act` call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.
- Use `tap --locate` when a reference image is provided: if the user shares a screenshot, icon, or logo and wants that exact visual target, use `tap --locate` with a multimodal locate JSON object such as `{ "prompt": "...", "images": [...] }` instead of relying only on `act --prompt`.

Example — Dropdown selection:
npx -y @midscene/web@1 act --prompt "click the country dropdown and select Japan"
npx -y @midscene/web@1 take_screenshot
Example — Form interaction:
npx -y @midscene/web@1 act --prompt "fill in the email field with 'user@example.com' and the password field with 'pass123', then click the Log In button"
npx -y @midscene/web@1 take_screenshot
Troubleshooting:
- Missing API key: make sure your .env file contains `MIDSCENE_MODEL_API_KEY=<your-key>`.
- `@midscene/*` dependency version outdated:
  - Check installed versions: `npm ls @midscene/web @midscene/core @midscene/shared` (or `pnpm why @midscene/web`).
  - Check the latest published versions: `npm view @midscene/web version`, `npm view @midscene/core version`, `npm view @midscene/shared version`.
  - Update: `npm i @midscene/web@latest @midscene/core@latest @midscene/shared@latest`.