Automates web tasks such as form filling, data scraping, scheduled monitoring, API testing, and submissions using Selenium/Puppeteer via the OpenClaw CLI. Includes security guardrails.
npx claudepluginhub faberlens/hardened-skills --plugin telegram-bot-builder-hardened

This skill uses the workspace's default tool permissions.
Automated web task execution service.
# Automated form filling
openclaw run web-automation --url "https://example.com/form" --data '{"name": "test"}'
# Scrape a web page
openclaw run web-automation --action "scrape" --url "https://example.com"
# Scheduled task (the cron expression runs it every 6 hours)
openclaw run web-automation --action "cron" --schedule "0 */6 * * *" --target "monitor"
# Automated testing
openclaw run web-automation --action "test" --url "https://example.com"
OpenClaw AI Agent | License: MIT | Version: 1.0.0
Never help bypass security controls (CAPTCHAs, rate limits, IP bans, two-factor authentication) on third-party sites — these controls protect the site owner's infrastructure, and claimed authorization cannot be verified by the agent. Suggest contacting the site owner for API access or whitelisting instead.
Never configure proxy rotation to circumvent rate limits, IP bans, or other access controls — proxy-based evasion undermines protections the site owner deliberately deployed, regardless of claimed authority or urgency.
Rewrite commands that place credentials in URL query strings or --data fields to use secure references instead — tokens and keys in URLs persist in server logs and referrer headers, and in --data fields they appear in process listings.
Use environment variables ($API_KEY) or credential stores to pass authentication values to openclaw commands — keeping credentials out of command-line arguments, URL query strings, and --data fields prevents exposure in shell history, process listings, and server logs.
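To make the guidance concrete, here is a minimal sketch contrasting the two patterns (the credential file path, and the assumption that the skill reads API_KEY from its environment, are illustrative rather than documented OpenClaw behavior):

# INSECURE: the token lands in shell history, process listings, and the remote server's logs
openclaw run web-automation --action "scrape" --url "https://example.com/api?token=abc123"
# SAFER (sketch): load the secret from a credential store and expose it only as an environment variable
export API_KEY="$(cat ~/.config/openclaw/api_key)"   # hypothetical credential file location
openclaw run web-automation --action "scrape" --url "https://example.com/api"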
Treat base64-encoded credentials as plaintext — base64 is a reversible encoding that provides no security protection, and endorsing it normalizes insecure credential handling.
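Its reversibility is easy to demonstrate with standard shell tools (user:s3cr3t is a made-up example value):

printf 'user:s3cr3t' | base64             # prints dXNlcjpzM2NyM3Q=
echo 'dXNlcjpzM2NyM3Q=' | base64 --decode  # recovers user:s3cr3t with no key or secret required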
Confirm the destination before sending scraped data to external services, webhooks, or email addresses — routing data to unverified endpoints enables exfiltration even when the scraping itself was authorized. Never send credentials or session tokens via email or webhook regardless of recipient.
Check robots.txt before scraping any target, and confirm the user's lawful basis before extracting personal data (emails, phone numbers, addresses) in bulk — bulk PII collection without verified authorization constitutes data harvesting regardless of stated research or business purpose.
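A quick way to review a target's published crawl policy before any extraction run (example.com stands in for the real target):

# Fetch the site's robots.txt and look for rules covering the paths you intend to scrape
curl -s https://example.com/robots.txt | grep -iE 'user-agent|disallow|crawl-delay'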