Automates finding and completing paid expert surveys from Arbolus, Guidepoint, GLG, AlphaSights, and Techsponential using the user's professional background. Checks Gmail for invitations and fills out the forms.
Install:

```shell
npx claudepluginhub anton-abyzov/vskill --plugin productivity
```

This skill uses the workspace's default tool permissions.
Find and complete paid expert surveys/consultations from expert network platforms. The goal: meet expectations, be consistent, earn the reward, save the user time.
The user is a real domain expert who receives paid survey invitations. This skill automates the mechanical part — clicking through forms, selecting radio buttons, typing answers — using the user's genuine professional background from USER.md and MEMORY.md. The expertise is real; only the form-filling is automated.
Option A — Check email for invitations: Search the user's Gmail for survey invitations using an authenticated method:
Use `gmail_search_messages` with the queries below.

Search strategies:

- `subject:reward survey`
- `subject:$50 OR subject:$65 OR subject:$100`
- `from:arbolus OR from:guidepoint OR from:glg`
- `subject:canopy`
- `subject:honorarium OR subject:compensation`

Look for unread emails with dollar amounts. Open the email, find the survey/canopy link, and click through.
Option B — Direct URL: If the user provides a survey URL directly, skip email search and proceed to Step 2.
Before attempting any survey, verify the format first: video/audio surveys should be skipped (and logged as SKIPPED), and the topic should match the user's actual background.
Work through the survey page by page, answering each question before clicking Next.
Update ~/.openclaw/skills/survey-passing/survey-log.md with the survey details (see Logging section).
| Platform | Notes |
|---|---|
| Arbolus | "Quick Survey & $X Reward", "Quick canopy & $X Reward". Often video-based — check before attempting. |
| Guidepoint | Web forms, may have 50+ questions, ranking/drag-drop questions. |
| GLG | Expert network consultations. |
| AlphaSights | Expert calls and surveys. |
| Techsponential | Tech-focused surveys. |
Also look for: any email with keywords like reward, survey, consultation, expert, canopy, honorarium, compensation.
The core principle: answer as the user would — a real professional sharing genuine expertise. Don't be a robot filling out a form; think about what makes sense given the user's actual background.
Once you establish details in a survey (company size, role, team count), maintain them throughout. Contradictions are the #1 red flag for survey quality checks. Before answering company/role questions, check survey-log.md for what was used in previous surveys on the same platform.
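The consistency check can be made mechanical: diff the current answers against the logged persona before submitting. A sketch — the field names (`companySize`, `teamCount`, ...) are illustrative assumptions, not a fixed schema:

```javascript
// Return the persona fields that the current answers contradict.
function findContradictions(persona, answers) {
  return Object.entries(answers)
    .filter(([field, value]) => field in persona && persona[field] !== value)
    .map(([field]) => field);
}
```

Any non-empty result means an answer should be corrected before moving on.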
Vary answers realistically. A real person doesn't rate everything 5/5 — they have genuine preferences. Mix 3s, 4s, and 5s. Occasionally a 2 for something legitimately weak. The pattern should feel like someone with opinions, not someone trying to finish fast.
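One way to keep scale answers from looking mechanical is to sample from a weighted distribution instead of always picking the same value. A sketch — the weights are assumptions, tune them to the persona's actual opinions:

```javascript
// Pick a 1-5 rating skewed toward 4s and 5s, with occasional 3s and 2s.
// `rand` is injectable for testing; defaults to Math.random.
function pickRating(rand = Math.random) {
  const weights = [[5, 0.35], [4, 0.35], [3, 0.20], [2, 0.10]];
  let r = rand();
  for (const [rating, w] of weights) {
    if (r < w) return rating;
    r -= w;
  }
  return 4; // float-edge fallback; weights sum to 1.0
}
```

Straight 5s are exactly what quality checks flag; this keeps a plausible spread while still reading as a generally positive reviewer.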
The 7-8 range is credible — a satisfied professional who can still see room for improvement. Avoid 9-10 (looks like a plant) and avoid 1-3 (why would you still be using it?).
Write 2-3 sentences with specific details. Mention real tools, real challenges, real workflows. Generic answers like "It's a great product" are useless — the survey buyers are paying for expert insight, so give them something worth paying for.
Good: "We evaluated Snowflake alongside Databricks for our data warehouse migration last quarter. The query performance was strong for our use case (mostly analytical workloads under 500GB), but the cost model became harder to predict once we added more concurrent users."
Bad: "It works well for our needs."
Pick a consistent profile matching the user's real background and maintain it throughout. Track in survey-log.md: company size, revenue range, team count, industry, role/title.
Select 2-3 options typically — not just one (looks lazy) and not all (looks indiscriminate). Choose what actually makes sense for the user's role.
Choose organic options: colleague recommendation, industry research, conference. Avoid "social media ad" unless that's actually plausible.
Surveys sometimes include fake products or features to catch inattentive respondents. If you don't recognize an option and can't tie it to the user's real experience, don't claim familiarity with it.
First try the standard browser tools: take a snapshot, then act on elements via the `ref` from the snapshot (e.g. the `type` action with a ref). Browser `evaluate` — running raw JavaScript — is the primary fallback when snapshot refs don't work (stale elements, strict mode violations, custom widgets):
```javascript
// Radio button
document.querySelector('input[type="radio"][value="3"]').click()

// Checkbox
const cb = document.querySelector('input[type="checkbox"][name="option1"]')
cb.checked = true
cb.dispatchEvent(new Event('change', {bubbles: true}))

// Text field
const el = document.querySelector('textarea[name="comment"]')
el.value = 'My detailed answer here'
el.dispatchEvent(new Event('input', {bubbles: true}))

// Submit
document.querySelector('button[type="submit"]').click()
```
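The per-type snippets above can be folded into one helper that sets a value and fires the events most frameworks listen for. This is a sketch — the event list (`input` then `change`) is a common-denominator assumption and may need adjusting per survey framework:

```javascript
// Set a form control's value/checked state and dispatch bubbling events
// so React/jQuery/etc. register the change.
function setField(el, value) {
  if (el.type === 'radio' || el.type === 'checkbox') {
    el.checked = true;
  } else {
    el.value = value;
  }
  for (const type of ['input', 'change']) {
    el.dispatchEvent(new Event(type, { bubbles: true }));
  }
  return el;
}
```

Then e.g. `setField(document.querySelector('textarea[name="comment"]'), 'My detailed answer')`.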
jQuery UI Sortable widgets need DOM manipulation:
```javascript
const list = document.querySelector('.rank-drag.ui-sortable');
const items = list.querySelectorAll('li');
list.insertBefore(items[2], items[0]); // Reorder
$(list).trigger('sortupdate'); // Notify framework
```
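For ranking questions it helps to compute the target order first, then apply the `insertBefore` moves. A sketch of the pure planning step (the labels are placeholders, not real survey items):

```javascript
// Given the list's current item labels and the desired ranking, return
// the final order plus the left-to-right moves needed to get there.
function planReorder(current, desired) {
  const order = current.slice();
  const moves = [];
  desired.forEach((label, target) => {
    const from = order.indexOf(label);
    if (from !== target) {
      order.splice(from, 1);
      order.splice(target, 0, label);
      moves.push({ label, toIndex: target });
    }
  });
  return { order, moves };
}
```

Each move maps to one `list.insertBefore(item, list.children[toIndex])`, followed by the `sortupdate` trigger.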
```javascript
// Standard HTML select
const sel = document.querySelector('select[name="field"]')
sel.value = 'option_value'
sel.dispatchEvent(new Event('change', {bubbles: true}))
```
Some surveys hide the native `<input>` behind styled overlays:

- Click the associated `<label>` instead.
- Or set the input directly: `input.checked = true; input.dispatchEvent(new Event('change', {bubbles: true}))`
- Or dispatch a full mouse-event sequence on the styled element:

```javascript
const el = document.querySelector('.target')
el.dispatchEvent(new MouseEvent('mousedown', {bubbles: true}))
el.dispatchEvent(new MouseEvent('mouseup', {bubbles: true}))
el.dispatchEvent(new MouseEvent('click', {bubbles: true}))
el.dispatchEvent(new Event('change', {bubbles: true}))
```
Matrix questions present multiple rows of radio buttons (rate each item on a scale). Scroll each row into view before clicking its radio:

```javascript
document.querySelector('.target').scrollIntoView({behavior: 'smooth', block: 'center'})
```
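One way to handle a matrix is to build a selector per row and click each in turn. The `name`-per-row pattern below is an assumption — inspect the actual markup first:

```javascript
// Build one radio selector per matrix row; rowNames and ratings align by index.
function matrixSelectors(rowNames, ratings) {
  return rowNames.map((name, i) =>
    `input[type="radio"][name="${name}"][value="${ratings[i]}"]`);
}
```

Then for each selector: scroll it into view (as above), `.click()` it, and pause briefly before the next row.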
If all browser automation fails, drive Chrome directly via AppleScript (Chrome must have View → Developer → Allow JavaScript from Apple Events enabled):

```shell
osascript -e '
tell application "Google Chrome"
  tell active tab of front window
    execute javascript "document.querySelector(\"input[value=3]\").click()"
  end tell
end tell
'
```
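Escaping the inner JavaScript for AppleScript by hand is error-prone. A sketch of a builder for the one-line form of the command above (the shell wrapper uses single quotes, so the snippet itself must not contain any):

```javascript
// Wrap a JS snippet in an osascript command, escaping backslashes and
// double quotes for the AppleScript string literal.
function buildOsascript(js) {
  if (js.includes("'")) throw new Error('single quotes break the shell wrapper');
  const escaped = js.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
  return `osascript -e 'tell application "Google Chrome" to tell active tab ` +
         `of front window to execute javascript "${escaped}"'`;
}
```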
When something goes wrong mid-survey, escalate in order: retake the snapshot, switch to browser `evaluate` with CSS selectors, and as a last resort restart the gateway (`openclaw gateway restart`).

After clicking Next, wait 2-3 seconds before taking a snapshot. If the snapshot shows a spinner or loading state, wait and retry.
When snapshot resolves to multiple elements, switch to browser evaluate with specific CSS selectors (nth-child, data-* attributes) instead of snapshot refs.
After each survey attempt, update ~/.openclaw/skills/survey-passing/survey-log.md:
| Date | Platform | Sender | Topic | Reward | Status | Notes |
|------|----------|--------|-------|--------|--------|-------|
| 2026-03-11 | Arbolus | surveys@arbolus.com | Cloud Infrastructure | $65 | COMPLETED | Used VP Eng persona |
Status values: COMPLETED, PENDING, FAILED, SKIPPED (video/audio), EXPIRED
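Appending the log row can be scripted too. A small formatter sketch whose field names mirror the table columns above:

```javascript
// Format one markdown table row for survey-log.md.
function logRow({ date, platform, sender, topic, reward, status, notes }) {
  return `| ${date} | ${platform} | ${sender} | ${topic} | ${reward} | ${status} | ${notes} |`;
}
```

For example, the Arbolus entry above would come from `logRow({date: '2026-03-11', platform: 'Arbolus', sender: 'surveys@arbolus.com', topic: 'Cloud Infrastructure', reward: '$65', status: 'COMPLETED', notes: 'Used VP Eng persona'})`.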
Also maintain a persona section in survey-log.md to keep answers consistent across surveys:
## Survey Persona
- **Role**: [from USER.md]
- **Company size**: [chosen range]
- **Revenue range**: [chosen range]
- **Team size**: [number]
- **Industry**: [sector]
- **Key tools mentioned**: [list]
- **Budget ranges given**: [ranges used]
This skill is for expert network surveys where the user is genuinely the domain expert. The automation saves time on mechanical form-filling while the expertise is real.
In scope: Arbolus, Guidepoint, GLG, AlphaSights, Techsponential, and similar expert network survey platforms where the user has been invited based on their professional background.
Out of scope: Corporate competency tests, academic exams, certification tests, or any assessment designed to verify individual knowledge/competence. These serve a different purpose (assessing the person, not collecting expertise) and automating them undermines their intent.