Collects diverse opinions on technical topics from developer communities. Aggregates Reddit, HN, Dev.to, Lobsters, ProductHunt, X, Threads. Use for developer reactions and community opinions.
Install: `npx claudepluginhub team-attention/hoyeon --plugin hoyeon`

This skill uses the workspace's default tool permissions.
Collect and synthesize diverse opinions on specific topics from multiple developer communities.
Quickly understand diverse perspectives on technical topics:
| Platform | Method |
|---|---|
| Reddit | Vendored web-search.mjs (chromux) — Google site:reddit.com + enrichment (post body, comments, score) |
| X (Twitter) | Vendored web-search.mjs (chromux) — Google site:x.com + enrichment (tweets, likes, replies) |
| Hacker News | Vendored hn-search.py (python3) — Algolia API, no key needed |
| Dev.to | Vendored web-search.mjs (chromux) — Google site:dev.to + enrichment (article, comments) |
| Lobsters | Vendored web-search.mjs (chromux) — Google site:lobste.rs + enrichment (article, comments) |
| Threads | Vendored web-search.mjs (chromux) — Google site:threads.net + enrichment (posts, replies, likes) |
| ProductHunt | Vendored ph-search.py (python3) — GraphQL API, requires PRODUCT_HUNT_TOKEN env var |
Run all checks in a single Bash call using shell backgrounding (& + wait).
Claude Code executes Bash calls sequentially — multiple Bash tool calls do NOT run in parallel.
The only way to parallelize is within one shell invocation.
mkdir -p /tmp/dev-scan-$$
# Kill existing chromux instance (may be non-headless) and relaunch in headless mode
chromux kill 2>/dev/null || true
chromux launch default --headless 2>/dev/null || true
node skills/dev-scan/vendor/chromux-search/web-search.mjs --check > /tmp/dev-scan-$$/web.txt 2>&1 &
python3 skills/dev-scan/vendor/hn-search/hn-search.py --check > /tmp/dev-scan-$$/hn.txt 2>&1 &
python3 skills/dev-scan/vendor/ph-search/ph-search.py --check > /tmp/dev-scan-$$/ph.txt 2>&1 &
wait
echo "=== Web (chromux) ===" && cat /tmp/dev-scan-$$/web.txt
echo "=== HN ===" && cat /tmp/dev-scan-$$/hn.txt
echo "=== ProductHunt ===" && cat /tmp/dev-scan-$$/ph.txt
rm -rf /tmp/dev-scan-$$
| Result | Action |
|---|---|
| web-search --check → available: true | chromux available — Reddit, X, Dev.to, Lobsters all use Google site: + enrichment |
| web-search --check → available: false | Fall back to WebSearch tool for all Google-based sources |
| hn-search --check → available: true | Hacker News source available |
| hn-search --check → available: false | Fall back to WebSearch for HN |
| ph-search --check → available: true | ProductHunt source available |
| ph-search --check → available: false | Skip ProductHunt (token not set or invalid) |
Report available sources before proceeding. Minimum 1 source required.
Note: Step 0 (dependency check) and Step 1 (query planning) are independent — run Step 0 bash commands and perform Step 1 reasoning in the same message to save a round-trip.
Extract structured components from user request:
- entities: the products/technologies mentioned
- type: comparison | opinion | technology | event

Examples:
- "React 19" → entities: [React 19], type: opinion
- "Bun vs Deno" → entities: [Bun, Deno], type: comparison
- "Redis license" → entities: [Redis], type: event

User requests are often complex or conversational. Before generating platform-specific queries, decompose the request into atomic search concepts that search engines can match effectively.
Why this matters: Search engines match keywords, not intent. A verbose question like "Is React 19's use() hook a viable replacement for useEffect patterns in production apps?" will miss threads titled "use() vs useEffect" or "React 19 hooks review". Decomposition bridges this gap.
Process:
- core: The most concise keyword combination (2-4 words)
- versus: Direct comparison form if applicable ("A vs B")
- opinion: How people ask about it ("A worth it", "A review", "A experience")
- technical: Specific feature/aspect if the question targets one ("A feature X")

Example: "Can React 19's use() hook replace the existing useEffect pattern?"
| Variant | Query |
|---|---|
| core | React 19 use hook |
| versus | use() vs useEffect |
| opinion | React 19 use hook worth it |
| technical | React 19 use hook replace useEffect |
Example: "Is Cursor worth paying for compared to GitHub Copilot?"
| Variant | Query |
|---|---|
| core | Cursor AI editor |
| versus | Cursor vs GitHub Copilot |
| opinion | Cursor worth paying for |
| technical | (not applicable — no specific feature) |
Example: "What happened with the Redis license change"
| Variant | Query |
|---|---|
| core | Redis license |
| versus | (not applicable) |
| opinion | Redis license change reaction |
| technical | Redis SSPL Valkey fork |
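The decomposition in these examples can be approximated with a small helper. This is an illustrative sketch (`build_variants` and its heuristics are assumptions, not part of the skill; actual decomposition is done by reasoning, not a fixed function):

```python
def build_variants(entities, type_, feature=None):
    """Build the core/versus/opinion/technical query variants from extracted entities.
    Illustrative heuristic only."""
    core = " ".join(entities)
    variants = {"core": core, "opinion": f"{core} worth it"}
    if type_ == "comparison" and len(entities) >= 2:
        # direct comparison form: "A vs B"
        variants["versus"] = " vs ".join(entities)
    if feature:
        # only when the question targets a specific feature/aspect
        variants["technical"] = f"{core} {feature}"
    return variants

# e.g. a comparison request such as "Bun vs Deno"
variants = build_variants(["Bun", "Deno"], "comparison")
```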
Map the best variant from Step 1-2 to each platform's search behavior. Store all variants — the retry step (Step 2.5) needs alternate queries if the primary returns 0 results.
| Source | Variable | Best variant | Retry variant | Platform-specific adjustments |
|---|---|---|---|---|
| Reddit | Q_REDDIT | versus or opinion | core | Google site:reddit.com — keep "vs", natural phrasing. Enrichment extracts post body + top comments. |
| X/Twitter | Q_TWITTER | versus or core | opinion | Google site:x.com — short terms. Enrichment extracts tweets + likes + replies. |
| HN | Q_HN | core or technical | core (shorter) | Drop "vs" — Algolia full-text matches better without. |
| Dev.to | Q_DEVTO | opinion or versus | core | Google site:dev.to — add context word (comparison/review/guide) for recall. |
| Lobsters | Q_LOBSTERS | core | core (2 words max) | Google site:lobste.rs — simple terms. Small community, keep broad. |
| Threads | Q_THREADS | opinion or core | core | Google site:threads.net — short-form posts. Similar to X/Twitter, concise queries work best. |
| ProductHunt | Q_PH | core | — | Product names only. Drop generic words. Only if PH relevant (see below). |
ProductHunt relevance check — PH is a product launch community. Only set Q_PH when the query involves specific products, tools, or SaaS (e.g. "Cursor", "Linear", "Supabase vs Firebase"). Skip PH when the topic is abstract/conceptual (e.g. "microservices best practices", "Rust async patterns", "tech layoffs").
Full example: user asks "claude code vs codex"
Decomposition: core=claude code codex, versus=claude code vs codex, opinion=claude code vs codex worth it
| Variable | Variant used | Optimized Query |
|---|---|---|
| Q_REDDIT | versus | claude code vs codex |
| Q_TWITTER | versus | claude code vs codex |
| Q_HN | core | claude code codex |
| Q_DEVTO | versus | claude code vs codex comparison |
| Q_LOBSTERS | core | claude code codex |
| Q_THREADS | opinion | claude code vs codex worth it |
| Q_PH | core | claude code codex |
Extract time period from user request. Default: month.
| User says | TIME_PERIOD | --time value |
|---|---|---|
| (nothing) | month | month / m |
| "last week" | week | week / w |
| "last few days" | week | week / w |
| "this year" | year | year / y |
| "all time" | all | all / a |
Use TIME_PERIOD in all search commands below.
Split into two phases: API sources in parallel (shell backgrounding), then all Google site: sources sequentially (chromux shares one Chrome instance — simultaneous use causes tab conflicts).
Results go to files, not stdout. Enriched JSON can exceed 50KB — piping to stdout hits Claude Code's output limit. Instead, save to files and use the Read tool to access them. This also serves as a log of the scan.
Both Bash calls must share the same temp directory. Generate a stable RUN_ID once and use it in both calls.
Bash call 1 — API sources (parallel):
SESSION_ID="[session ID from UserPromptSubmit hook]"
RUN_ID="dev-scan-$(date +%s)-$RANDOM"
D="$HOME/.hoyeon/$SESSION_ID/tmp/$RUN_ID"
mkdir -p "$D"
echo "$D" > /tmp/dev-scan-current-dir
python3 skills/dev-scan/vendor/hn-search/hn-search.py "{Q_HN}" --count 10 --comments 5 --time {TIME_PERIOD} --json > "$D/hn.json" 2>"$D/hn.err" &
python3 skills/dev-scan/vendor/ph-search/ph-search.py "{Q_PH}" --count 10 --comments 3 --time {TIME_PERIOD} --json > "$D/ph.json" 2>"$D/ph.err" &
wait
echo "RUN_DIR=$D"
for f in "$D"/*.json; do echo "$(basename "$f"): $(wc -c < "$f") bytes, $(python3 -c "import json,sys; d=json.load(open('$f')); print(len(d) if isinstance(d,list) else 'obj')" 2>/dev/null || echo '?') items"; done
Bash call 2 — Google site: sources (sequential via chromux, same Bash call):
D="$(cat /tmp/dev-scan-current-dir)"
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_REDDIT}" --site reddit.com --time {TIME_SHORT} --count 5 --comments 5 --body 300 --json > "$D/reddit.json" 2>"$D/reddit.err"
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_TWITTER}" --site x.com --time {TIME_SHORT} --count 5 --comments 5 --json > "$D/x.json" 2>"$D/x.err"
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_DEVTO}" --site dev.to --time {TIME_SHORT} --count 5 --comments 5 --body 300 --json > "$D/devto.json" 2>"$D/devto.err"
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_LOBSTERS}" --site lobste.rs --time {TIME_SHORT} --count 5 --comments 5 --json > "$D/lobsters.json" 2>"$D/lobsters.err"
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_THREADS}" --site threads.net --time {TIME_SHORT} --count 5 --comments 5 --body 300 --json > "$D/threads.json" 2>"$D/threads.err"
for f in "$D"/*.json; do echo "$(basename "$f"): $(wc -c < "$f") bytes, $(python3 -c "import json,sys; d=json.load(open('$f')); print(len(d) if isinstance(d,list) else 'obj')" 2>/dev/null || echo '?') items"; done
Reading results: Use the Read tool on each $D/{source}.json file. Read the files with the most items first (Reddit, Dev.to tend to be richest). Skip files with 0 items.
TIME_SHORT mapping: month→m, week→w, year→y, all→a (web-search.mjs uses single-letter time codes).
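Both mappings can be expressed as one small lookup; a sketch mirroring the tables above (the phrase matching is a loose assumption):

```python
# TIME_PERIOD (word form, for hn-search.py / ph-search.py)
# -> TIME_SHORT (single-letter form, for web-search.mjs)
TIME_SHORT = {"month": "m", "week": "w", "year": "y", "all": "a"}

def resolve_time(user_phrase):
    """Map a user's time phrase to TIME_PERIOD per the table above. Default: month."""
    phrase = user_phrase.lower()
    if "week" in phrase or "few days" in phrase:
        return "week"
    if "year" in phrase:
        return "year"
    if "all time" in phrase:
        return "all"
    return "month"
```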
Notes:
- Skip any source that failed --check in Step 0 or is not relevant (e.g. skip the PH line if Q_PH is not set).
- If chromux is unavailable, use the WebSearch tool with site: filter for all Google-based sources.
- Don't rm -rf "$D" yet — keep the files until synthesis is complete. Clean up after final output.

After Step 2, check which sources returned 0 results (empty JSON array []). Empty results often mean the query was too specific or the time window too narrow — not that the community has nothing to say.
Retry strategy (one Bash call for all retries):
- Shorten the query: fall back to the core variant (2-3 words). For Lobsters, try just 2 keywords.
- Broaden the time window: if TIME_PERIOD was month, retry with year. If already year or all, skip time broadening.

D="$(cat /tmp/dev-scan-current-dir)"
# Example: HN returned 0, retry with shorter query + broader time
python3 skills/dev-scan/vendor/hn-search/hn-search.py "{Q_HN_RETRY}" --count 10 --comments 5 --time year --json > "$D/hn.json" 2>"$D/hn.err"
# Example: Lobsters returned 0, retry with 2-word query + broader time
node skills/dev-scan/vendor/chromux-search/web-search.mjs "{Q_LOBSTERS_RETRY}" --site lobste.rs --time y --count 5 --comments 5 --json > "$D/lobsters.json" 2>"$D/lobsters.err"
for f in "$D"/*.json; do echo "$(basename "$f"): $(wc -c < "$f") bytes, $(python3 -c "import json,sys; d=json.load(open('$f')); print(len(d) if isinstance(d,list) else 'obj')" 2>/dev/null || echo '?') items"; done
Skip retry if: The topic is genuinely niche for that platform (e.g., Lobsters has very few posts on commercial tools). Note the skip reason in the output.
Max 1 retry per source. If retry also returns 0, move on.
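The retry rule (at most one retry per source, shorter query, broader window) can be sketched in Python. File paths follow the $D/{source}.json convention above; everything else here is illustrative:

```python
import json

BROADER = {"month": "year"}  # month -> year; year/all stay as-is (no further broadening)

def plan_retry(json_path, retry_query, time_period, already_retried):
    """Return retry parameters if a source returned 0 results, else None."""
    if already_retried:  # max 1 retry per source
        return None
    try:
        with open(json_path) as f:
            results = json.load(f)
    except (OSError, ValueError):
        # missing or malformed file counts as 0 results
        results = []
    if results:  # non-empty -> no retry needed
        return None
    return {"query": retry_query, "time": BROADER.get(time_period, time_period)}
```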
| Source | Tool | Notes |
|---|---|---|
| Reddit | web-search.mjs | Google site:reddit.com + enrichment. Extracts: post title, body, author, score, top comments with author/score. |
| X/Twitter | web-search.mjs | Google site:x.com + enrichment. Extracts: tweets, author, handle, likes, time. |
| HN | hn-search.py | Algolia API, no key. Stories with points and top comments. |
| Dev.to | web-search.mjs | Google site:dev.to + enrichment. Extracts: article body, author, tags, comments. |
| Lobsters | web-search.mjs | Google site:lobste.rs + enrichment. Extracts: article body, author, tags, score, comments. |
| Threads | web-search.mjs | Google site:threads.net + enrichment. Extracts: posts, author, replies, likes. Requires chromux login. |
| ProductHunt | ph-search.py | GraphQL API, needs PRODUCT_HUNT_TOKEN. Only for product/tool queries. |
Deduplicate across sources: If the same URL appears in multiple source results, merge them (keep the richer version with more comments/metadata). Cite by the actual platform (Reddit, X, Dev.to), not "Google".
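A sketch of that merge, under the assumption that each enriched result carries a url field and a comments list (field names are assumptions about the enriched JSON):

```python
def dedupe_by_url(results):
    """Merge results sharing a URL, keeping the version with more comments."""
    best = {}
    for r in results:
        # normalize trailing slash so reddit.com/x and reddit.com/x/ collide
        key = r.get("url", "").rstrip("/")
        prev = best.get(key)
        if prev is None or len(r.get("comments", [])) > len(prev.get("comments", [])):
            best[key] = r  # keep the richer record
    return list(best.values())
```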
For every comment extracted from Reddit, X/Twitter, and Threads (Google site: enriched results), tag sentiment:
| Tag | When to apply |
|---|---|
positive | Praise, endorsement, excitement, recommendation |
negative | Criticism, frustration, warning, discouragement |
neutral | Factual statement, question, "it depends" |
mixed | Same comment contains both positive and negative points |
Use these tags downstream in Opinion Classification and Controversy detection — comments with opposing sentiment on the same subtopic signal controversy.
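A minimal version of that controversy signal, assuming comments have already been grouped by subtopic and tagged (the 'subtopic'/'sentiment' field names are assumptions):

```python
from collections import defaultdict

def find_controversies(tagged_comments):
    """Flag subtopics that attract both positive and negative comments."""
    sentiments = defaultdict(set)
    for c in tagged_comments:
        sentiments[c["subtopic"]].add(c["sentiment"])
    # opposing sentiment on the same subtopic is the controversy signal
    return [t for t, s in sentiments.items() if {"positive", "negative"} <= s]
```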
Classify collected opinions by sentiment group:
- Supporting views (positive comments)
- Critical views (negative comments)
- Nuanced takes (neutral/mixed)

Consensus: identify opinions repeatedly appearing across communities.
Controversy: find points where opinions diverge.
Notable perspectives: find unique or deep insights.
Core Principle: All opinions must have inline source. No opinions without sources. The report is designed for quick scanning AND decision-making — TL;DR first, details after.
## TL;DR
> [1-2 sentence summary of overall community sentiment and the key takeaway.
> e.g. "The community is broadly positive about X, but many suggest Z is a better choice in Y situations."]
## Sentiment Overview
Positive ████████░░ 75% | Negative ██░░░░░░░░ 20% | Neutral █░░░░░░░░░ 5%
Sources: Reddit N, X N, HN N, Dev.to N, Lobsters N, Threads N
---
## Key Findings
### Consensus
1. **[Opinion Title]**
- [Detailed description]
- Sources: [Reddit](url), [HN](url)
2. **[Opinion Title]**
- [Details]
- Source: [Dev.to](url)
(at least 5)
---
### Controversy
1. **[Controversy Topic]**
- Pro: "[Quote]" - [Source](url)
- Con: "[Quote]" - [Source](url)
- Context: [Why opinions diverge]
(at least 3)
---
### Notable Perspective
1. **[Insight Title]**
> "[Original quote or key sentence]"
- [Why this is notable]
- Source: [Platform](url)
(at least 3)
---
## Decision Signal
- **If you need [topic]**: [Clear recommendation based on majority opinion]
- **Watch out for**: [Top 2-3 risks/concerns frequently mentioned]
- **Alternatives worth considering**: [Other options the community recommends, with context on when they fit better]
- **Confidence**: High/Medium/Low — based on volume and agreement across sources
Calculate sentiment from comment-level tags (Step 3-0). The bar uses block chars:
- █ = 10% filled, ░ = 10% empty
- Cite single sources as Source: [Platform](url), multiple as Sources: [Reddit](url), [HN](url)
- Quote original wording in "..." format when possible

| Situation | Response |
|---|---|
| 0 results for a source | Retry once with alternate query variant + broader time (Step 2.5). Skip after 2nd failure. |
| chromux unavailable | Fall back to WebSearch tool with site: filter for all Google-based sources |
| web-search enrichment timeout on URL | Skip that URL, include remaining results |
| hn-search failure | Retry with shorter query. Skip HN if retry also fails. |
| ph-search failure / token missing | Skip ProductHunt, proceed with other sources |
| Output too large for stdout | Results are in files — use Read tool (already the default approach) |
| Topic too new | Note insufficient results, suggest related keywords |
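For reference, the Sentiment Overview bar in the output template can be rendered directly from the Step 3-0 comment tags. A sketch; treating mixed as neither pole is an assumption, not a spec:

```python
def sentiment_bar(tags):
    """Render 'Positive ████████░░ 75%'-style segments from sentiment tags."""
    total = len(tags) or 1  # avoid division by zero on an empty scan
    parts = []
    for label, key in (("Positive", "positive"), ("Negative", "negative"), ("Neutral", "neutral")):
        pct = round(100 * sum(t == key for t in tags) / total)
        filled = round(pct / 10)  # each block char represents 10%
        parts.append(f"{label} {'█' * filled}{'░' * (10 - filled)} {pct}%")
    return " | ".join(parts)
```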