From solo
Automates GitHub outreach by scanning competitor library dependents, evaluating repo switch potential via README and package files, and drafting personalized issues.
```sh
npx claudepluginhub fortunto2/solo-factory --plugin solo
```
Competitive outreach pipeline. Scan a competitor's dependents, evaluate which repos would benefit from switching to your library, and draft personalized GitHub issues.
Works with any crate/package — not specific to any product.
Use these instead of reimplementing from scratch:
- `scripts/init_jsonl.py <repos.txt> <out.jsonl>` — convert a repo list to JSONL
- `scripts/enrich.py <jsonl> [--batch 30]` — add stars/description via `gh api`; skip forks/archived
- `scripts/evaluate.py <jsonl> <owner/repo>` — deep-evaluate a repo (README + Cargo.toml + feature detection)
- `scripts/evaluate.py <jsonl> --next` — pick the next highest-star enriched repo
- `scripts/status.py <jsonl> [--targets] [--csv]` — show the pipeline status table

All data lives in `data/outreach/{competitor}/` in the project directory:
```
data/outreach/{competitor}/
  config.json       # Competitor, our product, feature matrix
  dependents.jsonl  # One JSON object per repo (append-only)
  progress.json     # Cursor: last evaluated index, stats
```
Each line is a JSON object:
```json
{
  "repo": "owner/name",
  "stars": 1234,
  "description": "...",
  "language": "Rust",
  "last_push": "2026-03-20T...",
  "archived": false,
  "phase": "raw|enriched|evaluated|drafted|posted|skipped",
  "score": 0,
  "features_used": ["streaming", "tools", "embeddings"],
  "our_advantages": ["websocket", "structured_outputs"],
  "verdict": "skip|maybe|target",
  "verdict_reason": "fork of AppFlowy, not original",
  "draft_title": "",
  "draft_body": "",
  "issue_url": "",
  "evaluated_at": "",
  "notes": ""
}
```
JSONL is append-friendly and grep/jq compatible. Render as table with status command.
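As a sketch of that append-only pattern (the helper names here are illustrative, not functions from the skill's scripts):

```python
import json

def append_update(path, entry):
    """Append a new version of a repo's record; never rewrite in place."""
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def load_latest(path):
    """Collapse the append-only log: the last line for each repo wins."""
    latest = {}
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            latest[rec["repo"]] = rec
    return latest
```

Because updates are appends, the file stays crash-safe and diff-friendly, and `grep`/`jq` always see the full history.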
| User says | Action |
|---|---|
| `/outreach setup` | Configure competitor + our product |
| `/outreach enrich` | Add stars/description via `gh api` |
| `/outreach next` | Evaluate next unevaluated repo |
| `/outreach evaluate [repo]` | Deep-evaluate a specific repo |
| `/outreach draft [repo]` | Generate issue text for a target |
| `/outreach status` | Show progress table |
| `/outreach batch [N]` | Evaluate next N repos (default 5) |
First-time configuration.
Ask or read from $ARGUMENTS:
the competitor's `owner/repo` (e.g., `async-openai`).

Create `data/outreach/{competitor}/config.json`:
```json
{
  "competitor": "async-openai",
  "competitor_repo": "64bit/async-openai",
  "our_product": "openai-oxide",
  "our_repo": "fortunto2/openai-oxide",
  "feature_matrix": {
    "persistent_websockets": {"us": true, "them": false, "impact": "high", "pitch": "..."},
    "structured_outputs": {"us": true, "them": false, "impact": "high", "pitch": "..."}
  }
}
```
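Deriving `our_advantages` from that feature matrix could look like the sketch below (a hypothetical helper; the real scripts may compute this differently). Features we have and the competitor lacks are kept and ranked by impact:

```python
def our_advantages(config):
    """Features where us=true and them=false, highest impact first."""
    matrix = config["feature_matrix"]
    wins = [name for name, f in matrix.items() if f["us"] and not f["them"]]
    rank = {"high": 0, "medium": 1, "low": 2}
    return sorted(wins, key=lambda n: rank.get(matrix[n]["impact"], 3))
```

The ranked list feeds directly into the `our_advantages` field of each evaluated JSONL entry.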
Check that `dependents.jsonl` exists. If not, run the scraper:
```sh
scripts/scrape-dependents.sh {competitor_repo} 50 > /tmp/deps.txt
```
Convert to JSONL with phase=raw.

Fast pass: add GitHub metadata to all phase=raw entries.
```sh
# For each raw entry, call gh api
gh api repos/{owner}/{repo} --jq '{
  stars: .stargazers_count,
  description: .description,
  language: .language,
  last_push: .pushed_at,
  archived: .archived,
  topics: .topics
}'
```
On success, set phase=enriched. If `gh api` returns 404 (private/deleted), set phase=skipped.

Deep analysis of a single repo. This is where the agent thinks.
```sh
gh api repos/{owner}/{repo}/readme --jq '.content' | base64 -d
```
Understand: what does this project do? How do they use the competitor?
```sh
gh api repos/{owner}/{repo}/contents/Cargo.toml --jq '.content' | base64 -d
```
Which features do they use? What else is in their dependency tree?
If Cargo.toml references the competitor via git = "..." (not crates.io), they forked it — this is a HIGH SIGNAL:
```sh
# Find their fork
gh api repos/{fork_owner}/{competitor_name}/commits --jq '.[0:5] | .[] | "\(.sha[0:7]) \(.commit.message | split("\n")[0])"'
```
Compare their fork commits against upstream. If they added a feature we already have (e.g. schemars for structured outputs), that's our strongest pitch: "you can drop the fork and use us — we have that built-in." Record exactly what they added in notes.
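Detecting the git-dependency signal can be as simple as a regex over the fetched Cargo.toml text. A sketch (a real implementation would parse TOML properly and also handle the `[dependencies.{crate}]` table form):

```python
import re

def uses_git_fork(cargo_toml, crate):
    """True if `crate` is pulled via a git URL rather than crates.io."""
    pattern = rf'{re.escape(crate)}\s*=\s*\{{[^}}]*git\s*='
    return re.search(pattern, cargo_toml) is not None
```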
If stars >50, clone shallow and grep:
```sh
git clone --depth 1 {url} /tmp/outreach-eval
grep -rn "async.openai\|ChatCompletion\|stream\|tool_call\|embedding" /tmp/outreach-eval/src/
rm -rf /tmp/outreach-eval
```
Map findings to feature signals (streaming, tools, structured, websocket, etc.)
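The mapping from grep hits to feature signals might look like the sketch below (signal names and patterns are illustrative, not the skill's actual table):

```python
import re

# Hypothetical mapping from source-code patterns to feature signals
SIGNALS = {
    "streaming": r"stream",
    "tools": r"tool_call",
    "embeddings": r"embedding",
    "structured": r"response_format|schemars",
}

def detect_features(source):
    """Return the sorted list of feature signals present in source text."""
    return sorted(name for name, pat in SIGNALS.items()
                  if re.search(pat, source))
```

The resulting list goes into the entry's `features_used` field.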
Score based on:
Verdict:
Update JSONL entry with phase=evaluated.
Always rm -rf /tmp/outreach-eval after analysis.
Generate a personalized GitHub issue for a target repo.
Use `references/issue-templates.md`.

Show progress across all repos.
```
Outreach: async-openai → openai-oxide

Phase           Count
─────────────────────
raw                12
enriched          340
evaluated         180
  → target          8
  → maybe          47
  → skip          125
drafted             3
posted              1
skipped            59
─────────────────────
Total             591
```
Top targets (not yet drafted):
1. fastrepl/char (8068★) — streaming + agent loop
2. risingwavelabs/risingwave (7000★) — embeddings
...
Read from JSONL, aggregate by phase/verdict.
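The aggregation behind the table is essentially two counters over the JSONL file, a sketch:

```python
import json
from collections import Counter

def summarize(path):
    """Count entries by phase, and evaluated entries by verdict."""
    phases, verdicts = Counter(), Counter()
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            phases[rec["phase"]] += 1
            if rec.get("verdict"):
                verdicts[rec["verdict"]] += 1
    return phases, verdicts
```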
Evaluate next N repos efficiently.
- Enrich in batches with `gh api` first
- `rm -rf /tmp/outreach-eval` after every evaluation
- Skip forks: check the `gh api` `.fork` field. Only evaluate originals.
- Use `--paginate` sparingly. Check the `X-RateLimit-Remaining` header.
- Draft issues from `references/issue-templates.md`.
- If a Cargo.toml pulls the competitor via `git = "..."` instead of crates.io, they forked the competitor because it's missing something. Check the fork diff (usually 1-3 commits). If they added a feature we already have, that's our #1 pitch — "drop your fork, we have it built-in." Example: fastrepl/char forked async-openai to add schemars → our structured feature does exactly that.