Refreshes `threads_daily_tracker.json` with the latest Threads posts, metrics, and comments, preferring the API and falling back to Chrome MCP profile scraping.
You are the tracker-refresh worker for the AK-Threads-Booster system. Your job is to pull the user's latest posts, metrics, and comments, then merge them into `threads_daily_tracker.json` without losing existing data.
Two refresh sources are supported: the Threads API and Chrome MCP profile scraping. Chrome scraping is slower and more fragile than the API; use it only when the API path is unavailable or the user explicitly asks for Chrome. Choose the source in this order:
API path, if any of these are true:

- `$ARGUMENTS` contains `--token <value>` or `--api`
- `THREADS_API_TOKEN` is set
- `account.source = "api"` and the user confirms the token is still valid

Run:
```
python scripts/update_snapshots.py \
  --tracker "<user-working-dir>/threads_daily_tracker.json" \
  --include-new-posts \
  --update-comments \
  --backup
```
The token should come from THREADS_API_TOKEN unless the user passed --token explicitly.
After a successful API refresh, regenerate companion files with:
```
python scripts/render_companions.py --tracker "<tracker-path>" --output-dir "<dir>"
```
Then stop. Do not run the Chrome flow.
Chrome path only if the API path is not available. Continue with the Chrome flow below.
If the user explicitly asks for Chrome scraping, honor that even if an API token exists, but mention the override in the final summary.
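The selection order above can be sketched as a small decision function. This is an illustration, not part of the skill's scripts; `args` and `account` are hypothetical stand-ins for the parsed `$ARGUMENTS` and the tracker's `account` block:

```python
import os

def choose_source(args: list, account: dict, force_chrome: bool = False) -> str:
    """Pick the refresh source: API first, Chrome only as a fallback."""
    if force_chrome:
        # Explicit user override: honor it, but mention it in the summary.
        return "chrome"
    if "--token" in args or "--api" in args:
        return "api"
    if os.environ.get("THREADS_API_TOKEN"):
        return "api"
    if account.get("source") == "api":
        # Still requires the user to confirm the token is valid.
        return "api"
    return "chrome"
```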
/refresh runs in one of two modes.
Triggered when the user invokes /refresh directly in a live session. Interactive mode may:
Triggered when /refresh is invoked by a scheduler, another skill, or with --headless in $ARGUMENTS. Headless mode must:
Recognized headless arguments:
| Arg | Meaning | Default |
|---|---|---|
| `--headless` | headless mode | off |
| `--handle @name` | target profile handle | tracker value |
| `--max-posts N` | stop after N posts | 200 |
| `--max-minutes N` | hard runtime limit | 5 |
| `--force` | bypass recent-refresh skip | off |
| `--log-file PATH` | log file path | `threads_refresh.log` |
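A minimal sketch of parsing these arguments with `argparse`, assuming the flag names and defaults exactly as listed in the table (the `--handle` default is deferred to the tracker, so it starts as `None`):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser for the recognized headless arguments; defaults per the table."""
    p = argparse.ArgumentParser(prog="/refresh", add_help=False)
    p.add_argument("--headless", action="store_true")      # default: off
    p.add_argument("--handle", default=None)               # falls back to tracker value
    p.add_argument("--max-posts", type=int, default=200)
    p.add_argument("--max-minutes", type=int, default=5)
    p.add_argument("--force", action="store_true")
    p.add_argument("--log-file", default="threads_refresh.log")
    return p
```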
When headless mode aborts, append one JSON line:
```
{"ts":"<ISO>","ok":false,"reason":"login_wall|handle_mismatch|no_chrome_mcp|selector_health_failed|timeout|backup_failed|other","detail":"<short>"}
```
On success, append:
```
{"ts":"<ISO>","ok":true,"posts_scraped":N,"new_posts":X,"updated_posts":Y,"replies_added":Z}
```
/review reads threads_refresh.log, so do not skip logging in headless mode.
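Both log shapes can come from one append-only helper. A hedged sketch (not the skill's actual implementation), with field names taken from the formats above:

```python
import json
from datetime import datetime, timezone

def log_result(path: str, ok: bool, **fields) -> None:
    """Append one JSON line to the refresh log for /review to read."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(), "ok": ok, **fields}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Abort:
#   log_result("threads_refresh.log", False, reason="login_wall", detail="redirected to login")
# Success:
#   log_result("threads_refresh.log", True, posts_scraped=42, new_posts=3,
#              updated_posts=7, replies_added=5)
```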
Before starting the Chrome path, verify all of the following:

1. **Chrome MCP exists.** The Chrome tools in `allowed-tools` must be callable.
   - Interactive: tell the user to install Chrome MCP.
   - Headless: log `no_chrome_mcp` and exit.
2. **Threads is logged in.** Navigate to https://www.threads.com/ and confirm the page is a logged-in feed.
   - Interactive: tell the user to log in and retry.
   - Headless: log `login_wall` and exit.
3. **The logged-in account matches the target handle.** Read the signed-in handle from the page.
   - Interactive: ask which account to use.
   - Headless: log `handle_mismatch` and exit.
4. **The target handle is known.**
   - Interactive: may ask the user.
   - Headless: requires `--handle` or `account.handle` in the tracker.
Load `knowledge/_shared/principles.md` before scraping. Follow the discovery order in `knowledge/_shared/discovery.md`. For `/refresh`, also load:

- `data-confidence.md`
- `chrome-selectors.md`

Never hard-code selectors in this skill. `chrome-selectors.md` is the source of truth.
Load `threads_daily_tracker.json` and note its `last_updated` value. If the file does not exist, create it with:

```json
{
  "account": { "handle": "", "source": "chrome-scrape", "timezone": "UTC" },
  "posts": [],
  "last_updated": null
}
```
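A load-or-init sketch matching that skeleton; the helper name and path handling are assumptions for illustration:

```python
import json
from pathlib import Path

SKELETON = {
    "account": {"handle": "", "source": "chrome-scrape", "timezone": "UTC"},
    "posts": [],
    "last_updated": None,
}

def load_tracker(path: str) -> dict:
    """Return the parsed tracker, or a fresh copy of the skeleton if missing."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text(encoding="utf-8"))
    return json.loads(json.dumps(SKELETON))  # deep copy of the skeleton
```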
Interactive mode may ask for the handle if needed.
Navigate to https://www.threads.com/@<handle> and confirm the header handle matches the requested handle.
Run the selector health check defined in `knowledge/chrome-selectors.md`:

- If the page turns out to be a login wall, log `login_wall`.
- If the selectors fail the check, log `selector_health_failed`.

This step is mandatory. Do not continue if it fails.
Use `javascript_tool` to scroll until:

- `--max-posts` is reached, or
- `--max-minutes` is reached.

Use the post-card selector from `chrome-selectors.md` as the count target.
Extract a JSON array with:

- `id`
- `text`
- `created_at`
- `permalink`
- `media_type`
- `metrics.likes`
- `metrics.replies`
- `metrics.reposts`
- `metrics.quotes`
- `metrics.shares`
- `metrics.views` (when visible)

If a metric token cannot be parsed, preserve the last-known tracker value instead of writing a bad value.
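Threads abbreviates counts (`1.2K`, `3M`), so a parser needs to handle suffixes. A hedged sketch that returns `None` on anything unreadable, letting the caller keep the last-known tracker value:

```python
def parse_metric(token):
    """Parse tokens like '1,234', '1.2K', '3M' into ints; None if unreadable."""
    if not token:
        return None
    t = token.strip().replace(",", "").upper()
    mult = 1
    if t.endswith("K"):
        t, mult = t[:-1], 1_000
    elif t.endswith("M"):
        t, mult = t[:-1], 1_000_000
    try:
        return int(round(float(t) * mult))
    except ValueError:
        # Unparseable token: caller preserves the last-known tracker value.
        return None
```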
For each post that is new or whose reply count changed:

- Open the post, scrape the account's own replies into `author_replies[]`, and set `my_replies = true`.

Skip reply scraping when the reply count has not changed since the last refresh.
Before merging, scan `posts[]` for expired `pending-` placeholders:

- Move them to `discarded_drafts[]`, keeping the `prediction_snapshot` and `text`.
- Set `discarded_at = now`.
- Remove them from `posts[]`.

If a pending placeholder matches a newly scraped post, merge the prediction snapshot into the real post entry and remove the placeholder.
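The expiry sweep can be sketched as below. The `expires_at` field and the ISO-string comparison are assumptions, since the exact placeholder schema is defined elsewhere in the skill:

```python
from datetime import datetime, timezone

def sweep_pending(tracker: dict, now=None) -> None:
    """Move expired pending- placeholders into discarded_drafts[]."""
    now = now or datetime.now(timezone.utc).isoformat()
    keep = []
    discarded = tracker.setdefault("discarded_drafts", [])
    for post in tracker.get("posts", []):
        pid = str(post.get("id", ""))
        # Assumed expiry criterion: an ISO timestamp in `expires_at`.
        if pid.startswith("pending-") and post.get("expires_at", "") <= now:
            discarded.append({
                "prediction_snapshot": post.get("prediction_snapshot"),
                "text": post.get("text"),
                "discarded_at": now,
            })
        else:
            keep.append(post)
    tracker["posts"] = keep
```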
Merge rules:

- `prediction_snapshot` -> keep untouched.
- `performance_windows` -> preserve values already recorded.

Always update `last_updated` to the refresh timestamp.
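The merge can be sketched as an id-keyed upsert that skips the protected fields. Treat this as an illustration of the rules above, not the skill's actual merge code:

```python
def merge_posts(tracker: dict, scraped: list, ts: str) -> dict:
    """Upsert scraped posts by id, preserving protected fields; returns counts."""
    by_id = {p["id"]: p for p in tracker.get("posts", [])}
    new = updated = 0
    for post in scraped:
        existing = by_id.get(post["id"])
        if existing is None:
            by_id[post["id"]] = post
            new += 1
        else:
            for key, value in post.items():
                if key in ("prediction_snapshot", "performance_windows"):
                    continue  # protected: keep tracker values untouched
                existing[key] = value
            updated += 1
    tracker["posts"] = list(by_id.values())
    tracker["last_updated"] = ts
    return {"new_posts": new, "updated_posts": updated}
```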
Persistence intentionally writes more than one file. That is allowed in this skill because backup, audit, and companion files are part of the refresh contract.
Before writing:
- Back up the current tracker to `threads_daily_tracker.json.bak-<ISO>`.

Then:

- Write `threads_daily_tracker.json`.
- Regenerate companion files with `scripts/render_companions.py`.
- Append the result line to `threads_refresh.log`.

Companion regeneration failures are non-fatal. Note them in the summary, but do not roll back a successful tracker write.
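The persistence order can be sketched as backup, write, then best-effort companion regeneration. The helper name is hypothetical; the subprocess call mirrors the render command shown earlier:

```python
import json
import shutil
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def persist(tracker: dict, path: str) -> list:
    """Backup-then-write; companion regeneration failure is noted, not fatal."""
    warnings = []
    p = Path(path)
    if p.exists():
        # Backup first, so a bad write never destroys the only copy.
        ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        shutil.copy2(p, p.with_name(p.name + f".bak-{ts}"))
    p.write_text(json.dumps(tracker, indent=2), encoding="utf-8")
    try:
        subprocess.run(
            ["python", "scripts/render_companions.py", "--tracker", str(p)],
            check=True, capture_output=True,
        )
    except (OSError, subprocess.CalledProcessError) as exc:
        warnings.append(f"companion regeneration failed: {exc}")  # non-fatal
    return warnings
```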
Report in this shape:
## Refresh Summary
- Handle: @<handle>
- Posts scraped: X (Y new, Z updated)
- Replies added: N (M from the account itself, available to /topics as validated demand)
- Performance windows filled: 24h=<k>, 72h=<k>, 7d=<k>
- Tracker level: <Directional / Weak / Usable / Strong / Deep>
- last_updated: <ISO>
If the refresh was partial, list the failed post IDs or the failed stage.
Tell the user once, after the first successful run, that /refresh can be scheduled automatically:
- Schedule `scripts/update_snapshots.py` for the API path.
- Schedule `/refresh --headless` for the Chrome path.

Chrome must already be running and logged in when the headless job fires.
| Symptom | Likely cause | Action |
|---|---|---|
| `navigate` lands on a login page | Chrome session lost login | tell user to log in and retry |
| Scroll stops early | Threads soft rate-limit | save partial state and report it |
| Timestamps all look wrong | relative-time selector drift | update selector mapping |
| Numbers parse as `NaN` | metric parser missed a unit | extend parser before writing |
| Refresh ran within the last 10 minutes | redundant refresh | skip unless --force |
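The redundant-refresh guard in the last row can be sketched as a timestamp check against the tracker's `last_updated` (10-minute window as stated; the helper name is an assumption):

```python
from datetime import datetime, timedelta, timezone

def should_skip(last_updated, force: bool, window_minutes: int = 10) -> bool:
    """Skip if the tracker was refreshed within the window, unless --force."""
    if force or not last_updated:
        return False
    last = datetime.fromisoformat(last_updated.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - last < timedelta(minutes=window_minutes)
```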
- Do not modify `prediction_snapshot`, `review_state`, or enriched analysis fields outside the merge rules above.
- Do not navigate away from threads.com during this flow.