# x-research
Searches X/Twitter via API v2 for real-time dev discussions, product feedback, breaking news, expert opinions. Supports engagement sorting, user profiles, thread fetching, watchlists, result caching. Useful for recent discourse on library releases, API changes, launches.
Install:
npx claudepluginhub trailofbits/skills-curated --plugin x-research

This skill is limited to using a restricted set of tools.
Agentic research over X/Twitter. Decompose research questions into targeted searches, iteratively refine, follow threads, deep-dive linked content, and synthesize sourced briefings.
For X API details (endpoints, operators, response format): read
{baseDir}/skills/x-research/references/x-api.md.
Requirements:
- X_BEARER_TOKEN (or XAI_API_KEY) env var
- uv (pip install uv, or see https://docs.astral.sh/uv/)

All commands use uv run for automatic dependency management:
uv run {baseDir}/skills/x-research/scripts/x_search.py search "<query>" [options]
Options:
--sort likes|impressions|retweets|recent -- sort order (default: likes)
--since 1h|3h|12h|1d|7d -- time filter (default: last 7 days)
--min-likes N -- filter by minimum likes
--min-impressions N -- filter by minimum impressions
--pages N -- pages to fetch, 1-5 (default: 1, 100 tweets/page)
--limit N -- max results to display (default: 15)
--quick -- quick mode: 1 page, max 10 results, auto noise filter, 1hr cache
--from-user <username> -- shorthand for from:username in query
--quality -- filter low-engagement tweets (min 10 likes, post-hoc)
--no-replies -- exclude replies
--save -- save results to ~/x-research-output/
--json -- raw JSON output
--markdown -- markdown output for research docs

Auto-adds -is:retweet unless the query already includes it. All searches display estimated API cost.
Examples:
uv run {baseDir}/skills/x-research/scripts/x_search.py search "claude code" --sort likes --limit 10
uv run {baseDir}/skills/x-research/scripts/x_search.py search "from:anthropic" --sort recent
uv run {baseDir}/skills/x-research/scripts/x_search.py search "(cursor OR windsurf) AI editor" --pages 2 --save
uv run {baseDir}/skills/x-research/scripts/x_search.py search "AI agents" --quick
uv run {baseDir}/skills/x-research/scripts/x_search.py search "AI agents" --quality --quick
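The -is:retweet auto-add behavior can be sketched as a small helper. This is an illustrative guess at the logic, not the actual code in x_search.py:

```python
def normalize_query(query: str) -> str:
    """Append -is:retweet unless the query already mentions retweet
    filtering (either is:retweet or -is:retweet).

    Hypothetical sketch of the auto-add rule described above.
    """
    if "is:retweet" in query:
        return query
    return f"{query} -is:retweet"

print(normalize_query("claude code"))        # claude code -is:retweet
print(normalize_query("x -is:retweet"))      # x -is:retweet
```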
uv run {baseDir}/skills/x-research/scripts/x_search.py profile <username> [--count N] [--replies] [--json]
Fetches recent tweets from a specific user (excludes replies by default).
uv run {baseDir}/skills/x-research/scripts/x_search.py thread <tweet_id> [--pages N]
Fetches full conversation thread by root tweet ID.
uv run {baseDir}/skills/x-research/scripts/x_search.py tweet <tweet_id> [--json]
uv run {baseDir}/skills/x-research/scripts/x_search.py watchlist # Show all
uv run {baseDir}/skills/x-research/scripts/x_search.py watchlist add <user> [note] # Add account
uv run {baseDir}/skills/x-research/scripts/x_search.py watchlist remove <user> # Remove
uv run {baseDir}/skills/x-research/scripts/x_search.py watchlist check # Check recent
Watchlist stored in {baseDir}/skills/x-research/data/watchlist.json.
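The watchlist file's schema is not documented here; a plausible shape, with the add operation sketched as a plain JSON read-modify-write (field names are assumptions, not the skill's actual format):

```python
import json
import os
import tempfile

# Hypothetical watchlist.json layout -- the real schema used by
# x_search.py may differ.
watchlist = {
    "anthropic": {"note": "official account"},
    "some_dev": {"note": "posts benchmark threads"},
}

path = os.path.join(tempfile.gettempdir(), "watchlist.json")
with open(path, "w") as f:
    json.dump(watchlist, f, indent=2)

# "watchlist add <user> [note]" would roughly do:
watchlist["new_user"] = {"note": "added via CLI"}
with open(path, "w") as f:
    json.dump(watchlist, f, indent=2)
```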
uv run {baseDir}/skills/x-research/scripts/x_search.py cache clear
15-minute TTL. Avoids re-fetching identical queries.
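A file-based cache with a 15-minute TTL, as described above, can be sketched like this. Function and directory names are illustrative, not the actual x_cache.py API:

```python
import hashlib
import json
import tempfile
import time
from pathlib import Path

CACHE_DIR = Path(tempfile.mkdtemp())  # the skill uses data/cache/
TTL_SECONDS = 15 * 60                 # 15-minute TTL

def cache_key(query: str, options: dict) -> str:
    # Identical query + options -> identical key -> no duplicate API charge.
    raw = json.dumps({"q": query, "opts": options}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def cache_get(key: str):
    f = CACHE_DIR / f"{key}.json"
    if not f.exists():
        return None
    if time.time() - f.stat().st_mtime > TTL_SECONDS:
        f.unlink()  # drop the expired entry
        return None
    return json.loads(f.read_text())

def cache_put(key: str, payload) -> None:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    (CACHE_DIR / f"{key}.json").write_text(json.dumps(payload))
```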
When doing deep research (not just a quick search), follow this loop:
Turn the research question into 3-5 keyword queries using X search operators:
- from: specific known experts
- (broken OR bug OR issue OR migration)
- (shipped OR love OR fast OR benchmark)
- url:github.com or url: with specific domains
- -is:retweet (auto-added); add -is:reply if needed
- -airdrop -giveaway -whitelist for crypto-adjacent topics

Run each query via the CLI. After each, assess:
- Did it surface new experts worth querying with from: specifically?
- Are there conversations worth expanding with the thread command?
- Are there linked resources worth fetching with WebFetch?

When a tweet has high engagement or is a thread starter:
uv run {baseDir}/skills/x-research/scripts/x_search.py thread <tweet_id>
When tweets link to GitHub repos, blog posts, or docs, fetch the linked content with WebFetch, prioritizing primary sources over commentary.
Group findings by theme, not by query:
### [Theme/Finding Title]
[1-2 sentence summary]
- @username: "[key quote]" ([N] likes, [N] impressions) [Tweet](url)
- @username2: "[another perspective]" ([N] likes, [N] impressions) [Tweet](url)
Resources shared:
- [Resource title](url) -- [what it is]
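Rendering theme-grouped findings into the briefing format above is mechanical. A minimal sketch; the `findings` field names are illustrative, not a schema the skill defines:

```python
# Example data shaped for the briefing template above (hypothetical).
findings = {
    "Migration pain points": [
        {"user": "dev_a",
         "quote": "the v2 migration broke our auth flow",
         "likes": 120, "impressions": 45000,
         "url": "https://x.com/dev_a/status/1"},
    ],
}

def render_briefing(findings: dict) -> str:
    """Group output by theme, not by query, per the loop above."""
    lines = []
    for theme, tweets in findings.items():
        lines.append(f"### {theme}")
        for t in tweets:
            lines.append(
                f'- @{t["user"]}: "{t["quote"]}" '
                f'({t["likes"]} likes, {t["impressions"]} impressions) '
                f'[Tweet]({t["url"]})'
            )
        lines.append("")
    return "\n".join(lines)

print(render_briefing(findings))
```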
Use --save flag or save manually.
Troubleshooting:
- Too noisy: add -is:reply, use --sort likes, narrow keywords
- Too few results: add OR alternatives, remove restrictive operators
- Crypto spam: add -$ -airdrop -giveaway -whitelist
- Low-quality results: use from: or --min-likes 50
- Looking for shared resources: add has:links

The X API uses pay-per-use pricing ($0.005/post read, $0.01/user lookup). Quick mode
keeps costs under ~$0.50/search. Always check the cost display after each search.
The cache prevents duplicate charges. See references/x-api.md for full pricing.
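A back-of-envelope cost check using the rates quoted above ($0.005 per post read, $0.01 per user lookup). The helper is illustrative, not part of the skill:

```python
POST_READ_COST = 0.005    # dollars per post read
USER_LOOKUP_COST = 0.01   # dollars per user lookup

def estimate_search_cost(pages: int, posts_per_page: int = 100,
                         user_lookups: int = 0) -> float:
    """Estimated API spend for one search."""
    return (pages * posts_per_page * POST_READ_COST
            + user_lookups * USER_LOOKUP_COST)

# One full page of 100 tweets costs $0.50 -- which is why quick mode
# (single page, cached) keeps a search at or under ~$0.50.
print(estimate_search_cost(pages=1))   # 0.5
print(estimate_search_cost(pages=5))   # 2.5
```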
File layout:
skills/x-research/
  SKILL.md (this file)
  scripts/
    x_search.py (CLI entry point, run with uv)
    x_api.py (X API wrapper)
    x_cache.py (file-based cache, 15min TTL)
    x_format.py (terminal + markdown formatters)
  data/
    watchlist.json (accounts to monitor)
    cache/ (auto-managed)
  references/
    x-api.md (X API endpoint reference)