Researches topics from the last 30 days on Reddit, X, and the web; parses intent to deliver prompting techniques with copy-paste prompts, recommendation lists, news, or insights.
From antigravity-awesome-skills:
npx claudepluginhub sickn33/antigravity-awesome-skills --plugin antigravity-awesome-skills
This skill uses the workspace's default tool permissions.
Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
- Prompting techniques with copy-paste prompts
- Recommendation lists ("best X", "top X")
- Recent news and announcements
- General insights into what people are discussing right now
Before doing anything, parse the user's input for the topic, the target tool (if any), and the query type.
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about target tool before research.
Store these variables:
TOPIC = [extracted topic]
TARGET_TOOL = [extracted tool, or "unknown" if not specified]
QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | GENERAL]
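For example, the input "UI design prompts for Midjourney" (from the patterns above) would be stored roughly as:

TOPIC = "UI design"
TARGET_TOOL = "Midjourney"
QUERY_TYPE = PROMPTING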
The skill works in three modes based on available API keys: full, partial, or web-only. API keys are OPTIONAL; the skill will work without them using the WebSearch fallback.
If the user wants to add API keys for better results:
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback
# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=
# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
DO NOT stop if no keys are configured. Proceed with web-only mode.
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
Step 1: Run the research script
TOPIC_FILE="$(mktemp)"
trap 'rm -f "$TOPIC_FILE"' EXIT
cat <<'LAST30DAYS_TOPIC' > "$TOPIC_FILE"
$ARGUMENTS
LAST30DAYS_TOPIC
python3 ~/.claude/skills/last30days/scripts/last30days.py "$(cat "$TOPIC_FILE")" --emit=compact 2>&1
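If the user passed a depth option (see "Depth options" below), forward it on the same invocation. A sketch, assuming last30days.py accepts --quick/--deep alongside --emit:

python3 ~/.claude/skills/last30days/scripts/last30days.py "$(cat "$TOPIC_FILE")" --deep --emit=compact 2>&1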
The script will automatically detect which API keys are configured, query the sources it can reach (Reddit via OpenAI, X via xAI), and report which mode it ran in.
Step 2: Check the output mode
The script output will indicate the mode: full (both keys), partial (one key), or web-only (no keys).
Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}

If NEWS ("what's happening with X", "X news"):
- {TOPIC} news 2026
- {TOPIC} announcement update

If PROMPTING ("X prompts", "prompting for X"):
- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips

If GENERAL (default):
- {TOPIC} 2026
- {TOPIC} discussion

For ALL query types:
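For example, with a hypothetical TOPIC of "retro game UI" and QUERY_TYPE = NEWS, the two searches would be "retro game UI news 2026" and "retro game UI announcement update".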
Step 4: Wait for the background script to complete. Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
Identify from the ACTUAL RESEARCH OUTPUT:
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
1. [Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
2. [Specific name] - mentioned {n}x (sources)
3. [Specific name] - mentioned {n}x (sources)
4. [Specific name] - mentioned {n}x (sources)
5. [Specific name] - mentioned {n}x (sources)
Notable mentions: [other specific things with 1-2 mentions]
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned:
[2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
1. [Pattern from research]
2. [Pattern from research]
3. [Pattern from research]
THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}
For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}
💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)
LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.
Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?
Options:
1. [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
2. Nano Banana Pro (image generation)
3. ChatGPT / Claude (text/code)
4. Other (tell me)
IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
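For instance, if the research recommended JSON prompts with device specs, the delivered prompt might look like this sketch (purely hypothetical field names and values; take the real structure from the research and the user's vision):

{
  "subject": "landing page mockup for a SaaS analytics app",
  "style": "clean, modern, generous whitespace",
  "device": "desktop, 1440x900",
  "sections": ["hero with headline and CTA", "feature grid", "pricing tiers"],
  "output": "single high-fidelity mockup"
}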
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
---
This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
For the rest of this conversation, remember:
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages
Want another prompt? Just tell me what you're creating next.
For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}
Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env
Use this skill to execute the research-and-prompt workflow described above.