Performs multi-source web research to discover, score, and organize findings for newsletter content. Activates when the user wants to research a topic, find trending material, gather sources for a newsletter, or asks 'what's new in [topic]?' Searches across web, GitHub, Reddit, and blogs with recency scoring and deduplication.
This skill uses the workspace's default tool permissions.
Conduct multi-source web research for a given topic to produce a structured set of findings ready for the newsletter outline and drafting phases:

1. Formulate 5-8 targeted search queries per topic across multiple source types (general web, GitHub, Reddit, Quora, official blogs).
2. Parse results into structured findings with title, source, URL, summary, date, and relevance score.
3. Apply recency scoring to weight recent content higher.
4. Deduplicate findings that appear across multiple sources.
5. Categorize each finding by type.
6. Produce a research summary organized by category and recency for handoff to the outline phase.
Generate 5-8 search queries per topic, distributed across source types to maximize coverage and diversity of perspectives.
| Source Type | Queries | Purpose |
|---|---|---|
| General Web | 1-2 | Broad coverage, news articles, analysis pieces |
| GitHub (site:github.com) | 1-2 | Trending repos, new releases, changelogs, tools |
| Reddit (site:reddit.com) | 1 | Community sentiment, discussions, pain points |
| Official Blogs / Changelogs | 1-2 | Authoritative announcements, product updates |
| Quora (site:quora.com) | 0-1 | Expert Q&A, alternative perspectives |
For detailed query templates and examples per source type, see ${CLAUDE_PLUGIN_ROOT}/skills/newsletter/topic-research/references/query-patterns.md.
Produce a query plan before executing searches:
Query Plan for: "[Topic]"
1. [general] "exact query string"
2. [github] "site:github.com exact query string"
3. [reddit] "site:reddit.com exact query string"
4. [blog] "exact query string"
5. [general] "exact query string"
...
Display the query plan to the user before proceeding so they can refine or redirect.
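The distribution above can be sketched as a small plan builder. This is a minimal, hypothetical illustration: the template strings and function name are not part of the skill, only stand-ins showing how queries could be spread across source types.

```python
# Hypothetical sketch: distribute queries across source types per the table
# above. Templates are illustrative placeholders, not prescribed queries.
SOURCE_TEMPLATES = {
    "general": ["{topic} latest developments", "{topic} news analysis"],
    "github": ["site:github.com {topic}", "site:github.com {topic} changelog"],
    "reddit": ["site:reddit.com {topic}"],
    "blog": ["{topic} official blog announcement"],
}

def build_query_plan(topic: str) -> list[tuple[str, str]]:
    """Return (source_type, query) pairs, 5-8 total, covering all sources."""
    return [
        (source, template.format(topic=topic))
        for source, templates in SOURCE_TEMPLATES.items()
        for template in templates
    ]

plan = build_query_plan("AI agents")
for i, (source, query) in enumerate(plan, 1):
    print(f'{i}. [{source}] "{query}"')
```

Displaying the rendered plan before searching gives the user the refinement checkpoint described above.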
Parse each search result into a structured finding object. Extract consistently regardless of source type.
Each finding contains:
- `title` -- headline of the result
- `source` -- publication or site name
- `url` -- canonical link to the result
- `summary` -- 2-3 sentence digest of the content
- `date` -- publication date; when unavailable, set to null and note "Date unavailable" in the summary
- `relevance_score` -- 0.0 to 1.0 topical alignment (rubric below)
- `recency_score` -- 0.0 to 1.0 based on content age (table below)
- `combined_score` -- `(relevance_score * 0.6) + (recency_score * 0.4)`. Used for final ranking.

Assign a relevance score from 0.0 to 1.0 based on topical alignment:
| Score Range | Criteria |
|---|---|
| 0.8-1.0 | Directly addresses the topic. Contains the topic's key terms in title or first paragraph. Provides actionable or newsworthy information. |
| 0.5-0.79 | Related to the topic but covers a broader or adjacent subject. Mentions the topic but is not primarily about it. |
| 0.2-0.49 | Tangentially related. Shares a category or domain but the specific topic is secondary. |
| 0.0-0.19 | Minimally related. Only surface-level keyword overlap. |
Filter out findings with relevance_score below 0.2 before further processing. They add noise without value.
Weight recent findings higher to prioritize fresh content for the newsletter.
| Age | Recency Score | Label |
|---|---|---|
| 0-7 days | 1.0 | Fresh -- highest newsletter value |
| 8-14 days | 0.7 | Recent -- strong newsletter value |
| 15-30 days | 0.4 | Aging -- include only if highly relevant |
| 31-60 days | 0.2 | Stale -- include only for foundational context |
| 61+ days or unknown | 0.1 | Old -- exclude unless seminal/canonical |
When `date` is null, assign a recency_score of 0.1 (assume old).

Detect the same news, announcement, or topic covered by multiple sources and merge it into a single finding with multiple source URLs.
Two findings are duplicates when they meet ANY of these conditions:
When duplicates are detected, merge them into a single finding and collect all URLs into its `source_urls` array.

After deduplication, findings that retain 3+ source URLs receive a +0.1 combined_score bonus (capped at 1.0). Multi-source coverage indicates a high-signal topic worth featuring in the newsletter.
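The merge and bonus steps can be sketched as follows. This assumes duplicates have already been grouped by the conditions above, and it keeps the highest-scoring entry as the merge base (an assumption, not stated by the skill):

```python
# Sketch of merging a group of duplicate findings; assumes the group has
# already been identified. Keeping the best-scored entry as the base is an
# illustrative choice.
def merge_group(group: list[dict]) -> dict:
    best = max(group, key=lambda f: f["combined_score"])
    merged = dict(best)
    merged["source_urls"] = sorted({f["url"] for f in group})
    if len(merged["source_urls"]) >= 3:
        # Multi-source coverage bonus, capped at 1.0.
        merged["combined_score"] = min(1.0, merged["combined_score"] + 0.1)
    return merged
```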
Tag each finding with exactly one source type:
| Category | Tag | Detection Rules |
|---|---|---|
| Official Release | official-release | URL contains company blog domain, changelog, or release notes page. Title mentions version numbers, "announces", "launches", "introduces", "releases". |
| Blog Post | blog-post | URL from known blog platforms (Medium, Substack, Dev.to, personal blogs) or contains /blog/ in path. Analysis, opinion, or tutorial content. |
| GitHub Repository | github-repo | URL matches github.com/[owner]/[repo]. Content describes a tool, library, or open-source project. |
| Community Discussion | community-discussion | URL from Reddit, Quora, Hacker News, Stack Overflow, or forum domains. Content is a question, discussion thread, or community commentary. |
| Tutorial | tutorial | Content is primarily instructional. Contains step-by-step instructions, code examples, or "how to" framing. |
When a finding could match multiple categories, prefer the more specific tag: official-release > tutorial > blog-post > community-discussion.
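The detection rules and precedence order can be sketched as a classifier. The keyword and domain lists here are simplified stand-ins for the full rules, and the final fallback to `blog-post` is an assumption for illustration:

```python
# Illustrative classifier following the detection rules and precedence
# (official-release > tutorial > blog-post > community-discussion).
import re

def categorize(url: str, title: str, body: str) -> str:
    text = f"{title} {body}".lower()
    # A bare github.com/[owner]/[repo] URL is unambiguous.
    if re.match(r"https?://github\.com/[^/]+/[^/]+/?$", url):
        return "github-repo"
    is_release = (
        any(w in text for w in ("announces", "launches", "introduces", "releases"))
        or "/changelog" in url or "/releases" in url
    )
    is_tutorial = "how to" in text or "step-by-step" in text
    is_blog = "/blog/" in url or any(
        d in url for d in ("medium.com", "substack.com", "dev.to")
    )
    is_community = any(
        d in url for d in ("reddit.com", "quora.com",
                           "news.ycombinator.com", "stackoverflow.com")
    )
    # Apply the stated precedence when several categories match.
    if is_release:
        return "official-release"
    if is_tutorial:
        return "tutorial"
    if is_blog:
        return "blog-post"
    if is_community:
        return "community-discussion"
    return "blog-post"  # illustrative fallback; real rules may differ
```

Note that a launch post containing "how to" framing still classifies as `official-release`, matching the precedence rule above.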
Organize the final output for handoff to the outline phase. The summary serves as the raw material the newsletter-writing skill uses to build sections.
# Research Summary: [Topic]
Generated: [YYYY-MM-DD]
Queries executed: [N]
Total findings: [N] (after dedup)
## Key Takeaways
- [1-sentence summary of the most important finding]
- [1-sentence summary of the second most important finding]
- [1-sentence summary of the third most important finding]
## Findings by Category
### Official Releases
| # | Title | Date | Score | Sources |
|---|-------|------|-------|---------|
| 1 | [title] | [date] | [combined] | [count] |
[detail block per finding]
### Blog Posts
[same table format]
### GitHub Repositories
[same table format]
### Community Discussions
[same table format]
### Tutorials
[same table format]
## Timeline View (Last 30 Days)
- [date]: [finding title] ([source_type])
- [date]: [finding title] ([source_type])
...
## Research Gaps
- [Areas where queries returned insufficient results]
- [Suggested follow-up queries to fill gaps]
Below each category table, include an expandable detail block per finding:
**[#] [Title]** (Score: [combined_score])
Source: [source] | Date: [date] | Type: [source_type]
URLs: [url1], [url2], ...
Summary: [2-3 sentence summary]
Newsletter angle: [1-sentence suggestion for how to use this in the newsletter]
The "Newsletter angle" field is critical -- provide a concrete suggestion for how the finding could be framed in the newsletter (e.g., "Lead with this as the main story", "Use as a supporting data point", "Good for a 'tools to watch' sidebar").
After completing all queries, identify gaps in the research:
Include 2-3 suggested follow-up queries the user can run to fill gaps. Format each as a ready-to-execute query string with its target source type.
When the topic is too broad (single word like "AI" or "marketing"), ask the user to narrow: "Topic '[topic]' is broad. Suggest a more specific angle: [3 suggestions based on the word]." Do not proceed with research until the topic is refined. A well-scoped topic yields 5-8 focused queries; an ambiguous topic yields noise.
When queries return zero relevant findings (all below 0.2 relevance), report: "No relevant findings for '[topic]' across [N] queries. Consider these alternative angles: [suggestions]." Never fabricate findings.
When web search returns errors or rate limits, report the error and provide partial results from successful queries. Note which source types were skipped: "Research incomplete -- [source_type] queries failed. Partial results from [N] successful queries below."
When the topic involves non-English content or markets, include at least one query in the relevant language if the user indicates the newsletter targets that audience. Default to English queries unless instructed otherwise.
If web search is unavailable entirely, report: "Web search unavailable -- cannot perform topic research. Provide URLs or content directly for manual analysis." Do not attempt to generate findings from training data alone. Research must be grounded in current web results.
If Notion is available, log the research summary to the "[FOS] Research" database with Type = "Newsletter Research". If Notion is unavailable, output everything to chat. Never fail because of missing optional MCP servers.
Do NOT create databases. Discover existing ones using this fallback chain:
When writing to the Research database, always set the Type property to "Newsletter Research" to distinguish from other research types (e.g., Competitive Analysis from P15).