Autonomous RSS/content aggregator with self-evolution capabilities. Use this skill when the user wants to:

- Discover and track topic-specific news from multiple sources
- Get AI-filtered intelligence reports (score-based filtering)
- Automatically find new relevant sources for a topic
- Track information sources over time with quality scoring

Triggers: `/feed-agent`, "track news about X", "monitor topic", "set up feed for X", "daily intelligence report", "discover sources", "what's new in X"
Install:

```
npx claudepluginhub feed-mob/agent-skills --plugin feedmob-reporting-skills
```

**Trigger** (how this skill is invoked, by the user or by Claude): slash command `/feedmob-campaign-creator:feed-agent`

**Summary** (what Claude sees in its skill listing, used to decide when to auto-load this skill): An autonomous intelligence aggregator that discovers, filters, and analyzes topic-specific content with self-evolution capabilities.
Files included in the skill:

- diagrams/feed-agent-workflow.html
- diagrams/feed-agent-workflow.mmd
- diagrams/feed-agent-workflow.svg
- evals/evals.json
- references/report-template.md
- references/scoring-prompts.md
- references/search-providers.md
- scripts/__init__.py
- scripts/analyzer.py
- scripts/browser_adapter.py
- scripts/db.py
- scripts/evolution.py
- scripts/exa_adapter.py
- scripts/fetcher.py
- scripts/pipeline.py
- scripts/reporter.py
- scripts/scout.py
- scripts/search_provider.py
- tests/conftest.py
- tests/unit/test_date_anchored_reporting.py

| Command | Description |
|---|---|
| `/feed-agent [topic]` | Run full pipeline: scout → fetch → analyze → report |
| `/feed-agent set-topic [topic]` | Configure the tracking topic |
| `/feed-agent scout` | Run discovery only (find new sources) |
| `/feed-agent report [--as-of-date YYYY-MM-DD]` | Refresh data, analyze, and generate a 7-day date-anchored report |
| `/feed-agent sources` | List active and candidate sources |
| `/feed-agent evolve` | Run self-evolution (prune/promote sources) |
### /feed-agent [topic]

```
python3 scripts/pipeline.py --project-root . --action init --topic "{topic}"
```

Reads `config/feed-agent.yaml` for the topic and provider settings.
Run source discovery using enabled search providers:
```
python3 scripts/scout.py --project-root . --topic "{topic}"
```
Uses the SearchProvider interface to discover candidate sources; results are saved to `sources/feeds/{topic}/candidates.json`.

Fetch full content from all active sources:
```
python3 scripts/fetcher.py --project-root . --topic "{topic}"
```
For each source:
Score and filter articles:
```
python3 scripts/analyzer.py --project-root . --topic "{topic}"
```
For each article:
Articles scoring < 7 are filtered out.
Create markdown intelligence report:
```
python3 scripts/reporter.py --project-root . --topic "{topic}" --output reports/{topic}/{date}.md --as-of-date {date} --window-days 7
```
Output structure:
Report generation rules:

- The report window ends on `--as-of-date`.
- Articles are dated by `published_at` when available, with `fetched_at` as a fallback.

Update source quality and keywords:
```
python3 scripts/evolution.py --project-root . --topic "{topic}"
```
Based on average scores:
### /feed-agent set-topic [topic]

Initialize or update topic configuration:
```
python3 scripts/pipeline.py --project-root . --action config --topic "{topic}"
```
Creates/updates:

- `config/feed-agent.yaml` - Topic and provider settings
- `sources/feeds/{topic}/active.yaml` - Active source list

### /feed-agent scout

Run discovery phase only:
```
python3 scripts/scout.py --project-root . --topic "{topic}" --verbose
```
Outputs discovered candidates to the console and saves them to `candidates.json`.
### /feed-agent report [--as-of-date YYYY-MM-DD]

Generate a fresh daily report for the target date:
```
python3 scripts/pipeline.py --project-root . --action report --topic "{topic}" --as-of-date {date} --window-days 7
```
This workflow refreshes sources, analyzes newly fetched items, and produces a report covering the inclusive 7-day window ending on {date}.
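The inclusive window arithmetic can be sketched as follows (the helper name is hypothetical; the real logic lives in scripts/reporter.py):

```python
from datetime import date, timedelta

def report_window(as_of: date, window_days: int = 7) -> tuple[date, date]:
    """Return (start, end) for an inclusive window ending on as_of.

    A 7-day window ending 2026-03-23 starts on 2026-03-17:
    the end date itself counts as one of the seven days.
    """
    start = as_of - timedelta(days=window_days - 1)
    return start, as_of

start, end = report_window(date(2026, 3, 23))
# start == date(2026, 3, 17), end == date(2026, 3, 23)
```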
### /feed-agent sources

List source status:
```
python3 scripts/db.py --project-root . --topic "{topic}" list-sources
```
Shows:
### /feed-agent evolve

Force evolution check:
```
python3 scripts/evolution.py --project-root . --topic "{topic}" --force
```
Feed Agent uses a pluggable search provider system. See `references/search-providers.md` for the implementation guide.
| Provider | Tool | Description |
|---|---|---|
| `exa` | `exa_web_search_exa` | Primary - high-quality web search |
| `browser` | `agent-browser` | Secondary - scrapes Google/Brave results |
```yaml
providers:
  enabled:
    - exa
    - browser
  exa:
    max_results: 10
    use_autosuggestions: true
  browser:
    headless: true
    timeout: 30
```
Each article is scored 0-10 for relevance to the configured topic. See references/scoring-prompts.md for the complete scoring prompt template.
| Score | Interpretation |
|---|---|
| 9-10 | Directly addresses topic, major new development |
| 7-8 | Highly relevant, contributes meaningful insight |
| 5-6 | Somewhat related, tangential relevance |
| 3-4 | Peripheral connection, low signal |
| 0-2 | Not relevant to topic |
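The promotion cut at score ≥ 7 can be sketched as below (the article dict shape and function name are assumptions for illustration):

```python
PROMOTION_THRESHOLD = 7  # articles below this score are filtered out

def promote(articles: list[dict]) -> list[dict]:
    """Keep only articles whose relevance_score meets the report threshold."""
    return [a for a in articles if a.get("relevance_score", 0) >= PROMOTION_THRESHOLD]

scored = [
    {"title": "A", "relevance_score": 9},   # major new development
    {"title": "B", "relevance_score": 6},   # tangential, dropped
    {"title": "C", "relevance_score": 7},   # highly relevant, kept
]
# promote(scored) keeps "A" and "C"
```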
The analyzer checks whether an article's core points have already been covered in previous reports.
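One plausible implementation of that novelty check hashes normalized core points against those already reported (a sketch only; the real comparison in scripts/analyzer.py may be LLM-based):

```python
import hashlib

def point_key(point: str) -> str:
    """Stable key for a core point: lowercased, whitespace-collapsed, hashed."""
    normalized = " ".join(point.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def is_new_insight(points: list[str], seen_keys: set[str]) -> bool:
    """An article counts as 'new' if any core point has not appeared before."""
    return any(point_key(p) not in seen_keys for p in points)
```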
The agent continuously improves its source list:
```sql
-- Every run updates source quality scores (illustrative; the aggregate
-- values are computed in scripts/evolution.py before the UPDATE runs)
UPDATE feed_agent_sources
SET quality_score = promoted_articles / total_articles,
    relevance_avg = AVG(recent_scores)
WHERE topic = ?
```
Candidates with > 70% promoted articles after 3 validation runs are automatically added to the active source list.
Sources with < 30% relevance average over 3 consecutive runs are flagged for removal. User must confirm pruning.
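The promote/prune thresholds above can be sketched as a single decision function (the function name and 0-1 relevance scale are assumptions; the actual rules live in scripts/evolution.py):

```python
def evolution_action(promotion_rate: float, relevance_avg: float, runs: int) -> str:
    """Map source stats to an evolution action.

    promotion_rate: fraction of this source's articles that scored >= 7
    relevance_avg:  average relevance over recent runs, assumed on a 0-1 scale
    runs:           consecutive validation runs observed
    """
    if runs >= 3 and promotion_rate > 0.70:
        return "promote"            # candidate -> active, automatic
    if runs >= 3 and relevance_avg < 0.30:
        return "flag_for_pruning"   # user must confirm removal
    return "keep"
```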
High-scoring articles are analyzed for frequently occurring terms not in the current keyword list. These are stored in the database as candidate keywords, promoted to active when they repeatedly appear in strong articles, and then reused to drive feed discovery queries and relevance scoring.
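A minimal sketch of candidate-keyword extraction under these rules (the function name, token filter, and count threshold are assumptions; the real extraction may be prompt-driven):

```python
from collections import Counter

def candidate_keywords(high_scoring_texts: list[str],
                       active: set[str],
                       min_count: int = 3) -> list[str]:
    """Terms that recur across strong articles but are not yet tracked."""
    counts = Counter(
        word
        for text in high_scoring_texts
        for word in text.lower().split()
        if len(word) > 3 and word not in active   # skip short/known terms
    )
    return [word for word, count in counts.items() if count >= min_count]
```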
Keyword statuses:

- `active` - Used by scout query generation and analysis prompts
- `candidate` - Suggested by analysis, waiting for more evidence
- `rejected` - Suppressed from future reuse

Generated reports follow the template in `references/report-template.md`:
```markdown
# {Topic} Intelligence Report

**Date:** {date}
**Sources Scanned:** {count} | **Articles Found:** {count} | **Promoted (>=7):** {count}

## Core Insights

> Top insights from all promoted articles

| # | Insight | Source | New? |
|---|---------|--------|------|
| 1 | {insight} | {source} | Yes |

## Detailed Analysis

### {Article Title}

**Score:** {score}/10 | **Source:** {source_name} | **New Insight:** {is_new}

**Core Points:**
- {point_1}
- {point_2}

**Link:** {url}

---

## Self-Evolution

### Source Quality Update

| Source | Avg Score | Status | Action |
|--------|-----------|--------|--------|
| {name} | 0.82 | Active | Maintained |

### Keyword Adjustments

- **Added:** {keywords}
- **Reasoning:** {reason}
```
See `scripts/db.py` for schema definitions. Key tables:

- `feed_agent_sources` - Source registry with quality tracking
- `feed_agent_articles` - Article storage with analysis
- `feed_agent_evolution_log` - Tracks all evolution actions

Install dependencies:

```
pip install feedparser beautifulsoup4 aiohttp pyyaml
```
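A hedged sqlite sketch of the three tables (any column beyond those named in this document is an assumption; `scripts/db.py` holds the real schema):

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS feed_agent_sources (
    id INTEGER PRIMARY KEY,
    topic TEXT, url TEXT,
    status TEXT,               -- active / candidate
    quality_score REAL, relevance_avg REAL
);
CREATE TABLE IF NOT EXISTS feed_agent_articles (
    id INTEGER PRIMARY KEY,
    source_id INTEGER, title TEXT, url TEXT,
    published_at TEXT,         -- falls back to fetch time when feeds lack dates
    relevance_score REAL
);
CREATE TABLE IF NOT EXISTS feed_agent_evolution_log (
    id INTEGER PRIMARY KEY,
    topic TEXT, action TEXT, detail TEXT, created_at TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```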
User: `/feed-agent "AI Agents"`

Action:

1. Check if topic "AI Agents" is configured
2. Run scout to discover new sources
3. Fetch content from all active sources
4. Analyze and score articles (0-10)
5. Filter (keep ≥ 7)
6. Generate report at `reports/ai-agents/2026-03-23.md`
7. Run evolution to update source quality
8. Display summary to user
When invoking this skill, AI agents should be aware of the following runtime characteristics:
- `timeoutSeconds`: 600 (10 minutes) minimum

This is a long-running batch task. Do not expect synchronous results; use appropriate async patterns (e.g., `sessions_yield`, background sessions) when calling.
| Error | Response |
|---|---|
| No sources configured | Guide user to run `/feed-agent set-topic` first |
| All articles filtered | Report mentions filter rate, suggest lowering threshold |
| No new articles | Check `last_fetched` timestamp, report stale sources |
| Scout finds no candidates | User may need to adjust topic keywords |
Fixed:

- `_ensure_article_columns()` now properly adds missing columns (`published_at`, `story_key`, `story_status`) on database init
- Added an `include_unanalyzed` parameter to `get_articles()` to properly fetch articles pending analysis (with NULL or 0 `relevance_score`)
- `parse_date()` in the fetcher now defaults to the current time if no date is found in the feed entry, ensuring `published_at` is always set
- `insert_article()` now ensures `published_at` is always set, with fallback to the current time

Impact: These fixes resolve the "no such column" errors and ensure articles are properly fetched, analyzed, and included in reports even when RSS feeds lack proper date metadata.