Monitor RSS/Atom feeds or OPML lists and ingest recent items into raw/feeds/. Invoked by wiki-research for feed/newsletter research.
```
npx claudepluginhub skinnnyjay/wiki-llm --plugin llm-wiki
```

This skill uses the workspace's default tool permissions.
Fetches items from RSS/Atom feeds or OPML lists into `raw/feeds/`. Use this sub-skill when the input is a feed URL, OPML file, or the user wants to monitor a set of sources for recent content.
| Input | Action |
|---|---|
| Single feed URL (`.xml`, `.rss`, `/feed`, `/atom`) | Fetch and parse directly |
| OPML file path | Parse OPML to extract all feed URLs, then process each |
| Blog URL (not feed URL) | Auto-discover feed: try `<URL>/feed`, `<URL>/rss`, `<URL>/atom.xml` |
| Comma-separated list of URLs | Process each as individual feeds |
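The auto-discovery row above can be sketched with the stdlib; the candidate path list and both function names are illustrative, not part of the skill's CLI:

```python
import urllib.request

# Candidate feed paths, matching the table above plus a common extra.
CANDIDATES = ['/feed', '/rss', '/atom.xml', '/feed.xml']

def candidate_urls(blog_url):
    """Expand a blog URL into the feed URLs worth probing."""
    base = blog_url.rstrip('/')
    return [base + path for path in CANDIDATES]

def discover_feed(blog_url, timeout=10):
    """Return the first candidate that serves XML, or None."""
    for url in candidate_urls(blog_url):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if 'xml' in resp.headers.get('Content-Type', ''):
                    return url
        except Exception:
            continue  # 404s and timeouts: just try the next candidate
    return None
```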
Preferred: `feed` CLI

```
which feed && feed --version
```

If available:

```
feed fetch <FEED_URL> --since 24h --format json > /tmp/feed-items.json
```
Fallback: stdlib XML parsing

```
llm-wiki ingest url <FEED_URL> --out feeds/raw/<source-slug>.xml
```

Then parse with Python:

```
python3 << 'EOF'
import xml.etree.ElementTree as ET, json

tree = ET.parse('feeds/raw/<source-slug>.xml')
root = tree.getroot()
ns = {'atom': 'http://www.w3.org/2005/Atom'}
# Handle both RSS 2.0 (<item>) and Atom (<entry>)
items = root.findall('.//item') or root.findall('.//atom:entry', ns)
for item in items:
    title = item.findtext('title') or item.findtext('atom:title', namespaces=ns)
    # RSS puts the URL in the element text; Atom puts it in the href attribute
    link = item.findtext('link')
    if not link:
        link_el = item.find('atom:link', ns)
        link = link_el.get('href') if link_el is not None else None
    pubdate = item.findtext('pubDate') or item.findtext('atom:updated', namespaces=ns)
    print(json.dumps({'title': title, 'link': link, 'date': pubdate}))
EOF
```
Only process items published within the recency window. Default: last 7 days. For monitoring / daily digests: use the last 24 hours.

```
# With feed CLI
feed fetch <URL> --since 24h
```

```
# Manual date check (Python)
from datetime import datetime, timedelta, timezone
cutoff = datetime.now(timezone.utc) - timedelta(days=7)
# Skip items where parsed_date < cutoff
```
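The manual check above can be made concrete. A sketch assuming each item carries either an RSS `pubDate` (RFC 2822) or an Atom `updated` stamp (ISO 8601) with a timezone; the helper names are illustrative:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def parse_item_date(raw):
    """Handle RSS pubDate (RFC 2822) and Atom updated (ISO 8601)."""
    try:
        return parsedate_to_datetime(raw)  # e.g. 'Mon, 07 Jul 2025 12:00:00 GMT'
    except (TypeError, ValueError):
        # Atom style, e.g. '2025-07-07T12:00:00Z'
        return datetime.fromisoformat(raw.replace('Z', '+00:00'))

def within_window(raw, days=7):
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return parse_item_date(raw) >= cutoff
```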
Before fetching each item's full content:

```
llm-wiki raw rebuild-index
# Then check:
grep -r "source_url: <ITEM_URL>" raw/feeds/
```

Skip items already present in raw/ (same URL).
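Equivalently, the already-seen URLs can be collected once per run instead of grepping per item. A stdlib sketch; the `raw/feeds/` layout and `source_url` frontmatter key come from this skill, while the function name is illustrative:

```python
import re
from pathlib import Path

def seen_urls(root='raw/feeds'):
    """Collect every source_url frontmatter value under root."""
    seen = set()
    for md in Path(root).rglob('*.md'):
        m = re.search(r'^source_url:\s*(\S+)', md.read_text(encoding='utf-8'), re.M)
        if m:
            seen.add(m.group(1))
    return seen

# Then skip an item when item['link'] is already in seen_urls().
```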
For each new item that passes the recency and dedup filters:

```
llm-wiki ingest firecrawl "<ITEM_URL>" \
  --out feeds/<source-slug>/<YYYY-MM-DD>/<item-slug>.md
```

If Firecrawl is unavailable, use stdlib:

```
llm-wiki ingest url "<ITEM_URL>" \
  --out feeds/<source-slug>/<YYYY-MM-DD>/<item-slug>.md
```

Limit: Cap at 20 items per feed per run to avoid context overflow.
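The skill does not pin down how `<source-slug>` and `<item-slug>` are derived; one common convention is a lowercase hyphenated slug, sketched here with an illustrative helper:

```python
import re

def slugify(text, max_len=60):
    """Lowercase, collapse non-alphanumerics to hyphens, trim length."""
    slug = re.sub(r'[^a-z0-9]+', '-', text.lower()).strip('-')
    return slug[:max_len].rstrip('-')
```

For example, `slugify('Why RSS Still Matters!')` gives `why-rss-still-matters`.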
If input is an OPML file:

```
import xml.etree.ElementTree as ET
tree = ET.parse('<OPML_PATH>')
feeds = [el.get('xmlUrl') for el in tree.findall('.//outline[@xmlUrl]')]
# feeds is now a list of RSS/Atom URLs; process each with Steps 2-5
```
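A quick sanity check of that extraction against an inline sample, showing that outlines nested inside folders are also picked up (the feed URLs are placeholders):

```python
import xml.etree.ElementTree as ET

sample = '''<opml version="2.0"><body>
  <outline text="Example" xmlUrl="https://example.com/feed.xml"/>
  <outline text="Folder">
    <outline text="Nested" xmlUrl="https://example.org/rss"/>
  </outline>
</body></opml>'''

root = ET.fromstring(sample)
feeds = [el.get('xmlUrl') for el in root.findall('.//outline[@xmlUrl]')]
# The .// descendant axis finds nested outlines as well as top-level ones.
```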
Process feeds sequentially; do not parallelize (respect rate limits).
```yaml
---
source_type: feed
feed_url: https://...
feed_title: "Example Blog"
source_url: https://...   # item link
item_title: "<article title>"
item_published: YYYY-MM-DD
fetched_date: YYYY-MM-DD
adapter: firecrawl | stdlib_url
---
```
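A sketch of emitting that frontmatter for one parsed item; the field names come from the template above, while the function name and the item dict shape are illustrative:

```python
from datetime import date

def frontmatter(item, feed_url, feed_title, adapter):
    """Render the feed-item frontmatter block as a string."""
    return '\n'.join([
        '---',
        'source_type: feed',
        f'feed_url: {feed_url}',
        f'feed_title: "{feed_title}"',
        f'source_url: {item["link"]}',
        f'item_title: "{item["title"]}"',
        f'item_published: {item["date"]}',
        f'fetched_date: {date.today().isoformat()}',
        f'adapter: {adapter}',
        '---',
    ])
```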
Run each file through skills/wiki-research/references/source-eval.md. Feed items often need recency evaluation — skip low-relevance items rather than ingesting everything.
Write a digest summary if requested:

```
outputs/feeds-digest-<YYYY-MM-DD>.md
```

Then return to wiki-research Step 3 (post-process).
Output: `raw/feeds/` with frontmatter; recency/dedup rules applied. `outputs/` when requested.

| Symptom | Fix |
|---|---|
| Feed URL returns HTML not XML | Site requires JS; try `<URL>/feed.xml` or check the site's `<head>` for a feed link |
| `feed` CLI not found | `pip install feed-cli` or use the Python stdlib XML fallback |
| 429 from feed server | Wait 60s; some servers throttle XML requests |
| OPML has outdated/dead feed URLs | Skip 404s; note `[DEAD]` in OPML review |
| Items have no full content (abstract only) | Fetch the linked article URL via wiki-research-web |
For diagnostics, run `llm-wiki integrations status` and any `llm-wiki` line from Step 1 of this skill (from the vault root).