From ai-daily-digest
Daily AI news digest covering technical advances, business news, and engineering impact. Aggregates from research papers, tech blogs, HN, newsletters. Use daily for staying current on AI developments.
```shell
npx claudepluginhub smykla-skalski/sai --plugin ai-daily-digest
```
Generate comprehensive daily AI news digest with technical, business, and engineering coverage.
Parse from $ARGUMENTS:
- `--focus [technical|business|engineering|leadership|all]` — Default: `all`
- `--notion-page-id [UUID]` — Notion parent page ID for digest publishing (overrides env var)
- `--no-notion` — Skip Notion publishing entirely (archive-only mode)

Resolve the Notion parent page ID using this precedence (first match wins):

1. `--notion-page-id` argument
2. `NOTION_PARENT_PAGE_ID` environment variable
3. Prompt the user interactively

When prompting the user, provide these instructions for finding the page ID:

- Copy the 32-hex-character ID from the page URL (`https://www.notion.so/Page-Title-{32-hex-chars}`)
- Hyphenate it as 8-4-4-4-12 for UUID format

Recommend persisting via env var in `~/.zshrc` / `~/.bashrc`:
```shell
export NOTION_PARENT_PAGE_ID="your-page-id-here"
```
Or in ~/.claude/settings.json under the "env" key:
```json
{
  "env": {
    "NOTION_PARENT_PAGE_ID": "your-page-id-here"
  }
}
```
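Under the hyphenation rule above, a small helper can perform the 8-4-4-4-12 split. This is a sketch, assuming the 32-hex ID has already been copied out of the URL; `to_uuid` is a hypothetical name, not part of the skill:

```shell
# Hyphenate a 32-hex Notion page ID into 8-4-4-4-12 UUID form
to_uuid() {
  printf '%s' "$1" |
    sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/'
}

to_uuid "12345678abcd1234abcd123456789abc"   # prints 12345678-abcd-1234-abcd-123456789abc
```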
```shell
echo "${XDG_DATA_HOME:-$HOME/.local/share}/sai/ai-daily-digest"
date +%Y-%m-%d
date +%A
echo "${NOTION_PARENT_PAGE_ID:-}"
```

All persistent state and generated artifacts are stored in the data directory from the preprocessed context above. This path is independent of the plugin cache and project working directory — artifacts survive plugin updates and work from any project.
Use the pre-resolved data directory path for all file operations. Do NOT use ./findings/ or other relative paths — they may resolve to the plugin cache and be lost on updates.
All state stored in the persistent data directory.
- `.last-run` — Format: YYYY-MM-DD. Read on startup to calculate the date range. If missing, default to the past 7 days.
- `.covered-stories` — Pipe-separated: `{date}|{story_id}|{url}`, one story per line.
  - `story_id` — Normalized: lowercase, hyphen-separated key terms (e.g., `falcon-h1r-7b-release`, `xai-20b-funding`)

Example:
```
2026-01-28|deepseek-r1-release|https://api-docs.deepseek.com/news/news250120
2026-01-29|falcon-h1r-7b-release|https://falcon-lm.github.io/blog/falcon-h1r-7b/
```
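Building the in-memory sets from this file might look like the following sketch; `is_covered` is a hypothetical helper name, not part of the skill:

```shell
# Build lookup sets from the covered-stories history (path per Phase 1)
HISTORY="${XDG_DATA_HOME:-$HOME/.local/share}/sai/ai-daily-digest/.covered-stories"

covered_ids="$(cut -d'|' -f2 "$HISTORY" 2>/dev/null | sort -u)"
covered_urls="$(cut -d'|' -f3 "$HISTORY" 2>/dev/null | sort -u)"

# Returns 0 (covered) if either the story_id or the URL was seen before
is_covered() {
  printf '%s\n' "$covered_ids" | grep -qxF "$1" ||
    printf '%s\n' "$covered_urls" | grep -qxF "$2"
}
```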
Parse the `--focus` area and `--notion-page-id` from arguments. If `--no-notion` is set, set `notion_page_id` to null (archive-only mode).
Otherwise check in order: --notion-page-id arg → Notion page ID from preprocessed context → prompt user interactively.
Store resolved value as notion_page_id for Phase 18.
If the preprocessed value is empty and the user declines to provide an ID, skip Notion publishing (archive-only mode).

Resolve `DATA_DIR` and run `mkdir -p "$DATA_DIR"` to ensure it exists. Then read:

- `$DATA_DIR/.last-run` — set the date range from the last run to the today value from preprocessed context
- `$DATA_DIR/.covered-stories` — build in-memory `covered_ids` and `covered_urls` sets

Before starting Phase 2, read references/search-patterns.md in full for all search queries, source-specific patterns, and the Friday Weekly Recap section.
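The Phase 1 bootstrap above can be sketched in shell; GNU `date` syntax is assumed for the 7-day fallback (BSD/macOS would use `date -v-7d`):

```shell
DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/sai/ai-daily-digest"
mkdir -p "$DATA_DIR"

today="$(date +%Y-%m-%d)"
if [ -f "$DATA_DIR/.last-run" ]; then
  since="$(cat "$DATA_DIR/.last-run")"        # resume from the last recorded run
else
  since="$(date -d '7 days ago' +%Y-%m-%d)"   # GNU date; BSD: date -v-7d +%Y-%m-%d
fi
echo "search window: $since .. $today"
```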
Spawn a general-purpose research agent for each phase (or batch of independent phases).
Pass each agent: the date range, covered_ids and covered_urls sets, the focus area, and the relevant section from references/search-patterns.md.
Phases 2-5 are independent - spawn them in parallel. Subsequent phases can be batched as appropriate.
Each agent executes all web searches for its phase and returns ONLY a list of story items: title, URL, 1-line summary, and story_id. No analysis, no ranking - that happens in Phase 16.
Collect results from all research agents before proceeding to Phase 16. Do not skip phases - missing a phase means missing an entire digest section.
| Phase | Topic | Skip unless focus includes |
|---|---|---|
| 2 | Technical research (models, papers, frameworks) | technical |
| 3 | Business research (funding, acquisitions, launches) | business |
| 4 | Engineering impact (dev tools, workflow, job market) | engineering |
| 5 | Leadership research (strategy, org transformation) | leadership |
| 6 | GitHub trending AI repos | technical |
| 7 | AI tools for professionals (9 domains) | all (always run) |
| 8 | AI application domains (7 verticals) | all (always run) |
| 9 | AI safety & ethics | all (always run) |
| 10 | Open source AI ecosystem | technical |
| 11 | AI infrastructure & hardware | technical |
| 12 | Regional AI developments | all (always run) |
| 13 | YouTube AI videos | all (always run) |
| 14 | Cool & thought-provoking research | all (always run) |
| 15 | Newsletter & blog aggregation | all (always run) |
All deduplication happens here before generating the digest.
Step 1: Generate story IDs for all collected items.
Normalized format — lowercase, hyphen-separated, company/product + action + key detail:
Examples: `falcon-h1r-7b-release`, `xai-20b-funding`, `simonwillison-sandboxes-post`

Step 2: Deduplicate within session — remove the same event reported from different URLs.
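A minimal sketch of the Step 1 normalization; note that real story IDs keep only key terms, an editorial choice this sketch does not attempt, and `make_story_id` is a hypothetical helper name:

```shell
# Slug a title into a story_id: lowercase, collapse punctuation to hyphens
make_story_id() {
  printf '%s' "$1" |
    tr '[:upper:]' '[:lower:]' |
    sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

make_story_id "Falcon H1R 7B Release"   # prints falcon-h1r-7b-release
```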
Step 3: Deduplicate against history — use in-memory covered_ids and covered_urls from Phase 1. DO NOT re-read or update the file.
Filter out stories where:

- `story_id` is in `covered_ids`
- URL is in `covered_urls`

Step 4: Rank by source credibility (tier 1 > tier 2 > tier 3), engagement, and relevance.
Step 5: Categorize into template sections and select Top 5 from filtered content.
Step 6: Completeness check — compare categorized items against the Length Guidelines table in references/output-template.md. If any section is below its target minimum, return to the corresponding research phase and run additional searches from references/search-patterns.md to fill the gap. Every section in the template must have content before proceeding.
- [ ] **[Title]** — [1-line summary] [Source: URL]
Use `- [ ]` on ALL story items with source URLs (renders as a Notion task).

DO NOT update `.covered-stories` in this phase — wait for verification.
Step 1: Save to Notion
Skip this step if notion_page_id was not resolved in Phase 1 (archive-only mode).
Load the Notion tool via ToolSearch (`select:mcp__notion__notion-create-pages`), then create the page:
- Parent: the `notion_page_id` resolved in Phase 1
- Title: `🤖 AI Digest {YYYY-MM-DD}`

If page creation fails, warn the user and continue — the archive copy in Step 2 still provides value.
Step 2: Write archive copy to $DATA_DIR/ai-digest-{YYYY-MM-DD}.md
Step 3: Update $DATA_DIR/.last-run with today's date (YYYY-MM-DD).
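Steps 2 and 3 together can be sketched as follows; `finalize_run` is a hypothetical helper, and the rendered-digest filename passed to it is an assumption:

```shell
DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/sai/ai-daily-digest"

# Archive the rendered digest (Step 2) and record the run date (Step 3)
finalize_run() {
  digest_file="$1"   # hypothetical path to the rendered digest markdown
  today="$(date +%Y-%m-%d)"
  cp "$digest_file" "$DATA_DIR/ai-digest-$today.md"
  printf '%s\n' "$today" > "$DATA_DIR/.last-run"
}
```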
Spawn a general-purpose verification agent to check today's digest against:
- `$DATA_DIR/.covered-stories` (should NOT include today's stories yet)
- `$DATA_DIR/`

The agent checks for: exact duplicates, near duplicates (same company + similar action within 7 days), URL duplicates, and topic fatigue (same topic 3+ times in the past week).
On duplicate detection:
*Verification: {N} borderline items retained*

Only after Phase 19 passes: append to `$DATA_DIR/.covered-stories` for each story in the final digest:
```
{date}|{story_id}|{url}
```
Keep file under 300 lines — trim oldest from top if over.
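The append-and-trim rule above can be sketched as follows, assuming `DATA_DIR` from Phase 1; `record_story` is a hypothetical helper name:

```shell
DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/sai/ai-daily-digest"
HISTORY="$DATA_DIR/.covered-stories"

# Append one published story, then trim the file to the newest 300 lines
record_story() {
  printf '%s|%s|%s\n' "$1" "$2" "$3" >> "$HISTORY"
  if [ "$(wc -l < "$HISTORY")" -gt 300 ]; then
    # Oldest entries sit at the top, so tail keeps the newest 300
    tail -n 300 "$HISTORY" > "$HISTORY.tmp" && mv "$HISTORY.tmp" "$HISTORY"
  fi
}
```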
Update `.last-run` on successful digest generation.

Story items use the `- [ ]` checkbox format for newsletter curation. The user checks stories in Notion → /ai-newsletter extracts the checked items.
<example>
Generate the full digest:
/ai-digest
</example>
<example>
Focus on a single area:
/ai-digest --focus technical
/ai-digest --focus business
</example>
<example>
Explicit Notion page or archive-only mode:
/ai-digest --notion-page-id 12345678-abcd-1234-efgh-123456789abc
/ai-digest --no-notion
/ai-digest --focus technical --no-notion
</example>