Run the daily content pipeline to fetch signals, analyze relevance, draft output, edit for voice fidelity, and deliver the brief
Install: `npx claudepluginhub ondrej-svec/heart-of-gold-toolkit --plugin guide`

This skill uses the workspace's default tool permissions.
> "Time is an illusion. Lunchtime doubly so." — But your daily brief runs on schedule.
Generates Substack Note ideas by scanning YouTube videos, newsletters, and prior Notes. Orchestrates fetching, processed-log management, duplicate prevention, and delegation to idea extraction. Use for content repurposing and posting cadence.
Produces email newsletter editions with subject line formulas, section structure, personalization, link placement, growth tactics, sponsorship placement, and engagement optimization. Activates on 'newsletter', 'weekly digest', subscriber growth, or open rates queries.
Generates newsletters, tweets, X/Twitter threads, and content lineups capturing user's authentic voice and brand tone. Activates for social media posts, email content, or content planning.
Share bugs, ideas, or general feedback.
Run the full content pipeline: fetch signals, analyze relevance, create daily brief + drafts, edit for voice fidelity, and deliver output with notifications.
Requirements:

- `content/config.yaml` in the user's project (falls back to plugin defaults if missing)
- Python packages `feedparser` and `pyyaml`
- `jq` and `curl` on PATH
- `gws` CLI for Gmail, `osascript` for iMessage

Before anything else, check if `content/config.yaml` exists.
If it does not exist, tell the user: "Run /guide:setup to configure your content pipeline — it takes about 2 minutes and I'll walk you through it." Then stop. Do NOT use the defaults config as a silent fallback for a first run — the user needs to configure their own sources and themes.

Run the pipeline fetch script to gather all external signals deterministically.
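This config pre-flight can be sketched as a small guard, assuming the working directory is the project root (the message wording here is illustrative; the agent phrases the real prompt):

```shell
# Pre-flight: refuse to run the pipeline without the user's own config.
check_config() {
  local config="${1:-content/config.yaml}"
  if [ ! -f "$config" ]; then
    # Illustrative wording; the agent delivers the actual setup prompt.
    echo "No $config found. Run /guide:setup to configure your content pipeline." >&2
    return 1  # caller stops the pipeline here
  fi
}
```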
The fetch scripts live in this plugin's scripts/ directory. Determine the scripts path:
- If the plugin is checked out at `heart-of-gold-toolkit/plugins/guide/`, use `heart-of-gold-toolkit/plugins/guide/scripts/`
- Otherwise, locate the `scripts/` directory by searching for `fetch-rss.py` in the project tree if unsure
- The config is the user's `content/config.yaml`; the voice profile path comes from the `voice.reference` config field

Then run `bash <scripts>/run-pipeline-fetch.sh --config content/config.yaml`.
This script:
- Writes `content/pipeline/YYYY-MM-DD/signals.json` (with collision avoidance: `signals-2.json`, etc.)
- Writes `content/pipeline/YYYY-MM-DD/fetch-log.json` with per-source status

After it finishes, read:

- `signals.json` from the pipeline directory — this is your input for Phase 2
- `fetch-log.json` — check which sources succeeded/failed. If a source failed, mention it in the brief footer.
- `content/captures/` (or configured `captures_dir`) — last 7 days of AM/PM captures
- `content/daily/` for deduplication context

Score, deduplicate, and group signals into content angles.
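Before scoring, it helps to surface which sources failed so the brief footer can mention them. A sketch, assuming `fetch-log.json` is an array of `{source, status}` objects (the real schema may differ):

```shell
# List sources whose fetch did not succeed, per fetch-log.json.
# Assumes entries look like {"source": "...", "status": "ok" | "error"}.
failed_sources() {
  jq -r '.[] | select(.status != "ok") | .source' "$1"
}
```

Anything this prints becomes a fetch warning in the brief footer.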
Each signal in signals.json already has a relevance_score field (0.0-1.0) with source weight multipliers applied upstream by the pipeline fetcher. Use this score as the primary ranking input. Do not apply your own source weighting — it's already baked in.
The weights themselves are applied by `run-pipeline-fetch.sh`.
How to use relevance_score: Sort signals by score descending. Top-scoring signals become Must-Read candidates. Then apply your editorial judgment: theme relevance, deduplication, narrative clustering. The score is a starting point, not the final answer.
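The ranking step can be sketched with `jq`, assuming each signal carries `title` and `relevance_score` fields as described above:

```shell
# Print the top-N signals by relevance_score, highest first,
# as "<score><TAB><title>" lines. Editorial judgment is applied afterwards.
top_signals() {
  local file="$1" n="${2:-3}"
  jq -r --argjson n "$n" \
    'sort_by(-.relevance_score) | .[:$n] | .[] | "\(.relevance_score)\t\(.title)"' \
    "$file"
}
```

The top of this list seeds the Must-Read candidates; deduplication and narrative clustering still happen editorially.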
Before scoring, filter out signals that would appear in any general AI/tech newsletter without personalization. If the source is a generic industry publication (CIO.com, McKinsey, Deloitte, Waydev, generic "agentic AI" roundups) AND the signal doesn't connect to a specific capture, theme, or current project — drop it. The brief is personal curation, not industry news. When in doubt, ask: "Would a smart friend send this to me specifically, or would they send it to anyone in tech?" If the answer is "anyone" — cut it.
Track suggested content angles across the last 5 briefs (read previous briefs from content/daily/). If an angle has been suggested 3+ times without being written (no corresponding draft in content/drafts/), either escalate it ("This angle has come up 4 times — write it today or retire it") or drop it with a note explaining why it's being retired. Never silently suggest the same angle for a 4th time.
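One rough way to count how many recent briefs already suggested a given angle (verbatim-title matching is an assumption; real angle titles may drift between briefs, and the companion check for a matching draft in `content/drafts/` is left to the agent):

```shell
# Count how many of the last 5 daily briefs mention an angle title verbatim.
# 3+ mentions without a matching draft means: escalate or retire the angle.
angle_mentions() {
  local title="$1" daily_dir="${2:-content/daily}"
  ls -1 "$daily_dir"/*.md 2>/dev/null | sort | tail -n 5 \
    | xargs grep -l -F -- "$title" 2>/dev/null | wc -l | tr -d ' '
}
```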
- Use `relevance_score` from `signals.json` as the primary ranking, then apply your own 1-5 editorial judgment on theme relevance (personal and professional themes from config)
- Write the analysis to `content/pipeline/YYYY-MM-DD/analysis.md`
- When an angle lacks a personal voice hook, flag `missing_voice` on the angle but still include it

Generate the daily reading digest — a document worth reading with your morning coffee.
The daily brief has five sections:
The Story (opening narrative, ~300-500 words) — Weave the strongest signals into a coherent narrative about what's happening in the world RIGHT NOW. Don't list items — tell a story. Connect the dots between seemingly unrelated signals. What's the thread? What's the tension? What's shifting?
How to write The Story:
Local Events — upcoming tech/AI events in the user's city (from iCal feeds). Only show if events exist in the next 7 days. Format:
```
## 📅 {City} This Week

**Tue Apr 1 · 18:30** — Prague Gen AI: "Building with Claude Code"
Pracovna & Laskafe, Žižkov · [RSVP](link)

**Thu Apr 3 · 19:00** — AI Tinkerers: April Demo Night
Impact Hub · [RSVP](link)
```
Rules:
- The city comes from `config.yaml` → `sources.events.active_city`
- Signals with `metadata.type == "event"` in `signals.json` are rendered here, NOT in the Reading List

Reading List — 8-12 items minimum (up to 15 when signal is strong), organized into tiers:
Content Ideas — 4-6 ranked content angles from the analysis phase, tagged by format (LinkedIn / Blog / YouTube)
What's on Your Mind — synthesis of the user's recent captures. Stale capture rule: If the most recent capture is older than 7 days, do NOT recycle it. Either skip this section entirely or note briefly: "No fresh captures since [date]. Today's brief is all external signal." Never re-summarize old captures as if they're news — that reads like a broken record after 3+ days of the same content.
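The 7-day freshness check behind the stale capture rule can be sketched with `find` (the flat-files layout of the captures directory is an assumption):

```shell
# Succeed only if at least one capture file changed in the last 7 days.
has_fresh_captures() {
  local dir="${1:-content/captures}"
  [ -n "$(find "$dir" -type f -mtime -7 2>/dev/null | head -n 1)" ]
}
```

If this fails, skip the section or emit the "No fresh captures since [date]" note.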
Write to content/daily/YYYY-MM-DD.md (or configured daily_dir) with YAML frontmatter:
```yaml
---
date: YYYY-MM-DD
sources_count: <number of sources that returned data>
signals_fetched: <total signals before dedup>
angles_count: <number of content angles>
---
```
Generate at least 3 LinkedIn post drafts, each from a different angle. The user picks the best one.
Write to:

- `content/drafts/YYYY-MM-DD-linkedin-1.md`
- `content/drafts/YYYY-MM-DD-linkedin-2.md`
- `content/drafts/YYYY-MM-DD-linkedin-3.md`

Each with frontmatter:

```yaml
---
date: YYYY-MM-DD
draft: 1 # or 2, 3
angle: <angle title>
sources: <list of source URLs>
word_count: <actual word count>
voice_score: <set by edit phase>
---
```
- Add `sensitive: true` to frontmatter when the draft touches sensitive material
- `tone` field: defaults only

Generate a blog post outline approximately once per week — when a strong angle appears that deserves deeper treatment.
Generate a blog outline when:
- No outline has been generated in the past week (check `content/drafts/` for `*-blog-outline.md` files), OR
- A strong angle appears that deserves deeper treatment

The outline follows the emotional arc with 6-8 bullet points, each with a 1-2 sentence description:
Each bullet notes which signals/captures feed into that section.
Write to content/drafts/YYYY-MM-DD-blog-outline.md with frontmatter:
```yaml
---
date: YYYY-MM-DD
angle: <angle title>
needs_write_post: true
---
```
If a similar draft already exists in the `blog/` directory: note the overlap and suggest building on the existing draft.

Surface topics that deserve deeper unpacking — rants, opinions, real talk that's too meaty for a LinkedIn post and too raw for a polished blog.
Generate a YouTube/long-form idea when:
Append to the Content Ideas section of the daily brief (not a separate file). Format:
```
### 🎙️ YouTube / Deep Dive: {topic}

Why this has legs: {1-2 sentences on why this deserves 10+ minutes of talking, not 200 words}
Signals: {list of sources}
The rant seed: {one provocative sentence that could be the opening line on camera}
```
This is a suggestion, not a draft. The user decides if it's worth pursuing.
Scan all generated content against the user's voice profile.
Scan the LinkedIn draft (and blog outline if present) for:
- Terms from `voice.jargon_blocklist` in config (-10 per hit)

Write the resulting `voice_score` to frontmatter. A score of exactly 75 passes (the failure threshold is strictly < 75). If a draft still fails after a rewrite, set `needs_human_review: true` in frontmatter. Do not attempt further rewrites.

Log all edit changes in the pipeline state (`content/pipeline/YYYY-MM-DD/`) for transparency. Record what was flagged, what was changed, and the before/after scores.
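A sketch of the blocklist penalty: -10 per hit against a base score. The base of 100 and case-insensitive whole-word matching are assumptions; the full rubric includes the other voice checks above.

```shell
# Score a draft against a jargon blocklist: -10 points per occurrence.
# A base score of 100 is an assumed starting point for illustration.
jargon_score() {
  local draft="$1"; shift
  local score=100 term hits
  for term in "$@"; do
    hits=$(grep -o -i -w -- "$term" "$draft" | wc -l | tr -d ' ')
    score=$((score - 10 * hits))
  done
  echo "$score"
}
```

A result of 70, for example, falls below the 75 threshold, so that draft would be rewritten once and then flagged for human review if it still fails.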
Key rules:

- The failure threshold is `< 75`, not `<= 75`
- On failure, set `needs_human_review: true`

Write final output files, commit & push to GitHub, and send an iMessage with links.
Ensure directories exist — create daily_dir, drafts_dir, pipeline_dir if they don't exist (mkdir -p)
Write daily brief to configured output.daily_dir (e.g., content/daily/YYYY-MM-DD.md)
Write LinkedIn draft to configured output.drafts_dir (e.g., content/drafts/YYYY-MM-DD-linkedin.md)
Write blog outline to configured output.drafts_dir (if generated)
Preserve pipeline state in configured output.pipeline_dir (signals.json, analysis.md, edit log)
File collision avoidance: If a file already exists at the target path, append -2, -3, etc. to the filename before the extension
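The collision rule can be sketched as a small helper (single-dot extensions like `.md` are assumed):

```shell
# Return the first non-existing variant of a path:
# file.md, then file-2.md, file-3.md, ...
next_free_path() {
  local path="$1"
  if [ ! -e "$path" ]; then echo "$path"; return; fi
  local stem="${path%.*}" ext="${path##*.}" n=2
  while [ -e "${stem}-${n}.${ext}" ]; do n=$((n + 1)); done
  echo "${stem}-${n}.${ext}"
}
```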
Commit and push to GitHub:
- `git add` the daily brief, drafts, and pipeline state files
- Commit message: `content: daily brief YYYY-MM-DD — {N} signals, angle: {angle_title}`
- `git push origin main`
- Get the remote URL with `git remote get-url origin` (convert SSH to HTTPS if needed)
- Build file links as `https://github.com/{owner}/{repo}/blob/main/{file_path}`

Send iMessage morning brief — the primary delivery. A rich, self-contained mini-brief that's worth reading on its own, with GitHub links at the bottom for deep-diving.
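Converting the remote to a browsable link can be sketched as follows (only the common `git@github.com:` SSH form is handled; other hosts would need their own cases):

```shell
# Normalize a git remote URL to an https://github.com/{owner}/{repo} base.
github_base_url() {
  local remote="${1%.git}"
  case "$remote" in
    git@github.com:*) echo "https://github.com/${remote#git@github.com:}" ;;
    *)                echo "$remote" ;;
  esac
}
# File links are then "$base/blob/main/$file_path".
```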
How to send: Use the reply MCP tool from the iMessage plugin (available when running with --channels plugin:imessage@claude-plugins-official).
Finding the chat_id: The self-chat uses the any;-; prefix format. Query it:
`sqlite3 ~/Library/Messages/chat.db "SELECT guid FROM chat WHERE chat_identifier LIKE '%<phone_number>%' LIMIT 1"`
Read the phone number from notifications.imessage.recipient in config. The typical self-chat GUID is any;-;+<number>.
iMessage format — The Morning Signal:
```
-- Morning Brief -- {date} --

THE STORY
{3-5 sentences weaving the top signals into a narrative. What's happening
in the world right now? What's the tension or thread that connects them?
Ground it in specifics — names, numbers, quotes. If captures exist, weave
the user's personal context in naturally.}

THIS WEEK IN {CITY}
{Only include if events exist in next 7 days}
- {Day} {Time} -- {Event name} @ {Venue}
- {Day} {Time} -- {Event name} @ {Venue}

MUST-READS
1. {Title} ({source, score if HN})
   {Why it matters to YOU — one sentence, personal, specific}
   {URL}
2. {Title}
   {Why it matters}
   {URL}
3. {Title}
   {Why it matters}
   {URL}

TOP ANGLE
"{Angle title}" — {score}/10
{Two sentences: the hook that makes you want to write this. Pull the
most provocative or vulnerable thread.}

--
Full brief: {GitHub URL to daily brief}
LinkedIn drafts: {GitHub URL to drafts directory}
--
Heart of Gold -- {N} signals -- {N} angles
```
Rules for the iMessage:
Sending via reply tool:
`reply(chat_id: "any;-;+<number>", text: "<the mini-brief>")`
- `mkdir -p` for missing directories
- `-2` suffixed files on filename collision