From agent-almanac
Implements two-clock du-dum architecture for autonomous agents: fast clock accumulates cheap observations into digests; slow clock acts only on pending items. Optimizes LLM/API costs in frequent-observe, rare-act loops.
`npx claudepluginhub pjt222/agent-almanac`
Separate observation from action using two clocks running at different frequencies. The fast clock (analysis) collects data cheaply and writes a compact digest. The slow clock (action) reads the digest and decides whether to act. If the digest says nothing is pending, the action clock exits immediately -- zero cost for idle cycles.
The name comes from the heartbeat rhythm: du-dum, du-dum. The first beat (du) observes; the second beat (dum) acts. Most of the time, only the first beat fires.
Separate all work into observation (cheap, frequent) and action (expensive, rare).
| Clock | Cost profile | Frequency | Example |
|---|---|---|---|
| Fast (analysis) | Cheap: API reads, file parsing, no LLM | 4-6x/day | Scan GitHub notifications, parse RSS, read logs |
| Slow (action) | Expensive: LLM inference, write operations | 1x/day | Compose response, update dashboard, send alerts |
Expected: A clean two-way split where every operation is assigned to exactly one clock. The fast clock has no LLM calls; the slow clock has no data gathering.
On failure: If an operation needs both reading and LLM inference (e.g., "summarize new issues"), split it: the fast clock collects the raw issues into the digest; the slow clock summarizes them. The digest is the boundary.
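The "summarize new issues" split can be sketched on the fast side as follows. This is a minimal sketch: the hardcoded issue list is a placeholder for a real API read (e.g. a GitHub notifications fetch), and the file name is illustrative.

```shell
# Fast clock half of "summarize new issues": collect raw issues into the
# digest with no LLM involved. The issue list is a stand-in for an API read.
DIGEST="digest.md"

collect_issues() {
  printf '%s\n' \
    "Issue #12: build fails on main" \
    "Issue #15: docs typo in README"
}

{
  echo "## Pending"
  collect_issues | sed 's/^/- /'
} > "$DIGEST"
# The slow clock later reads $DIGEST and passes the Pending section to
# the LLM for summarization -- the digest file is the boundary.
```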
The digest is the low-bandwidth message that bridges the two clocks. It must be compact, human-readable, and machine-parseable.
The action clock decides from a single signal (e.g., `pending: none` or an empty Pending section). Example digest structure:
```
# Digest — 2026-03-22T06:30:00Z

## Pending
- PR #42 needs review response (opened 2h ago, author requested feedback)
- Issue #99 has new comment from maintainer (action: reply)

## Status
- Last analyzed: 2026-03-22T06:30:00Z
- Sources checked: github-notifications, rss-feed, error-log
- Items scanned: 14
- Items pending: 2
```
When nothing is pending:
```
# Digest — 2026-03-22T06:30:00Z

## Pending
(none)

## Status
- Last analyzed: 2026-03-22T06:30:00Z
- Sources checked: github-notifications, rss-feed, error-log
- Items scanned: 8
- Items pending: 0
```
Expected: A digest template with clear pending/empty states. The action clock can determine whether to proceed by checking a single field or section.
On failure: If the digest grows too large (>50 lines), the fast clock is including too much raw data. Move details to a separate data file and keep the digest as a summary with pointers.
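The "summary with pointers" fix can be sketched as below. File names and counts are illustrative assumptions: raw items go to a details file, and the digest records only counts plus a pointer.

```shell
# Keep the digest small: raw data lives in a details file; the digest
# stores counts and a pointer to it. Paths are illustrative.
DIGEST="digest.md"
DETAILS="digest-details.txt"

printf '%s\n' "raw item 1" "raw item 2" > "$DETAILS"
scanned=14
pending=$(grep -c '' "$DETAILS")   # one pending item per line

{
  echo "# Digest — $(date -u +%FT%TZ)"
  echo "## Status"
  echo "- Items scanned: $scanned"
  echo "- Items pending: $pending"
  echo "- Details: $DETAILS"       # pointer, not the raw data
} > "$DIGEST"
```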
Build the observation scripts that run on the fast schedule.
```
# Pseudocode: analyze-notifications.sh
notifications = fetch_notifications()
filtered = filter_actionable(notifications)
entries = format_as_digest_entries(filtered)
atomic_write(digest_path, entries)
log("analyzed {count} notifications, {pending} actionable")
```
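The atomic_write step deserves a concrete shape. A minimal sketch, assuming the digest lives on a single filesystem (paths illustrative): write to a temp file in the same directory, then rename into place.

```shell
# atomic_write in bash: write to a temp file in the same directory,
# then mv into place, so the slow clock never observes a half-written
# digest. Paths are illustrative.
DIGEST="digest.md"
tmp="$(mktemp "${DIGEST}.XXXXXX")"   # same dir => same filesystem

{
  echo "## Pending"
  echo "- PR #42 needs review response"
} > "$tmp"

mv "$tmp" "$DIGEST"   # rename is atomic on a single filesystem
```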
Schedule example (cron):
```
# Fast clock: analyze every 4 hours
30 */4 * * * /path/to/analyze-notifications.sh >> /var/log/analysis.log 2>&1
0 6 * * * /path/to/analyze-pr-status.sh >> /var/log/analysis.log 2>&1
```
Expected: One or more analysis scripts, each producing or updating the digest file. Scripts run independently -- if one fails, the others still update their sections.
On failure: If a data source is temporarily unavailable, the script should log the error and leave the previous digest entries intact. Do not clear the digest on source failure -- stale data is better than missing data for the action clock.
Build the action script that reads the digest and decides whether to act.
```
# Pseudocode: heartbeat.sh (the slow clock)
digest = read_file(digest_path)
if digest.pending is empty:
    log("heartbeat: nothing pending, exiting")
    exit(0)

# Only reaches here if work exists
response = call_llm(digest.pending, system_prompt)
execute_actions(response)
archive_digest(digest_path)
log("heartbeat: processed {count} items, cost: {tokens} tokens")
```
Schedule example (cron):
```
# Slow clock: act once per day at 7am
0 7 * * * /path/to/heartbeat.sh >> /var/log/heartbeat.log 2>&1
```
Expected: The action script exits in under 1 second on idle cycles (just a file read and empty check). On active cycles, it processes pending items and clears the digest.
On failure: If the LLM call fails, do not clear the digest. The pending items remain for the next action cycle. Consider implementing a retry counter in the digest to avoid infinite retries on permanently failing items.
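One possible shape for that retry counter, as a sketch: the `[retries=N]` tag, `bump_retries` helper, and dead-letter file below are assumptions for illustration, not part of the digest format defined above.

```shell
# Hypothetical retry counter: each pending line carries a [retries=N]
# tag. After a failed LLM call, bump_retries increments every counter
# and moves items past MAX_RETRIES into a dead-letter file.
DIGEST="digest.md"
DEAD="digest-dead.md"
MAX_RETRIES=3

printf '%s\n' \
  "- Issue #99 reply [retries=3]" \
  "- PR #42 review [retries=1]" > "$DIGEST"

bump_retries() {
  local tmp; tmp="$(mktemp)"
  while IFS= read -r line; do
    n="${line##*retries=}"; n="${n%]*}"        # extract current count
    base="${line% \[retries=*}"                # line without the tag
    next=$((n + 1))
    if [ "$next" -gt "$MAX_RETRIES" ]; then
      echo "$base [retries=$next]" >> "$DEAD"  # give up on this item
    else
      echo "$base [retries=$next]" >> "$tmp"   # keep for next cycle
    fi
  done < "$DIGEST"
  mv "$tmp" "$DIGEST"
}

bump_retries
```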
The cost savings come from idle detection -- the action clock must reliably distinguish "nothing to do" from "something to do" with minimal overhead.
```shell
# Minimal idle check
if grep -q "^(none)$" "$DIGEST_PATH" || grep -q "pending: 0" "$DIGEST_PATH"; then
  echo "$(date -u +%FT%TZ) heartbeat: idle" >> "$LOG_PATH"
  exit 0
fi
```
Expected: The idle path is a single file read followed by a string match. No network calls, no process spawning beyond the script itself.
On failure: If the idle check is unreliable (false positives causing missed work, or false negatives causing unnecessary LLM calls), simplify the digest format. A single boolean field (has_pending: true/false) at the top of the file is the most reliable approach.
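A sketch of that single-boolean variant; the `has_pending` field name and file paths are assumptions, not part of the digest format above.

```shell
# Single-boolean idle check: the fast clock writes has_pending as the
# first line of the digest; the slow clock reads only that line.
DIGEST="digest.md"
LOG="heartbeat.log"

printf 'has_pending: false\n## Pending\n(none)\n' > "$DIGEST"  # fast clock

if head -n 1 "$DIGEST" | grep -q '^has_pending: true$'; then
  state="active"   # fall through to the LLM call
else
  state="idle"     # the real heartbeat script would exit 0 here
fi
echo "$(date -u +%FT%TZ) heartbeat: $state" >> "$LOG"
```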
Calculate the expected cost to confirm the two-clock architecture delivers savings.
Daily cost model:

- fast_runs = 24 / fast_interval_hours
- analysis cost = fast_runs * cost_per_analysis_run (should be ~$0 if no LLM)
- action cost = active_days_fraction * cost_per_action_run
- idle cost = (1 - active_days_fraction) * cost_per_idle_check (should be ~$0)

Example cost comparison:
| Architecture | Daily cost (active) | Daily cost (idle) | Monthly cost (80% idle) |
|---|---|---|---|
| Single loop (LLM every 30min) | $13.74 | $13.74 | ~$400 |
| Du-dum (6 analyses + 1 action) | $0.30 | $0.00 | ~$6 |
Expected: A cost model showing the du-dum architecture is cheaper than the single-loop baseline by at least 10x on idle days.
On failure: If the cost model does not show significant savings, one of these is likely true: (a) the fast clock is too frequent, (b) the fast clock includes hidden LLM calls, or (c) the system is rarely idle. Du-dum benefits systems with high idle ratios. If the system is always active, a simpler polling approach may be more appropriate.
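The comparison above can be sketched numerically. The per-call prices below are assumptions chosen to roughly reproduce the single-loop figure, not measured costs; analysis runs and idle checks are modeled as exactly $0.

```shell
# Cost model sketch with assumed prices: ~$0.286 per LLM call for the
# single loop (48 calls/day at 30-minute intervals), $0.30 per du-dum
# action run at 80% idle. All numbers are illustrative.
awk 'BEGIN {
  llm_call = 0.286; calls_per_day = 48       # single loop
  action   = 0.30;  active_fraction = 0.20   # du-dum, 80% idle
  single = llm_call * calls_per_day * 30     # monthly, ~= 411.84
  dudum  = action * active_fraction * 30     # monthly, ~= 1.80
  printf "single-loop monthly: $%.2f\n", single
  printf "du-dum monthly: $%.2f\n", dudum
  printf "savings: %.0fx\n", single / dudum
}' > cost.txt
cat cost.txt
```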
Related skills:

- manage-token-budget -- cost control framework that du-dum makes practical; du-dum is the architectural pattern, token budget is the accounting layer
- circuit-breaker-pattern -- handles the failure case (tools breaking); du-dum handles the normal case (nothing to do). Use together: du-dum for idle detection, circuit-breaker for failure recovery
- observe -- observation methodology for the fast clock; du-dum structures when and how observations become actionable via the digest
- forage-resources -- strategic exploration layer; du-dum is the execution rhythm that forage-resources operates within
- coordinate-reasoning -- stigmergic signaling patterns; the digest file is a form of stigmergy (indirect coordination through environmental artifacts)