Daily note lifecycle - briefing, task recommendations, progress sync, and work summary. SSoT for daily note structure.
From aops-cowork. Install: `npx claudepluginhub nicsuzor/academicops --plugin aops-cowork`

This skill is limited to using the following tools:

- instructions/briefing-and-triage.md
- instructions/focus-and-recommendations.md
- instructions/mobile-capture-triage.md
- instructions/progress-sync.md
- instructions/reflect.md
- instructions/sync-workflow.md
- instructions/work-summary.md
- references/note-template.md
Compose and maintain a daily note that helps the user orient, prioritise, and track their day.
Location: $ACA_DATA/daily/YYYYMMDD-daily.md
The daily note answers three questions for a knowledge worker returning to their desk:
The note is a planning document, not an execution trigger. After the note is updated, output "Daily planning complete. Use /pull to start work." and HALT. User stating a priority ≠ authorization to execute it.
A good daily note is evaluated qualitatively, not by structural compliance:
Every /daily invocation updates the note in place. The skill is designed to be run repeatedly throughout the day. There are no separate modes.
```
/daily        # Update the note (create if missing)
/daily sync   # Alias for muscle memory
```
The daily note has five sections, each serving a distinct purpose. The agent composes these sections using its judgment about what matters most in context (P#116). The structure below defines WHAT each section achieves, not a rigid template.
The first thing the user sees. Combines priority overview and curated task recommendations in one place.
Contains:
Quality guidance: Weight recommendations by significance to the person, not just priority field values. An overdue email reply to a colleague is more important than a P0 framework task that has been P0 for months. A paper deadline matters more than a CI fix. The agent should understand the user's world — their research commitments, their students, their external obligations — and recommend accordingly.
User priorities subsection: After presenting recommendations, ask the user what sounds right for today. Record their response in a ### My priorities subsection. This subsection is never overwritten on subsequent runs.
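The never-overwrite rule for the user's subsection can be sketched as follows; the `### My priorities` heading comes from this document, but the section-splitting regex is a simplification for illustration:

```python
import re

def preserve_my_priorities(old_note: str, new_focus: str) -> str:
    """Append the user's untouched 'My priorities' subsection, if present."""
    # Capture from the heading up to the next heading (or end of note).
    m = re.search(r"(^### My priorities\n.*?)(?=^#{1,3} |\Z)",
                  old_note, re.M | re.S)
    return new_focus + ("\n" + m.group(1) if m else "")
```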
See [[instructions/focus-and-recommendations]] for task data loading and recommendation reasoning.
Email triage and mobile captures, presented as a briefing the user reads in the note itself — not by opening individual emails.
Contains:
Mobile captures come from notes/mobile-captures/. Quality guidance: FYI items involving real people (students, collaborators, funders) get full context — who said what, what's being asked, what the deadline is. Automated notifications, newsletters, and low-signal items get a single line or are omitted entirely. The agent triages by significance, not by recency.
Bidirectional contract: If the user adds notes or annotations below any FYI item, those are preserved on subsequent runs. The agent regenerates its content above user annotations but never deletes below them.
See [[instructions/briefing-and-triage]] for email triage, sent-mail cross-referencing, and task creation.
A 2-4 sentence narrative synthesis of the day's work, followed by a structured Session Flow subsection. This is the editorial section — the agent's judgment about what the day's work means, not a log of what happened.
Quality guidance: Lead with the most significant work, not the most recent. Research progress, paper milestones, and external commitments matter more than framework PRs. If 8 PRs were merged on internal tooling but no progress was made on the paper deadline, the story should note the tooling work briefly and highlight the gap. Mention specific PR numbers and task IDs for traceability, but embed them in narrative, not tables.
Distinguish human work from agent output. In a conductor workflow, impressive autonomous output (an agent producing 6 tasks over 4 hours) is not the same as the human doing deep work. The story should reflect what the human actually engaged with — use prompt count as the primary signal. An autonomous session that produced a lot is worth a sentence ("dispatched X, which produced Y"); an interactive session where the human debugged something for 5 minutes with 3 prompts is where the narrative focus should be. The reader wants to know: what did I spend my attention on today?
If this is a repeat run during the day, emphasise what changed since the last update. Note dropped threads (work started but not finished) with gentle framing.
The Session Flow subsection reconstructs the day's attention flow from session summary JSONs in $AOPS_SESSIONS/summaries/. This answers: what did I actually spend my attention on, where did I get pulled away, and what's still hanging?
Structure:
```markdown
### Session Flow

**Where your attention went** (interactive sessions, 2+ user prompts):

1. **[Topic]** ([time], [N prompts], [duration]): [What the user was doing — use their actual prompt text, not agent-generated summaries. What was the outcome.]
2. ...

**Dispatched work** (1-prompt sessions — fire and forget):

- **[Topic]** ([time]): [What was dispatched and what came back. Note if the agent ran autonomously for a long time.]

**Autonomous background runs** (0-prompt sessions):

- **[Session ID]** ([duration]): [What the agent produced. Flag this as zero human attention cost.]

**Threads left hanging**:

- [Topic not completed, with context on why]

**The day in a line**: [One-sentence editorial summary]
```
The primary signal for attention cost is user prompt count, not session duration or output complexity. Extract timeline_events where type == "user_prompt" for each session. See [[instructions/progress-sync]] Step 4.2 for the engagement classification table.
A 337-minute autonomous session with 0 prompts costs the human nothing — it's fire-and-forget. A 5-minute session with 4 prompts is where they were actively thinking. Lead with where the human's attention actually went, not where the most output was produced.
Use the user's actual prompts as ground truth: The description field in user prompt timeline events tells you what the human was trying to do. Agent-generated summary fields are abstractions of abstractions. When writing Session Flow entries, reference the prompt content (e.g., "debugged PKB search for [[specific topic]]" not "PKB lookup").
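Under the schema described above (a `timeline_events` list with `type` and `description` fields; exact field names are taken from this document's description, not a verified API), the prompt extraction and engagement classification can be sketched as:

```python
def user_prompts(session: dict) -> list[str]:
    """Return the description of each user prompt in a session summary."""
    return [
        ev.get("description", "")
        for ev in session.get("timeline_events", [])
        if ev.get("type") == "user_prompt"
    ]

def attention_category(prompt_count: int) -> str:
    # Engagement classes used by Session Flow: interactive (2+ prompts),
    # dispatched (1 prompt), autonomous background (0 prompts).
    if prompt_count >= 2:
        return "interactive"
    return "dispatched" if prompt_count == 1 else "autonomous"
```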
Categories of attention (derived from prompt count): interactive deep engagement (2+ user prompts), dispatched work (1 prompt), and autonomous background runs (0 prompts).
Work type ordering: Within each attention category, list research/academic sessions before infrastructure sessions. If the day's only deep-engagement session was a research analysis, it should be the first item under "Where your attention went" — not buried after dispatched infrastructure work. Research work that produced no GitHub artifacts but high human engagement is the headline, not the footnote. See [[instructions/progress-sync]] for work type classification.
What counts as a distraction vs. conductor work: A quick check on an unrelated project is conductor work if it's a deliberate scan (1 prompt, moved on). It's a distraction if it pulls the user into reactive engagement (2+ prompts on something unplanned), or if a "quick check" turns into a 2-hour tangent. Judge by prompt count and what happened after — did the user return to their main thread, or did they drift?
Data source: Read session summary JSONs for the current day. Filter out auto-commit sessions (commit-changed in filename, or filename starts with sessions-) and polecat workers (project field matches a short hex hash, e.g. ^[a-f0-9]{7,8}$). Use timeline_events where type == "user_prompt" for prompt count and content (the primary attention signal), summary for agent outcomes, token_metrics.efficiency.session_duration_minutes for duration context, and project for context-switch detection.
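The filtering rules above (auto-commit filenames, polecat-worker hex hashes) can be sketched as follows; the flat directory-of-JSONs layout is an assumption:

```python
import json
import re
from pathlib import Path

def load_day_sessions(summaries_dir: str) -> list[dict]:
    """Load session summaries, dropping auto-commit and polecat-worker runs."""
    sessions = []
    for path in Path(summaries_dir).glob("*.json"):
        name = path.name
        if "commit-changed" in name or name.startswith("sessions-"):
            continue  # auto-commit sessions
        session = json.loads(path.read_text())
        project = str(session.get("project", ""))
        if re.fullmatch(r"[a-f0-9]{7,8}", project):
            continue  # polecat workers use short hex-hash project names
        sessions.append(session)
    return sessions
```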
See [[instructions/work-summary]] for story synthesis guidance.
A reference section for traceability — what sessions ran, what PRs merged, what tasks were completed. This section exists for the record, not for the user's morning read.
Contains (when data is available):
Quality guidance: This section should be scannable but not prominent. It's reference material. If GitHub CLI is unavailable or no sessions ran, the section should be minimal ("No sessions today") rather than filled with empty tables and "n/a" markers.
Session log entries must be meaningful the next morning. For sessions with user prompts, use the first user prompt's description (truncated) as the session description, not the agent-generated summary. For 0-prompt (autonomous) sessions, base the description on what the agent produced — e.g., autonomous: summarized AXIOMS.md for daily skill update. "Pulled task-7275a7b8" is useless — what was the task about? "Reviewed swarm-supervisor skill update" — what was the update? Include enough context that someone reading the log tomorrow can reconstruct what happened without opening the session JSON. Include the prompt count (e.g., "2p" or "0p") so the reader can distinguish interactive work from autonomous runs at a glance.
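A hypothetical sketch of a session-log line following the guidance above; field names mirror those described earlier, and the truncation width is an arbitrary choice:

```python
def log_entry(session: dict, width: int = 60) -> str:
    """One scannable log line: description plus prompt count ('2p', '0p')."""
    prompts = [
        ev.get("description", "")
        for ev in session.get("timeline_events", [])
        if ev.get("type") == "user_prompt"
    ]
    if prompts:
        desc = prompts[0][:width]  # first user prompt is ground truth
    else:
        desc = "autonomous: " + session.get("summary", "")[:width]
    return f"{desc} ({len(prompts)}p)"
```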
Accomplishments should be linked to their corresponding tasks. Every [x] item should reference a task ID where possible.
See [[instructions/progress-sync]] for session loading, PR querying, and task matching.
Items carrying forward from yesterday (verified against live task state — never copy blindly from yesterday's note) and end-of-day abandoned todos.
Only present when non-empty. If there's nothing to carry over, omit the section entirely rather than showing empty placeholders.
The daily note is a shared document between the agent and the user. The ownership contract:
| Content type | Rule |
|---|---|
| Machine-generated sections (Work Log tables, PR lists, priority bars) | Fully replaced on each run. |
| Mixed sections (Focus recommendations, FYI items) | Agent regenerates its content but preserves anything the user has written. User content is identified by position (below agent content). |
| User sections (My priorities, any section the user adds) | Never touched by the agent. |
| User annotations anywhere | If the user adds a note, comment, or annotation to any section, the agent preserves it. |
What happens when the user edits the note: The agent should read the note before updating and notice user changes. If the user has crossed out a recommendation, added context to an FYI item, or written priorities, those are signals the agent should respect — not overwrite.
Template markers: Do not leave visible template artifacts (<!-- user notes -->, placeholder text like "(End of day carryover)", empty tables). If a section has no content, either omit it or write a brief natural-language empty state ("No sessions today"). The note should read as a composed document, not a filled-in form.
Formatting conventions:

- `---` as section dividers (only in frontmatter)
- `[[wikilink]]` syntax
- Task references as `[ns-abc] Task title`

The skill gathers information from multiple sources and composes the note. The order below is a typical sequence, not a rigid pipeline — the agent may adjust based on what's available:
Run `/email` to triage the inbox (creates tasks with full context; returns FYI items for the daily note).

Detailed procedures for each step are in the instructions/ subdirectory. These procedures describe best practices and edge cases — they are guidance for the agent, not scripts to execute mechanically (P#116).
When a data source is unavailable, skip gracefully and continue. Note the gap in natural language ("Email unavailable today"), not with error codes or empty table structures. The note should always be useful even when incomplete.
- `/bundle`: The daily note surfaces information; the bundle adds editorial judgment for decision-making (coversheets, email drafts, annotation targets). See [[specs/daily-briefing-bundle.md]].
- `/pull`: Starts execution. The daily note plans; `/pull` acts.

See [[references/note-template]] for the structural template.