Reads incoming messages (Slack, Teams, Discord) and custom intake sources addressed to the current user and auto-converts them into categorized tasks (hearing-needed, self-action, or delegate). For ambiguous Slack messages in an explicitly interactive session, can optionally send a user-approved clarification reply in-thread instead of creating a hearing-task pair. Supports per-user custom source configuration via ~/.waggle/intake-prompt.md or Global Instructions, and per-user task-creation rules (tag naming, priority defaults, etc.) via ~/.waggle/task-creation-prompt.md or Global Instructions. Use this skill whenever the user wants to process incoming messages, check their inbox, or convert messages into tasks — even if they don't say "intake". Triggers on: "message intake", "intake", "process messages", "convert messages to tasks", "check slack", "check teams", "inbox processing", "clarify slack message".
```
npx claudepluginhub kazukinagata/waggle --plugin waggle
```

This skill uses the workspace's default tool permissions.
Reads incoming messages from messaging tools addressed to the current user and auto-converts them into Notion tasks.
Primary mode: creates tasks from messages.
Opt-in mode: can send Slack thread replies to ask for clarification on ambiguous messages. This mode is strictly user-approved and is disabled by default in any non-interactive run (see WAGGLE_EXECUTION_MODE in Step 2.3).
To run automatically every morning via Claude Desktop, create a Scheduled Task with the prompt "Run the ingesting-messages skill".

Invoke the `bootstrap-session` skill to establish the active provider and current user.
Skip if active_provider and current_user are already set in this conversation.
Determine how far back to fetch messages:

- If the user specified a period, set `lookback_period` to that value.
- Otherwise, default to `lookback_period = "24 hours"`.

Inspect available MCP tools and determine which messaging tool to use:
| Tool group | Service |
|---|---|
| `slack-*` tools exist | Slack |
| `teams-*` or `ms-teams-*` tools exist | Microsoft Teams |
| `discord-*` tools exist | Discord |
The Intake Log is a Notion database (Intake Log) that tracks which messages have already been processed.
Read `intakeLogDatabaseId` from the Waggle Config. If `intakeLogDatabaseId` is missing or the database does not exist, create it via `notion-create-database` with the following schema:

| Property | Notion Type | Description |
|---|---|---|
| Message ID | title | Message unique ID (e.g. Slack: `channel_id:ts`) |
| Tool Name | select | `slack` / `teams` / `discord` |
| Processed At | date | Processing timestamp |
After creation, save the database ID back to the config as `intakeLogDatabaseId`.

Build `processed_message_ids`: query `intakeLogDatabaseId` using the provider's "Querying Any Notion Database" flow and collect all existing Message ID values.

Active Threads enables continuous monitoring of threads the user has participated in, even after the original messages fall outside the lookback period.
Read `activeThreadsDatabaseId` from the Waggle Config. If `activeThreadsDatabaseId` is missing or the database does not exist, create it via `notion-create-database` with the following schema:

| Property | Notion Type | Description |
|---|---|---|
| Thread ID | title | `channel_id:thread_ts` |
| Channel ID | rich_text | Slack channel ID |
| Thread TS | rich_text | `thread_ts` value |
| Last Checked | date | Timestamp of last check for new replies |
| Status | select | active / closed |
| Clarification Sent At | date | When a Slack clarification reply was last sent to this thread (optional, used by Step 2.3 idempotency) |
After creation, save the database ID back to the config as `activeThreadsDatabaseId`.

Schema migration (for databases created before the Clarification Sent At field was introduced):

- Fetch the database schema via `notion-fetch`. If the `Clarification Sent At` property is missing, add it via `notion-update-data-source` with `ADD COLUMN "Clarification Sent At" DATE`.
- Existing records simply return `null` for this field; no data migration is needed.

Build `active_threads`: query `activeThreadsDatabaseId` with filter `Status = active` and collect all records.

Load both custom intake instructions and custom task-creation instructions via the shared loader:
- Invoke the `loading-custom-instructions` skill with key `intake` to populate `custom_intake_instructions`. If no custom intake source is configured, the variable is null. This governs which additional sources are scanned in Step 1.5.
- Invoke the `loading-custom-instructions` skill with key `task-creation` to populate `custom_task_creation_instructions`. If no custom task-creation rules are configured, the variable is null. This is applied in Step 3 when building per-category task fields (see task-creation-templates.md).

Both files (`~/.waggle/intake-prompt.md` and `~/.waggle/task-creation-prompt.md`) may exist independently; they serve different purposes. On Cowork, both are loaded from their respective `<waggle-custom-intake>` / `<waggle-custom-task-creation>` XML tags in Global Instructions.
Use the detected Messaging MCP to retrieve every message from the past {lookback_period} that is directed at, or contextually relevant to, `current_user`, via a multi-query strategy:

- Query 1 (mentions): search for `<@USER_ID>` (the current_user's Slack user ID). Exclude own messages. Search scope must include both public and private channels the user is a member of. If the MCP tool has a channel-type filter, ensure `private` / `mpim` / `im` types are included alongside `public_channel`.
- Query 2 (DMs): messages addressed `to:me`.
- Query 3 (thread participation): via a `from:me` search (past {lookback_period}), collect all `thread_ts` values of threads `current_user` has participated in (started or replied), even if no @-mention is present.

For each thread in `active_threads`, check for new replies that the lookback-period queries may have missed:
- Call `slack_read_thread` with `channel_id` and `message_ts` set to the thread's Thread TS value. Set `oldest` to the thread's Last Checked timestamp to retrieve only new replies since the last check.
- Discard replies authored by `current_user` (own messages).
- Discard replies whose unique ID (`channel_id:ts`) is already in `processed_message_ids`.
- Update the thread's Last Checked to the current timestamp (even if no new messages were found).

This ensures threads discovered in previous ingesting runs continue to be monitored regardless of the lookback period. Without this step, threads whose original messages and user replies have both fallen outside the lookback window would become invisible to Query 3.
Filter the merged pool: keep only messages whose `id ∉ processed_message_ids`.

Step 1c applies to bot messages, i.e. messages with a `bot_id` or a bot-related `subtype` such as `bot_message`:
- If the bot message's `text` is empty / whitespace-only / only newlines, apply Step 1c-1 first (Block Kit body refetch) before the KEEP/DISCARD bullets below. The KEEP check needs a real body to scan for `<@current_user>`; skipping 1c-1 would let the message silently fall through to DISCARD.
- KEEP if the body @-mentions `current_user` — from this point on, treat it identically to a human message. It flows through classification (Step 2), enrichment (Step 2.5), and task creation (Step 3) with no further bot-specific filtering. The bot-sender check in Step 2.3 Prerequisite #4 only gates sending Slack clarification replies (because bots do not read replies); it does NOT exclude the message from intake — bot-origin Category A messages still produce a [Hearing] task via the fall-through path.
- DISCARD otherwise.

Step 1c-1 background: Slack's `slack_search_*` MCP does not render blocks to plain text. Bot messages whose content lives entirely in Block Kit (meeting notifiers like MTG Pipeline Bot, quiz bots like Colla, etc.) therefore come back with an empty or whitespace-only `text` field, even though Slack's search index resolved an `<@current_user>` mention inside the blocks and matched the query. Left untreated, Step 1c's KEEP-on-@-mention rule cannot fire (no body to scan) and the message is silently dropped.
Procedure:
Trigger: the bot message's text is empty / whitespace-only / only newlines. (Bot messages with non-empty plain text — e.g. attendance confirmers, system-notice bots — skip 1c-1 entirely and go straight to Step 1c's KEEP/DISCARD.)
Refetch: call slack_read_channel with:
- `channel_id` = the message's channel
- `oldest` = `str(float(ts) - 0.000001)` (one microsecond before the target)
- `latest` = `str(float(ts) + 0.000001)` (one microsecond after)
- `limit` = 1

Both `oldest` and `latest` are exclusive bounds in this MCP, so setting either equal to `ts` returns zero messages. The ±1 μs window is the tightest pinpoint that still includes the target and nothing else. The response expands blocks into a plain-text representation. Replace the message's `text` with that rendered body.
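The ±1 μs bounds can also be computed with `Decimal` instead of the `str(float(ts) ± 0.000001)` expressions above. This is a minimal sketch, not part of the skill: full-precision Slack timestamps carry 16 significant digits, which sits at the edge of `float` accuracy, while decimal arithmetic is exact. The function name is illustrative.

```python
from decimal import Decimal

EPS = Decimal("0.000001")  # one microsecond

def refetch_window(ts: str) -> tuple[str, str]:
    """Return (oldest, latest) bounds that pinpoint a single Slack message ts."""
    t = Decimal(ts)
    return str(t - EPS), str(t + EPS)
```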
Fallback on empty / error: if the response contains no messages, or the call errors (API failure, rate limit, DM permission issue, message deleted since the search, etc.), record Block Kit refetch failed for {channel_id}:{ts} under Step 5's ⚠️ Fallback events: section and DISCARD the message. Never KEEP on an unverified mention — the match in Slack's search index alone is not sufficient evidence that the current body still @-mentions the user.
Apply 1c: with the refetched body in hand, return to Step 1c's KEEP/DISCARD bullets and run the <@current_user> scan normally.
Non-bot messages are unaffected — skip this step for them.
Teams / Discord: if the equivalent search MCP exhibits similar Block Kit–style truncation, apply the same pattern with the platform's channel-read API. Otherwise skip.
Merge all query results and deduplicate by message unique ID (Slack: channel_id:ts).
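The merge-and-dedup step keyed on the message unique ID can be sketched as follows; the field names `channel_id` and `ts` follow the Slack ID scheme above, and the function name is illustrative:

```python
def dedupe_messages(messages: list[dict]) -> list[dict]:
    """Keep the first occurrence of each message, keyed by channel_id:ts."""
    seen: set[str] = set()
    out: list[dict] = []
    for m in messages:
        key = f"{m['channel_id']}:{m['ts']}"
        if key not in seen:
            seen.add(key)
            out.append(m)
    return out
```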
If the MCP does not support thread queries, skip the Active Threads check and append `(thread check: skipped — MCP does not support thread queries)` to the summary.

For each message in the deduplicated pool that originated from a thread (i.e., has a `thread_ts` or equivalent thread identifier), fetch the full thread to provide conversational context for classification and task creation.
For each unique thread_ts among the fetched messages:
- Call `slack_read_thread` with `channel_id` and `message_ts` set to the `thread_ts` value.
- Build a `thread_context` block for the message:
```
[Thread Context — {N} messages in #{channel_name}]
@{parent_author} (thread start): {parent_message_text}
@{reply_1_author}: {reply_1_text}
...
@{reply_N_author}: {reply_N_text}
---
[This message]
@{sender}: {message_text}
```
- For long threads, truncate older replies and insert `... ({K} earlier replies omitted)` where truncation occurs.
- Attach `thread_context` to the message object for use in Steps 2 and 3.

Teams / Discord: apply the same pattern using equivalent thread/reply-fetching APIs if available. If the platform's MCP does not support thread fetching, set `thread_context = null` and note: `(thread context: unavailable — platform MCP does not support thread reads)`.
Messages that are not part of a thread: set thread_context = null. No additional API calls.
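The thread-context template can be rendered with a helper along these lines. This is a sketch under assumptions: the `author`/`text` keys and the `max_replies` truncation parameter are illustrative, and the message count here treats {N} as parent plus replies.

```python
def build_thread_context(channel: str, parent: dict, replies: list[dict],
                         message: dict, max_replies: int = 10) -> str:
    """Render the [Thread Context] block described above (field names illustrative)."""
    shown = replies[-max_replies:]            # keep only the most recent replies
    omitted = len(replies) - len(shown)
    lines = [f"[Thread Context — {1 + len(replies)} messages in #{channel}]",
             f"@{parent['author']} (thread start): {parent['text']}"]
    if omitted > 0:
        lines.append(f"... ({omitted} earlier replies omitted)")
    lines += [f"@{r['author']}: {r['text']}" for r in shown]
    lines += ["---", "[This message]", f"@{message['author']}: {message['text']}"]
    return "\n".join(lines)
```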
If custom_intake_instructions is null, skip this step.
Follow the instructions in custom_intake_instructions to fetch items from each configured custom source:
- If a source's required tools are not available, skip it and note: "Custom source '{source_name}' skipped — required tools not available."
- Use `{source_name}:{unique_id}` as the message unique ID for dedup against the Intake Log.

Many custom sources (notably GOps imports) produce items whose description is effectively a stub — e.g. "GOpsタスク (タスクID: 4548). 見積前" ("GOps task (task ID: 4548). Awaiting estimate"). Stub items create low-quality waggle tasks because the orchestrating LLM cannot build meaningful Acceptance Criteria or an Execution Plan from 20 characters of text.
For each item retrieved from a custom source, first detect whether it is a stub using the deterministic detector:
```bash
echo '<item_json>' > /tmp/item.json
bash "${CLAUDE_SKILL_DIR}/scripts/detect-stub-import.sh" /tmp/item.json
```
The output JSON has this shape:
```json
{
  "is_stub": true,
  "stub_reason": "Short description with task ID reference and only status keyword",
  "source_id": "4548",
  "description_length": 26
}
```
If is_stub is false, proceed with the item as-is. If is_stub is true, attempt enrichment:
Fetch the source page body. For Notion-based sources (like GOps), call notion-fetch with the source page ID or URL. For other sources, follow the fetch instructions in custom_intake_instructions.
Fetch discussion comments. For Notion, call notion-get-comments on the same page ID. The comments often contain the real requirements — the specification discussion, approval decisions, and follow-up context that did not make it into the page body.
Transfer fields semantically (LLM judgment). The LLM reads the fetched content and maps it to the waggle task fields:
- Page body → Description (preserve useful headings, strip navigation)
- Checklist items → Acceptance Criteria if they are verifiable; otherwise treat as context
- Discussion comments → Context, with a `[From {source_name} discussion]` header so the executor knows their origin
- Assignee mentions → resolve via the `looking-up-members` skill and set as the waggle Assignee
- Source priority → Priority when the source uses a comparable scale; otherwise leave unset

Fallback on fetch failure. If the fetch fails (page deleted, permission denied, rate-limited), do not block the ingest. Proceed with the stub item, but:
- Add `stub-import` to the waggle task's Tags
- Append to Context: "Imported as stub from {source_name}. Enrichment fetch failed — the source page may need manual review before this task can be executed."

This enrichment step is LLM-driven by design. The deterministic detector only decides whether enrichment is worth attempting; the actual Description / AC / Context construction is a semantic task that the orchestrating LLM performs directly. No separate agent is spawned.
For each message in the deduplicated pool, detect and attempt to read image attachments.
Check each message for a files array (Slack) or equivalent attachment field (Teams/Discord). Filter for image file types only:
- `mimetype` starts with `image/` (e.g., `image/png`, `image/jpeg`, `image/gif`)

If a message has no image attachments, set `attachment_info = null` and move on.
For each image attachment detected:
- Check the file's `permalink_public` field.
- If `permalink_public` exists and is non-null: attempt to read the image using WebFetch with the `permalink_public` URL.
  - If WebFetch succeeds and returns image content: describe the image in detail, focusing on text content, UI elements, error messages, diagrams, or any actionable information visible. Store the description. Set `read_status = "success"`.
  - If WebFetch fails (timeout, empty result, or HTML returned): set `read_status = "unread"` and `description = null`.
- If `permalink_public` is not available (file not publicly shared): set `read_status = "unread"` and `description = null`. Do not attempt WebFetch with `permalink` or `url_private`, as these require authentication that WebFetch cannot provide.

Limitation: most Slack files are not publicly shared, so `permalink_public` will often be absent. In practice, the majority of images will follow the "unread" path. This is by design — the user is prompted to review unread images via message permalinks in Step 2.7.
For each message that has at least one image with read_status = "unread" or "skipped", construct (or extract from the API response) the message permalink so it can be shown to the user:
- If the API response includes a `permalink` field, use it directly.
- Otherwise construct: `https://{workspace}.slack.com/archives/{channel_id}/p{ts_without_dot}`, where `ts_without_dot` is the message `ts` with the dot removed.

Attach `attachment_info` to each message object:
```yaml
attachment_info:
  has_images: true
  images:
    - filename: "{name}"
      mimetype: "{mimetype}"
      permalink: "{file_permalink}"
      description: "{AI description}"  # or null
      read_status: "success"           # or "unread" or "skipped"
  message_permalink: "{constructed_or_extracted_permalink}"  # or null
```
- `message_permalink`: only set when at least one image has `read_status = "unread"` or `"skipped"`. Set to null if all images were read successfully.
- Messages with no images: set `attachment_info = null`.
- Per-message cap: process at most 3 images per message. For the rest, set `remaining_count = total_images - 3` and note: "({remaining_count} additional images not processed)".
- Global cap: process at most 10 images per run. If exceeded, note: "Image processing capped at 10. {remaining} images skipped." When the cap is reached mid-message, include all remaining images from that message in `attachment_info.images` with `read_status = "skipped"` and `description = "(global cap reached)"`. For subsequent messages that have not been processed at all, set `attachment_info = null`.

Teams / Discord: apply the same detection and reading pattern using equivalent attachment/file APIs. If the platform's MCP does not support file metadata, set `attachment_info = null` and note: `(attachment processing: unavailable — platform MCP does not support file metadata)`.
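The fallback permalink construction above is mechanical enough to sketch directly; the function name is illustrative, and the URL shape is the one given in the text:

```python
def message_permalink(workspace: str, channel_id: str, ts: str) -> str:
    """Construct a Slack message permalink when the API response lacks one."""
    # ts_without_dot is the message ts with the dot removed, per the rule above
    return f"https://{workspace}.slack.com/archives/{channel_id}/p{ts.replace('.', '')}"
```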
Classify each message into A (Hearing Needed), B (Self-Action), or C (Delegate). When thread_context is available, use it alongside the message text to improve classification accuracy. When attachment_info is available and contains successfully read image descriptions, treat those descriptions as part of the message content for classification purposes. For example, a message saying "fix this" with an attached screenshot of a bug (successfully described) should be classified as Category B if the description provides enough context to act on. For the full classification heuristics, examples, and confirmation flow, follow references/classification-guide.md in this directory.
When classification is unclear, treat as Category A (safe default).
For each Category A message, before falling through to the [Hearing] task creation in Step 3, evaluate whether the ambiguity can be resolved with a short Slack reply to the sender. A well-placed clarification reply often unblocks a message faster than creating a separate hearing task, and it keeps the conversation in the natural Slack thread where the sender is already engaged.
This entire step is opt-in and gated. If any of the prerequisites below fail, fall through to the existing Category A flow in Step 3 — the clarification logic never runs silently.
1. The `slack_send_message` MCP tool is available (auto-detect). Teams / Discord clarification is not implemented yet.
2. The message has a `thread_ts` or is repliable (has both `channel_id` and `ts`). If the message is a DM without a thread, use `ts` as the thread root for the reply.
3. The current run is explicitly interactive — gated by the `WAGGLE_EXECUTION_MODE` environment variable:
   - `WAGGLE_EXECUTION_MODE=interactive` → clarification is allowed (still subject to user approval in Step 2.3d)
   - `WAGGLE_EXECUTION_MODE=scheduled` or unattended → never send Slack messages, always fall through to [Hearing] task creation
   - Users opt in by exporting `WAGGLE_EXECUTION_MODE=interactive` in their shell profile. Claude Desktop Scheduled Tasks and cron jobs must NOT set it. The setting-up-tasks skill documents this during initial setup.

   Why this is a hard gate: inference-based detection ("are we inside a Claude Desktop Scheduled Task?") is unreliable because a user can SSH into the same machine and manually run ingest. An explicit env var avoids that class of bypass entirely.
4. The sender is not a bot or system account. Skip clarification if the message has `bot_id`, `subtype: bot_message`, or if the sender's display name contains obvious bot indicators (bot, -bot, app, notification). Bots do not read Slack replies — sending them a clarification is spam. Fall through to a [Hearing] task instead.
5. No clarification has been sent to this thread in the last 24 hours. This is the idempotency check: query the Active Threads record for the matching `{channel_id}:{thread_root_ts}` and check its Clarification Sent At field. If the timestamp is within the past 24 hours, skip this message (do not re-send, do not create a duplicate hearing task — the user is already waiting on the previous clarification).
6. Concurrency lock. Before composing the reply, create a lock file at `~/.waggle/locks/clarification-{channel_id}-{thread_root_ts}.lock` with the current timestamp. If the lock file already exists and its mtime is within the last 60 seconds, another ingest run is racing on the same thread — skip this message and let the other run finish. Stale locks (mtime > 60 seconds) are treated as abandoned and overwritten. The lock is released after Step 2.3e completes (success or fallback). On Cowork, the filesystem is ephemeral and runs are single-tenant, so the lock is effectively a no-op there; it remains useful for CLI and Claude Desktop runs that may race.
If all six prerequisites pass, proceed to Step 2.3a. Otherwise fall through to Step 3 Category A flow.
Load ${CLAUDE_SKILL_DIR}/references/clarification-heuristics.md as reference context. For each eligible Category A message, reason through the three dimensions defined there (Action / Target / Completion condition) using semantic understanding, not regex. Produce a structured verdict per message:
```jsonc
{
  "missing": ["action", "target", "completion"],  // subset, may be empty
  "can_clarify": bool,
  "questions": [
    { "dimension": "action", "text": "<question in sender's language>" },
    ...
  ]
}
```
Apply the decision rule from the heuristics file:
- When the verdict has `can_clarify = true`, prepare that many questions: one per missing dimension.

For each message where `can_clarify = true`, compose the full reply in the sender's language. Determine the language from the message prose using the LLM's native multilingual understanding. Do not use char-class ratios; the LLM can reliably distinguish "Japanese prose with English file paths" from "English prose with a Japanese proper noun". If the prose is truly ambiguous (e.g. the message is only a code block), fall back to the `defaultLanguage` field on the Waggle Config page, or English if that field is unset.
Follow the question templates in clarification-heuristics.md and wrap them in a short friendly framing — the goal is a reply that feels natural in the thread, not a robotic checklist.
Present all eligible Category A messages in a single AskUserQuestion call, paginated 5 per batch. For each message, show:
- [Send reply] / [Create hearing task instead] / [Skip]

Batch-level actions:
- [Send all drafted replies] — fast path if the user trusts the drafts
- [Review individually] — per-message decision
- [Create hearing tasks for all] — fall through to Step 3 for every eligible message

For each message, based on the user's choice:
Send reply:
- Re-check the idempotency guards (Clarification Sent At within 24h, lock file) because the user may have spent time on the preview screen.
- Call `slack_send_message` with `channel_id`, `thread_ts = thread_ts || ts`, and the composed reply text.
- Create an Intake Log entry with `Tool Name = "slack (clarification-sent)"` so the original message is marked processed and does not re-surface on the next run.
- Upsert the Active Threads record for `{channel_id}:{thread_root_ts}`, setting `Status = active`, `Last Checked = now`, and `Clarification Sent At = now`. This registers the thread for continuous monitoring so the sender's follow-up reply is picked up in the next ingest run.

Create hearing task instead: fall through to the existing Step 3 Category A flow. The [Hearing] task template is defined in task-creation-templates.md.
Skip: Do NOT create an Intake Log entry. The message will re-surface in the next ingest run so the user can reconsider.
Clarification must never dead-end on the user. The fallback chain is:
```
Primary: Slack clarification reply
  │
  ├─ slack_send_message fails (network, rate limit, permission denied)
  │   └─> Fallback 1: Create a [Hearing] task via the Step 3 Category A flow
  │        ├─ [Hearing] task creation fails (Notion API error, assignee
  │        │   resolution fails, schema error)
  │        │   └─> Fallback 2: Log to Intake Log with
  │        │        Tool Name = "slack (intake-failed)",
  │        │        do NOT mark the message as processed, do NOT create
  │        │        a partial task. Surface the failure in the Step 5
  │        │        summary so the user can investigate. The message
  │        │        will re-surface on the next ingest run.
  │        └─ [Hearing] task creation succeeds
  │             └─> Proceed, but note the downgrade in the summary
  └─ Success → Update Active Threads + Intake Log as in Step 2.3d
```
No auto-retry at any level. Retries are the next ingest run's job, mediated by the Intake Log dedup check.
After a successful clarification send, the Active Threads DB is the only place waggle remembers that it asked a question. The record's `Clarification Sent At` timestamp is the 24h idempotency key. The existing Active Threads auto-close logic (7-day staleness → Status = closed) applies unchanged — if the sender never answers, the thread closes on its own and the user can decide whether to follow up manually.
Category B messages go through a three-phase enrichment: auto-generation, validation, and user confirmation. Category C messages use the manual-ask path because the delegating user — not the LLM — should be the one defining expectations for the recipient.
For each Category B message, the orchestrating LLM generates a draft task inline using the message content, thread_context, and any successfully-read attachment_info image descriptions as input. No separate agent is spawned; the generation happens in the orchestration context directly.
Generate the following draft fields:
Acceptance Criteria — 2 to 5 verifiable criteria. Each criterion must include at least one of: a specific command (e.g. npm test, curl ...), a file path, a numeric threshold with unit (<2s, 200 OK), or an observable state verb (returns, displays, creates, passes, fails, contains, ...). The list of valid state verbs matches the semantic check that validating-fields applies in Phase A.5 below — produce AC that will pass that check.
Hallucination guard (grounding): Every criterion must reference a specific keyword, entity, file path, URL, or metric that appears in the original message text or thread context. If the LLM is inclined to add a criterion that is not grounded in the source text, it must prefix that criterion with [INFERRED] in the AC text. This prefix is persisted in the Notion task (not stripped before save) so that:
- the `[INFERRED]` tag remains visible in the Notion page as an audit trail — whoever executes or reviews the task later knows that particular criterion was inferred, not explicitly stated.

Execution Plan — 3 to 7 numbered steps. Each step is an action verb + target + expected outcome. Same grounding rule: steps should reference entities present in the message.
Working Directory inference — if the message (or attachments) mentions a repository name, project name, or file path, suggest the matching absolute working directory. If no repo signal is present, leave empty — the user will decide in Phase B.
Priority inference — determine priority from the message context using natural language understanding, paying attention to negation (e.g. "not urgent at all" must lower the priority even though it contains "urgent").
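The grounding rule is applied semantically by the LLM, but its spirit can be illustrated with a crude deterministic approximation that flags criteria sharing no content word with the source text. The stopword list, token pattern, and function name are all assumptions for illustration only:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "or", "in", "on", "it"}

def needs_inferred_prefix(criterion: str, source_text: str) -> bool:
    """Sketch of the grounding rule: a criterion with no content word shared with
    the message/thread text would get the [INFERRED] prefix."""
    def tokens(s: str) -> set[str]:
        return {w for w in re.findall(r"[A-Za-z0-9_./-]+", s.lower())
                if w not in STOPWORDS and len(w) > 2}
    return not (tokens(criterion) & tokens(source_text))
```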
Before showing the auto-generated draft to the user, invoke the validating-fields skill with the generated task data and target status "Ready". It will return {valid, errors, warnings}.
No auto-retry. If validation fails:

- keep the draft as-is and mark the message [LOW CONFIDENCE] in the Phase B display.

Auto-retry with a "stricter prompt" is intentionally avoided because it introduces non-determinism, cost inflation, and potential infinite loops when the underlying message genuinely lacks enough information. It is cheaper and more honest to show the low-confidence draft to the user and let them correct it.
Present Category B messages in pages of up to 5 messages per AskUserQuestion call. If more than 5 messages exist, split into multiple pages.
Within each page, rank messages so the ones most likely to need the user's attention appear first:
- messages marked [LOW CONFIDENCE] (Phase A.5 validation failed)
- messages whose drafts contain [INFERRED] prefixes

For each page, present the following top-level options:
- Accept all drafts as-is, including [INFERRED] criteria. Single click, whole batch moves.
- Review individually: decide per message.

When the user chooses "Review individually", each message gets these per-message options:
- Accept: save the draft; [INFERRED] prefixes remain in the saved AC as an audit trail.
- Edit: adjust fields via AskUserQuestion sub-prompts.
- Re-plan: hand the message off to planning-tasks.

Category C tasks are delegations. The delegating user knows what they want from the recipient; the LLM should not guess. Use the existing manual-ask flow via AskUserQuestion:
If there are multiple Category C messages, batch them into a single AskUserQuestion call. If the user replies "as-is" or equivalent, proceed with only the information from the original message. Incorporate answers into the task fields when creating tasks in Step 3.
Display the final task list to be created (messages handled via Step 2.3 clarification replies are NOT listed here — they were processed in Step 2.3 and do not produce tasks on this run):
| # | Category | Sender | Summary | Disposition | Status | Executor | Attachments |
|---|---|---|---|---|---|---|---|
| 1 | B: Self | @alice | Update README with new endpoints | Creating Ready task | Ready | claude-desktop | |
| 2 | A: Hearing | @bob | Fix this layout issue | [Hearing] task (user declined clarification) | Blocked | human | 1 image (read) |
| 3 | B: Self | @charlie | Bug in checkout flow | Creating Ready task | Ready | cli | 1 image (unread) |
| 4 | C: Delegate | @alice | @charlie deployment script | Delegating to @charlie | Backlog | human | |
The Disposition column is a transient display field (computed per run) that tells the user why each message is being handled this way. For Category A messages it records whether a clarification was sent, a hearing task was created, or the message was skipped. For Category B/C it shows the standard disposition.
The Attachments column shows: blank if no images, {N} image (read) if all images were read successfully, {N} image (unread) if any image could not be read, {N} image ({S} read, {F} unread) for mixed results. Images with read_status = "skipped" (global cap) are counted as unread for display purposes.
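The Attachments column rules above can be sketched as a formatter; counting skipped images as unread per the rule. The function name is illustrative:

```python
def attachments_cell(read: int, unread: int) -> str:
    """Format the Attachments column (skipped images are counted in `unread`)."""
    total = read + unread
    if total == 0:
        return ""
    if unread == 0:
        return f"{total} image (read)"
    if read == 0:
        return f"{total} image (unread)"
    return f"{total} image ({read} read, {unread} unread)"
```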
Unread image attachments: If any messages have images with read_status = "unread" or "skipped", display them below the table:
The following messages have image attachments that could not be read automatically. Please review them before confirming task creation:
- #3 (@charlie): View message in Slack
The user can then open the links, review the images, and optionally update the task summary or category before confirming.
Use AskUserQuestion: "Create these N tasks? (Please review unread image links above first)"
If no messages have unread images, use the standard prompt: "Create these N tasks?"
Create tasks directly via notion-create-pages for each message (do not go through the managing-tasks skill). For the dedup check, common fields, and category-specific field templates, follow references/task-creation-templates.md in this directory.
After creating each task, record the message in the Intake Log via `notion-create-pages`:

- Message ID = the message unique ID (Slack: `channel_id:ts`)
- Tool Name = the provider (e.g. `slack`)
- Processed At = current timestamp

For each processed message that was part of a thread (has a `thread_ts` value):
- Compute `thread_id = {channel_id}:{thread_ts}`.
- If `thread_id` is not already in `active_threads`: create a new record in the Active Threads DB via `notion-create-pages` with `Status = active` and `Last Checked` = current timestamp.

Auto-close stale threads: for each Active Thread where Last Checked is more than 7 days ago AND no new messages were found in this run, update Status to `closed`.
FIFO cleanup: If the number of Active Threads with Status=active exceeds 200, close the oldest threads (by Last Checked) until the count is at or below 200.
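The FIFO cleanup's selection of which threads to close can be sketched as follows; the `(thread_id, last_checked)` tuple shape is an assumption for illustration:

```python
from datetime import datetime

def threads_to_close(active: list[tuple[str, datetime]], cap: int = 200) -> list[str]:
    """Return thread IDs to close so at most `cap` remain, oldest Last Checked first."""
    excess = len(active) - cap
    if excess <= 0:
        return []
    # Sort ascending by Last Checked and close the oldest `excess` threads
    return [tid for tid, _ in sorted(active, key=lambda t: t[1])[:excess]]
```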
Push data to view server:
```bash
# Silently skip if the view server is not running
curl -s http://localhost:3456/api/health -o /dev/null 2>/dev/null && \
curl -s -X POST http://localhost:3456/api/data \
  -H "Content-Type: application/json" -d '<tasks_json>' -o /dev/null 2>/dev/null || true
```
```
[Message Intake Complete] via {tool_name}
Processed: N / Skipped (already processed): K / Skipped (already exists as task): J

A (Hearing Needed): X total
  → Clarification replies sent in-thread: X1
  → [Hearing] task pairs created: X2
  → Skipped (user chose to defer): X3
B (Self-Action): Y → Ready tasks created (auto-generated AC/EP for {y_gen} of them)
C (Delegate): Z → Backlog tasks created

Custom sources: {list of sources processed, or "none configured"}
  → {stub_count} stub items detected, {stub_enriched} enriched successfully
Thread context: {T} messages enriched with thread history
Attachments: {I} images detected, {S} read successfully, {F} unread or skipped
Execution mode: {"interactive" | "scheduled"} (from WAGGLE_EXECUTION_MODE)
```
If any Step 2.3 fallbacks fired (Slack send failed, hearing-task creation failed), list them below the summary so the user can investigate:
```
⚠️ Fallback events:
- Clarification to #channel thread {ts} failed (rate limit) → fell back to [Hearing] task
- Hearing task creation for message {id} failed (Notion schema error) → marked intake-failed, will retry next run
```