Generates or refines Acceptance Criteria and Execution Plans for tasks. Single task or batch mode. Ensures tasks are executable at agent-autonomous quality. Uses multi-round brainstorming to extract quality information from users. Use this skill whenever the user wants to plan, prepare, refine, or improve a task before execution — including writing AC, execution plans, or making tasks ready for autonomous agents. Triggers on: "plan task", "refine task", "generate AC", "write execution plan", "plan all tasks", "auto-plan", "prepare task", "improve task", "batch plan".
Install: npx claudepluginhub kazukinagata/waggle --plugin waggle
This skill uses the workspace's default tool permissions.
You generate and refine Acceptance Criteria (AC) and Execution Plans for tasks. Your goal is to make tasks executable at agent-autonomous quality — detailed enough that an AI agent can complete them without additional questions.
Invoke the bootstrap-session skill to establish the active provider and current user.
Skip if active_provider and current_user are already set in this conversation.
Invoke the loading-custom-instructions skill with key task-creation to populate custom_task_creation_instructions. If the returned value is non-null, pass it along to every planning agent spawned below — AC drafts, Execution Plan drafts, and Priority defaults should honor the user's project-specific rules (e.g. "AC must use Given/When/Then", "Execution Plan steps must start with a verb", "treat security-related tasks as High priority"). If null, proceed with the normal planning heuristics.
Custom instructions only influence generated field content. They never override the validating-fields gate at Phase 5, and they never decide status transitions or destructive operations.
Three modes of operation:
- Single-task mode: the user specifies a task by title, ID, or description. Search the Tasks DB and confirm the match.
- Batch mode: the user says "plan all Backlog tasks" or similar. Query by status filter.
- Pipeline mode: receives a list of task IDs from another skill (e.g., running-daily-tasks). Process each task in the list.
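The three-way dispatch can be sketched as follows (a minimal illustration; the field names `task_ids`, `status_filter`, and `query` are assumptions, not part of the skill's actual interface):

```python
def resolve_mode(request: dict) -> str:
    """Pick the planning mode for an incoming request (illustrative field names)."""
    if request.get("task_ids"):        # pipeline mode: IDs handed off by another skill
        return "pipeline"
    if request.get("status_filter"):   # batch mode: e.g. "plan all Backlog tasks"
        return "batch"
    return "single"                    # single-task mode: title/ID/description lookup

print(resolve_mode({"task_ids": ["t-1", "t-2"]}))   # pipeline
print(resolve_mode({"status_filter": "Backlog"}))   # batch
print(resolve_mode({"query": "Fix auth bug"}))      # single
```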
For each task, determine the planning path:
If the task title starts with [Hearing]:
"Confirm with {person} about {topic_from_title}. Record response in Agent Output. Update Status to Done when confirmed.""1. Contact {person} via messaging tool\n2. Ask about: {topic}\n3. Record response in Agent Output\n4. Update Status to Done"Classify the task:
- Code task → code-planning-agent
- Non-code (knowledge) task → knowledge-planning-agent
Check minimum input threshold:
Spawn the appropriate planning agent via the Agent tool:
Pass custom_task_creation_instructions (if non-null) so the agent can honor project-specific rules for AC / Execution Plan style and Priority defaults.
Present agent output to user for final confirmation:
[Accept] [Edit] [Skip]
Validation gate: for each updated task, invoke the validating-fields skill with the task data and target status "Ready". It returns {valid, errors, warnings}. Report which tasks are now Ready-eligible based on the valid: true results.
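Consuming the gate's return shape can be sketched like this (an illustration only; validating-fields is invoked as a skill, not as a Python call, and `report_ready` is a hypothetical helper):

```python
def report_ready(validations: dict):
    """Split validating-fields results, keyed by task title, into Ready-eligible
    tasks and blocked tasks with their error lists."""
    ready = [t for t, v in validations.items() if v["valid"]]
    blocked = {t: v["errors"] for t, v in validations.items() if not v["valid"]}
    return ready, blocked

ready, blocked = report_ready({
    "API endpoint": {"valid": True, "errors": [], "warnings": []},
    "Refactor DB": {"valid": False, "errors": ["AC missing"], "warnings": []},
})
print(ready)    # ['API endpoint']
print(blocked)  # {'Refactor DB': ['AC missing']}
```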
When processing multiple tasks (batch mode or pipeline mode):
Pass custom_task_creation_instructions (if non-null) to every spawned agent. A chunk may mix code-planning-agent and knowledge-planning-agent spawns.
Example (3 code tasks + 2 non-code tasks = 5 total, fits in one chunk):
Message 1: [Agent(code-task-1), Agent(code-task-2), Agent(code-task-3), Agent(non-code-task-1), Agent(non-code-task-2)]
→ All 5 run in parallel
Example (8 tasks = 2 chunks):
Message 1: [Agent(task-1), Agent(task-2), Agent(task-3), Agent(task-4), Agent(task-5)]
→ Wait for all 5 to complete
Message 2: [Agent(task-6), Agent(task-7), Agent(task-8)]
→ Wait for all 3 to complete
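The chunking scheme above amounts to splitting the task list into groups of at most five: agents within a chunk run in parallel, and chunks run sequentially. A minimal sketch (the chunk size of 5 is taken from the examples above):

```python
def chunk(tasks: list, size: int = 5) -> list:
    """Split the task list into sequential chunks; agents within a chunk run in parallel."""
    return [tasks[i:i + size] for i in range(0, len(tasks), size)]

tasks = [f"task-{n}" for n in range(1, 9)]           # 8 tasks
for i, group in enumerate(chunk(tasks), start=1):
    print(f"Message {i}: {group}")                   # Message 1: 5 agents, Message 2: 3 agents
```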
Wait for all agents in the current chunk to complete. Classify each result:
Present all results together:
Planning results (5 tasks):
1. [OK] "API endpoint" — AC: 4 criteria, Plan: 6 steps
2. [OK] "Fix auth bug" — AC: 3 criteria, Plan: 4 steps
3. [NEEDS INPUT] "Refactor DB" — agent needs: "Which tables are affected?"
4. [OK] "Write blog" — AC: 5 criteria, Plan: 7 steps
5. [FAILED] "Update docs" — agent error: timeout
[Accept all OK] [Review one by one] [Skip all]
For each accepted task, run validation and promote to Ready if valid (same as single-task flow).
Summary: "Planned N tasks. M Ready-eligible. K need more context. J failed."
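The summary line can be derived from the per-task outcomes like this (a sketch; the status labels are illustrative, and "ok" is counted as Ready-eligible here for simplicity, whereas the real flow also requires the validation gate to pass):

```python
from collections import Counter

def summarize(results: list) -> str:
    """Build the closing batch summary from per-task planning outcomes."""
    c = Counter(r["status"] for r in results)
    return (f"Planned {len(results)} tasks. {c['ok']} Ready-eligible. "
            f"{c['needs_input']} need more context. {c['failed']} failed.")

results = [
    {"task": "API endpoint", "status": "ok"},
    {"task": "Fix auth bug", "status": "ok"},
    {"task": "Refactor DB", "status": "needs_input"},
    {"task": "Write blog", "status": "ok"},
    {"task": "Update docs", "status": "failed"},
]
print(summarize(results))  # Planned 5 tasks. 3 Ready-eligible. 1 need more context. 1 failed.
```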
This protocol is embedded in the planning agent prompts. The agent drives the conversation:
Round 1: Agent proposes an initial AC draft based on Title + Description + Context.
→ "Based on your task, I propose these completion criteria:
1. {criterion 1}
2. {criterion 2}
3. {criterion 3}
What would you add or change? You can also describe your own."
Round 2 (if user response lacks verifiable conditions):
→ Agent refines: "I understood X. Let me also suggest:
- {additional criterion based on user input}
- {edge case consideration}
Anything else? What about error cases or edge conditions?"
Round 3 (continue if user is engaged):
→ Synthesize: "Here's the complete checklist:
1. {final criterion 1}
2. {final criterion 2}
...
Anything missing?"
→ If user says "done" / "OK": finalize
→ If user adds more: incorporate and re-present
Fallback (user disengages — "that's enough", "just go with it", etc.):
→ Accept current state with [LOW CONFIDENCE] tag prepended
→ Move on to next task
Key principle: The agent PROPOSES first, then refines through dialogue. Never wait for the user to provide content from scratch — generate drafts proactively.
Semantic triggers: Round 2 fires when the user's response lacks verifiable conditions (no commands, file paths, metrics, or observable outcomes) — not based on character count.
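One way to sketch that semantic trigger is a small pattern check (a rough heuristic of my own, not the skill's actual detection logic; the patterns are illustrative stand-ins for "commands, file paths, metrics, or observable outcomes"):

```python
import re

# Illustrative patterns: a reply contains a "verifiable condition" if any of these match.
VERIFIABLE = [
    re.compile(r"`[^`]+`"),                                   # inline command or code
    re.compile(r"\S+/\S+\.\w+"),                              # file path
    re.compile(r"\b\d+(\.\d+)?\s*(%|ms|s|MB|req)"),           # metric with a unit
    re.compile(r"\b(returns?|exits?|displays?|logs?)\b", re.I),  # observable outcome
]

def lacks_verifiable_conditions(reply: str) -> bool:
    """True → fire Round 2 and refine. Deliberately independent of reply length."""
    return not any(p.search(reply) for p in VERIFIABLE)

print(lacks_verifiable_conditions("sounds good"))                    # True  → Round 2
print(lacks_verifiable_conditions("the endpoint returns HTTP 200"))  # False → no trigger
```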
After AC is finalized, generate the Execution Plan:
(see references/knowledge-work-patterns.md).

Final summary report:
[Planning Complete]
Tasks processed: N
AC generated: X
Execution Plans generated: Y
Ready-eligible: Z (passed validation)
Skipped: K (insufficient context or user declined)