Grooms sprint tasks by cross-referencing the Monday.com board with GitHub issues across multiple repos, then produces subitems, descriptions, linked issues, and SP estimates. Use when grooming a sprint, planning sprint tasks, preparing for sprint planning, or when the user says "groom", "sprint planning", "cross-reference tasks and issues", "add subitems", or "create missing issues".
npx claudepluginhub oryanmoshe/agent-skills --plugin agent-skills

This skill uses the workspace's default tool permissions.
**Sprint grooming means every task gets: subitems, a detailed description, linked GitHub issues, proper labels, SP estimate, and priority.** This skill automates the cross-referencing of Monday.com tasks with GitHub issues across multiple repos, identifies gaps, and produces fully groomed tasks.
Collect from three sources in parallel:
Monday.com board:
get_board_info — learn column IDs, status labels, groups, epics

query { boards(ids: [BOARD_ID]) { groups { id title items_page(limit: 100) { items { id name } } } } }

get_board_items_page with includeColumns: true, includeSubItems: true — get full details

GitHub issues (ALL repos, in parallel):
gh issue list --repo ORG/REPO --state open --limit 100 --json number,title,state,labels,body,url
Check every repo the team works in — not just the obvious one.
GitHub labels (ALL repos, in parallel):
gh label list --repo ORG/REPO --json name,description,color --limit 50
Know what labels exist before assigning them. Create new labels if the repo is missing useful ones.
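The parallel collection above can be sketched as a background fetch per repo. This is a sketch, not part of the skill itself: the repo slugs are placeholders, and the `gh` calls are guarded so the script degrades gracefully when the CLI is absent.

```shell
#!/usr/bin/env sh
# Hypothetical repo list -- substitute every repo your team actually works in.
REPOS="org/agent org/server org/client"

# Derive a per-repo output filename from an "org/repo" slug.
outfile() { printf '%s-%s.json' "$1" "${2#*/}"; }

if command -v gh >/dev/null 2>&1; then
  for repo in $REPOS; do
    # Fetch open issues and existing labels for each repo concurrently.
    gh issue list --repo "$repo" --state open --limit 100 \
      --json number,title,state,labels,body,url > "$(outfile issues "$repo")" &
    gh label list --repo "$repo" --json name,description,color --limit 50 \
      > "$(outfile labels "$repo")" &
  done
  wait  # block until every background fetch finishes
fi
```

Backgrounding with `&` plus a single `wait` keeps the collection phase bounded by the slowest repo instead of the sum of all fetches.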
For each sprint task:
| Result | Action |
|---|---|
| Task has matching issues | Link them in the task description |
| Task has NO matching issues | Create issues (with full grooming — see Phase 3) |
| Open issue has NO matching task | Suggest adding task to sprint, or note as backlog |
| Issue is superseded by new work | Close with comment explaining what replaces it |
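The "superseded" row in the table maps to a single `gh issue close` call. The issue numbers, repo, and comment text below are hypothetical placeholders; the real comment should name the work that replaces the issue.

```shell
#!/usr/bin/env sh
# Build a close comment that explains what supersedes the issue.
supersede_note() { printf 'Superseded by #%s (%s); closing so it does not confuse future sprints.' "$1" "$2"; }

# Hypothetical example: issue 142 in org/server was replaced by issue 180.
# Flip DRY_RUN=0 to actually close the issue.
if [ "${DRY_RUN:-1}" = "0" ] && command -v gh >/dev/null 2>&1; then
  gh issue close 142 --repo org/server --comment "$(supersede_note 180 'new ingestion pipeline')"
fi
```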
Every task (except DoD/ceremony tasks) needs ALL of these:
RULE: 3-5 subitems per task, each independently completable
"Table rendering in chat UI (client #1802)"Create via Monday API:
create_item with parentItemId → creates subitem
columnValues: {"numbers": "0.5"} → sets Days to Complete
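Under the hood, those tool calls map onto Monday's GraphQL API. A rough sketch of the raw request, assuming a `create_subitem` mutation, a placeholder parent item ID, and the `numbers` column id from the skill text (yours may differ):

```shell
#!/usr/bin/env sh
# Note: column_values must itself be a JSON-encoded string inside the GraphQL mutation,
# hence the double layer of escaping below.
PAYLOAD='{"query":"mutation { create_subitem (parent_item_id: 123456, item_name: \"Table rendering in chat UI (client #1802)\", column_values: \"{\\\"numbers\\\": \\\"0.5\\\"}\") { id } }"}'

# Only fires when a Monday API token is configured.
if [ -n "${MONDAY_TOKEN:-}" ]; then
  curl -s https://api.monday.com/v2 \
    -H "Authorization: $MONDAY_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```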
Every task gets an update in this format:
📋 Sprint Grooming Notes
[2-3 sentences: what this task is and why it matters]
🔗 Repo: org/repo-name
🎫 Issues:
• repo #NUM (title) — URL
• repo #NUM (title) — URL
📎 Related: repo #NUM (brief note on relationship)
[Dependencies or blockers if any]
Acceptance criteria:
• [Concrete, testable criterion]
• [Concrete, testable criterion]
• [Concrete, testable criterion]
Multi-repo tasks list all repos and issues.
New issues must be fully groomed — not just a title:
## Context
[What this is and how it fits into the system]
## Motivation
[Why this matters — user impact, business value, what breaks without it]
## Scope
- [ ] Checkbox for each piece of work
## Technical Notes (if applicable)
[Schema, API contracts, data shape, file paths]
## Dependencies (if any)
[What must exist first]
## Acceptance Criteria
[How we know it's done]
Labels: Assign MULTIPLE relevant labels — not just enhancement. Use domain-specific labels (agent, evals, tools, resources, security, observability, etc.). Create labels if the repo is missing useful ones:
gh label create "label-name" --repo ORG/REPO --color "HEX" --description "Description"
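A fully groomed issue can be created in one shot by writing the template above to a body file and passing it to `gh issue create`. The repo, title, body text, and label names below are hypothetical examples, not prescribed values:

```shell
#!/usr/bin/env sh
# Hypothetical groomed issue body following the Context/Motivation/Scope/AC template.
cat > issue-body.md <<'EOF'
## Context
Chat UI renders agent tables as raw markdown.
## Motivation
Users cannot read tabular results; demos look broken.
## Scope
- [ ] Parse markdown tables in the message renderer
- [ ] Add a table component with horizontal scroll
## Acceptance Criteria
A table returned by the agent renders as an HTML table in chat.
EOF

# Flip DRY_RUN=0 to actually create the issue.
if [ "${DRY_RUN:-1}" = "0" ] && command -v gh >/dev/null 2>&1; then
  gh issue create --repo org/client \
    --title "Table rendering in chat UI" \
    --body-file issue-body.md \
    --label enhancement --label agent --label ui  # multiple domain labels, not just enhancement
fi
```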
After grooming, verify:
| Check | How |
|---|---|
| SP total vs capacity | Sum Est SP. Compare against sprint working days. Flag overload. |
| Committed vs buffer | High + Medium = committed. Best effort = buffer that can slip. |
| Missing fields | Every task needs: Owner, Est SP, Priority, Type, Epic |
| Subitem math | Sum of subitem days ≈ parent Est SP |
| Issue coverage | Every non-DoD task has at least one linked GitHub issue |
| Orphan issues | Open issues not in any sprint task → note for backlog discussion |
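The subitem-math check is mechanical once the board is exported. A sketch with awk, assuming a pipe-separated export of name, Est SP, and comma-separated subitem days (the sample rows and the file shape are invented for illustration):

```shell
#!/usr/bin/env sh
# Hypothetical sample export (name|est_sp|subitem days) -- replace with real board data.
cat > tasks.psv <<'EOF'
Table rendering|2|0.5,0.5,1
Auth refactor|3|1,0.5
EOF

# Flag any task whose subitem-day total drifts from the parent Est SP by more than 0.5.
awk -F'|' '{
  n = split($3, d, ","); sum = 0
  for (i = 1; i <= n; i++) sum += d[i]
  diff = sum - $2; if (diff < 0) diff = -diff
  if (diff > 0.5) printf "%s: subitems %gd vs Est SP %g\n", $1, sum, $2
}' tasks.psv
# → Auth refactor: subitems 1.5d vs Est SP 3
```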
After all grooming, present to the user:
| Thought | Action |
|---|---|
| "This task only touches one repo" | VERIFY — many tasks span 2+ repos |
| "enhancement label is enough" | ADD domain-specific labels too |
| "The subitem name is self-explanatory" | ADD days-to-complete anyway |
| "I'll skip the issue body, title is clear" | WRITE full Context/Motivation/Scope/AC |
| "I'll just check shapes-agent issues" | CHECK ALL REPOS — client and server too |
| "No existing issues match, moving on" | CREATE the missing issues |
| "Description can be brief" | INCLUDE repos, issue links, and acceptance criteria |
Single-repo tunnel vision: Checking only one repo when a task spans agent + MCP + client. Always check all repos.
Label laziness: Slapping enhancement on everything. Use agent, evals, tools, resources, security, observability, performance, etc.
Skeleton issues: Creating issues with just a title and no body. Every issue needs Context, Motivation, Scope, and Acceptance Criteria.
Missing cross-links: Creating issues but not linking them back to the Monday task description. The Monday update must reference every related issue with URLs.
Subitem vagueness: "Do the thing" subitems with no days estimate. Each subitem needs a clear name and days-to-complete.
Ignoring superseded issues: Old issues that are replaced by new work should be closed with a comment, not left open to confuse future sprints.