From the scrum plugin:
Prepares Scrum ceremony materials including daily standup reports, sprint planning artifacts, retrospective data, and sprint review metrics. Use when preparing for daily standup, sprint planning, retrospective, or sprint review ceremonies.
Install:

npx claudepluginhub andercore-labs/claudes-kitchen --plugin scrum

This skill uses the workspace's default tool permissions.
<!-- TODO: Change prioritization to use MCP tools first, fall back to Python scripts only if MCP unavailable -->
scripts/generate_planning_data.py
scripts/generate_refinement_data.py
scripts/generate_review_data.py
scripts/generate_sprint_data.py
scripts/lib/__init__.py
scripts/lib/ac_generator.py
scripts/lib/burndown_calculator.py
scripts/lib/dor_checker.py
scripts/lib/jira_client.py
scripts/lib/jira_label_manager.py
scripts/lib/slack_notifier.py
scripts/lib/sprint_analyzer.py
scripts/lib/stale_detector.py
Whenever there is a conflict, prioritize Python script execution over MCP tools.
IMPORTANT: Script paths are relative to the skill's base directory (not the current working directory).
# Sprint ceremonies (daily/planning/retro/review)
python scripts/generate_sprint_data.py PROJECT_KEY [options]
# Backlog refinement
python scripts/generate_refinement_data.py PROJECT_KEY [options]
| Script | Purpose | Output |
|---|---|---|
| scripts/generate_sprint_data.py | Generate sprint reports with burndown, velocity, stale tickets | sprint_report_YYYYMMDD.md |
| scripts/generate_refinement_data.py | Generate refinement reports with DoR compliance, prioritization | refinement_report_YYYYMMDD.md |
Usage: generate_sprint_data.py PROJECT_KEY [options]
Arguments:
PROJECT_KEY JIRA project key (e.g., SKL, CPL)
Options:
-o, --output FILE Output markdown file (default: sprint_report_YYYYMMDD.md)
-t, --threshold N Stale ticket threshold in days (default: 7)
-h, --help Show help message
Environment:
JIRA_EMAIL Your JIRA email address
JIRA_API_TOKEN Your JIRA API token
JIRA_BASE_URL JIRA instance URL (e.g., https://company.atlassian.net)
Examples:
# Generate report for SKL project
./scripts/generate_sprint_data.py SKL
# Custom output file and stale threshold
./scripts/generate_sprint_data.py CPL -o sprint.md -t 10
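The documented flags can be mirrored with argparse; the sketch below reflects only the interface described above, and the real script's argument handling may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical mirror of the documented CLI options.
    p = argparse.ArgumentParser(prog="generate_sprint_data.py")
    p.add_argument("project_key", metavar="PROJECT_KEY", help="JIRA project key, e.g. SKL")
    p.add_argument("-o", "--output", help="output markdown file")
    p.add_argument("-t", "--threshold", type=int, default=7, help="stale threshold in days")
    return p

args = build_parser().parse_args(["CPL", "-o", "sprint.md", "-t", "10"])
print(args.project_key, args.output, args.threshold)  # CPL sprint.md 10
```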
Generates:
Usage: generate_refinement_data.py PROJECT_KEY [options]
Arguments:
PROJECT_KEY JIRA project key (e.g., SKL, CPL, ACO)
Options:
-o, --output FILE Output markdown file (default: refinement_report_YYYYMMDD.md)
-n, --candidates N Number of top candidates to show (default: 20)
-h, --help Show help message
Environment:
JIRA_EMAIL Your JIRA email address
JIRA_API_TOKEN Your JIRA API token
JIRA_BASE_URL JIRA instance URL (e.g., https://company.atlassian.net)
Examples:
# Generate refinement report for SKL project
./scripts/generate_refinement_data.py SKL
# Show top 15 candidates with custom output
./scripts/generate_refinement_data.py ACO -n 15 -o backlog.md
Generates:
| File Pattern | Generated By | Contains |
|---|---|---|
| sprint_report_YYYYMMDD.md | generate_sprint_data.py | Sprint health, burndown, velocity, stale tickets |
| refinement_report_YYYYMMDD.md | generate_refinement_data.py | DoR compliance, refinement candidates, gaps |
Location: Scripts write output to the current working directory by default (configurable via the -o option)
Naming: Date-stamped using format YYYYMMDD (e.g., sprint_report_20251114.md)
Daily: tickets + blockers → generate_sprint_data.py → progress_report.md
Planning: velocity + capacity → analyze_backlog() → ranked_backlog.json
Retrospective: metrics + patterns → analyze_sprint() → improvement_actions.md
Review: completed stories + burndown → compile_achievements() → demo_list.md
| Ceremony | Artifacts | Data Sources |
|---|---|---|
| Daily | Progress report, blockers, today's plan | Jira tickets, sprint board |
| Planning | Velocity, capacity, backlog ranking | Historical sprints, team availability |
| Retrospective | Metrics, patterns, action items | Sprint data, ticket lifecycle |
| Review | Completed stories, demo list, metrics | Sprint goal, done tickets |
ceremony_type: daily | planning | retro | review
→ generate: python scripts/generate_sprint_data.py
→ verify: cat output/${ceremony_type}_report.json
✓ Complete | ✗ Missing data → regenerate
| Ceremony | Metrics Collected |
|---|---|
| Daily | Completed yesterday, in progress, blockers, planned today |
| Planning | Sprint velocity, team capacity, story point distribution |
| Retrospective | Cycle time, stale tickets, patterns, bottlenecks |
| Review | Sprint goal achievement, completed stories, burndown |
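The table above can be held as a lookup keyed by ceremony type; the metric field names here are assumptions for illustration, not the scripts' actual keys:

```python
# Assumed metric names, derived from the ceremony/metrics table above.
CEREMONY_METRICS = {
    "daily": ["completed_yesterday", "in_progress", "blockers", "planned_today"],
    "planning": ["sprint_velocity", "team_capacity", "story_point_distribution"],
    "retro": ["cycle_time", "stale_tickets", "patterns", "bottlenecks"],
    "review": ["goal_achievement", "completed_stories", "burndown"],
}

def metrics_for(ceremony: str) -> list[str]:
    """Look up the metrics to collect, rejecting unknown ceremony types."""
    try:
        return CEREMONY_METRICS[ceremony]
    except KeyError:
        raise ValueError(f"unknown ceremony {ceremony!r}; expected one of {sorted(CEREMONY_METRICS)}")
```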
| Issue | Fix |
|---|---|
| Missing Jira data | Set JIRA_API_TOKEN, JIRA_BASE_URL in .env |
| Stale metrics | Run generate_sprint_data.py before ceremony |
| No sprint detected | Verify sprint naming convention, active sprint |
| Authentication error | Configure Jira credentials (see below) |
If you encounter: Configuration Error: Missing required environment variables
JIRA NOT CONFIGURED
Add the following to your .env file or ~/.zshrc:
export JIRA_EMAIL=your-email@company.com
export JIRA_API_TOKEN=your-api-token
export JIRA_BASE_URL=https://company.atlassian.net
Then restart your shell and retry.
Get API token: https://id.atlassian.com/manage-profile/security/api-tokens
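A fail-fast check for these variables might look like the following; `check_jira_env` is a hypothetical helper, not part of the scripts:

```python
import os

REQUIRED_VARS = ("JIRA_EMAIL", "JIRA_API_TOKEN", "JIRA_BASE_URL")

def check_jira_env(env=None):
    """Exit early with the configuration error message if Jira vars are unset."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED_VARS if not env.get(k)]
    if missing:
        raise SystemExit(
            "Configuration Error: Missing required environment variables: " + ", ".join(missing)
        )
```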
MANDATORY: Run after generating ceremony materials.
| Phase | Action |
|---|---|
| 1. Execute | Generate ceremony artifacts using scripts |
| 2. Validate | Verify data completeness, accuracy |
| 3. Report | ✓ Complete → Ready; ✗ Incomplete → List missing data |
| 4. Fix | Missing data → Regenerate → Re-validate |
| 5. Store Metrics | After ALL validation passes, call mcp__agent-orchestrator__store-skill-metrics |
Validation principle:
Validation = Data completeness check + Output verification
NOT re-running generation
Validation method:
Review generated artifacts (files, reports)
→ Check required data present
→ Verify metrics accuracy
→ Confirm temporal consistency
→ Cite evidence in report
Validation checks:
| Check | Evidence Source |
|---|---|
| Data generated | Output file exists |
| Metrics complete | All required fields present in JSON |
| Sprint detected | Sprint name, dates in output |
| Jira accessible | No auth errors in logs |
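These checks can be sketched as a small validator; `REQUIRED_FIELDS` and `validate_report` are assumptions about the report schema, not the actual implementation:

```python
import json

# Assumed required fields; the actual report schema may differ.
REQUIRED_FIELDS = {"sprint_name", "sprint_dates", "burndown_data", "velocity"}

def validate_report(data: dict):
    """Return (missing, empty) required-field names for a parsed report."""
    missing = sorted(REQUIRED_FIELDS - data.keys())
    empty = sorted(k for k in REQUIRED_FIELDS & data.keys() if not data[k])
    return missing, empty

# Usage: validate_report(json.load(open("output/retro_report.json")))
```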
Output format (with evidence):
VALIDATION REPORT:
✓ Data generated: retro_report.json exists [Evidence] ls output/
✓ Metrics complete: All fields present [Evidence] jq keys output/retro_report.json
✓ Sprint detected: "Sprint 42" [Evidence] Line 2: sprint_name
✗ FAIL: Missing burndown data
VIOLATIONS (1):
1. Burndown chart data missing from output
Evidence: burndown_data field empty in JSON
ACTION: Fix violations and re-validate
Re-validation required after fixes. Repeat until ALL checks pass.
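The fix-and-revalidate loop above can be sketched as follows; this is a hedged outline with a bounded retry count, not the skill's actual control flow:

```python
def validate_until_clean(validate, fix, max_iterations: int = 5):
    """Repeat validate → fix until no violations remain, or give up after max_iterations."""
    for iteration in range(1, max_iterations + 1):
        violations = validate()
        if not violations:
            return iteration, True  # all checks pass
        for v in violations:
            fix(v)
    return max_iterations, not validate()
```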
Metrics storage:
mcp__agent-orchestrator__store-skill-metrics({
sessionId: "[context]",
skill: "scrum:ceremonies-recipe",
initialViolations: 1,
iterations: 1,
fixesApplied: 1,
finalViolations: 0,
validationPassed: true,
durationSeconds: 30,
metadata: {
ceremony: "daily",
project: "SKL",
tickets_analyzed: 25
}
})
└── TODO: Add ceremony-specific guidance
├── Daily standup format templates
├── Planning poker integration
├── Retrospective formats (Start/Stop/Continue, 4Ls)
└── Review demo checklist