TL;DR — Produces a structured sprint report that captures velocity, story completion, blockers, and retro insights into a single artifact suitable for stakeholders and team archives.
## When to Activate

Invoke this skill when any of these conditions are met:

| Signal | Examples |
|--------|----------|
| Sprint closure | "Generate the sprint 14 report", "Create a sprint summary for this iteration" |
| Review preparation | "I need a sprint review document for the stakeholder demo" |
| Retrospective documentation | "Document what happened this sprint including retro action items" |
| Velocity tracking | "Summarize our velocity and burndown for sprint Q1-S3" |
Do not activate for sprint planning, backlog grooming, or daily standup notes. Those have distinct cadence and content needs.
## 1. Data Collection Protocol

The skill requires the following data dimensions. If the user provides raw data (Jira export, CSV, or inline text), parse it. Otherwise, ask structured questions:

- **Sprint metadata** — sprint number/name, team name, start date, end date, sprint goal.
- **Velocity metrics** — story points committed vs. completed, comparison to the rolling 3-sprint average.
- **Story inventory** — completed stories (ID, title, points, assignee), carried-over stories with reason, and stories removed mid-sprint with reason.
- **Burndown trajectory** — whether the sprint burned steadily, front-loaded, back-loaded, or stalled. If raw daily data is available, describe the curve shape.
- **Blockers and impediments** — list with description, duration, resolution status, and owner.
- **Retrospective highlights** — top "went well" and "needs improvement" items, plus committed action items.
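The dimensions above can be held in a small structured record before rendering. This is a minimal sketch only; the type names (`Story`, `SprintData`) and field names are illustrative assumptions, not part of the skill's contract:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """One backlog item tracked in the sprint."""
    id: str
    title: str
    points: int
    assignee: str

@dataclass
class SprintData:
    """Container for the data dimensions gathered in step 1."""
    name: str
    team: str
    start: str  # ISO date, e.g. "2024-03-04"
    end: str
    goal: str
    committed_points: int
    completed: list[Story] = field(default_factory=list)
    carried_over: dict[str, str] = field(default_factory=dict)  # story ID -> reason

    @property
    def completed_points(self) -> int:
        """Sum of points over completed stories."""
        return sum(s.points for s in self.completed)

data = SprintData("Sprint 14", "Payments", "2024-03-04", "2024-03-15",
                  "Complete payment gateway integration", committed_points=34)
data.completed.append(Story("PAY-101", "Tokenize card flow", 8, "Ana"))
print(data.completed_points)
```

A record like this makes the validation gate mechanical: each checklist item maps to a field that is either populated or missing.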
## 2. Report Structure

The generated report follows this canonical layout:

1. **Header** — sprint name, team, date range, sprint goal statement.
2. **Sprint Goal Assessment** — binary achieved/not-achieved with a brief narrative explanation.
3. **Velocity Dashboard** — committed points, completed points, completion percentage, and delta from the 3-sprint rolling average, presented as a compact table.
4. **Burndown Summary** — textual description of the burndown curve with key inflection points (e.g., "flat from day 3-5 due to environment outage").
5. **Retrospective Highlights** — three subsections: What Went Well, What Needs Improvement, Action Items (with owner and due date).
6. **Team Health Indicators** — optional section for morale, collaboration, and sustainability signals.
7. **Next Sprint Preview** — top 3-5 stories queued for the next sprint with preliminary point estimates.
## 3. Output Formatting

- **Markdown mode (default):** clean tables, horizontal rules between sections, emoji-free professional tone.
- **HTML dashboard mode:** when requested, generate a self-contained HTML file with inline CSS. Use color-coded status indicators (green for completed, amber for carried over, red for blocked). Include a velocity trend mini-chart using CSS bar visualization.
- **Footer:** `Generated by sprint-report skill | Sprint {n} | {date}`.
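One way to produce the CSS bar mini-chart is to emit, per sprint, a fixed-height `div` whose width encodes points. The helper below is an illustrative sketch of that idea, not the skill's required markup; the function name and styling are assumptions:

```python
def velocity_bar(label: str, points: int, max_points: int, color: str = "#4caf50") -> str:
    """Render one bar of the velocity mini-chart as inline-styled HTML.

    Width is the sprint's points as a percentage of `max_points`, so bars
    across sprints are directly comparable.
    """
    pct = round(100 * points / max_points)
    return (
        f'<div style="margin:2px 0">'
        f'<span style="display:inline-block;width:6em">{label}</span>'
        f'<span style="display:inline-block;height:1em;width:{pct}%;'
        f'background:{color}" title="{points} pts"></span></div>'
    )

print(velocity_bar("Sprint 14", 29, 34))
```

Because the styles are inline, the generated HTML file remains self-contained, matching the dashboard-mode requirement.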
## 4. Analytical Layer

Beyond raw reporting, the skill adds analytical value:

- **Velocity trend commentary** — e.g., "Velocity increased 15% over the 3-sprint average, suggesting the team has stabilized after the architecture migration."
- **Carry-over pattern detection** — if the same stories carry over repeatedly, flag the pattern and suggest decomposition or re-estimation.
- **Blocker correlation** — if blockers cluster around a single dependency (e.g., environment, external team), highlight the systemic issue.
- **Retro action item tracking** — if previous sprint retro actions are provided, track their completion status.
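Carry-over pattern detection reduces to counting how often each story ID appears across recent sprints' carry-over lists. A minimal sketch follows; the threshold of 2 sprints is an assumed default, not a rule stated by the skill:

```python
from collections import Counter

def repeated_carry_overs(carry_over_history: list[list[str]], threshold: int = 2) -> list[str]:
    """Return story IDs carried over in `threshold` or more sprints.

    `carry_over_history` holds one list of carried-over story IDs per
    sprint, oldest first.
    """
    counts = Counter(sid for sprint in carry_over_history for sid in sprint)
    return sorted(sid for sid, n in counts.items() if n >= threshold)

history = [["PAY-103", "PAY-110"], ["PAY-103"], ["PAY-103", "PAY-121"]]
print(repeated_carry_overs(history))  # PAY-103 has carried over three sprints running
```

Stories flagged this way are the candidates for the decomposition or re-estimation suggestion mentioned above.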
## Trade-off Matrix

| Dimension | Priority | Rationale |
|-----------|----------|-----------|
| Accuracy | HIGH | Metrics must match source data exactly; no rounding without disclosure |
| Actionability | HIGH | Reports must surface patterns that drive improvement |
| Brevity | MEDIUM | Stakeholders want the summary; the team wants the detail. Balance both |
| Visual polish | LOW-MEDIUM | HTML mode adds value for stakeholder demos but is not the default |
## Assumptions & Limits

- Story point values are provided by the user — the skill does not estimate points. [EXPLICIT]
- Burndown data is textual unless the user provides daily numeric data. [EXPLICIT]
- Retrospective content is summarized, not a full retro facilitation. Use `retrospective-facilitation` for that. [EXPLICIT]
- The skill assumes Scrum-like sprints (1-4 week fixed iterations). Kanban flow metrics require a different approach. [INFERRED]
- Team health indicators are subjective and based on user-provided signals only. [INFERRED]
## Edge Cases

- **Zero completed stories** — generate the report with a prominent alert banner: "No stories completed this sprint." Focus the narrative on blockers, root cause analysis, and corrective actions.
- **Mid-sprint scope change exceeding 30%** — flag as a sprint integrity violation. Add a dedicated section documenting the scope change timeline and stakeholder decisions.
- **First sprint of a new team** — omit velocity trend comparisons (no historical baseline). Add a section on initial team formation observations.
- **Sprint cancelled mid-flight** — generate a cancellation report instead, documenting the reason, work completed to date, and disposition of in-flight stories.
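The 30% scope-change trigger can be checked by comparing mid-sprint point additions and removals against the original commitment. The sketch below reflects one plausible reading of that threshold; the exact formula (churn as added plus removed points over committed points) is an assumption:

```python
def scope_change_ratio(committed_points: int, added_points: int, removed_points: int) -> float:
    """Fraction of the original commitment churned by mid-sprint scope changes."""
    if committed_points <= 0:
        raise ValueError("committed_points must be positive")
    return (added_points + removed_points) / committed_points

def is_integrity_violation(committed: int, added: int, removed: int, limit: float = 0.30) -> bool:
    """True when mid-sprint churn exceeds the 30% sprint-integrity threshold."""
    return scope_change_ratio(committed, added, removed) > limit

print(is_integrity_violation(34, 8, 5))  # 13/34 ≈ 38% churn, prints True
```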
## Good vs Bad Example

**Good output (excerpt):**

```markdown
## Velocity Dashboard

| Metric | Value |
|--------|-------|
| Committed | 34 pts |
| Completed | 29 pts |
| Completion Rate | 85.3% |
| 3-Sprint Rolling Avg | 27 pts |
| Delta vs Average | +7.4% |

Sprint goal: "Complete payment gateway integration" — ACHIEVED.

Velocity is trending upward for the third consecutive sprint, suggesting
the team has absorbed the new microservices toolchain. [INFERRED]
```

**Bad output (excerpt):**

```markdown
## Velocity

We did 29 points out of 34. That's pretty good. The sprint went okay
and we finished most things.
```

The good example provides precise metrics in a scannable table, states goal achievement clearly, and adds analytical commentary with evidence tags. The bad example is imprecise and lacks structure.
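The dashboard figures in the good excerpt follow directly from the raw counts. A minimal helper, consistent with the accuracy rule that any rounding be disclosed (here, one decimal place; the function name is illustrative):

```python
def velocity_metrics(committed: int, completed: int, rolling_avg: float) -> dict[str, float]:
    """Completion rate and delta vs. the 3-sprint rolling average, in percent.

    Values are rounded to one decimal place, matching the report's
    disclosed precision.
    """
    return {
        "completion_rate": round(100 * completed / committed, 1),
        "delta_vs_avg": round(100 * (completed - rolling_avg) / rolling_avg, 1),
    }

print(velocity_metrics(34, 29, 27))  # matches the dashboard: 85.3% and +7.4%
```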
## Validation Gate

Before delivering the sprint report, confirm every item:

- [ ] Sprint metadata (number, team, dates, goal) is present and accurate
- [ ] Sprint goal assessment is explicitly stated as achieved or not achieved
- [ ] Velocity table includes committed, completed, percentage, and rolling average
- [ ] Completed stories table has story IDs, titles, points, and assignees
- [ ] Carried-over stories include the reason for carry-over
- [ ] Blockers log includes duration, impact rating, and resolution status
- [ ] Retrospective section has all three categories (well, improve, actions)
- [ ] Action items have owners and due dates
- [ ] All metrics match the source data provided by the user
- [ ] Evidence tags applied to analytical commentary
- [ ] Output written to disk at the confirmed path
## Reference Files

| File | Purpose |
|------|---------|
| `references/sprint-template.md` | Canonical sprint report markdown template |
| `references/velocity-patterns.md` | Common velocity patterns and diagnostic commentary |
| `references/html-dashboard.html` | Branded HTML dashboard template with CSS bar charts |
| `evals/evals.json` | Evaluation scenarios for report completeness and metric accuracy |