From cc-arsenal
Validate story implementation against acceptance criteria and produce QA reports.
Install via:

```bash
npx claudepluginhub mgiovani/cc-arsenal --plugin cc-arsenal-teams
```

This skill is limited to using the following tools:
> **Cross-Platform AI Agent Skill**: works with any AI agent platform that supports the skills.sh standard.
Validate a completed story implementation against its acceptance criteria, test coverage, and definition of done. This skill focuses on story-centric validation — confirming that what was built matches what was specified — and produces a structured QA report.
CRITICAL: QA validation must be grounded in actual evidence:

- Cite evidence in file:line format for every finding

You are a QA Engineer validating a story before it can be marked Done. Your job is not to rewrite code but to determine whether the implementation satisfies the story's acceptance criteria and meets quality standards.
Locate the story file at docs/stories/<epic>/<story>.md (or as specified in arguments).
Extract:
If the story file is missing or the AC list is empty, stop and report — QA cannot proceed without a story definition.
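The story-loading step above can be sketched in Python. The heading name (`Acceptance Criteria`) and the bullet/numbered list syntax are assumptions about the story file format, not something the skill mandates:

```python
import re
from pathlib import Path

def load_acceptance_criteria(story_path: str) -> list[str]:
    """Extract AC items from the story's 'Acceptance Criteria' section,
    failing fast if the story or its AC list is missing."""
    path = Path(story_path)
    if not path.exists():
        raise FileNotFoundError(f"Story file not found: {story_path}; QA cannot proceed")
    text = path.read_text(encoding="utf-8")
    # Capture everything between the AC heading and the next heading (or EOF)
    match = re.search(r"#+\s*Acceptance Criteria\s*\n(.*?)(?=\n#+\s|\Z)", text, re.S | re.I)
    if not match:
        raise ValueError("No Acceptance Criteria section; QA cannot proceed")
    criteria = [
        line.lstrip("-*0123456789. ").strip()
        for line in match.group(1).splitlines()
        if line.strip().startswith(("-", "*")) or re.match(r"\s*\d+\.", line)
    ]
    if not criteria:
        raise ValueError("AC list is empty; QA cannot proceed")
    return criteria
```

Each returned string then becomes one row of the working checklist described next.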
Create a working checklist of every AC. For each:
Example mapping:

```text
AC 1: User can register with email and password
  → Look for: registration endpoint/function, input validation, password hashing
  → Test check: test for 201 response, duplicate email rejection, weak password rejection

AC 2: Confirmation email is sent after registration
  → Look for: email service call in registration flow
  → Test check: test mocking email service and asserting it was called
```
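The AC 2 test check above can be illustrated with a mocked email service. All names here (`register_user`, `send_confirmation`, the injected `email_service`) are hypothetical application code, not part of the skill:

```python
from unittest.mock import Mock

def register_user(email: str, password: str, email_service) -> dict:
    """Hypothetical registration flow; the email service is injected
    so tests can replace it with a mock."""
    if len(password) < 8:
        raise ValueError("weak password")   # AC 1: weak password rejection
    user = {"email": email}
    email_service.send_confirmation(email)  # AC 2: confirmation email on registration
    return user

def test_confirmation_email_sent():
    service = Mock()
    register_user("alice@example.com", "s3cure-pass", service)
    # Assert the email service was called, per the AC 2 test check
    service.send_confirmation.assert_called_once_with("alice@example.com")
```

A test like this is the kind of evidence the checklist should point at, cited in file:line form.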
For every AC, examine the implementation:
Verdict criteria:
If test output is available (e.g., in the story's Dev Agent Record or CI artifacts):
If test output is not available:
Verify standard done criteria:
Coding standards are followed (per docs/coding-standards.md, if present).

Review changed files for unintended side effects:
Write the QA report to docs/qa-reports/<story-id>.md using the format below.
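A minimal sketch of the report-writing step. The table layout shown is an illustration under assumed column names, not the skill's canonical report format:

```python
from pathlib import Path

def write_qa_report(story_id: str, ac_results: list[tuple[str, str, str]]) -> Path:
    """Write docs/qa-reports/<story-id>.md with one row per AC.
    Each result is (criterion, verdict, evidence), evidence in file:line form."""
    out = Path("docs/qa-reports") / f"{story_id}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    lines = [
        f"# QA Report: {story_id}",
        "",
        "## Acceptance Criteria Results",
        "",
        "| # | Criterion | Verdict | Evidence |",
        "|---|-----------|---------|----------|",
    ]
    for i, (criterion, verdict, evidence) in enumerate(ac_results, 1):
        lines.append(f"| {i} | {criterion} | {verdict} | {evidence} |")
    out.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return out
```

Whatever the exact layout, every AC must appear in the results table; the stop-hook check described later blocks completion otherwise.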
This skill includes the following Claude Code-specific enhancements:
$ARGUMENTS
If no argument provided, search for the most recently modified "in-progress" story:
Glob: "docs/stories/**/*.md"
Use TaskCreate to track QA validation:

- TaskCreate: "Read story and extract ACs" → load story file
- TaskCreate: "Verify each acceptance criterion" → systematic AC check
- TaskCreate: "Run test suite" → execute tests
- TaskCreate: "Write QA report" → produce docs/qa-reports/<story-id>.md
Always run tests as part of QA validation:
```bash
# Discover and run tests
make test 2>/dev/null || pytest 2>/dev/null || npm test 2>/dev/null || bun test 2>/dev/null
```
Report test results in the QA report.
For each Acceptance Criterion:
Always write report to: docs/qa-reports/<story-id>.md
Derive story-id from the story file path (e.g., story-1.2 from docs/stories/epic-1/story-1.2.md).
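The derivation is just the story file's stem; a short sketch:

```python
from pathlib import Path

def story_id(story_path: str) -> str:
    """e.g. docs/stories/epic-1/story-1.2.md -> story-1.2"""
    return Path(story_path).stem

def qa_report_path(story_path: str) -> Path:
    # Report destination follows the skill's docs/qa-reports/<story-id>.md convention
    return Path("docs/qa-reports") / f"{story_id(story_path)}.md"
```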
When you attempt to stop, an automated agent verifies:
Blocked example:
```text
⚠️ QA validation incomplete:
- docs/qa-reports/story-1.2.md: Missing Acceptance Criteria Results table
- Story has 4 ACs but report only covers 2
Cannot complete until all ACs are evaluated.
```