Automates TEST_CATALOG updates after test execution. Records test metrics, pass/fail status, coverage data, and execution time. Maintains single source of truth for all test files and their health status.
/plugin marketplace add DarkMonkDev/WitchCityRope
/plugin install darkmonkdev-witchcityrope-agents@DarkMonkDev/WitchCityRope

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Purpose: Automate TEST_CATALOG updates after every test execution.
When to Use: After running any tests (unit, integration, or E2E).
MANDATORY: Every test execution MUST update TEST_CATALOG.
Why: TEST_CATALOG is the single source of truth for all test files and their health status.
Location: /docs/standards-processes/testing/TEST_CATALOG.md (Part 1)
Multi-File Structure:
After running tests, update:
When test status changes:
When new tests are added:
When tests are deleted or obsoleted:
Executable Script: execute.sh
# Basic usage with required parameters
bash .claude/skills/test-catalog-updater/execute.sh \
<test-type> \
<passed> \
<failed> \
<total> \
[execution-time] \
[coverage]
# Example: Unit tests with all metrics
bash .claude/skills/test-catalog-updater/execute.sh unit 45 0 45 12.3 85
# Example: E2E tests (no coverage available)
bash .claude/skills/test-catalog-updater/execute.sh e2e 42 3 45 125.7 N/A
# Example: Integration tests passing
bash .claude/skills/test-catalog-updater/execute.sh integration 44 0 44 45.2 82
Parameters:
- `test-type`: Type of tests (unit | integration | e2e)
- `passed`: Number of tests that passed
- `failed`: Number of tests that failed
- `total`: Total number of tests
- `execution-time`: (optional) Total execution time in seconds
- `coverage`: (optional) Coverage percentage

What the script does:
The script validates its parameters before touching the catalog, ensuring correct usage and preventing invalid updates.
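A minimal sketch of what this validation might look like (the exact checks and error messages in the real execute.sh may differ):

```bash
#!/usr/bin/env bash
# Illustrative validation only -- the shipped execute.sh may implement this differently.
set -euo pipefail

TEST_TYPE="${1:-}"; PASSED="${2:-}"; FAILED="${3:-}"; TOTAL="${4:-}"

# Require the four mandatory parameters
if [[ -z "$TEST_TYPE" || -z "$PASSED" || -z "$FAILED" || -z "$TOTAL" ]]; then
  echo "Usage: execute.sh <test-type> <passed> <failed> <total> [execution-time] [coverage]" >&2
  exit 1
fi

# test-type must be one of the known suites
case "$TEST_TYPE" in
  unit|integration|e2e) ;;
  *) echo "Invalid test-type: $TEST_TYPE (expected unit | integration | e2e)" >&2; exit 1 ;;
esac

# Counts must be non-negative integers and must add up
if ! [[ "$PASSED" =~ ^[0-9]+$ && "$FAILED" =~ ^[0-9]+$ && "$TOTAL" =~ ^[0-9]+$ ]]; then
  echo "passed, failed, and total must be integers" >&2; exit 1
fi
if (( PASSED + FAILED != TOTAL )); then
  echo "passed + failed ($((PASSED + FAILED))) does not match total ($TOTAL)" >&2; exit 1
fi
```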
After running tests, I'll use the test-catalog-updater skill to record the results.
# After running unit tests
# Arguments: test-type, passed, failed, total, execution time (seconds), coverage (%)
bash .claude/skills/test-catalog-updater/execute.sh \
  unit \
  45 \
  0 \
  45 \
  12.3 \
  85
# Extract test results and update catalog
PASSED=$(grep "Passed:" test-results.log | cut -d: -f2 | tr -d '[:space:]')
FAILED=$(grep "Failed:" test-results.log | cut -d: -f2 | tr -d '[:space:]')
TOTAL=$((PASSED + FAILED))
bash .claude/skills/test-catalog-updater/execute.sh \
e2e \
"$PASSED" \
"$FAILED" \
"$TOTAL" \
"125.7" \
"N/A"
| Metric | Description | Source |
|---|---|---|
| Total Tests | Count of test methods/functions | Test runner output |
| Passed | Number passing | Test runner output |
| Failed | Number failing | Test runner output |
| Pass Rate | Percentage passing | Calculated |
| Execution Time | Total run time | Test runner output |
| Coverage | Code coverage % | Coverage tool output |
| Last Run | Timestamp of execution | System date |
| Status | PASSING/FAILING/FLAKY | Calculated |
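Both calculated columns can be derived directly from the raw counts. A minimal bash sketch (classifying FLAKY requires history across runs, which this omits):

```bash
# Derive the calculated columns from raw counts.
PASSED=42; FAILED=3; TOTAL=$((PASSED + FAILED))

# Pass Rate: integer percentage of passing tests
PASS_RATE=$(( TOTAL > 0 ? PASSED * 100 / TOTAL : 0 ))

# Status: PASSING if nothing failed, FAILING otherwise.
if (( FAILED == 0 )); then
  STATUS="PASSING"
else
  STATUS="FAILING"
fi

echo "Pass Rate: ${PASS_RATE}%  Status: ${STATUS}"
```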
## Test Summary Metrics
**Last Updated**: 2025-11-04 14:30:00
### Overall Status
- **Total Tests**: 245
- **Passing**: 240 (98%)
- **Failing**: 5 (2%)
- **Flaky**: 3 (1%)
### End-to-End Tests (Playwright)
**Status**: ⚠️ PARTIAL (some failures)
| Status | Total | Passed | Failed | Pass Rate | Exec Time | Coverage | Last Run |
|--------|-------|--------|--------|-----------|-----------|----------|----------|
| ⚠️ PARTIAL | 45 | 42 | 3 | 93% | 125.3s | N/A | 2025-11-04 14:25 |
**Test Files**: 12 spec files in `/tests/playwright/`
**Known Issues**:
- Login flow test flaky (race condition)
- Admin dashboard timeout on slow machines
- Event registration occasionally times out
### Backend Unit Tests
**Status**: ✅ PASSING
| Status | Total | Passed | Failed | Pass Rate | Exec Time | Coverage | Last Run |
|--------|-------|--------|--------|-----------|-----------|----------|----------|
| ✅ PASSING | 156 | 156 | 0 | 100% | 12.4s | 87% | 2025-11-04 14:20 |
**Test Files**: 24 test classes in `/tests/WitchCityRope.Core.Tests/`
### Backend Integration Tests
**Status**: ✅ PASSING
| Status | Total | Passed | Failed | Pass Rate | Exec Time | Coverage | Last Run |
|--------|-------|--------|--------|-----------|-----------|----------|----------|
| ✅ PASSING | 44 | 44 | 0 | 100% | 45.2s | 82% | 2025-11-04 14:22 |
**Test Files**: 8 test classes in `/tests/WitchCityRope.IntegrationTests/`
phase-4-validator depends on TEST_CATALOG:
test-catalog-updater feeds phase-4-validator:
When TEST_CATALOG shows failures:
1. test-executor documents the failure in the catalog
2. Orchestrator reads the catalog
3. Developer agent fixes the issue
4. test-executor re-runs the tests
5. phase-4-validator confirms
**Symptom**: Tests run but catalog shows old data
**Impact**: Orchestrator makes decisions on stale information
**Solution**: Enforce catalog updates in the test-executor agent definition

**Symptom**: Test passes but catalog still shows FAILING
**Impact**: False negative blocks the workflow
**Solution**: Update the catalog immediately after the test run, before reporting

**Symptom**: Some columns empty or "N/A"
**Impact**: Incomplete tracking
**Solution**: Extract metrics from test runner output consistently

**Symptom**: Multiple agents update the catalog simultaneously
**Impact**: Lost updates or conflicts
**Solution**: Run test suites sequentially, or use file locking (see the sketch below)
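For the concurrency case, one option is to wrap the catalog update in flock(1), assuming it is available on the host. A sketch, not part of the shipped script:

```bash
# Wrap the catalog update in an exclusive lock so parallel test runs
# cannot interleave their writes. Assumes flock(1) is installed.
CATALOG="/docs/standards-processes/testing/TEST_CATALOG.md"

(
  flock -w 30 200 || { echo "Could not acquire catalog lock" >&2; exit 1; }
  bash .claude/skills/test-catalog-updater/execute.sh unit 45 0 45 12.3 85
) 200>"${CATALOG}.lock"
```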
{
"catalogUpdate": {
"testType": "unit",
"timestamp": "2025-11-04 14:30:00",
"metrics": {
"total": 156,
"passed": 156,
"failed": 0,
"passRate": 100,
"executionTime": 12.4,
"coverage": 87
},
"status": {
"previous": "PASSING",
"current": "PASSING",
"changed": false
},
"location": "/docs/standards-processes/testing/TEST_CATALOG.md",
"section": "### Backend Unit Tests",
"backup": "/docs/standards-processes/testing/TEST_CATALOG.md.backup"
},
"nextSteps": [
"All tests passing - no action needed",
"Coverage at 87% (target: 80%) - maintaining"
]
}
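If the JSON result above is captured to a file, downstream agents can pull out individual fields with jq. The field paths come from the example above; the file name catalog-update.json is hypothetical:

```bash
# Read fields from a captured copy of the JSON result shown above.
jq -r '.catalogUpdate.status.changed' catalog-update.json    # false
jq -r '.catalogUpdate.metrics.passRate' catalog-update.json  # 100
jq -r '.nextSteps[]' catalog-update.json                     # one step per line
```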
- **Initial Context**: Show test type and pass/fail summary
- **On Request**: Show full metrics and catalog location
- **On Failure**: Show failure details and required actions
- **On Pass**: Show concise confirmation
Remember: TEST_CATALOG is the memory for test health. Without it, every test run starts from zero context. With it, trends emerge, problems surface early, and test quality continuously improves.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.