Parse test framework output to extract results, failures, and coverage information. Use when processing test execution output from pytest, Jest, JUnit, xUnit, Google Test, Go test, and other frameworks to extract pass/fail counts, failure details, and stack traces.
```
npx claudepluginhub issacchaos/local-marketplace --plugin test-engineering
```

This skill uses the workspace's default tool permissions.
**Version**: 1.0.0
Bundled reference files:

- base-parser-interface.md
- parser-factory-pattern.md
- parsers/catch2-parser.md
- parsers/go-test-parser.md
- parsers/gtest-parser.md
- parsers/jest-parser.md
- parsers/junit-parser.md
- parsers/mocha-parser.md
- parsers/playwright-parser.md
- parsers/pytest-parser.md
- parsers/testng-parser.md
- parsers/vitest-parser.md
- parsers/xunit-parser.md
- pattern-library/failure-location-patterns.md
- pattern-library/test-count-patterns.md
**Version**: 1.0.0
**Category**: Analysis
**Languages**: Python, JavaScript, TypeScript, Java, C#, Go, C++, Rust, Ruby, C
**Purpose**: Parse test framework output to extract results, failures, and coverage information
The Result Parsing Skill provides comprehensive test output parsing capabilities across 11+ testing frameworks. It uses a factory pattern with auto-registration to select the appropriate parser based on framework type or output analysis, ensuring accurate extraction of test counts, failure details, and coverage information.
```yaml
test_output:
  stdout: Raw standard output from test execution
  stderr: Raw standard error from test execution
  combined: Merged stdout + stderr
  framework: Detected framework name (e.g., "pytest", "jest", "junit")
  exit_code: Process exit code (0 = success, non-zero = failure)

test_results:
  total_tests: Total number of tests executed
  passed_count: Number of tests that passed
  failed_count: Number of tests that failed
  skipped_count: Number of tests skipped
  error_count: Number of tests with errors (not assertion failures)
  duration_seconds: Total test execution time
  failures:                       # List of individual test failures
    - test_name: Name of the failed test
      test_file: File containing the test
      test_method: Method/function name
      failure_type: Type of failure (AssertionError, TimeoutError, etc.)
      failure_message: Human-readable error message
      stack_trace: Full stack trace
      failure_line_number: Line where failure occurred
      affected_code: Code snippet showing the failure location
  coverage:                       # Optional coverage information
    total_coverage: Overall coverage percentage
    line_coverage: Line coverage percentage
    branch_coverage: Branch coverage percentage (if available)
    total_lines: Total lines of code
    covered_lines: Lines covered by tests
    uncovered_lines: Lines not covered
    file_coverage: Per-file coverage breakdown
    coverage_tool: Tool used (pytest-cov, istanbul, jacoco, etc.)
```
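The schema above can be mirrored as Python dataclasses. This is a minimal sketch using the `TestExecutionResult`, `TestFailureInfo`, and `TestCoverageInfo` type names referenced elsewhere in this skill; defaults and exact field types are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestFailureInfo:
    test_name: str
    test_file: str = ""
    test_method: str = ""
    failure_type: str = ""            # e.g. AssertionError, TimeoutError
    failure_message: str = ""
    stack_trace: str = ""
    failure_line_number: Optional[int] = None
    affected_code: str = ""

@dataclass
class TestCoverageInfo:
    total_coverage: float = 0.0       # overall percentage
    line_coverage: float = 0.0
    branch_coverage: Optional[float] = None  # not all tools report branches
    coverage_tool: str = ""           # pytest-cov, istanbul, jacoco, ...

@dataclass
class TestExecutionResult:
    total_tests: int = 0
    passed_count: int = 0
    failed_count: int = 0
    skipped_count: int = 0
    error_count: int = 0              # errors, not assertion failures
    duration_seconds: float = 0.0
    failures: List[TestFailureInfo] = field(default_factory=list)
    coverage: Optional[TestCoverageInfo] = None
```

Parsers can then populate a single well-typed result object instead of passing loose dictionaries between stages.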
```
Result Parsing Skill
├── BaseTestParser (abstract interface)
│   ├── parse(execution_result) → TestExecutionResult
│   ├── extract_failures(output) → List[TestFailureInfo]
│   ├── extract_coverage(output) → TestCoverageInfo
│   └── can_parse(framework, output) → bool
│
├── ParserFactory (registry and selection)
│   ├── register_parser(parser_class)
│   ├── get_parser(framework, output) → BaseTestParser
│   ├── auto_detect_framework(command, output) → str
│   └── list_registered_parsers() → List[str]
│
└── Framework-Specific Parsers (implementations)
    ├── PytestParser (Python/pytest)
    ├── UnittestParser (Python/unittest)
    ├── JestParser (JavaScript/Jest)
    ├── VitestParser (TypeScript/Vitest)
    ├── JUnitParser (Java/JUnit)
    ├── XUnitParser (C#/xUnit)
    ├── NUnitParser (C#/NUnit)
    ├── MSTestParser (C#/MSTest)
    ├── GoTestParser (Go/built-in)
    ├── GTestParser (C++/GTest)
    ├── Catch2Parser (C++/Catch2)
    ├── CargoTestParser (Rust/cargo test)
    ├── RSpecParser (Ruby/RSpec)
    ├── UnityParser (C/Unity)
    ├── PlaywrightParser (TypeScript/Playwright)
    └── GenericParser (fallback)
```
```
1. Framework provided?
   ├─ YES → Try to match framework name to registered parsers
   │        ├─ Match found? Use that parser
   │        └─ No match? Continue to output analysis
   │
   └─ NO → Perform output analysis

2. Output analysis
   └─ For each registered parser:
        └─ Call parser.can_parse(framework, output)
             ├─ Returns True? Use that parser
             └─ Otherwise continue checking

3. Fallback
   └─ No parser matched?
        └─ Use GenericParser (basic pattern matching)
```
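The selection flow can be sketched as follows. This is a simplified illustration, not the factory's actual implementation; the `framework_name` class attribute and registration-order tie-breaking are assumptions:

```python
class GenericParser:
    """Fallback parser: accepts anything, does basic pattern matching."""
    framework_name = "generic"

    def can_parse(self, framework, output):
        return True

class ParserFactory:
    """Illustrative registry: match by framework name first, then sniff output."""

    def __init__(self):
        self._parsers = []  # registered parser classes, in registration order

    def register_parser(self, parser_class):
        self._parsers.append(parser_class)

    def get_parser(self, framework=None, output=""):
        # 1. Framework provided? Try to match the name to a registered parser.
        if framework:
            for cls in self._parsers:
                if cls.framework_name == framework.lower():
                    return cls()
        # 2. Output analysis: ask each parser whether it recognizes the output.
        for cls in self._parsers:
            parser = cls()
            if parser.can_parse(framework, output):
                return parser
        # 3. Fallback: generic pattern matching.
        return GenericParser()
```

The ordering matters: an explicit framework name short-circuits output sniffing, so callers that already know the framework never pay the cost of probing every parser.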
See base-parser-interface.md for complete details.
- `parse(execution_result)`: Main parsing method
- `extract_failures(output)`: Failure extraction
- `extract_coverage(output)`: Coverage extraction
- `can_parse(framework, output)`: Detection method
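A minimal sketch of the abstract interface, based on the method list above (the concrete signatures in base-parser-interface.md may differ; treating `extract_coverage` as optional is an assumption):

```python
from abc import ABC, abstractmethod

class BaseTestParser(ABC):
    framework_name = ""  # e.g. "pytest", "jest", "junit"

    @abstractmethod
    def parse(self, execution_result):
        """Return a TestExecutionResult with counts, failures, and coverage."""

    @abstractmethod
    def extract_failures(self, output):
        """Return a list of TestFailureInfo parsed from raw output."""

    def extract_coverage(self, output):
        """Return coverage info, or None when the output contains none."""
        return None

    @abstractmethod
    def can_parse(self, framework, output):
        """Return True if this parser recognizes the framework or output."""
```

Subclasses only need to override `extract_coverage` when their framework emits coverage data; the base class's `None` default keeps coverage strictly optional.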
See parser-factory-pattern.md for complete details.
```python
# Get the factory (singleton)
factory = get_parser_factory()

# Register a custom parser
factory.register_parser(MyCustomParser)

# Get parser by framework name
parser = factory.get_parser(framework="pytest")

# Get parser by output analysis
parser = factory.get_parser(output="===== test session starts =====")

# Auto-detect framework
framework = factory.auto_detect_framework(
    command="pytest tests/",
    output="collected 10 items",
)
```
See parsers/pytest-parser.md for complete details.
**PytestParser** patterns:

- `===== 10 passed, 2 failed in 1.23s =====`
- `collected 10 items`
- `===== FAILURES =====`
- `___________ test_name ___________`
- `tests/test_foo.py:15: in test_function`
- `E AssertionError: assert 1 == 2`
- `TOTAL 100 25 75%` (coverage summary)

Edge Cases:

**UnittestParser** patterns:

- `Ran 10 tests in 1.230s`
- `OK`
- `FAILED (failures=2)`
- `FAILED (errors=1, failures=2)`
- `FAIL: test_name (module.TestClass)`
- `ERROR: test_name (module.TestClass)`
- `----------------------------------------------------------------------`

**JestParser** patterns:

- `Test Suites: 2 passed, 2 total`
- `Tests: 10 passed, 2 failed, 12 total`
- `PASS tests/foo.test.js`
- `FAIL tests/bar.test.js`
- `● TestSuite › test name`
- `at Object.<anonymous> (tests/foo.test.js:15:5)`

**VitestParser** patterns:

- `Test Files 2 passed (2)`
- `Tests 10 passed | 2 failed (12)`
- `Duration 1.23s`

**JUnitParser** patterns:

- `Tests run: 10, Failures: 2, Errors: 0, Skipped: 1`
- `testFailure(com.example.TestClass) Time elapsed: 0.001 s <<< FAILURE!`
- `at com.example.TestClass.testMethod(TestClass.java:42)`

**XUnitParser** patterns:

- `Total tests: 10. Passed: 8. Failed: 2. Skipped: 0.`
- `[FAIL] TestNamespace.TestClass.TestMethod`
- `at TestNamespace.TestClass.TestMethod() in TestClass.cs:line 42`

**GoTestParser** patterns:

- `=== RUN TestFunction`
- `--- PASS: TestFunction (0.01s)`
- `--- FAIL: TestFunction (0.02s)`
- `test_file.go:42: Error message`
- `FAIL package/name 0.123s`

**GTestParser** patterns:

- `[==========] Running 10 tests from 2 test suites.`
- `[ RUN ] TestSuite.TestName`
- `[ OK ] TestSuite.TestName (1 ms)`
- `[ FAILED ] TestSuite.TestName (2 ms)`
- `test_file.cpp:42: Failure`
- `[ PASSED ] 8 tests.` + `[ FAILED ] 2 tests, listed below:`

**CargoTestParser** patterns:

- `running 10 tests`
- `test test_name ... ok`
- `test test_name ... FAILED`
- `---- test_name stdout ----`
- `thread 'test_name' panicked at 'assertion failed', src/lib.rs:42:5`
- `test result: FAILED. 8 passed; 2 failed; 0 ignored; 0 measured`

**RSpecParser** patterns:

- `.` (dot per passing example)
- `F` (letter F per failure)
- `10 examples, 2 failures`
- `Failure/Error:`
- `# ./spec/test_spec.rb:42:in 'block (2 levels) in <top (required)>'`

See parsers/playwright-parser.md for complete details.

**PlaywrightParser** patterns:

- `Running X tests using Y workers`
- `[chromium]`, `[firefox]`, `[webkit]`
- `✓ N [browser] > file:line:col > Suite > test name (duration)`
- `✗ N [browser] > file:line:col > Suite > test name (duration)`
- `- N [browser] > file:line:col > Suite > test name` (skipped)
- `N) [browser] > file:line:col > Suite > test name` (failure listing)
- `at tests/login.spec.ts:12:50`
- `X passed (Ys), X failed, X skipped`
- `Test timeout of Xms exceeded`
- `(Retry #N)`
- `Call log:` followed by an indented action trace

Auto-Detection Signatures:

- `Running \d+ tests? using \d+ workers?` → PlaywrightParser
- `\[chromium\]` / `\[firefox\]` / `\[webkit\]` → PlaywrightParser

Edge Cases:
The pattern library provides reusable regex patterns for common parsing tasks.
See pattern-library/test-count-patterns.md
Common patterns for extracting test counts:
See pattern-library/failure-location-patterns.md
Common patterns for extracting failure locations:
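As an illustration of the kinds of regexes the pattern library collects, the following are hypothetical simplified examples for pytest-style output, not the library's actual contents:

```python
import re

# Test counts: "===== 10 passed, 2 failed in 1.23s =====" (pytest-style summary)
PYTEST_SUMMARY = re.compile(
    r"=+ (?P<passed>\d+) passed(?:, (?P<failed>\d+) failed)? in [\d.]+s =+"
)

# Failure locations: "tests/test_foo.py:15: in test_function" (pytest-style frame)
PYTEST_LOCATION = re.compile(r"(?P<file>\S+\.py):(?P<line>\d+): in (?P<func>\w+)")

def parse_summary(line: str) -> dict:
    """Extract pass/fail counts from a summary line; {} when no match."""
    m = PYTEST_SUMMARY.search(line)
    return {k: int(v) for k, v in m.groupdict().items() if v is not None} if m else {}

def parse_location(line: str) -> dict:
    """Extract file, line, and function from a failure-location line."""
    m = PYTEST_LOCATION.search(line)
    return m.groupdict() if m else {}
```

Named groups keep the extraction self-documenting, and optional groups (like the `failed` count) let one pattern cover both all-pass and mixed summaries.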
Scenario: Test discovery found no tests
Detection:
- `collected 0 items`
- `No tests found` (or `no tests found`)
- `Tests run: 0`

Response:
total_tests: 0
passed_count: 0
failed_count: 0
skipped_count: 0
error_count: 0
Scenario: All tests successful
Detection:
Response:
total_tests: N
passed_count: N
failed_count: 0
skipped_count: 0
failures: []
Scenario: Every test failed
Detection:
Response:
Scenario: Test execution was interrupted
Detection:
Response:
total_tests: N (partial)
passed_count: N (before timeout)
failed_count: 0 or more
error_count: 1 (timeout)
failures:
- test_name: "Test Execution"
failure_type: "TimeoutError"
failure_message: "Test execution timed out after X seconds"
Scenario: Output doesn't match any known framework
Detection:
Response:
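For the unknown-framework case, a generic fallback might scan for framework-agnostic pass/fail keywords. A hypothetical sketch, not the GenericParser's actual logic:

```python
import re

def generic_parse(output: str) -> dict:
    """Best-effort counts when no framework-specific parser matched."""
    # Count tokens that look like individual pass/fail markers.
    passed = len(re.findall(r"\b(?:PASS(?:ED)?|OK|ok)\b", output))
    failed = len(re.findall(r"\bFAIL(?:ED|URE)?\b", output))
    return {
        "total_tests": passed + failed,
        "passed_count": passed,
        "failed_count": failed,
        "parser": "generic",  # flag the results as low-confidence
    }
```

Tagging the result with `parser: generic` lets downstream consumers treat the counts as approximate rather than authoritative.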
When executing tests, the Execute Agent uses this skill to parse results:
```
# Read Result Parsing Skill
Read file: skills/result-parsing/SKILL.md
Read file: skills/result-parsing/base-parser-interface.md
Read file: skills/result-parsing/parser-factory-pattern.md

# For framework-specific parsing
Read file: skills/result-parsing/parsers/pytest-parser.md  # If pytest detected

# Parse Test Output
1. Get parser from factory:
   parser = factory.get_parser(framework="pytest", output=test_output)
2. Parse execution result:
   result = parser.parse(execution_result)
3. Extract failure details:
   failures = parser.extract_failures(test_output)
4. Extract coverage (if available):
   coverage = parser.extract_coverage(test_output)

# Return Structured Results
Return:
- test_counts: {total, passed, failed, skipped, error}
- failures: [{test_name, file, line, message, trace}, ...]
- coverage: {total_coverage, file_coverage, ...}
```
When analyzing test results:
```
# Read Failure Information
failures = execution_result.failures

# For each failure:
- Identify failure type (assertion, exception, timeout)
- Categorize as test bug vs source bug
- Extract relevant code context
- Suggest fix approaches
```
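The categorization step could be sketched as a simple heuristic. This is illustrative only, and the frame-naming conventions it checks are assumptions; real triage would inspect the stack trace more carefully:

```python
def categorize_failure(failure_type: str, stack_trace: str) -> str:
    """Rough heuristic: where do the deepest stack frames point?"""
    if failure_type in ("TimeoutError", "Timeout"):
        return "environment"  # likely flaky test or resource issue
    # Frames inside test files suggest a test bug; source frames a source bug.
    last_frames = stack_trace.strip().splitlines()[-3:]
    if any("test_" in f or "_test" in f or "spec" in f for f in last_frames):
        return "test_bug"
    return "source_bug"
```

A real implementation would weigh all frames, not just the deepest three, and consult the failure message as well, but even this coarse split is enough to route failures to different fix strategies.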
Test individual parser methods with known output samples:
```python
# Test pytest parser
def test_pytest_parser_all_pass():
    output = """
    ===== test session starts =====
    collected 10 items

    tests/test_foo.py::test_1 PASSED
    ...
    ===== 10 passed in 1.23s =====
    """
    parser = PytestParser()
    result = parser.parse(output)
    assert result.total_tests == 10
    assert result.passed_count == 10
    assert result.failed_count == 0
    assert len(result.failures) == 0
```
Test parser factory with real framework output:
```python
def test_factory_auto_detect_pytest():
    factory = get_parser_factory()
    output = "===== test session starts ====="

    framework = factory.auto_detect_framework("pytest tests/", output)
    assert framework == "pytest"

    parser = factory.get_parser(framework=framework, output=output)
    assert isinstance(parser, PytestParser)
```
Test with challenging scenarios:
Implementation files:

- dante/src/dante/runner/test_execution/parsers/parser_factory.py
- dante/src/dante/runner/test_execution/parsers/base_parser.py
- dante/src/dante/runner/test_execution/parsers/pytest_parser.py

**Last Updated**: 2025-12-05
**Status**: Phase 1 - pytest parser implemented
**Next**: Add remaining framework parsers in future phases