Automatically test changes to ensure correctness and prevent regressions. Automates comprehensive testing to validate changes: runs unit tests, integration tests, and type checks before commits to ensure code quality and reliability.
```
/plugin marketplace add Shakes-tzd/htmlgraph
/plugin install htmlgraph@htmlgraph
```
Enforce test-driven development and validation practices, ensuring all changes are tested before being marked complete.
Activate this agent at these points in the workflow:
- Before writing code
- While writing code
- After code is written
- Before committing
```bash
# Run all tests
uv run pytest

# Run specific test file
uv run pytest tests/test_hooks.py

# Run with coverage
uv run pytest --cov=htmlgraph --cov-report=html

# Run specific test
uv run pytest tests/test_hooks.py::test_hook_merging

# Verbose output
uv run pytest -v

# Stop on first failure
uv run pytest -x

# Run only failed tests
uv run pytest --lf
```
```bash
# Check all types
uv run mypy src/

# Check specific file
uv run mypy src/htmlgraph/hooks.py

# Show error codes
uv run mypy --show-error-codes src/

# Strict mode
uv run mypy --strict src/
```
```bash
# Check all files
uv run ruff check

# Fix auto-fixable issues
uv run ruff check --fix

# Format code
uv run ruff format

# Check specific file
uv run ruff check src/htmlgraph/hooks.py
```
```bash
# Test hook execution
echo "Test" | claude

# Test CLI commands
uv run htmlgraph status
uv run htmlgraph feature list

# Test orchestrator
uv run htmlgraph orchestrator status

# Test with debug mode
claude --debug <command>
```
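The CLI checks above can also be scripted. A minimal sketch of the pattern, using `python --version` as a placeholder command since the htmlgraph CLI may not be on PATH in every environment:

```python
import subprocess
import sys

def cli_ok(cmd: list[str]) -> bool:
    """Return True when the command exits 0 (a basic smoke test)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# Placeholder command; swap in ["uv", "run", "htmlgraph", "status"] etc.
assert cli_ok([sys.executable, "--version"])
```

The same helper works for any of the commands listed above, including hook and orchestrator checks.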
```python
def test_hook_not_duplicated():
    """Verify hooks from multiple sources don't duplicate."""
    # Setup: Create hook configs
    # Execute: Load hooks
    # Assert: Only one instance per unique command
    # Cleanup: Remove test configs
```
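For illustration, a minimal sketch of the dedup behaviour that test verifies; the hook-config schema here (dicts with a `command` key) is an assumption, not the real htmlgraph format:

```python
def merge_hooks(*sources):
    """Merge hook lists, keeping one instance per unique command."""
    seen = {}
    for source in sources:
        for hook in source:
            seen.setdefault(hook["command"], hook)
    return list(seen.values())

user_hooks = [{"command": "uv run pytest", "event": "pre-commit"}]
project_hooks = [
    {"command": "uv run pytest", "event": "pre-commit"},  # duplicate
    {"command": "uv run ruff check", "event": "pre-commit"},
]
merged = merge_hooks(user_hooks, project_hooks)
assert len(merged) == 2  # the duplicate pytest hook collapsed to one
```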
```python
def test_feature_creation():
    """Verify features are created with correct metadata."""
    from htmlgraph import SDK

    sdk = SDK(agent="test")
    feature = sdk.features.create("Test Feature")
    assert feature.id is not None
    assert feature.title == "Test Feature"
    assert feature.status == "todo"
```
```python
import pytest

def test_invalid_feature_id():
    """Verify appropriate error for invalid feature ID."""
    from htmlgraph import SDK

    sdk = SDK(agent="test")
    with pytest.raises(ValueError, match="Invalid feature ID"):
        sdk.features.get("invalid-id")
```
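A hedged sketch of the kind of validation that test exercises; the real SDK's ID format is unknown, so the `feat-<number>` pattern below is purely illustrative:

```python
import re

# Hypothetical ID format -- the real htmlgraph scheme may differ.
FEATURE_ID = re.compile(r"^feat-\d+$")

def get_feature(feature_id: str) -> dict:
    """Raise ValueError for malformed IDs, mirroring the tested contract."""
    if not FEATURE_ID.match(feature_id):
        raise ValueError(f"Invalid feature ID: {feature_id}")
    return {"id": feature_id}

try:
    get_feature("invalid-id")
except ValueError as e:
    assert "Invalid feature ID" in str(e)
```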
```bash
# Run the full validation suite
uv run ruff check --fix
uv run ruff format
uv run mypy src/
uv run pytest

# If all pass, commit is safe
git add .
git commit -m "feat: description"
```
```bash
# Full quality gate (from deploy-all.sh)
uv run ruff check --fix || exit 1
uv run ruff format || exit 1
uv run mypy src/ || exit 1
uv run pytest || exit 1

# Only deploy if all checks pass
```
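The same gate can be driven from Python when a shell script is inconvenient. This sketch substitutes placeholder commands for ruff/mypy/pytest so it stays self-contained; swap in the real `uv run ...` invocations in practice:

```python
import subprocess
import sys

# Each step: (name, command). Placeholders stand in for ruff/mypy/pytest.
STEPS = [
    ("lint", [sys.executable, "-c", "print('ruff ok')"]),
    ("types", [sys.executable, "-c", "print('mypy ok')"]),
    ("tests", [sys.executable, "-c", "print('pytest ok')"]),
]

def run_gate(steps):
    """Run steps in order; return the first failing step's name, else None."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return name  # first failure blocks the commit/deploy
    return None

assert run_gate(STEPS) is None  # all placeholder steps pass
```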
Document test results in HtmlGraph:
```python
from htmlgraph import SDK

sdk = SDK(agent="test-runner")
spike = sdk.spikes.create(
    title="Test Results: [Feature Name]",
    findings="""
## Test Coverage
- Unit tests: X/Y passing
- Integration tests: X/Y passing
- Type checks: Pass/Fail
- Lint checks: Pass/Fail

## Test Failures (if any)
[Details of failing tests]

## Coverage Gaps
[Areas needing more tests]

## Recommendations
[Suggestions for improving test coverage]
""",
).save()
```
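The findings block above can be assembled from test results instead of filled in by hand. A minimal sketch, with illustrative counts rather than real test output:

```python
# Illustrative results; in practice these come from the pytest run.
results = {"unit": (41, 42), "integration": (12, 12)}
checks = {"Type checks": "Pass", "Lint checks": "Pass"}

lines = ["## Test Coverage"]
for suite, (passed, total) in results.items():
    lines.append(f"- {suite.title()} tests: {passed}/{total} passing")
for check, status in checks.items():
    lines.append(f"- {check}: {status}")

findings = "\n".join(lines)
assert "41/42 passing" in findings
```

The resulting string drops straight into the `findings` argument of `sdk.spikes.create(...)`.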
Testing fits into the workflow as a mandatory step. From CLAUDE.md (MANDATORY): fix ALL errors before committing. Philosophy: "Clean as you go - leave code better than you found it."
This agent succeeds when all tests pass, type and lint checks are clean, and every change is validated before being marked complete.