Systematic test debugging with 5-phase investigation workflow. Use when debugging test failures, investigating complex errors, or when AI consultation would help. Uses native test commands with optional AI consultation for complex failures.
Installation:

```
/plugin marketplace add tylerburleigh/claude-foundry
/plugin install foundry@claude-foundry
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled references:
- references/common-fixes.md
- references/failure-categories.md
- references/investigation.md
- references/pre-flight.md
- references/subagent-patterns.md
- references/tool-selection.md
- references/troubleshooting.md

The foundry:run-tests skill provides systematic test debugging with a 5-phase investigation workflow. It guides debugging of complex test failures and leverages AI consultation when available.
When to use this skill:
- Debugging test failures
- Investigating complex errors
- When AI consultation would help with a complex failure

When NOT to use this skill:
- Simple test runs (use pytest, go test, or npm test directly)
Workflow legend: [x?] = decision · (GATE) = user approval · → = sequence · ↻ = loop
- **Entry** → Run failing test(s)
- [pass?] → **Exit**: Done
- [fail?] → Categorize failure
- FormHypothesis → GatherContext[Explore preferred]
- [Research available?] → `research action="chat"`
- [yes] → Consult (MANDATORY)
- [no] → skip
- ImplementFix → VerifySpecific ↻ [pass?]
- RunFullSuite ↻ [pass?] → **Exit**: Done
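As a concrete illustration of the VerifySpecific ↻ RunFullSuite loop above, here is a minimal sketch assuming a pytest project (the test path and name are hypothetical):

```bash
# Re-run only the failing test first; stay in this loop until it passes.
pytest tests/test_cart.py::test_cart_total_empty -vvs

# Then run the full suite to confirm the fix didn't break anything else;
# if it did, go back to the fix step.
pytest
```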
Decision rules:
- Consult AI (mandatory) when the failure is complex and research tools are available; skip consultation otherwise.
- Loop on the specific failing test until it passes, then loop on the full suite before exiting.
Use native test commands:
```bash
# Python
pytest tests/test_module.py::test_function -vvs

# Go
go test -v -run TestName ./...

# JavaScript
jest path/to/test.js -t "test name"
npm test
```
Categorize the failure:
| Category | Description |
|---|---|
| Assertion | Expected vs actual mismatch |
| Exception | Runtime errors (language-specific) |
| Import | Missing dependencies or module issues |
| Setup | Fixture, configuration, or initialization issues |
| Timeout | Performance or hanging issues |
| Flaky | Non-deterministic failures |
Extract key information: the failure category, the exact error message, and where in the code under test it occurs.
Form hypothesis: What's causing the failure?
For detailed failure categories, see references/failure-categories.md.
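As an example, here is a minimal hypothetical sketch of an Assertion-category failure and the hypothesis it suggests (the class, test, and bug are all invented for illustration):

```python
# Hypothetical code under test: the bug is that total() returns None for an empty cart.
class Cart:
    def __init__(self, items):
        self.items = items

    def total(self):
        if self.items:
            return sum(self.items)
        # Bug: falls through and implicitly returns None instead of 0


# Category: Assertion -- expected vs actual mismatch (None != 0).
def test_cart_total_empty():
    cart = Cart(items=[])
    assert cart.total() == 0

# Hypothesis: total() never handles the empty-items case, so it returns None
# where the test expects 0.
```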
Use Explore subagents (preferred) for code context, or Glob, Grep, and Read for targeted lookups.
For subagent selection criteria and detailed patterns, see references/subagent-patterns.md.
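As a sketch, a context-gathering prompt for an Explore subagent might look like the following (the symbols and paths are hypothetical):

```
Find where Cart.total() is defined and every fixture or factory that builds
Cart objects under tests/. Report the file paths, the relevant function
bodies, and anything that handles the empty-items case.
```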
AI consultation is mandatory for complex failures when research tools are available.
Standard consultation (most cases):
```
mcp__plugin_foundry_foundry-mcp__research action="chat" prompt="..." system_prompt="You are debugging a test failure."
```
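For instance, a filled-in consultation might look like this (the failure details are hypothetical; use the templates in references/tool-selection.md for the required structure):

```
mcp__plugin_foundry_foundry-mcp__research action="chat" prompt="test_cart_total_empty fails: assert cart.total() == 0, but total() returns None when items is empty. Relevant code included below. What is the most likely root cause and fix?" system_prompt="You are debugging a test failure."
```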
Multi-perspective analysis (flaky tests, architectural issues):
```
mcp__plugin_foundry_foundry-mcp__research action="consensus" prompt="..." strategy="synthesize"
```
When to use which:
- `chat` - Standard debugging, quick diagnosis
- `consensus` - Complex failures, when you want multiple AI perspectives, flaky tests
- `thinkdeep` - Systematic hypothesis-driven investigation for complex issues

CRITICAL: Read references/tool-selection.md before AI consultation. It contains the required prompt templates and strategy parameters.
When debugging reveals missing spec requirements, document them:
```
mcp__plugin_foundry_foundry-mcp__task action="add-requirement" spec_id={spec-id} task_id={task-id} requirement="Handle empty input array gracefully"
```
When to use:
- Debugging reveals behavior the spec should cover but does not (e.g., handling an empty input array gracefully).
Quick command reference:

```bash
# Python
pytest tests/test_file.py::test_name -vvs           # verbose, show output
pytest --pdb tests/test_file.py::test_name          # drop into the debugger on failure
pytest -m "unit"                                    # run by marker; also: -m "not slow"
pytest --cov=src --cov-report=term-missing          # coverage with missing lines

# Go
go test -v -run TestName ./...                      # run a single test verbosely
go test -race ./...                                 # race detector
go test -cover -coverprofile=coverage.out ./...     # coverage
dlv test -- -test.run TestName                      # debug a test with Delve

# JavaScript
jest path/to/test.js -t "test name"                 # run a single test by name
jest --watch                                        # watch mode
node --inspect-brk node_modules/.bin/jest --runInBand   # debug with the Node inspector
jest --coverage                                     # coverage
npm test
npm test -- --specific-flag                         # pass flags through to the test runner
```