test-failure-mindset
Use when encountering failing tests, diagnosing test errors, or establishing a systematic approach to test failure investigation. Activates on "test failure analysis", "debugging tests", or "why tests fail" requests. Establishes the mindset that treats test failures as valuable diagnostic signals requiring root-cause investigation, not automatic code fixes or test dismissal.
From `python3-development`: `npx claudepluginhub jamie-bitflight/claude_skills --plugin python3-development`. This skill uses the workspace's default tool permissions.
# Test Failure Analysis Mindset
Establish a balanced investigative approach for all test failures encountered in this session.
## Core Principle
Consult `../python3-development/references/python3-standards.md` when this plugin's shared testing or quality rules apply; the full standards, graphs, and amendment process are documented there.
Tests are specifications - they define expected behavior. When they fail, it's a critical moment requiring balanced investigation, not automatic dismissal.
## Dual Hypothesis Approach
Always consider both possibilities when a test fails:
| Hypothesis A | Hypothesis B |
|---|---|
| Test expectations are incorrect | Implementation has a bug |
| Test is outdated | Test caught a regression |
| Test has wrong assumptions | Test found an edge case |
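To make the two hypotheses concrete, here is a minimal sketch; the function names and the slug format are hypothetical, invented for illustration:

```python
import re

# Suppose a test asserts: slugify("Hello World!") == "hello-world"
def slugify(title: str) -> str:
    # First implementation: lowercases and hyphenates, but keeps the "!"
    return title.lower().replace(" ", "-")  # fails the test

# Hypothesis A: the test is wrong; maybe punctuation should be preserved.
# Hypothesis B: the implementation is wrong; the URL format forbids punctuation.
# Reading the test's intent (and, in a real repo, its git history) supports
# Hypothesis B here, so the fix belongs in the implementation:
def slugify_fixed(title: str) -> str:
    cleaned = re.sub(r"[^a-z0-9 ]", "", title.lower())
    return cleaned.replace(" ", "-")
```

Either hypothesis was plausible from the failure message alone; only investigating the test's intent decided between them.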
## Investigation Protocol
For EVERY test failure:
1. Pause and Read
   - Understand what the test is trying to verify
   - Read its name, comments, and assertions carefully
   - Check the test's history (git blame) for context
2. Trace the Implementation
   - Follow the code path that leads to the failure
   - Understand actual behavior vs. expected behavior
   - Check if recent changes affected this code path
3. Consider the Context
   - Is this testing a documented requirement?
   - Would current behavior surprise a user?
   - What would be the impact of each possible fix?
4. Make a Reasoned Decision

   | Situation | Action |
   |---|---|
   | Implementation is wrong | Fix the bug |
   | Test is wrong | Fix test AND document why |
   | Unclear | Seek clarification before changing |

5. Learn from the Failure
   - What can this teach about the system?
   - Should additional tests cover related cases?
   - Is there a pattern being missed?
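The protocol above can be walked through on a small, hypothetical failure; the function, the test name, and the login behavior are all invented for illustration:

```python
# Step 1 (Pause and Read): the test name states the specification:
#   test_normalize_email_strips_whitespace_and_lowercases
def normalize_email(raw: str) -> str:
    # Step 2 (Trace): the code path lowercases but never strips, so
    # "  Alice@Example.COM " comes back with its surrounding spaces.
    return raw.lower()

# Step 3 (Context): stored emails are compared byte-for-byte at login,
# so stray whitespace would surprise users; the test caught a real bug.
# Step 4 (Decision): the implementation is wrong -> fix the bug:
def normalize_email_fixed(raw: str) -> str:
    return raw.strip().lower()

# Step 5 (Learn): cover a related edge case the original suite missed:
assert normalize_email_fixed("\tBob@Example.com\n") == "bob@example.com"
```

Note that the conclusion came from the test's stated intent, not from assuming either the test or the code was right.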
## Red Flags (Dangerous Patterns)
- 🚫 Immediately changing tests to match implementation
- 🚫 Assuming implementation is always correct
- 🚫 Bulk-updating tests without individual analysis
- 🚫 Removing "inconvenient" test cases
- 🚫 Adding mock/stub workarounds instead of fixing root causes
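The first red flag, bending a test to match buggy output, can look like this in practice (hypothetical pricing code; the 10% tax requirement is invented for the example):

```python
def total_price(items):
    # Bug: the documented requirement, which the failing test encoded,
    # says a 10% tax must be applied, but this sums the raw prices.
    return sum(items)

# Red-flag "fix": edit the assertion to expect 100 instead of 110,
# silently deleting the requirement from the suite.
# Root-cause fix: make the implementation satisfy the specification:
def total_price_fixed(items):
    TAX_RATE = 0.10
    return round(sum(items) * (1 + TAX_RATE), 2)
```

The red-flag version makes the suite green while the product still undercharges; the root-cause version makes both the test and the behavior correct.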
## Good Practices
- ✅ Treat each test failure as a potential bug discovery
- ✅ Document analysis in comments when fixing tests
- ✅ Write clear test names that explain intent
- ✅ When changing a test, explain why the original was wrong
- ✅ Consider adding more tests when finding ambiguity
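When the test itself turns out to be wrong, the correction and its reasoning can live together in the test file. A sketch, with hypothetical names and a hypothetical date:

```python
def parse_port(value: str) -> int:
    # Hypothetical config helper: ports arrive as strings from the config file.
    return int(value)

def test_parse_port_accepts_string_digits():
    # 2024-05 (hypothetical): this test originally expected parse_port("8080")
    # to raise, but the documented config format passes ports as strings; the
    # old expectation contradicted the spec, so the assertion was corrected.
    assert parse_port("8080") == 8080
```

A comment like this spares the next reader from re-litigating the change when the test's history looks suspicious in git blame.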
## Example Responses
Good: "I see test_user_validation is failing. Let me trace through the validation logic to understand if this is catching a real bug or if the test's expectations are incorrect."
Bad: "The test is failing so I'll update it to match what the code does."
## Remember
Every test failure is an opportunity to:
- Discover and fix a bug before users do
- Clarify ambiguous requirements
- Improve system understanding
- Strengthen the test suite
The goal is NOT to make tests pass quickly. The goal IS to ensure the system behaves correctly.
## Related Skills
- `analyze-test-failures`: Detailed analysis of specific test failures
- `comprehensive-test-review`: Full test suite review