Performs an exhaustive 14-dimension bug hunt across the codebase, using Draft context for false-positive elimination. Generates a severity-ranked report with code evidence, data flow traces, fixes, and optional regression tests.
`npx claudepluginhub mayurpise/draft --plugin draft`

This skill uses the workspace's default tool permissions.
You are conducting an exhaustive bug hunt on this Git repository, enhanced by Draft context when available.
The bug report is the primary deliverable. Every verified bug MUST appear in the final report regardless of whether a regression test can be written. Regression tests are a supplementary output — helpful when possible, but never a filter for bug inclusion.
Some AI tools (e.g., Claude Code) provide a built-in bughunt agent that auto-discovers project structure and runs parallel sweeps. /draft:bughunt is complementary, not competing:
| | /draft:bughunt | Built-in bughunt agent |
|---|---|---|
| Approach | Context-driven methodology with 14 analysis dimensions and verification protocol | Auto-discovery with parallel sweep subagents |
| Draft context | Uses architecture, tech-stack, product, guardrails for false-positive elimination | No Draft context awareness |
| Output | Severity-ranked report with evidence | Inline fixes + regression tests |
| Modifies code | No (report + regression tests only) | Yes (finds AND fixes) |
When to use which: Use /draft:bughunt when you need context-aware analysis with structured evidence and false-positive elimination. Use the built-in agent when you want fast parallel sweeps with auto-fix capability. For maximum coverage, run both — /draft:bughunt catches context-specific bugs the built-in misses, and vice versa.
Verify before you report. Evidence over assumptions.
Before starting analysis, capture the current git state:
git branch --show-current # Current branch name
git rev-parse --short HEAD # Current commit hash
Store this for the report header. All bugs found are relative to this specific branch/commit.
Read and follow the base procedure in core/shared/draft-context-loading.md.
Bug-hunt-specific context application:
- If `draft/graph/` exists — Load `module-graph.jsonl` for dependency awareness. Flag imports from unexpected modules (not in established dependency edges). Flag code in modules involved in dependency cycles as higher risk. Use `hotspots.jsonl` to prioritize analysis of high-complexity, high-fanIn files. See `core/shared/graph-query.md`.
- If `draft/guardrails.md` exists, read the `## Learned Anti-Patterns` section. During the bug sweep, when a bug matches a learned anti-pattern, prefix the report entry with `[KNOWN-ANTI-PATTERN: {pattern name}]`. This distinguishes recurring documented patterns from newly discovered bugs, and signals that a systemic fix may be needed rather than a one-off patch.
- When invoked programmatically by `/draft:review` with `with-bughunt`, skip scope confirmation and inherit the scope from the calling command.
Otherwise, ask user to confirm scope:
- Entire repo
- Specific paths
- Track (`<track-id>`) - Focus on files relevant to a specific track

If running for a specific track, also load:
- `draft/tracks/<id>/spec.md` - Requirements, acceptance criteria, edge cases
- `draft/tracks/<id>/plan.md` - Implementation tasks, phases, dependencies

Use track context to check findings against the track's requirements, acceptance criteria, and documented edge cases.
If no Draft context exists, proceed with code-only analysis.
Before analyzing all 14 dimensions, determine which apply to this codebase:
Examples of skipping: a static site with no dynamic content skips Performance, a codebase with no async operations skips Concurrency, and a project with no runtime application skips Reliability.
Analyze systematically across all applicable dimensions. Skip N/A dimensions explicitly (see Dimension Applicability Check above).
Checks include:
- Round-trip and invariant properties (e.g., `encode(decode(x)) == x`, sorting idempotency, associativity)
- Weak or meaningless test assertions (`assertTrue(true)`, `expect(result).toBeDefined()` only, empty catch blocks in test code, `assert result is not None` as the sole check)
- Known dependency vulnerabilities (run the ecosystem audit tool: `npm audit`, `pip-audit`, `cargo audit`, `go vuln`)
- Loose version ranges (`^`, `~`, `*`, `>=`) without lockfile enforcement, or a missing lockfile entirely
- Typosquatted or suspicious dependencies (`lodahs` vs `lodash`, `reqeusts` vs `requests`), recently published packages with few downloads
- Algorithmic inefficiencies (`.filter()` inside `.map()`, repeated linear scans, cartesian joins in application code)
- Catastrophic regex backtracking (`(a+)+`, `(a|a)*`), unbounded repetition with overlapping alternatives — flag any regex applied to user-controlled input
- Locale-sensitive comparisons (`<`, `>`, `localeCompare` without locale), date formatting (`toLocaleDateString` without explicit locale), number formatting, sorting (alphabetical sort that assumes ASCII ordering)
- Direction-dependent layout assumptions (left/right positioning, directional margin/padding, text alignment assumptions)
- Unicode handling (`string.length` vs grapheme count), surrogate pair handling in substring operations

CRITICAL: No bug is valid without verification. Before declaring any finding as a bug, complete ALL applicable verification steps:
Code Path Verification
Context Cross-Reference
- `.ai-context.md` (or `architecture.md`) — Is this behavior intentional by design?
- `tech-stack.md` — Does the framework handle this case?
- `tech-stack.md` `## Accepted Patterns` — Is this pattern explicitly documented as intentional?
- `product.md` — Is this actually a requirement violation?

Framework/Library Verification
Example Framework Documentation Quote:
"React automatically escapes JSX content to prevent XSS (React Docs: Main Concepts > JSX). However, dangerouslySetInnerHTML bypasses this protection. Framework version: React 18.2.0 (from tech-stack.md)."
Codebase Pattern Check
False Positive Elimination
Pattern Prevalence Check (before reporting)
Example Pattern Prevalence Check:
1. Grep: `rg 'dangerouslySetInnerHTML' src/` → found 12 occurrences
2. Sampled 3: src/Blog.tsx:45, src/About.tsx:12, src/FAQ.tsx:30
3. All 3 sanitize input via `DOMPurify.sanitize()` before rendering
4. THIS instance (src/Comment.tsx:88) passes raw user input without sanitization
5. Decision: REPORT — this instance lacks the sanitization all others have
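As a rough illustration of the contrast this check surfaces — component and prop names here are hypothetical, not taken from a real codebase — the prevailing pattern sanitizes before rendering while the reported instance does not:

```tsx
// Hypothetical sketch; only DOMPurify.sanitize and dangerouslySetInnerHTML are real APIs.
import DOMPurify from "dompurify";

// The prevailing pattern (Blog.tsx-style usage): input is sanitized before rendering.
export function BlogPost({ html }: { html: string }) {
  return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />;
}

// The reported instance (Comment.tsx-style usage): raw user input, no sanitization.
export function Comment({ userHtml }: { userHtml: string }) {
  return <div dangerouslySetInnerHTML={{ __html: userHtml }} />;
}
```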
Only report bugs with HIGH or CONFIRMED confidence:
| Level | Criteria | Action |
|---|---|---|
| CONFIRMED | Verified through code trace, no mitigating factors found | Report as bug |
| HIGH | Strong evidence, checked context, no obvious mitigation | Report as bug |
| MEDIUM | Suspicious but couldn't verify all factors | Ask user to confirm before reporting |
| LOW | Possible issue but likely handled elsewhere | Do NOT report |
Example confirmation prompt for MEDIUM Confidence:
"I found a potential race condition in src/handler.ts:45 where async state updates may overwrite each other. However, I couldn't verify if there's a locking mechanism elsewhere. Should I report this as a bug?"
Each reported bug MUST include: location (file and line), confidence level, code evidence, a data flow trace, issue description, impact, the verification performed, an explicit false-positive elimination statement, and a suggested fix (see Report Generation below).
For suspected bugs that can be tested, write a minimal failing test to confirm:
Example:
// Suspected bug: off-by-one in pagination
test('should handle last page boundary', () => {
const items = Array(100).fill('item');
const result = paginate(items, { page: 10, perPage: 10 });
expect(result.items.length).toBe(10); // Currently returns 9
});
If test fails, upgrade confidence to CONFIRMED and include test in bug report.
For each verified bug, generate a regression test in the project's native test framework that would expose the bug as a failing test. Before writing any new test, first discover the project's language/framework and whether existing tests already cover (or partially cover) the bug scenario.
Identify the project's language(s) and test framework by examining the codebase:
| Signal | Language | Test Framework | Build/Run Command |
|---|---|---|---|
| BUILD/WORKSPACE/MODULE.bazel + .cpp/.cc/.h | C/C++ | GTest | bazel build / bazel test |
| CMakeLists.txt + .cpp/.cc | C/C++ | GTest | cmake --build / ctest |
| go.mod or go.sum | Go | testing (stdlib) | go test |
| pytest.ini/pyproject.toml/setup.py/conftest.py | Python | pytest | pytest |
| requirements.txt + unittest imports | Python | unittest | python -m pytest |
| package.json + Jest config | JavaScript/TypeScript | Jest | npx jest / npm test |
| package.json + Vitest config | JavaScript/TypeScript | Vitest | npx vitest |
| package.json + Mocha config | JavaScript/TypeScript | Mocha | npx mocha |
| Cargo.toml | Rust | built-in #[test] | cargo test |
| pom.xml | Java | JUnit | mvn test |
| build.gradle/build.gradle.kts | Java/Kotlin | JUnit | gradle test |
Resolution order: check `draft/tech-stack.md` first — it may explicitly state the test framework; otherwise, detect from the signals in the table above.

If the project is polyglot (multiple languages), detect per-component and generate tests in the matching language for each bug.
If no test framework is detected: Mark all bugs with Regression Test Status: N/A — no test framework detected and proceed with bug reporting. Do not skip bugs because tests cannot be written. The regression test section is supplementary — the primary deliverable is the bug report.
Record the detected configuration:
Language: [detected | none]
Test Framework: [detected | none]
Build System: [detected | none]
Test Command: [detected | N/A]
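As a rough sketch of this detection step — the helper name, priority order, and return labels are assumptions, not part of the skill — a Node-based check of the marker files from the table might look like:

```typescript
// Minimal sketch, assuming a Node environment. Real detection should also
// read draft/tech-stack.md first, per the resolution order above.
import { existsSync, readFileSync } from "node:fs";

function detectTestFramework(root = "."): string {
  if (existsSync(`${root}/go.mod`)) return "go test";
  if (existsSync(`${root}/Cargo.toml`)) return "cargo test";
  if (existsSync(`${root}/pom.xml`)) return "JUnit (mvn test)";
  if (existsSync(`${root}/package.json`)) {
    const pkg = JSON.parse(readFileSync(`${root}/package.json`, "utf8"));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    if (deps.vitest) return "Vitest";
    if (deps.jest) return "Jest";
    if (deps.mocha) return "Mocha";
  }
  if (existsSync(`${root}/pytest.ini`) || existsSync(`${root}/conftest.py`)) return "pytest";
  return "none"; // no recognized framework — mark bugs as Regression Test Status: N/A
}
```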
For each verified bug, search the codebase for existing tests before generating new ones:
Locate test files for the buggy module using language-appropriate patterns:
| Language | Search Patterns |
|---|---|
| C/C++ | *_test.cpp, *_test.cc, test_*.cpp; patterns: TEST(, TEST_F(, TEST_P( |
| Go | *_test.go in same package; patterns: func Test, func Benchmark |
| Python | test_*.py, *_test.py in tests/; patterns: def test_, class Test |
| JS/TS | *.test.ts, *.spec.ts, __tests__/*.ts; patterns: describe(, it(, test( |
| Rust | #[cfg(test)] in same file, or tests/*.rs; patterns: #[test], fn test_ |
| Java | *Test.java, *Tests.java in src/test/; patterns: @Test, @ParameterizedTest |
Analyze existing test coverage
Classify the coverage status — one of:
| Status | Meaning | Action |
|---|---|---|
| COVERED | Existing test already catches this bug (test fails on buggy code) | Report the existing test — no new test needed |
| PARTIAL | Test exists for the function but misses this specific scenario | Add the missing case to the existing test file |
| WRONG_ASSERTION | Test exists but asserts the buggy behavior as correct | Fix the assertion in the existing test |
| NO_COVERAGE | No test exists for this code path | Generate a new test |
| N/A | Bug is in non-testable code (config, markdown, LLM workflow) | Write N/A — [reason] |
Document discovery results in the bug report's Regression Test field
Example Existing Test Discovery:
1. Bug location: src/parser.cpp:145 — off-by-one in tokenize()
2. Grep: `rg 'tokenize' tests/` → found tests/parser_test.cpp
3. Read tests/parser_test.cpp:
- TEST(Parser, TokenizeSimpleInput) — tests basic input ✓
- TEST(Parser, TokenizeEmptyString) — tests empty string ✓
- No test for boundary input length (the bug trigger)
4. Status: PARTIAL — parser_test.cpp covers tokenize() but misses boundary case
5. Action: Add new TEST case to existing tests/parser_test.cpp
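The worked example above is a PARTIAL case. For WRONG_ASSERTION, the action is usually a one-line assertion correction rather than a new test. A minimal Jest-style sketch, reusing the hypothetical `paginate()` scenario from the failing-test example earlier (values are illustrative):

```typescript
// Hypothetical WRONG_ASSERTION fix: the existing test encodes the off-by-one
// behavior as correct, so the fix is to correct the assertion, not add a test.
import { paginate } from "./paginate"; // hypothetical module under test

test("last full page returns all perPage items", () => {
  const items = Array(100).fill("item");
  const result = paginate(items, { page: 10, perPage: 10 });
  // BEFORE: expect(result.items.length).toBe(9);  // asserted the buggy output
  expect(result.items.length).toBe(10); // AFTER: asserts the correct boundary behavior
});
```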
Based on discovery results, generate tests in the project's native framework:
**Regression Test:**
**Status:** COVERED — existing test already catches this bug
**Existing Test:** `tests/parser_test.cpp:45` — `TEST(Parser, TokenizeBoundary)`
No new test needed.
Each new test MUST: reference the bug in a header comment (severity, category, location), FAIL against the current buggy code and PASS once the fix is applied, follow the project's existing test conventions, and include a descriptive assertion message stating the expected behavior.
#include <gtest/gtest.h>
// #include "relevant/project/header.h"
// Bug: [SEVERITY] Category: Brief Title
// Location: path/to/file.cpp:line
// This test FAILS against current code, PASSES after fix
TEST(BugCategory, BriefBugTitle) {
// Setup
// Act
// Assert
EXPECT_EQ(actual, expected) << "Description of what should happen";
}
# Bug: [SEVERITY] Category: Brief Title
# Location: path/to/file.py:line
# This test FAILS against current code, PASSES after fix
import pytest
from module.under.test import function_under_test
def test_brief_bug_title():
"""[Category] Brief description of the bug scenario."""
# Setup
# Act
result = function_under_test(input)
# Assert
assert result == expected, "Description of what should happen"
package package_name
import (
"testing"
// project imports
)
// Bug: [SEVERITY] Category: Brief Title
// Location: path/to/file.go:line
// This test FAILS against current code, PASSES after fix
func TestBriefBugTitle(t *testing.T) {
// Setup
// Act
got := FunctionUnderTest(input)
// Assert
if got != expected {
t.Errorf("FunctionUnderTest() = %v, want %v", got, expected)
}
}
// Bug: [SEVERITY] Category: Brief Title
// Location: path/to/file.ts:line
// This test FAILS against current code, PASSES after fix
import { functionUnderTest } from './module-under-test';
describe('BugCategory', () => {
it('should brief bug title', () => {
// Setup
// Act
const result = functionUnderTest(input);
// Assert
expect(result).toBe(expected);
});
});
// Bug: [SEVERITY] Category: Brief Title
// Location: path/to/file.rs:line
// This test FAILS against current code, PASSES after fix
#[cfg(test)]
mod bug_regression_tests {
use super::*;
#[test]
fn test_brief_bug_title() {
// Setup
// Act
let result = function_under_test(input);
// Assert
assert_eq!(result, expected, "Description of what should happen");
}
}
// Bug: [SEVERITY] Category: Brief Title
// Location: path/to/File.java:line
// This test FAILS against current code, PASSES after fix
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
class BugCategoryTest {
@Test
void briefBugTitle() {
// Setup
// Act
var result = classUnderTest.methodUnderTest(input);
// Assert
assertEquals(expected, result, "Description of what should happen");
}
}
After all bugs are documented, collect all test cases into a single consolidated section in the report (see Report Generation). Group by discovery status so the reader knows which tests are new vs modifications to existing tests.
Before writing any test files, discover the project's test infrastructure and conventions:
Detect Build System & Test Runner
| Language | Build System Signals | Test Runner |
|---|---|---|
| C/C++ | WORKSPACE/MODULE.bazel → Bazel; CMakeLists.txt → CMake | bazel test / ctest |
| Go | go.mod (always present) | go test ./... |
| Python | pyproject.toml / setup.cfg / tox.ini / bare | pytest (prefer) / python -m unittest |
| JS/TS | package.json → check scripts.test and devDeps | npx jest / npx vitest / npm test |
| Rust | Cargo.toml (always present) | cargo test |
| Java | pom.xml → Maven; build.gradle → Gradle | mvn test / gradle test |
If no recognized build system is found, inform user and keep report-only test output:
"No recognized build/test system detected. Regression tests are included in the report only."
Map Source Files to Test Locations

For each buggy source file, determine where its tests live (or should live):
| Language | Common Conventions |
|---|---|
| C/C++ (Bazel) | Co-located foo_test.cpp or separate tests/ tree; check cc_test in BUILD |
| Go | Same directory: foo.go → foo_test.go (always co-located) |
| Python | src/auth/handler.py → tests/auth/test_handler.py or tests/test_auth_handler.py |
| JS/TS | src/auth/handler.ts → src/auth/handler.test.ts or __tests__/handler.test.ts |
| Rust | In-file #[cfg(test)] module, or tests/ directory for integration tests |
| Java | src/main/java/com/... → src/test/java/com/... (Maven convention) |
If no existing convention is evident, default to a `tests/` directory mirroring the source tree.

Identify Test Dependencies (language-specific)
| Language | What to Find |
|---|---|
| C/C++ (Bazel) | GTest dep label: @com_google_googletest//:gtest_main; source cc_library targets |
| Go | No extra deps needed (testing is stdlib) |
| Python | Check if pytest is in requirements*.txt / pyproject.toml; add if missing |
| JS/TS | Check if test framework is in devDependencies; identify import style |
| Rust | No extra deps for unit tests; dev-dependencies for integration test crates |
| Java | JUnit version in pom.xml / build.gradle dependencies |
Skip this step entirely if no test framework was detected in Step 1.
For bugs with status NO_COVERAGE, PARTIAL, or WRONG_ASSERTION, write the actual test files. Bugs with COVERED or N/A status do not need action here — they are still included in the final report:
Create directory if it doesn't exist:
mkdir -p <test_directory>/
Write the test file using the language-appropriate template:
| Language | Example Target File |
|---|---|
| C/C++ | tests/auth/login_handler_test.cpp |
| Go | auth/login_handler_test.go (same package) |
| Python | tests/auth/test_login_handler.py |
| JS/TS | src/auth/login_handler.test.ts or __tests__/auth/login_handler.test.ts |
| Rust | tests/login_handler_test.rs or #[cfg(test)] in source |
| Java | src/test/java/com/example/auth/LoginHandlerTest.java |
Create or update build config (if required by the build system):
C/C++ (Bazel) — add cc_test to BUILD:
cc_test(
name = "<source_filename>_test",
srcs = ["<source_filename>_test.cpp"],
deps = [
"//src/<component>:<library_target>",
"@com_google_googletest//:gtest_main",
],
)
Java (Maven) — no build config change needed (convention-based discovery)
Java (Gradle) — no build config change needed
Go — no build config change needed (go test discovers _test.go automatically)
Python — no build config change needed (pytest discovers test_*.py automatically)
JS/TS — no build config change needed (Jest/Vitest discover *.test.* automatically)
Rust — no build config change needed (cargo test discovers #[test] automatically)
If multiple bugs affect different files in the same component, create one test file per source file (not one per bug). Group related bug tests into the same file.
Append new cases to the existing `describe()` block or at the end of the file (JS/TS), or inside the existing `#[cfg(test)]` module (Rust).

Constraints: never modify production code — only create or modify test files and their build configs; match the project's existing naming, directory, and import conventions.
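As an illustration of the one-file-per-source-file rule above — the module, bug numbers, and expected values are hypothetical — two bugs found in the same source file share a single test file:

```typescript
// Hypothetical grouping: two separate bugs in src/text/slugify.ts get two test
// cases in one shared file rather than one file per bug.
import { slugify } from "../src/text/slugify"; // hypothetical module under test

describe("slugify bug regressions", () => {
  // Bug #3: consecutive separators are not collapsed
  it("collapses repeated separators", () => {
    expect(slugify("hello   world")).toBe("hello-world");
  });

  // Bug #5: trailing punctuation leaves a dangling separator
  it("strips trailing separators", () => {
    expect(slugify("hello world!")).toBe("hello-world");
  });
});
```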
After writing all test files, validate them using the project's native toolchain.
Validate each new/modified test using the language-appropriate command:
| Language | Validation Command | What It Checks |
|---|---|---|
| C/C++ (Bazel) | bazel build //tests/<component>:<target>_test | Compilation + linking |
| C/C++ (CMake) | cmake --build <build_dir> --target <target>_test | Compilation + linking |
| Go | go vet ./path/to/package/... | Syntax + type checking (no execution) |
| Python | python -m py_compile tests/path/test_file.py | Syntax validation |
| JS/TS | npx tsc --noEmit tests/path/file.test.ts (TS) or node --check tests/path/file.test.js (JS) | Type check / syntax |
| Rust | cargo check --tests | Type check + borrow check (no execution) |
| Java (Maven) | mvn test-compile | Compilation only |
| Java (Gradle) | gradle testClasses | Compilation only |
Handle validation results:
| Result | Action |
|---|---|
| Succeeds | Mark as BUILD_OK in report |
| Fails — import/include error | Fix the import path, retry (up to 2 retries) |
| Fails — missing dep | Add the dependency, retry (up to 2 retries) |
| Fails — type/API mismatch | Fix the test to match actual API signatures, retry (up to 2 retries) |
| Persistent failure (3 attempts) | Mark as BUILD_FAILED with the error message in report. Delete the broken test file and note in the report: "Test file removed due to persistent build failure." |
Do NOT run the tests. The tests are designed to FAIL against the current buggy code — that's the point. Validation checks only syntax, types, and linking. Running them would produce expected failures that aren't useful here.
Exception for Go: go vet is preferred over go build for test files because Go compiles tests as part of go test only. go vet catches type errors and common issues without executing.
Validation summary — Record results for the report:
BUILD_OK: 3 targets
BUILD_FAILED: 1 target (tests/config/test_loader.py — ImportError: no module named 'config.loader')
SKIPPED: 1 target (N/A — race condition not reliably testable)
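If the validation loop is scripted rather than run by hand, a minimal sketch for a TS project (the command comes from the table above; file paths and the result labels' exact wording are illustrative):

```typescript
// Syntax-only validation of generated test files; never executes the tests.
import { execSync } from "node:child_process";

const testFiles = ["src/auth/login_handler.test.ts"]; // hypothetical files written in the previous step

for (const file of testFiles) {
  try {
    execSync(`npx tsc --noEmit ${file}`, { stdio: "pipe" });
    console.log(`BUILD_OK: ${file}`);
  } catch (err) {
    // Record the compiler output so the report can include the failure reason.
    console.log(`BUILD_FAILED: ${file}`, (err as any).stdout?.toString() ?? err);
  }
}
```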
For each bug with CONFIRMED or HIGH confidence, generate a minimal suggested fix alongside the bug report. Fix suggestions are advisory — they are never auto-applied.
- Label every fix as SUGGESTED (REVIEW REQUIRED) — never imply auto-application.
- Mark the fix as N/A for bugs where the fix requires architectural changes, significant refactoring, or domain knowledge beyond what the code provides.
- Reference: Meta SapFix — automated fix suggestion with human-in-the-loop validation.
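As a sketch of what a minimal before/after suggestion looks like — `paginate` here is the hypothetical function from the earlier failing-test example, not a real project API:

```typescript
// BEFORE (current buggy code): the slice end is computed with an extra -1,
// so the last item of each full page is dropped.
function paginateBuggy<T>(items: T[], page: number, perPage: number): T[] {
  const start = (page - 1) * perPage;
  return items.slice(start, start + perPage - 1);
}

// AFTER (suggested fix): Array.prototype.slice's end index is already
// exclusive, so no -1 adjustment is needed.
function paginateFixed<T>(items: T[], page: number, perPage: number): T[] {
  const start = (page - 1) * perPage;
  return items.slice(start, start + perPage);
}
```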
For each verified bug:
### [SEVERITY] Category: Brief Title
**Location:** `path/to/file.ts:123`
**Confidence:** [CONFIRMED | HIGH | MEDIUM]
**Code Evidence:**
```[language]
// The actual problematic code
```
**Data Flow Trace:** [How data reaches this point: caller → caller → this function]
**Issue:** [Precise technical description of what is wrong]
**Impact:** [User-visible effect or system failure mode]
**Verification Done:**
**Why Not a False Positive:** [Explicit statement: "No sanitization exists because X", "Framework Y doesn't escape Z in this context", etc.]
**Fix:** [Minimal code change or mitigation]
**Suggested Fix (REVIEW REQUIRED):**
// BEFORE (current buggy code):
[exact code snippet from the codebase]
// AFTER (suggested fix):
[minimal change that addresses root cause]
This fix is SUGGESTED only — human review required before applying. Reference: Meta SapFix methodology.
**Regression Test:**
**Status:** [COVERED | PARTIAL | WRONG_ASSERTION | NO_COVERAGE | N/A]
**Existing Test:** [path/to/test_file:line — test name | None found]
[Action: existing test reference, proposed modification, or new test case]
// New or modified test case (omit if COVERED or N/A)
**Example — COVERED (no new test needed):**
**Regression Test:**
**Status:** COVERED — existing test already catches this bug
**Existing Test:** `tests/validator_test.cpp:89` — `TEST(Validator, RejectsScriptTags)`
No new test needed. Existing test fails when XSS sanitization is removed.
**Example — PARTIAL (C++ / GTest):**
**Regression Test:**
**Status:** PARTIAL — tests exist for processInput() but miss unsanitized HTML path
**Existing Test File:** `tests/input_test.cpp`
**Modification:** Add to existing file:
```cpp
TEST(InputSanitization, RejectsMaliciousScript) {
std::string malicious = "<script>alert('xss')</script>";
std::string result = processInput(malicious);
EXPECT_EQ(result.find("<script>"), std::string::npos)
<< "Input should be sanitized to remove script tags";
}
```
**Example — NO_COVERAGE (Python / pytest):**
**Regression Test:**
**Status:** NO_COVERAGE — no tests found for process_input()
**Target File:** `tests/test_input_processor.py` (new file)
```python
import pytest
from input.processor import process_input
def test_rejects_malicious_script():
"""Input should be sanitized to remove script tags."""
malicious = "<script>alert('xss')</script>"
result = process_input(malicious)
assert "<script>" not in result, "XSS script tag should be stripped"
# Expected: FAILS against current code (passes XSS through), PASSES after fix
```
**Example — NO_COVERAGE (Go / testing):**
**Regression Test:**
**Status:** NO_COVERAGE — no tests found for ProcessInput()
**Target File:** `input/processor_test.go` (new file)
```go
package input
import (
"strings"
"testing"
)
func TestProcessInputRejectsMaliciousScript(t *testing.T) {
malicious := "<script>alert('xss')</script>"
result := ProcessInput(malicious)
if strings.Contains(result, "<script>") {
t.Error("XSS script tag should be stripped from input")
}
}
// Expected: FAILS against current code (passes XSS through), PASSES after fix
```
**Example — N/A (not testable, but still report the bug):**
**Regression Test:**
**Status:** N/A — environment config, no executable code path
**Reason:** Bug is in `config/production.yaml` which sets incorrect timeout value. Config files are not unit-testable; fix requires changing the YAML value directly.
Severity levels: Critical, Important, Minor — matching the severity sections of the report.
Generate report at:
- `draft/bughunt-report-<timestamp>.md` (where `<timestamp>` is generated via `date +%Y-%m-%dT%H%M`, e.g., 2026-03-15T1430)
- `draft/tracks/<track-id>/bughunt-report-<timestamp>.md` (if analyzing a specific track)

After writing the timestamped report, create a symlink pointing to it:
# Project-level
ln -sf bughunt-report-<timestamp>.md draft/bughunt-report-latest.md
# Track-level
ln -sf bughunt-report-<timestamp>.md draft/tracks/<track-id>/bughunt-report-latest.md
Previous timestamped reports are preserved. The -latest.md symlink always points to the most recent report.
MANDATORY: Include YAML frontmatter with git metadata. Follow the procedure in core/shared/git-report-metadata.md to gather git info and generate the frontmatter. Use generated_by: "draft:bughunt".
Report structure:
[YAML frontmatter — see core/shared/git-report-metadata.md]
# Bug Hunt Report
[Report header table — see core/shared/git-report-metadata.md]
**Scope:** [Entire repo | Specific paths | Track: <track-id>]
**Draft Context:** [Loaded | Not available]
## Summary
| Severity | Count | Confirmed | High Confidence |
|----------|-------|-----------|-----------------|
| Critical | N | X | Y |
| Important | N | X | Y |
| Minor | N | X | Y |
## Critical Issues
[Issues...]
## Important Issues
[Issues...]
## Minor Issues
[Issues...]
## Dimensions With No Findings
| Dimension | Status |
|-----------|--------|
| Correctness | No bugs found |
| Reliability | N/A — no runtime application |
| Performance | N/A — static site, no dynamic content |
| Concurrency | N/A — no async operations |
## Regression Test Suite
**Language:** [detected language]
**Test Framework:** [detected framework]
**Validation Command:** [command used]
### Test Discovery Summary
| # | Bug Title | Severity | Status | Existing Test | Action |
|---|-----------|----------|--------|---------------|--------|
| 1 | [Brief title] | [SEV] | COVERED | `path:line` | None needed |
| 2 | [Brief title] | [SEV] | PARTIAL | `path:line` | Added case to existing file |
| 3 | [Brief title] | [SEV] | WRONG_ASSERTION | `path:line` | Fixed assertion |
| 4 | [Brief title] | [SEV] | NO_COVERAGE | — | Created new test |
| 5 | [Brief title] | [SEV] | N/A | — | Not testable |
### Validation Status
| # | Bug Title | Test File / Target | Validation Status |
|---|-----------|-------------------|-------------------|
| 2 | [Brief title] | `tests/test_foo.py` | BUILD_OK (modified) |
| 3 | [Brief title] | `tests/test_bar.py:67` | BUILD_OK (modified) |
| 4 | [Brief title] | `tests/test_baz.py` | BUILD_OK (new) |
| 5 | [Brief title] | — | SKIPPED (N/A) |
**Validation Summary:** 3 BUILD_OK, 0 BUILD_FAILED, 1 SKIPPED
**Validation Command:** `python -m py_compile`
### New Tests Written (NO_COVERAGE)
New test files created for bugs with no existing test coverage.
| Bug # | File Created | Build Target / Runner |
|-------|-------------|----------------------|
| 4 | `tests/test_baz.py` | `pytest tests/test_baz.py` |
```[language]
// Contents of new test file
```
### Modified Tests (PARTIAL / WRONG_ASSERTION)
Changes applied to existing test files.
| File | Bug # | Change Applied |
|---|---|---|
| tests/test_foo.py | 2 | Added test_missing_case() |
| tests/test_bar.py:67 | 3 | Changed assert result == 0 → assert result == 1 |
### Already Covered (COVERED)
Bugs already caught by existing tests — no action needed.
| Bug # | Bug Title | Existing Test |
|---|---|---|
| 1 | [Brief title] | tests/test_foo.py:45 — test_sanitize_input() |
### Not Testable (N/A)
Bugs that cannot have automated regression tests (config issues, documentation, LLM workflows, etc.).
| Bug # | Bug Title | Reason |
|---|---|---|
| 6 | [Brief title] | Config file — no executable code |
## Final Instructions
**CRITICAL: All verified bugs appear in the main report body.** The Regression Test Suite section organizes test artifacts, but every bug — regardless of whether a test can be written — MUST be documented in the severity sections (Critical/Important/Minor Issues) above. Bugs with `N/A` regression test status are still valid bugs that need reporting.
**CRITICAL: Regression tests are supplementary, not a filter.** If no test framework is detected, or if a bug cannot have a test written (config, docs, LLM workflows), mark it as `N/A` and **still include the bug in the report**. Never skip a verified bug because you cannot write a test for it.
- **No unverified bugs** — Every finding must pass the verification protocol
- **Evidence required** — Include code snippets and trace for every bug
- **Explicit false positive elimination** — State why each bug isn't handled elsewhere
- Analyze all applicable dimensions — skip N/A dimensions explicitly with reason (see Dimension Applicability Check)
- Assume the reader is a senior engineer who will verify your findings
- If Draft context is available, explicitly note which architectural violations or product requirement bugs were found
- Be precise about file locations and line numbers
- Include git branch and commit in report header
- **Write regression tests when possible** — If a test framework is detected, write test files using the project's native framework (Steps 4-6). If no framework exists, skip Steps 2-6 and mark all bugs as `N/A` for regression tests
- **Never modify production code** — Only create/modify test files and their build configs
- **Validate before reporting** — If tests were written, validate syntax/compilation before finalizing; include validation status in the report
- **Respect project conventions** — Match existing test directory structure, naming patterns, import conventions, and framework idioms
- **Use native frameworks** — pytest for Python, `go test` for Go, GTest for C++, Jest/Vitest for JS/TS, `cargo test` for Rust, JUnit for Java — never force a foreign test framework
- **Learn from findings** — After report generation, execute the pattern learning phase from `core/shared/pattern-learning.md` to update `draft/guardrails.md` with newly discovered conventions and anti-patterns
---
## Cross-Skill Dispatch
### Suggestions at Completion
After bughunt report generation:
**If critical bugs found:**
"Critical bugs found. Consider: → /draft:debug — Run structured debug session on critical finding #{n} → git bisect — Find the exact commit that introduced the bug"
### Test Writing Guardrail
When offering to write regression tests for found bugs:
ASK: "Want me to write regression tests for the {n} bugs found? [Y/n]"
Never auto-write tests — always ask first.
### Jira Sync
If Jira ticket linked, sync via `core/shared/jira-sync.md`:
- Attach `bughunt-report-latest.md` to ticket
- Post comment: "[draft] bughunt-complete: Found {n} issues — {critical} critical, {major} major."