Generates professional Test Plan and Test Cases documents following IEEE 829 test documentation standards, ISTQB testing methodologies, and Google Testing Blog best practices. This skill activates when the user asks to write a test plan, create test cases, draft a test plan document, create a test cases document, define a test strategy, write a QA document, produce a testing document, draft a test specification, or build a quality assurance plan. It produces comprehensive, structured test documentation that ensures thorough coverage, traceability to requirements, and a clear path from test design through execution and defect management.
Generates comprehensive test plans and cases following IEEE 829 standards, ISTQB methodologies, and best practices for full coverage.
Install with `npx claudepluginhub tercel/spec-forge`. This skill inherits all available tools; when active, it can use any tool Claude has access to.
Reference files: `references/checklist.md`, `references/generation-instructions.md`, `references/template.md`.

A Test Plan is the master document that defines the scope, approach, resources, and schedule of all testing activities for a software project or feature. As specified by IEEE 829 (Standard for Software and System Test Documentation), a test plan communicates the intent of testing to all stakeholders and provides a framework for organizing, tracking, and evaluating test efforts. It answers fundamental questions: what will be tested, how it will be tested, who will test it, when testing will happen, and what criteria determine whether testing is complete.
Within the software development lifecycle, a Test Plan sits downstream of the Software Requirements Specification (SRS) and Technical Design documents. It translates functional and non-functional requirements into concrete, verifiable test cases. A well-constructed Test Plan reduces the risk of undetected defects reaching production, provides measurable quality gates, and serves as the contractual agreement between development, QA, and product teams on what "done" means from a quality perspective.
This skill treats the Test Plan as a living artifact. It is authored once, but it evolves as requirements change, new risks emerge, and test execution reveals areas needing deeper coverage.
Every Test Plan generated by this skill follows a disciplined six-step process. Each step must be completed before moving to the next.
Before writing any test documentation, scan the project to build situational awareness of both the application under test and the existing test infrastructure.
Scan `**/*.md`, `**/package.json`, `**/pyproject.toml`, `**/*.test.*`, `**/*.spec.*`, `**/__tests__/**`, or language-specific test directories to map the landscape. Look for test runner and CI configuration such as `jest.config.*`, `pytest.ini`, `vitest.config.*`, `.github/workflows/*`, `Makefile`, or `docker-compose.test.yml`. Understanding what test infrastructure already exists prevents duplicated effort and ensures the plan integrates with the team's workflow.

The Test Plan must trace every test case back to a requirement. Automatically scan for upstream documents.
Check the `docs/` directory for files matching patterns like `*/srs.md`, `*/tech-design.md`, `*/prd.md`, or equivalent naming conventions. Extract requirement IDs (`FR-XXX-NNN`, `NFR-XXX-NNN`) from the SRS to build the requirements-to-test-cases traceability matrix later.

After scanning, present the user with targeted clarifying questions. Good questions surface missing context that cannot be inferred from the codebase. Typical areas to probe include:
Do not proceed to generation until the user has answered enough questions to fill the core sections of the template.
Using the answers from Step 3 and the context from Steps 1 and 2, generate the full Test Plan by filling in every applicable section of references/template.md. Follow the writing guidelines and standards described in the sections below. Generate all Mermaid diagrams inline. Assign test case IDs, priorities, and types as you go.
After generating the Test Plan, construct the Requirements Traceability Matrix (RTM). Map every SRS requirement ID (both functional and non-functional) to one or more test case IDs. Flag any requirements that lack test coverage and any test cases that do not trace back to a stated requirement. The RTM is the primary mechanism for proving that testing is complete and aligned with the specification.
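The RTM gap check described above can be sketched as a small function. This is an illustrative sketch only; the requirement and test-case IDs below are hypothetical examples, not part of any real specification.

```python
# Sketch of RTM gap detection: map requirements to covering test cases
# and flag gaps in both directions. All IDs here are illustrative.

def build_rtm(requirements, test_cases):
    """Return (rtm, uncovered requirements, untraced test cases)."""
    rtm = {req: [] for req in requirements}
    untraced = []
    for tc_id, covered_reqs in test_cases.items():
        matched = False
        for req in covered_reqs:
            if req in rtm:
                rtm[req].append(tc_id)
                matched = True
        if not matched:
            untraced.append(tc_id)  # test case with no stated requirement
    uncovered = [req for req, tcs in rtm.items() if not tcs]
    return rtm, uncovered, untraced

requirements = ["FR-AUTH-001", "FR-AUTH-002", "NFR-PERF-001"]
test_cases = {
    "TC-AUTH-001": ["FR-AUTH-001"],
    "TC-AUTH-002": ["FR-AUTH-001", "FR-AUTH-002"],
    "TC-MISC-001": [],  # does not trace to any requirement
}
rtm, uncovered, untraced = build_rtm(requirements, test_cases)
```

Here `NFR-PERF-001` would be flagged as a requirement lacking coverage, and `TC-MISC-001` as a test case that does not trace back to a stated requirement.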
Validate the completed Test Plan against every item in references/checklist.md. Fix any issues before presenting the final document to the user. Summarize the checklist results so the user can see what passed and whether any items were intentionally skipped (with justification).
The test strategy follows the well-established test pyramid model, with one critical modification: any test that touches the database must use a real database, not a mock.
Mocking the database hides real bugs:
The only acceptable use of mocks is for external third-party services (payment gateways, email/SMS providers, external APIs) that you don't control and may be unavailable during tests. Your own database, cache, and message queues should always be tested for real.
Tools like TestContainers make real database testing as easy as mocking — they spin up a real database in Docker, run tests, and auto-cleanup. There is no longer a valid excuse to mock the database.
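As a sketch of this approach, a pytest fixture using the `testcontainers` Python package against PostgreSQL might look like the following. This assumes the `testcontainers[postgres]` and `sqlalchemy` packages plus a running Docker daemon; the `users` table and test values are illustrative, not from any real schema.

```python
# Sketch: run tests against a real PostgreSQL in Docker instead of a mock.
# Assumes `testcontainers[postgres]`, `sqlalchemy`, and a Docker daemon.
import pytest
import sqlalchemy
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def engine():
    # Container starts before the first test and is removed afterwards.
    with PostgresContainer("postgres:16") as pg:
        yield sqlalchemy.create_engine(pg.get_connection_url())

def test_create_user_persists_row(engine):
    with engine.begin() as conn:
        conn.execute(sqlalchemy.text(
            "CREATE TABLE IF NOT EXISTS users (email TEXT UNIQUE, name TEXT)"))
        conn.execute(sqlalchemy.text(
            "INSERT INTO users VALUES ('test@example.com', 'John Doe')"))
        row = conn.execute(sqlalchemy.text(
            "SELECT name FROM users WHERE email = 'test@example.com'")).one()
        # A real engine enforces types and constraints that a mock would not.
        assert row.name == "John Doe"
```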
The plan must specify the approximate distribution of tests across these levels (e.g., 60% unit, 25% integration, 10% system/E2E, 5% acceptance) and justify any deviation.
Every test case receives a unique identifier following this pattern:
TC-<MODULE>-<NNN>
where `<MODULE>` is a short uppercase module code such as `AUTH`, `PAY`, `DASH`, `NOTIF`, `CART`, or `SRCH`. Examples: `TC-AUTH-001`, `TC-PAY-012`, `TC-CART-003`.
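This naming convention can be enforced mechanically. A minimal sketch (the regex assumes three-digit sequence numbers, as in the examples above):

```python
import re

# Matches TC-<MODULE>-<NNN>: an uppercase module code and a 3-digit number.
TC_ID_PATTERN = re.compile(r"^TC-[A-Z]+-\d{3}$")

def is_valid_test_case_id(tc_id: str) -> bool:
    return bool(TC_ID_PATTERN.match(tc_id))
```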
Each test case is an implementation guide for engineers. It must be detailed enough to be translated directly into test code. Each test case must include the following fields:
- Title: [action] [condition] [expected outcome] (e.g., "Create user with valid email returns 201 and saves to database").
- Preconditions: the exact starting state, e.g. "`users` table with id='uuid-1', role='admin', status='active'; Auth token valid for user uuid-1".
- Test data: concrete values such as name: "John Doe", email: "test@example.com", age: 25. Never use placeholders like [valid name] or [valid email]. Specify the exact HTTP method, endpoint, headers, and request body.
- Expected results: including database verification (e.g., "SELECT * FROM users WHERE email = 'test@example.com' → verify name = 'John Doe', status = 'active'").

The Test Plan must address the following test types, allocating appropriate effort to each based on the project's risk profile:
Three primary test methods are applied depending on the test level and objective:
Not all features carry equal risk. The Test Plan applies risk-based prioritization to focus testing effort where it matters most.
Entry criteria define the preconditions that must be met before testing begins. They act as a gate to prevent wasted effort on an untestable build. Typical entry criteria include: code complete for the features in scope, build successfully deployed to the test environment, unit tests passing with minimum coverage thresholds, test data prepared and loaded, and upstream documentation (SRS, Tech Design) reviewed and approved.
Exit criteria define the measurable conditions that must be met before testing is declared complete. They provide an objective, defensible answer to "are we done testing?" Typical exit criteria include: all P0 and P1 test cases executed, overall pass rate at or above 95%, no open Critical or Major defects, requirements traceability matrix showing 100% coverage of in-scope requirements, and performance benchmarks met.
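These criteria lend themselves to an automated gate. A minimal sketch, assuming simple counters for executed cases and open defects (the function name and parameters are hypothetical; the thresholds come from the criteria above):

```python
def exit_criteria_met(p0_p1_total, p0_p1_executed,
                      passed, executed,
                      open_critical, open_major,
                      rtm_coverage_pct):
    """Return True only when every exit criterion from the plan holds."""
    return (
        p0_p1_executed == p0_p1_total      # all P0/P1 test cases executed
        and executed > 0
        and passed / executed >= 0.95      # overall pass rate >= 95%
        and open_critical == 0             # no open Critical defects
        and open_major == 0                # no open Major defects
        and rtm_coverage_pct == 100        # full RTM coverage of in-scope reqs
    )
```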
Defects are classified into four severity levels:
Every defect follows a defined lifecycle: New, Assigned, In Progress, Fixed, Verified, and Closed. If verification fails, the defect is Reopened and cycles back through the process. The Test Plan must include a Mermaid state diagram illustrating this lifecycle.
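A minimal sketch of such a diagram, covering exactly the states named above:

```mermaid
stateDiagram-v2
    state "In Progress" as InProgress
    [*] --> New
    New --> Assigned
    Assigned --> InProgress
    InProgress --> Fixed
    Fixed --> Verified : verification passes
    Fixed --> Reopened : verification fails
    Reopened --> Assigned
    Verified --> Closed
    Closed --> [*]
```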
The Requirements Traceability Matrix (RTM) is one of the most critical sections of the Test Plan. It provides a bidirectional mapping between SRS requirements and test cases.
Every test plan must include a dedicated section for data integrity test cases. These tests can ONLY be caught with a real database — mocks will always pass regardless of constraint violations. Data integrity tests must cover:
- Soft deletes: records are logically deleted (`deleted_at` set) rather than physically removed; verify that soft-deleted records are excluded from normal queries.

Every feature must have both positive test cases (verifying correct behavior with valid inputs) and negative test cases (verifying proper error handling with invalid, unexpected, or malicious inputs). A ratio of roughly 60% positive to 40% negative is a useful starting guideline for most features.
For any input that has defined ranges or limits, include test cases at the exact boundary, one value below the boundary, and one value above the boundary. This technique catches off-by-one errors and range validation defects that are among the most common bugs in software.
Beyond boundary values, identify and test edge cases specific to the domain: empty inputs, maximum-length strings, concurrent operations, timezone transitions, Unicode characters, null values, and other scenarios that stress the system's assumptions.
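As an illustration of both techniques, consider a hypothetical age validator with an assumed valid range of 18-120 (the validator and its limits are invented for this sketch), exercised at and around each boundary plus a few edge cases:

```python
def is_valid_age(age):
    """Hypothetical validator: accepts integers in the inclusive range 18-120."""
    return (isinstance(age, int) and not isinstance(age, bool)
            and 18 <= age <= 120)

# Boundary value analysis: the exact boundary, one below, one above -- both ends.
boundary_cases = {17: False, 18: True, 19: True, 119: True, 120: True, 121: False}

# Edge cases that stress the validator's assumptions.
edge_cases = {None: False, "18": False, -1: False, 0: False}
```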
This skill relies on two reference files stored alongside it.
- `references/template.md` -- The full Test Plan template following IEEE 829 structure, with placeholder text for every section. The generated Test Plan is built by filling in this template.
- `references/checklist.md` -- A quality checklist organized into four categories (Completeness, Quality, Consistency, Format). The checklist is used during Step 6 to validate the finished document.

Always read both files before generating a Test Plan so that any updates to the template or checklist are picked up automatically.
The finished Test Plan is written to:
docs/<feature-name>/test-plan.md
where <feature-name> is a lowercase, hyphen-separated slug derived from the feature name (for example, docs/user-authentication/test-plan.md or docs/payment-processing/test-plan.md). If the docs/<feature-name>/ directory does not exist, create it. If a file with the same name already exists, confirm with the user before overwriting.