From test-engineering
Generates a structured, risk-driven test plan from design documentation. The output follows a standardized format used by QA teams to plan testing for features, systems, and initiatives.
`npx claudepluginhub issacchaos/local-marketplace --plugin test-engineering`

This skill uses the workspace's default tool permissions.
/test-planner path/to/design-doc.pdf
/test-planner path/to/design-doc.md
/test-planner path/to/docs-folder/
`$ARGUMENTS` - Required: path to one or more design documents. Supported formats: `.md`, `.txt`, `.pdf`, `.html`

`--code-path <path>` - Optional: path to existing source code for refactors or enhancements.
Read the document(s) at `$ARGUMENTS`. For large documents, use `offset` and `limit` parameters to read in chunks of 500 lines. Summarize key points from each chunk before reading the next. For very large document sets (multiple files totaling thousands of lines), prioritize reading executive summaries, architecture sections, and requirements sections first.

Before generating the plan, ask the user the following questions using the AskUserQuestion tool. Each question must be asked as its own separate AskUserQuestion prompt. Wait for the user's answer to each question before asking the next one. Do NOT combine multiple questions into a single prompt.
Question 1: Feature Type — "What best describes this feature?"
Question 2: Team Name — "What is the team or product name that owns this feature?"
Question 3: Target Release / Milestone — "What is the target release or milestone? (e.g., Season 34, FN35, Phase 1 November 2024)"
Question 4: Known Constraints — "Are there any known constraints you want called out? (e.g., dependency on another team, platform limitations, late delivery risk)"
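The chunked-reading step described above can be sketched in Python. This is a minimal illustration only; `read_in_chunks` is a hypothetical helper and not part of the skill's actual tooling, and the summarization step is left abstract:

```python
# Minimal sketch of reading a large document in 500-line chunks.
# read_in_chunks is a hypothetical helper, not part of the skill's tooling;
# the caller would summarize each chunk before requesting the next.
CHUNK_SIZE = 500

def read_in_chunks(path: str, chunk_size: int = CHUNK_SIZE):
    """Yield (offset, lines) pairs of at most chunk_size lines each."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    for offset in range(0, len(lines), chunk_size):
        yield offset, lines[offset:offset + chunk_size]
```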
Before generating test plan sections, build a mental model:
If --code-path was provided: Read key source files at the given path to understand the current implementation. Identify existing behavior, integration points, and areas of complexity that the design document proposes to change. This informs Step 4 with concrete regression risks and untested interaction points that would not be visible from the design document alone.
Identify risks using the "3 Whys" methodology:
For each potential risk, ask "why is it a risk?" three times to drill down to the core impact.
Example:
Surface observation: "This feature is coming in late."
- Why is that a risk? → "Because it means we have less time to test it."
- Why is THAT a risk? → "Because it means we won't catch all critical bugs/blockers before go-live."
- Why is THAT a risk? → "Because escaped defects could hurt player retention and lower engagement with a major release beat."
Fully articulated risk: "This feature is coming in late and QA won't have enough time to adequately test it, which means that we are at risk of lowered player retention due to escaped critical defects impacting their engagement with one of our major beats."
Apply this methodology to every risk. Do not write surface-level risks. Every entry in the Risks & Constraints table must contain the fully drilled-down risk statement.
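The drill-down above can be sketched as a simple chaining of the surface observation and its three "why" answers. This is an illustrative sketch only (the `articulate` helper is hypothetical), mirroring the worked example:

```python
# Hypothetical sketch: chain a surface observation and its three "why"
# answers into one fully articulated risk statement.
def articulate(surface: str, whys: list[str]) -> str:
    """Join a surface observation and its three 'why' answers."""
    assert len(whys) == 3, "drill down exactly three times"
    parts = [surface.rstrip(".")] + [w.rstrip(".") for w in whys]
    return ", which means ".join(parts) + "."

risk = articulate(
    "This feature is coming in late.",
    [
        "we have less time to test it",
        "we won't catch all critical bugs/blockers before go-live",
        "escaped defects could hurt player retention and engagement",
    ],
)
```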
Risk sources to examine:
- If `--code-path` was provided: regression risks from behavioral changes, untested integration points in existing code, and complexity hotspots identified from the source.

IMPORTANT — Mitigation column boundaries: The third column of the Risks & Constraints table ("How the other items in this test plan will mitigate the risk") must ONLY reference:

- Scope decisions (what is in or out of scope)
- Test approach allocations (the percentage split)
- Cadence choices
Do NOT include entry criteria, exit criteria, or edge cases in the mitigation column. Those belong in their own dedicated sections. If a risk naturally leads to an entry criterion or edge case, that content goes in the Entry/Exit Criteria or Edge Cases table — not here.
Scope is risk-driven. Items are placed in-scope specifically to mitigate identified risks. Items are placed out-of-scope as deliberate trade-offs.
For each in-scope item, you should be able to trace it back to a risk it mitigates. If a testing area doesn't mitigate any identified risk, question whether it belongs in scope.
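The traceability check above can be sketched as a simple mapping from in-scope items to the risks they mitigate. This is a hypothetical illustration (the risk IDs and scope items are invented for the example), not part of the skill:

```python
# Hypothetical sketch: every in-scope item should trace back to a risk
# it mitigates; items with no matching risk are flagged for review.
risks = {
    "R1": "Late delivery compresses the testing window",
    "R2": "New matchmaking path changes existing behavior",
}
in_scope = {
    "Daily smoke tests on both platforms": "R1",
    "Matchmaking integration pass": "R2",
}
untraceable = [item for item, risk_id in in_scope.items() if risk_id not in risks]
# Any entry in untraceable should prompt the question: does it belong in scope?
```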
Assign percentage allocations across four testing types. Use the feature type from Step 2 as a starting baseline, then adjust based on the specific risks and design characteristics.
Baselines by feature type:
| Feature Type | Checklist | Exploratory | PerfMem | Playtest |
|---|---|---|---|---|
| Seasonal game feature | 40% | 40% | 10% | 10% |
| Infrastructure / system | 40% | 30% | 10% | 20% |
| UX/UI flow | 30% | 50% | 0% | 20% |
| Content / asset change | 50% | 30% | 10% | 10% |
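The baselines table above can be expressed as a lookup, with a sanity check that every split totals 100%. A minimal sketch for illustration; `baseline_for` is a hypothetical helper, and the values mirror the table exactly:

```python
# The baseline test-approach allocations from the table above, keyed by
# feature type. Values are starting points to adjust per identified risks.
BASELINES = {
    "seasonal game feature": {"checklist": 40, "exploratory": 40, "perfmem": 10, "playtest": 10},
    "infrastructure / system": {"checklist": 40, "exploratory": 30, "perfmem": 10, "playtest": 20},
    "ux/ui flow": {"checklist": 30, "exploratory": 50, "perfmem": 0, "playtest": 20},
    "content / asset change": {"checklist": 50, "exploratory": 30, "perfmem": 10, "playtest": 10},
}

def baseline_for(feature_type: str) -> dict[str, int]:
    """Return the baseline split for a feature type; every split totals 100%."""
    split = BASELINES[feature_type.lower()]
    assert sum(split.values()) == 100, "allocations must total 100%"
    return split
```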
Adjustment guidelines:
Write the complete test plan to disk as a markdown file alongside the source document:
`{feature-name}-test-plan.md`, deriving `{feature-name}` from the document title or the feature name identified in Step 3, sanitized to lowercase with hyphens.

The generated test plan must follow this structure. All sections are required. Tables must use the exact column structure shown.
# [Team/Product] - [Feature Name]
## Overview
[2-4 sentence description of the feature/system being tested. What is it, what does it change, and why does it matter? Written from the QA perspective — what are we focused on validating?]
[Reference links to design docs, JIRA epics, Slack channels, or other relevant documentation provided by the user or found in the source documents.]
## Scope
*You should be defining your test scope based on the risks you have identified for your feature. You might place some items as out of scope as a trade-off to bring other things in-scope in order to mitigate certain risks. Use testing trade-offs to inform scope.*
### In Scope
[Each entry is a high-level testing area in bold, followed by bullet points of specific items within that area.]
- **[Area of testing]**
- [Specific item]
- [Specific item]
- **[Area of testing]**
- [Specific item]
- [Specific item]
### Out of Scope
[Each entry is an exclusion with a brief rationale explaining *why* it is excluded (different team owns it, lower risk, not changing, etc.).]
- **[Excluded area]** — [Rationale for exclusion]
- **[Excluded area]** — [Rationale for exclusion]
## Risks & Constraints
*This should not be a full risk analysis (do that in reports). This should list major risks to the feature/system and the mitigation should tie into how you create this test plan — it is why you are writing the test plan.*
### [Short Risk Title]
**Risk**: [Fully articulated risk statement using the 3-whys methodology. Not a surface observation — the drilled-down core impact.]
**Likelihood**: [Low/Medium/High] | **Severity**: [Low/Medium/High]
[Additional context on what it blocks or enables.]
**Mitigation**: [Reference ONLY scope decisions, test approach allocation (%), or cadence choices that mitigate this risk. Do NOT restate entry/exit criteria or edge cases here — those have their own sections.]
---
[Repeat for each risk. Use a horizontal rule (---) between risks for visual separation.]
## Edge Cases and Integration Testing
*List out edge cases for the feature and any other features/systems the feature may interact with that will require integration testing. Keep the edge case scenarios high level and avoid writing out detailed test cases here.*
| Edge Case / Integration | Expected Behavior |
|---|---|
| [System or integration point] | [What should happen — the expected correct behavior] |
| ... | ... |
## Test Approach
*Use the baselines table from Step 6 as a starting point, then adjust based on identified risks.*
| Checklist | Exploratory | PerfMem | Playtest |
|---|---|---|---|
| [X]% | [Y]% | [Z]% | [W]% |
[1-2 sentences justifying the chosen split based on the feature type and identified risks. If the plan covers multiple distinct areas, break down the split per area.]
## Cadence
*Your cadence for each activity depends on a number of factors unique to your team/feature, such as the speed of development, stability trends, responsible dev testing/check-ins, adherence to deadlines, and overall risk. The cadence may need to be re-adjusted later if some of these factors change.*
| Test Activity | Cadence |
|---|---|
| [Activity type, e.g., Smoketests] | [Frequency and conditions, e.g., Daily on Android, iOS] |
| [Golden Path Regression] | [e.g., At Hardlock, At Pencils Down] |
| [Targeted Regression / One-offs] | [e.g., Once at Hardlock, then as-needed] |
| [Playtests] | [e.g., Minimum 1 full playtest before Pencils Down] |
| [PerfMem captures] | [e.g., One-off A/B capture comparing before/after] |
| ... | ... |
## Entry / Exit Criteria
*Any criteria not met in either column at the time it is expected should be converted into a risk during risk reporting.*
| Entry Criteria | Exit Criteria |
|---|---|
| [What must be true before testing can begin — prerequisites, dependencies resolved, builds available, systems stable] | [What must be validated/achieved before testing is considered complete — smoke tests pass, regression clean, golden path validated, etc.] |
| ... | ... |
---
*Generated by test-planner skill. Review and validate with your team before finalizing.*