From developer-workflow
Generates structured, prioritized test plans from feature specs, Figma designs, or code. Includes risk analysis, coverage matrix, automation candidates. Saves Markdown to docs/testplans/.
Install: `npx claudepluginhub kirich1409/krozov-ai-tools --plugin developer-workflow`

This skill uses the workspace's default tool permissions.
Analyze a feature from its specification, design, or implementation and produce a structured,
prioritized test plan as a Markdown document. No tests are executed — the output is a plan ready
for a human QA engineer or the manual-tester agent to pick up later.
Save every test plan to the repository at `docs/testplans/<feature-name>-test-plan.md`.
Create the `docs/testplans/` directory if it doesn't exist. Use kebab-case for the feature name.
Examples: `user-authentication-test-plan.md`, `cart-checkout-test-plan.md`.
Determine what the user has provided and gather context accordingly:

- **Specification document** — read the document and extract the requirements, flows, and constraints it defines.
- **Figma design** — use the Figma MCP tools (`get_design_context`, `get_screenshot`) to retrieve the design and extract the screens, states, and interactions it shows.
When code is the primary (or only) source of truth, read the implementation thoroughly.
When deriving test cases from code alone, be explicit about assumptions. Mark any inferred
behavior that has no spec backing with [inferred from code] so reviewers know what to verify
against product intent.
Often the user provides more than one source. Cross-reference them and record any discrepancies in the Findings section of the plan.

Before writing test cases, identify:
- **Risk areas** — parts of the feature most likely to break or cause user-visible issues. Consider: complexity, number of integration points, data sensitivity, new vs. changed behavior.
- **Edge cases** — boundary values, empty/null inputs, concurrent actions, permission boundaries, network failures, locale/timezone effects, large datasets.
- **State combinations** — which states interact and which transitions are possible. A simple matrix helps: list states on one axis, user actions on the other, and mark which intersections need coverage.
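For example, a state-by-action matrix for a hypothetical session feature could look like this (the states, actions, and markings are invented for illustration):

```markdown
| State / Action | Log in | Add item | Log out |
|----------------|--------|----------|---------|
| Logged out     | cover  | cover    | n/a     |
| Logged in      | n/a    | cover    | cover   |
```

Each "cover" intersection becomes one or more test cases; "n/a" cells document intentionally impossible transitions.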
Every generated test plan must follow this exact structure:
# Test Plan: [Feature Name]
| Field | Value |
|-------|-------|
| **Source** | [spec link / Figma link / code path — whatever was provided] |
| **Generated** | [YYYY-MM-DD] |
| **Scope** | [one-line summary of what is covered] |
| **Status** | Draft / Ready for Review / Approved |
---
## Findings
Discrepancies, ambiguities, or assumptions discovered during analysis.
Each finding has a short title and explanation.
- **[Finding title]** — [explanation]
> Omit this section entirely if there are no findings.
---
## Risk Areas
| Area | Risk Level | Reason |
|------|-----------|--------|
| [area name] | High / Medium / Low | [why this area is risky] |
---
## Test Cases
### [Group Name]
Group related test cases by feature area, screen, or workflow
(e.g., Authentication, Cart Checkout, Error Handling).
#### TC-[N]: [Short descriptive title]
| Field | Value |
|-------|-------|
| **Priority** | P0 Critical / P1 High / P2 Medium / P3 Low |
| **Tier** | Smoke / Feature / Regression |
| **Preconditions** | What must be true before starting |
| **Steps** | 1. First step<br>2. Second step<br>3. Third step |
| **Expected Result** | Observable outcome that means the test passed |
| **Source** | Spec §section / Figma frame name / `path/to/file.kt:42` / [inferred from code] |
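For instance, a filled-in case might read as follows (the feature, steps, and spec reference are hypothetical):

```markdown
#### TC-1: Successful login with valid credentials

| Field | Value |
|-------|-------|
| **Priority** | P0 Critical |
| **Tier** | Smoke |
| **Preconditions** | A registered account exists; the app shows the login screen |
| **Steps** | 1. Enter the account email<br>2. Enter the matching password<br>3. Tap "Log in" |
| **Expected Result** | The home screen is shown for the signed-in user |
| **Source** | Spec §3.2 — Login flow |
```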
---
## Edge Cases & Negative Scenarios
Same TC format as above. Grouped separately for visibility.
Includes: boundary values, invalid inputs, error states, permission denials,
network failures, empty/null data, concurrent operations.
---
## Coverage Matrix
| Requirement / Screen / Flow | Test Cases | Risk |
|-----------------------------|-----------|------|
| [requirement or screen name] | TC-1, TC-3 | High |
| [another requirement] | TC-2 | Low |
---
## Suggested Automation Candidates
Test cases that are good candidates for automated testing.
| Test Case | Rationale |
|-----------|-----------|
| TC-[N] | [why this is a good automation candidate] |
> Omit this section if no test cases are suitable for automation.
**Priority levels**

| Priority | Meaning | Guideline |
|---|---|---|
| P0 Critical | Core happy path | If this fails, the feature is unusable |
| P1 High | Important flows | Security, data integrity, key user journeys |
| P2 Medium | Secondary flows | Edge cases with moderate impact |
| P3 Low | Minor scenarios | Cosmetic, rare edge cases, minor UX |
**Test tiers**

| Tier | Meaning | Guideline |
|---|---|---|
| Smoke | Is it alive? | Minimum set to confirm the feature works at all (3-5 tests max) |
| Feature | Does it work correctly? | Thorough coverage of the feature's behavior |
| Regression | Did we break anything? | Guards against breaking existing functionality |
**Source formats**

| Source type | Format | Example |
|---|---|---|
| Spec section | Spec §[section] | Spec §3.2 — Login flow |
| Figma frame | Figma: [frame name] | Figma: Login / Error State |
| Code path | backtick-wrapped path with line | `src/auth/LoginViewModel.kt:87` |
| Inferred | [inferred from code] | Behavior derived from code with no spec backing |