Generate a sign-off-ready test plan from PRD, Figma designs, and Jira epics. PM-first perspective: validates that the feature ships in the right state, the user experience holds under failure, and the business is protected. Covers happy path, validation, error recovery, integration failures, financial accuracy, state transitions, and access control. Outputs prioritized manual test cases with a PM sign-off gate.
From `pm-execution`. Install: `npx claudepluginhub jupitermoney/pm-superic-skills --plugin pm-execution`. This skill uses the workspace's default tool permissions.
Help a PM answer one question: is this feature ready to ship, and what do I need to see before I say yes?
The output is a structured test plan the PM can hand to the team, walk through in a review session, or use to run checks themselves. Every case maps to a product promise. Every expected result is something a non-engineer can observe. The sign-off gate is what the PM holds the team to before approving launch.
1. Start from the product promise, not the code. Every test case should trace back to something the feature is supposed to do for the user. If you cannot connect a test to a user benefit or a business protection, question whether it belongs in the PM's test plan.
2. Expected results must be observable without engineering tools. "API returns 200" is an engineering check. "User sees a success screen with the confirmed amount and a reference number" is a PM check. Frame expected results as what a person looking at the product would see, hear, or receive. Where backend verification is needed, note it explicitly as an engineering check alongside the user-facing assertion.
3. For every flow that works, test what happens when it breaks. The error path is where users lose trust. A broken success screen is bad. A broken error screen — one that leaves the user stranded with no next step — is worse. Every happy path needs at least one failure counterpart.
4. Flag what the spec does not answer. If the PRD does not describe what happens when the payment fails, the error state does not exist in the product. Write the test case anyway, mark it [SPEC GAP], and surface it to the designer and engineer before testing begins. Untested states ship as incidents.
5. Pre-conditions must be set up, not assumed. Do not write "user has completed onboarding." Write "User account: test-user-01, status: verified, balance: $500 — set up via [method]." Anyone on the team should be able to reproduce the setup independently.
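A pre-condition written this way can be captured as a small, repeatable setup note. The sketch below is illustrative: the account name, fields, and values are hypothetical placeholders, not part of any real setup API — substitute the actual setup method your team uses.

```python
# Hypothetical pre-condition spec for a single test case. Every value is
# explicit, so anyone on the team can recreate the same account state
# without relying on a prior test run.
PRECONDITION = {
    "account": "test-user-01",
    "status": "verified",
    "balance_usd": 500.00,
    "feature_flags": {"instant_payouts": True},
}

def setup_instructions(spec: dict) -> str:
    """Render the pre-condition as step-by-step setup lines for the plan."""
    return "\n".join(f"Set {key} = {value}" for key, value in spec.items())

print(setup_instructions(PRECONDITION))
```

The point of the structure is reproducibility: the spec is data, so the same values appear in the test plan, the setup script, and the bug report.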
6. The sign-off gate is a ship decision, not a formality. Define in advance what "done" looks like — which cases must pass, which open issues are acceptable, who gives the final approval. If the gate is vague, sign-off is meaningless.
The quality of the test plan is directly limited by the inputs provided. Before generating any test cases, confirm what is available:
| Input | What it unlocks | If unavailable |
|---|---|---|
| PRD | Acceptance criteria, business logic, constraints, compliance requirements | Test plan cannot map to product requirements — all cases will be inferred, not traced |
| Figma | UI states, error screen copy, empty states, loading states, navigation flows | UI-facing assertions will be incomplete — error copy, loading behavior, and edge states will be marked [SPEC GAP] |
| Jira epic / stories | Scope boundaries, cross-team dependencies, definition of done, known risks | Integration points and adjacent feature impact will be unknown |
If any of these is missing, ask the user to provide it or confirm explicitly that it is not available. Then flag in the test plan what coverage is missing as a result.
Flag if the PRD does not cover:
For every screen, extract each distinct state:
Also extract:
Flag if Figma is missing error states, empty states, or loading states for any interactive element.
Generate cases across all applicable categories. If a category does not apply, state why rather than silently skipping it.
The feature works as designed for the primary user journey. Cover all distinct entry points and user types — if there are 3 ways to reach this feature, test all 3.
For every user input:
Expected result for each: the exact error message shown to the user, or [SPEC GAP] if not defined in the PRD or Figma.
What happens when something goes wrong and the user tries to continue:
Every call to an external system can fail. For each integration point in the feature:
This is a PM check because the answer determines whether a partner outage creates a user-visible incident.
Users do not always follow the intended path:
Apply to any feature involving money, amounts, fees, limits, or records:
| Field | What to write |
|---|---|
| TC ID | TC-[FEATURE PREFIX]-[NUMBER] |
| Category | One of the 8 categories above |
| Scenario | Specific — not "happy path" but "user with verified account completes first-time purchase with saved card, sufficient balance" |
| Pre-conditions | Exact account state, feature flag status, test data values — no assumptions |
| Steps | Numbered, one action per step, with the immediate observable result noted inline where relevant |
| Expected result | What a person looking at the product sees — UI state, message text, amount displayed. Add "Engineering check:" for any backend verification needed alongside it. |
| Maps to | The PRD acceptance criterion or Jira story this test validates |
| Priority | P0 / P1 / P2 |
| Notes | Spec gaps, setup dependencies, known risks |
Priority definitions:
## Test Plan: [Feature Name]
**PRD**: [link or "not provided — coverage is inferred"]
**Figma**: [link or "not provided — UI state assertions are incomplete"]
**Jira**: [link or "not provided — scope boundaries and dependencies unconfirmed"]
**Date**: [today]
**Feature owner**: [PM name]
**Test environment**: [staging / UAT / production — controlled access]
---
### Spec gaps
[States or behaviors not defined in the inputs. Each is an untested risk that must be resolved before ship.]
1. [What is missing and which input should define it]
---
### Pre-test setup
[What must be true before any test runs — test accounts, feature flags, partner sandbox configuration, data setup method]
---
### Test suite
| TC ID | Category | Scenario | Pre-conditions | Steps | Expected Result | Maps to | Priority | Notes |
|-------|----------|----------|----------------|-------|-----------------|---------|----------|-------|
---
### Coverage summary
| Acceptance criterion | Test cases | Status |
|----------------------|------------|--------|
| AC-01: [criterion] | TC-X-001, TC-X-002 | |
| AC-02: [criterion] | TC-X-003 | |
---
### Sign-off gate
Before approving ship, confirm:
1. All P0 cases executed and passed
2. All P1 cases executed — any failures have a documented workaround or accepted risk with owner named
3. [Feature-specific condition — e.g., "financial accuracy cases verified by [team]"]
4. All spec gaps resolved or explicitly accepted as known risk before GA
5. [Who gives final sign-off and by when]
---
### What this plan cannot verify
[Risks that require additional spec, a different environment, or partner confirmation to test]
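The coverage summary can also be checked mechanically: every acceptance criterion should map to at least one test case. A minimal sketch with illustrative IDs (the AC/TC values are placeholders, not real requirements):

```python
# Map of acceptance criteria to the test cases that validate them,
# mirroring the "Coverage summary" table. IDs are illustrative.
coverage = {
    "AC-01": ["TC-X-001", "TC-X-002"],
    "AC-02": ["TC-X-003"],
    "AC-03": [],  # an uncovered criterion -- exactly what this check exposes
}

uncovered = [ac for ac, tcs in coverage.items() if not tcs]
print(uncovered)  # ['AC-03']
```

An uncovered criterion is the inverse of an unmapped test: both break the traceability that makes the sign-off gate meaningful.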
| Anti-pattern | Example | Fix |
|---|---|---|
| Vague expected result | "Screen should display correctly" | Name the specific element, its state, and the exact text |
| Chained pre-conditions | "Assuming prior test completed" | Write as standalone setup — specify the exact account state |
| Generic input values | "Enter a valid email address" | Use a specific value, plus a boundary case value |
| No failure counterpart | Only a success test for a user action | Add at least one case for what happens when it fails |
| Unverifiable assertion | "Performance should be acceptable" | Specify the observable threshold — "page loads before the spinner disappears" |
| Missing error copy | "User sees an error" | Specify the exact message or mark [SPEC GAP] |
| Unmapped test | Test case with no AC or story reference | Every case must trace to a requirement |
Apply the relevant module for the product area. These are the failure modes and accuracy checks a generic template misses.
Offer these after delivering the core test plan. Do not include them by default.
Regression map — "Want me to identify which existing flows this feature touches and suggest checks to run on them?" Use when: the feature changes shared state, modifies a core flow, or touches a system used by other features.
Automation candidates — "Want me to flag which cases are candidates for automated testing?" Use when: the team has a test automation practice and wants to identify scripted vs exploratory coverage.
Test data setup — "Want me to write the setup instructions or scripts for the pre-conditions?" Use when: pre-conditions are complex, require specific account states, or need to be repeatable across test runs.
Exploratory testing brief — "Want me to write a 30-minute exploratory testing brief for a team member to run alongside the scripted cases?" Use when: the feature is complex, has many edge states, or is in a high-risk area. Exploratory testing finds what scripted cases miss.