This skill should be used when "applying PR feedback", "processing discussion conclusions", "updating requirements from meeting notes", or any update that touches requirements.md, design.md, design-approach.md, or decisions.md. Ensures content lands at the correct abstraction level and in the correct document.
How to process incoming changes from any source and correctly route content to the right document at the right abstraction level.
Always load this skill when an update touches requirements.md, design.md, design-approach.md, or decisions.md.
The discipline applies regardless of which document is currently in scope. If a design-only PR produces a change that belongs in requirements, update requirements too.
Translate between abstraction levels. Never copy-paste.
When a reviewer says something, their words contain both the WHAT (requirement/outcome) and the HOW (design/mechanism). Your job is to separate these and route each to the correct document.
| Document | Contains | Abstraction Level |
|---|---|---|
| requirements.md | What, why, outcomes | Behavioral |
| design-approach.md | Which direction, why | Strategic |
| design.md | How, where, mechanisms | Implementation |
| decisions.md | Why we chose, ADR format | Rationale |
For each piece of content, ask: "Do we have a choice?"
Content likely belongs in requirements when it contains outcome language, externally imposed constraints, or statements about what the system must achieve, with no mechanism named.
Content likely belongs in design-approach when it contains strategic direction, or an argument for one overall approach over another.
Content likely belongs in design when it contains mechanisms: component names, APIs, storage details, or where logic lives.
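A first-pass classification along these lines can be sketched as a keyword heuristic. The word lists below are hypothetical, not the skill's canonical signals; a human makes the final call.

```python
def classify(sentence):
    """Rough first-pass classification of a conclusion sentence.

    Keyword lists are illustrative only; real routing needs judgment.
    """
    s = sentence.lower()
    if any(w in s for w in ("must", "users can", "outcome", "constraint")):
        return "requirement"
    if any(w in s for w in ("approach", "direction", "strategy", "trade-off")):
        return "design-approach"
    # Default mirrors the skill's fallback: design tolerates specificity.
    return "design"
```

Note the fallback to `design`, matching the rule later in this document: when classification is ambiguous and no one is available to ask, design is the safer home.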
Single-thread discussions with clear conclusions.
Process: extract the conclusion, classify it as requirement, design, or decision, and route it to the target document in that document's language.
Multiple unresolved threads on the same PR.
Process: scan every thread and extract its conclusion before applying any of them.
Why scan first: Thread 3 may reverse Thread 1's conclusion; applying Thread 1 first and Thread 3 later wastes effort.
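The scan-first rule amounts to keeping only the last conclusion per topic. A minimal sketch, with hypothetical names and data:

```python
def final_conclusions(threads):
    """Collapse ordered PR threads to one final conclusion per topic.

    `threads` is a list of (topic, conclusion) pairs in thread order.
    A later thread on the same topic supersedes an earlier one, which
    is why every thread is scanned before any conclusion is applied.
    """
    latest = {}
    for topic, conclusion in threads:
        latest[topic] = conclusion  # later threads overwrite earlier ones
    return latest

threads = [
    ("conflict handling", "server rejects conflicting paths"),
    ("pagination", "cursor-based"),
    ("conflict handling", "client detects conflicts at lock time"),
]
```

Applying threads one at a time instead would have written the first "conflict handling" position into the documents only to rewrite it later.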
Long-form input from a meeting or conversation.
This is the highest-risk source for conflation. Meeting discussions are nonlinear — participants explore, backtrack, and refine positions throughout.
Process:
Read the entire transcript/summary before extracting anything. During the scan, note each topic raised, any reversals of earlier positions, and unresolved questions.
For each topic, identify the final position — the last agreed-upon state, not intermediate positions:
Topic: [topic]
Final conclusion: [what was ultimately decided]
Reversed from: [earlier position, if any]
Supporting context: [rationale, examples, constraints]
Classification: requirement | design | decision
Confidence: high | medium | low
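The template above can be carried as a small record type while working through a transcript. The field names mirror the template; the class and sample values are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Conclusion:
    topic: str
    final: str                    # what was ultimately decided
    reversed_from: Optional[str]  # earlier position, if any
    context: str                  # rationale, examples, constraints
    classification: str           # "requirement" | "design" | "decision"
    confidence: str               # "high" | "medium" | "low"

note = Conclusion(
    topic="conflict handling",
    final="client detects conflicts at lock time",
    reversed_from="server rejects conflicting paths",
    context="keeps the API simple; conflicts surface where locks are taken",
    classification="design",
    confidence="high",
)
```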
Mark "low confidence" when the final position is implied rather than stated, when participants backtracked without an explicit resolution, or when the conclusion could read as either a requirement or a design detail.
For each conclusion, determine target document(s):
| If the conclusion is about... | Target |
|---|---|
| What the system must achieve | requirements.md |
| How the system achieves it | design.md |
| A significant choice with alternatives | decisions.md + affected doc |
| Strategic direction change | design-approach.md |
| A constraint imposed externally | requirements.md |
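The routing table reads naturally as a lookup. A sketch (kind names are shorthand for the table rows, not terms the skill mandates):

```python
def targets(kind, affected_doc=None):
    """Map a conclusion's classification to the document(s) to update.

    Kinds mirror the routing table: outcome/constraint -> requirements,
    mechanism -> design, direction -> design-approach, and a decision
    fans out to decisions.md plus the document it affects.
    """
    if kind == "decision":
        return ["decisions.md"] + ([affected_doc] if affected_doc else [])
    table = {
        "outcome": ["requirements.md"],
        "mechanism": ["design.md"],
        "direction": ["design-approach.md"],
        "constraint": ["requirements.md"],
    }
    return table[kind]
```

The one non-trivial row is the decision case: a significant choice always produces two updates, the ADR and the document whose content the choice shapes.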
For each conclusion, write it in the target document's language:
Example — reviewer says:
"The API should store all paths without checking for conflicts. The client handles conflict detection during lock time."
This contains a requirement, a design mechanism, and a decision worth recording:
requirements.md gets the outcome:
"Path conflicts are detected and reported to users"
design.md gets the mechanism:
"The API stores all published paths without server-side conflict prevention. The client detects conflicts during lock time and warns."
decisions.md gets the ADR:
"D6: No Server-Side Conflict Detection — stores data for any path regardless of conflicts. Conflict logic is client-side."
After applying any updates, verify the boundary was maintained.
Scan requirements.md for implementation-language patterns. If found in success criteria or scope sections, rewrite:
Flag patterns that name mechanisms: storage and schema details, decomposed fields, API behavior, or component names.
Rewrite pattern:
Before: "Database stores URL decomposed into base URL, ref, and rev as distinct components"
After: "Source URL components (base URL, ref, rev) tracked to support the latest algorithm (D2, D4)"
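A scan like this could be partly automated with a crude pattern check. The pattern list below is a hypothetical starting point, not the skill's canonical set:

```python
import re

# Words that usually signal mechanism rather than outcome when they
# appear in requirements.md success-criteria or scope sections.
IMPLEMENTATION_PATTERNS = [
    r"\bdatabase\b", r"\bAPI\b", r"\bendpoint\b", r"\bschema\b",
    r"\bstores?\b", r"\bdecomposed\b", r"\bserver-side\b",
]

def flag_implementation_language(text):
    """Return (line_number, line) pairs that smell like design detail."""
    hits = []
    for i, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in IMPLEMENTATION_PATTERNS):
            hits.append((i, line.strip()))
    return hits
```

Flagged lines still need a human rewrite; the scan only points at candidates.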
Scan design.md for bare behavioral statements that match requirements.md verbatim. Design should reference requirements, not restate them.
Rewrite pattern:
Before: "Only successfully built packages appear in the catalog"
After: "The CLI enforces D1 by only calling the API after a successful build completes"
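Verbatim restatements can likewise be caught mechanically. A sketch with deliberately minimal normalization; a real check would want fuzzier matching:

```python
def verbatim_overlap(requirements_text, design_text):
    """Return design lines that restate a requirements line verbatim.

    Normalization here is only case and a trailing period, enough to
    catch copy-paste; paraphrases will slip through.
    """
    def norm(s):
        return s.strip().lower().rstrip(".")
    req_lines = {norm(l) for l in requirements_text.splitlines() if l.strip()}
    return [
        l.strip()
        for l in design_text.splitlines()
        if l.strip() and norm(l) in req_lines
    ]
```

Any hit is a line that design.md should replace with a reference to the requirement it enforces.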
When a conclusion reverses an existing decision, never apply the reversal language verbatim to requirements; translate it to outcome level.
When a conclusion targets a document outside the current PR's scope, ask the user before applying.
In-scope = any document already changed by the PR. Out-of-scope = any document NOT changed by the PR.
"This discussion conclusion belongs in {target_doc}, which is not part of this PR:
Conclusion: {brief summary}
Classified as: {requirement | design | decision}
Options:
- Apply now (update {target_doc} in this branch)
- Capture as follow-up (note in PR comment, apply later)
- Skip (leave for human review)"
When content doesn't clearly classify as requirement or design:
"This conclusion could be either a requirement or design detail:
Content: {the conclusion}
As a requirement: {how it would read in requirements.md}
As design: {how it would read in design.md}
Which classification is correct?"
Default when the user is unavailable: classify as design. Design documents tolerate more specificity, while requirements documents are harmed by implementation leakage.
When processing any update, ask these five questions: