From patchy-bot
This skill should be used when the user asks for a "scope check", "am I on track", "staying focused", "check for scope creep", "what was I asked to do", or when performing any multi-step implementation task. Trigger automatically during planning, implementation, and before marking work complete. Prevents scope creep by comparing current work against the original user request.
npx claudepluginhub kman182401/patchy-operational
This skill uses the workspace's default tool permissions.
Prevent scope creep by continuously comparing work against the original user request. Act as a discipline layer that catches drift before it compounds.
Most task failures come not from doing the wrong thing but from doing too many things. Scope Guard enforces a single rule: do what was asked, nothing more, nothing less. Every addition not in the original request is suspect until explicitly approved.
At the start of every non-trivial task, extract and lock the original request: the user's exact words and the concrete deliverables they imply.
Store this as the Scope Anchor. Everything is measured against it.
At each decision point, apply the Scope Test:
"Does this change directly serve one of the deliverables in the Scope Anchor?"
If yes, proceed. If no, flag it.
When the Scope Test fails, stop immediately and name the drift using this format:
SCOPE DRIFT DETECTED
Anchor: [original request, quoted]
Current action: [what is about to happen]
Drift type: [category from list below]
Recommendation: skip / ask user / defer to follow-up
Do not silently skip the drifting work. Name it explicitly so the user can decide.
Before marking any task complete, produce the Scope Report (see Output Format below). Verify every deliverable from the Scope Anchor was addressed. Verify nothing outside the Scope Anchor was added without explicit approval.
Recognize and label these specific anti-patterns:
Drive-by refactoring: Modifying code near the target that was not part of the request. Renaming variables, reformatting, fixing lint warnings in adjacent lines, updating comments on unchanged functions.
Test: Was this file or function mentioned in the original request? If not, do not touch it.
Unrequested robustness: Adding error handling, logging, validation, or retry logic that was not requested. These are often good ideas, but they are not what was asked for.
Test: Did the user ask for this specific behavior? If not, flag it.
Premature abstraction: Extracting a helper function, creating a base class, or introducing a design pattern to make the code "more maintainable" when the request was for a specific concrete change.
Test: Does the original request mention reusability, abstraction, or architecture? If not, implement the simplest concrete solution.
Test scope expansion: Writing tests for pre-existing code that was not modified. Adding integration tests when only unit tests were implied. Building test infrastructure (fixtures, factories, mocks) beyond what the specific test needs.
Test: Does this test verify the specific change requested? If not, skip it.
Unrequested backwards compatibility: Adding migration paths, deprecation warnings, or compatibility layers that were not requested. Often triggered by changing an interface and feeling obligated to preserve the old one.
Test: Did the user ask for backwards compatibility? If not, make the change directly.
Documentation expansion: Updating READMEs, adding docstrings to unchanged functions, creating architecture diagrams, or writing migration guides that were not part of the request.
Test: Did the user ask for documentation? If not, skip it.
Unrequested configurability: Adding configuration options, environment variable support, CLI flags, or feature toggles to make the change "more flexible" when a hardcoded or simple approach satisfies the request.
Test: Did the user ask for configurability? If not, implement the direct solution.
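As a concrete illustration of these tests, suppose the request was only "fix the off-by-one bug in paginate()". The paginate helper below is hypothetical; the comments mark which changes pass the Scope Test and which would be drift:

```python
def paginate(items, page, page_size):
    """Return one page of items (hypothetical helper, 1-indexed pages)."""
    # In scope: the requested fix. The buggy version started at
    # `page * page_size`, which skipped the first page entirely.
    start = (page - 1) * page_size
    return items[start:start + page_size]

    # Out of scope for this request, even though each might be a good idea:
    #   - validating that page >= 1 and raising ValueError  (unrequested robustness)
    #   - making page_size a configurable default            (unrequested configurability)
    #   - renaming `items` to `collection` for clarity       (drive-by refactoring)


print(paginate(list(range(10)), 1, 3))  # [0, 1, 2]
print(paginate(list(range(10)), 2, 3))  # [3, 4, 5]
```

Each rejected change maps onto one of the anti-patterns above; naming the category is what makes the drift easy to flag rather than silently absorb.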
When performing a scope check (automatic or manual), produce this brief comparison:
SCOPE REPORT
Original ask: "[user's exact words]"
Deliverables identified: [numbered list]
Current work summary: [what has been done so far]
Drift items: [list any work outside scope, or "None"]
Status: ON TRACK / DRIFTING
Recommendation: [continue as-is / refocus on X / ask user about Y]
Keep the report concise — 10 lines maximum. The goal is a quick calibration, not a lengthy review.
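The report format above can be rendered mechanically. A hypothetical sketch (scope_report is not a real API; the field names mirror the template):

```python
def scope_report(original_ask, deliverables, work_summary, drift_items):
    """Render the Scope Report template as plain text (hypothetical sketch)."""
    status = "ON TRACK" if not drift_items else "DRIFTING"
    lines = [
        "SCOPE REPORT",
        f'Original ask: "{original_ask}"',
        "Deliverables identified: " + "; ".join(
            f"{i}. {d}" for i, d in enumerate(deliverables, 1)),
        f"Current work summary: {work_summary}",
        "Drift items: " + ("; ".join(drift_items) if drift_items else "None"),
        f"Status: {status}",
    ]
    return "\n".join(lines)


report = scope_report(
    original_ask="Fix the off-by-one bug in paginate()",
    deliverables=["fix off-by-one in paginate()"],
    work_summary="corrected the start index calculation",
    drift_items=[],
)
print(report)
```

Deriving the status from the drift list, rather than asserting it separately, keeps the report honest: it cannot claim ON TRACK while listing drift.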
When building a plan, apply the Scope Test to every planned step. Remove steps that fail the test before presenting the plan. If a step is borderline, mark it as "optional — not in original scope" rather than including it silently.
During implementation, check scope at natural breakpoints: after each file is modified, after each logical step completes, before starting a new phase of work.
Before saying "done," always produce the Scope Report. If any drift items exist that were not explicitly approved, call them out.
If the user explicitly approves a scope expansion ("yes, also do X"), update the Scope Anchor to include the new deliverable. Note that it was added mid-task. Continue guarding against further drift from the updated anchor.