Mid-task self-reflection checkpoint using Serena metacognitive tools. Use when you need to pause and evaluate: "am I on track?", "have I gathered enough info?", "am I actually done?". Triggers on: completing a significant chunk of work, before claiming done, when feeling uncertain about direction, after a long sequence of tool calls, before making large code changes.
Install via:

npx claudepluginhub wgordon17/personal-claude-marketplace --plugin code-quality

This skill is limited to using the Serena MCP tools listed below.
Structured metacognitive pause using Serena's reflection tools. Forces you to stop, evaluate your progress, and course-correct before continuing.
Run these Serena tools in order. Each returns a structured prompt — follow its instructions before proceeding to the next.
1. mcp__serena__think_about_task_adherence()
   Evaluates: Am I deviating from the task? Have I loaded relevant project memories? Should I stop and ask the user rather than make potentially misaligned changes?
   Act on the result: if it identifies deviation, correct course before continuing; if it suggests loading memories, do so.

2. mcp__serena__think_about_collected_information()
   Evaluates: Have I gathered all the information I need? What's missing? Can I acquire it with available tools, or do I need to ask the user?
   Act on the result: if information gaps are identified, fill them before continuing.

3. mcp__serena__think_about_whether_you_are_done()
   Evaluates: Have I performed all required steps? Should I run tests? Update docs? Write new tests? Read completion-related memory files?
   Act on the result: if steps remain, do them. Don't claim done until this passes.
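The run-in-order, act-on-each-result flow above can be sketched as a small driver. The `call_tool` adapter and the shape of each result are assumptions for illustration; in practice the tools are invoked directly as MCP tool calls, and each returned prompt must be acted on before the next step.

```python
from typing import Callable

# Serena reflection tools, in the order the checkpoint runs them.
REFLECTION_STEPS = [
    "mcp__serena__think_about_task_adherence",
    "mcp__serena__think_about_collected_information",
    "mcp__serena__think_about_whether_you_are_done",
]

def run_reflection(call_tool: Callable[[str], str]) -> list[tuple[str, str]]:
    """Run each reflection tool in order and collect its structured prompt.

    `call_tool` is a hypothetical adapter that invokes one MCP tool by name
    and returns its text result. Each returned prompt is meant to be acted
    on before the next step runs.
    """
    results = []
    for step in REFLECTION_STEPS:
        prompt = call_tool(step)  # structured prompt whose instructions to follow
        results.append((step, prompt))
    return results
```

The point of the ordering is that adherence is checked before information, and completeness last: there is no value in asking "am I done?" while still off-track.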
When wrapping up a significant body of work (not just a mid-task checkpoint):
mcp__serena__summarize_changes()
Explores the diff, summarizes all changes, explains test coverage, flags dangers and breaking changes, identifies documentation needs.
If the reflection surfaces important insights (architectural decisions, gotchas, patterns discovered), persist them:
mcp__serena__write_memory(memory_name="reflections/<topic>", content="...")
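Keeping `reflections/<topic>` names consistent is easier with a small slug helper. This is a hypothetical convenience, not part of the skill; the skill itself just passes a literal `memory_name`.

```python
import re

def reflection_memory_name(topic: str) -> str:
    """Build a 'reflections/<topic>' memory name from a free-form topic.

    Lowercases the topic and collapses runs of non-alphanumeric characters
    into single hyphens, so memory names stay predictable and greppable.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"reflections/{slug}"
```

For example, a topic like "Cache invalidation gotchas" becomes `reflections/cache-invalidation-gotchas`.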
Full reflection (default) — run all 3 steps in sequence:
/reflect
Targeted reflection — run only the relevant step by calling the corresponding Serena tool directly.
If the Serena MCP tools are not available (server not running, tools not enabled), perform the same metacognitive checks using extended thinking. For each step, explicitly pause and think through the questions that tool evaluates, as listed above.
The tools enforce structured reflection; without them, be explicit about reasoning through the same questions.