Self-feedback review loop that verifies an implementation matches its specification. Use `--mode quick/standard/thorough` to control review depth; the default is thorough (3 consecutive passes).
Automatically verifies that implementations match their specifications through systematic self-review with iterative improvement. Activates when you say "implementation is done" or "review my implementation".
```
/plugin marketplace add teliha/dev-workflows
/plugin install dev-workflows@dev-workflows
```
This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill automatically activates when you say things like "implementation is done" or "review my implementation".
Ensure that implementations fully and correctly match their specifications through systematic self-review and iterative improvement.
Use --mode or -m to control review depth:
| Mode | Passes | Categories | Use Case |
|---|---|---|---|
| quick | 1 | SPEC_COMPLIANCE only | Fast sanity check |
| standard | 2 consecutive | All 4 categories | Balanced review |
| thorough | 3 consecutive | All 4 categories | Full review (default) |
```
implementation review --mode quick
implementation review -m standard src/feature/
implementation review                 # defaults to thorough
```
Quick Mode (1 pass):
Standard Mode (2 consecutive passes):
Thorough Mode (3 consecutive passes) - DEFAULT:
Problem: If you (Claude) wrote the implementation, reviewing it in the same conversation introduces severe bias:
Solution: Always perform reviews with a FRESH context using a subagent.
MANDATORY: When reviewing implementation against spec, spawn a NEW agent with NO prior context:
Use the Task tool with subagent_type="general-purpose" and a prompt that:
1. Contains ONLY the spec file path and implementation file paths
2. Does NOT include any conversation history
3. Does NOT mention what was discussed or decided
4. Asks for objective comparison against the spec
The reviewer must have ZERO memory of writing the code.
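For example, a context-isolated review prompt (the file paths here are placeholders) might look like:

```
Task(
  subagent_type: "general-purpose",
  prompt: "Compare the implementation in src/feature/ against the
           specification in docs/spec.md. Report every mismatch as
           {severity, location, description, fix}. Assume no prior
           context about how or why the code was written."
)
```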
Find the specification and implementation:
Launch parallel subagents using Task tool (subagent_type: general-purpose) for context-isolated review:
Category 1: SPEC COMPLIANCE
- All requirements implemented as specified
- Response formats match spec exactly
- Error codes and messages match spec
- Edge cases handled as specified
Category 2: CODE QUALITY
- No hardcoded values that should be configurable
- Proper error handling
- Type safety (no `any` types in TypeScript)
- Following project patterns
Category 3: TASK COMPLETION
- All tasks/requirements are completed
- No partial implementations
- Tests written for all scenarios
Category 4: SECURITY
- Authentication applied where required
- Input validation implemented
- No sensitive data exposure
- Authorization checks in place
Each subagent returns:
```
{
  "category": "...",
  "issues": [
    {"severity": "error|warning", "location": "file:line", "description": "...", "fix": "..."}
  ],
  "passed": true/false
}
```
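As a sketch (not part of the skill itself; function and key names are illustrative), aggregating these per-category results into one verdict could look like:

```python
# Sketch: combine per-category subagent results into a single verdict.
# Each dict in `results` follows the schema shown above.
def aggregate(results):
    issues = [issue for r in results for issue in r["issues"]]
    errors = sum(1 for i in issues if i["severity"] == "error")
    warnings = sum(1 for i in issues if i["severity"] == "warning")
    # The review passes only if every category passed and no issues remain,
    # since warnings also block a PASS, not just errors.
    passed = all(r["passed"] for r in results) and not issues
    return {"passed": passed, "errors": errors, "warnings": warnings}
```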
IMPORTANT: Fix ALL issues including warnings, not just errors.
```
# Passes required based on mode:
#   quick:    1 pass (no consecutive requirement)
#   standard: 2 consecutive passes
#   thorough: 3 consecutive passes (default)

consecutive_passes = 0
iteration = 0
REQUIRED_PASSES = mode_to_passes(mode)   # 1, 2, or 3

WHILE consecutive_passes < REQUIRED_PASSES:
    iteration++
    1. Aggregate results from all subagents
    2. If issues found (errors OR warnings):
       - consecutive_passes = 0          # reset counter
       - Apply fixes to code
       - Update tests if needed
    3. If no issues (PASS):
       - consecutive_passes++
    4. Re-run parallel review with fresh subagents
```
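The loop above can be sketched in runnable Python (the `run_review` and `apply_fixes` callables are assumptions standing in for the subagent machinery):

```python
# Sketch of the consecutive-pass review loop; names are illustrative.
MODE_PASSES = {"quick": 1, "standard": 2, "thorough": 3}

def review_loop(run_review, apply_fixes, mode="thorough", max_iterations=10):
    """run_review() returns a list of issues; an empty list means PASS.
    Returns (iterations_used, passed)."""
    required = MODE_PASSES[mode]
    consecutive = 0
    iteration = 0
    while consecutive < required and iteration < max_iterations:
        iteration += 1
        issues = run_review()      # fresh subagents on every pass
        if issues:                 # errors OR warnings reset the streak
            consecutive = 0
            apply_fixes(issues)
        else:
            consecutive += 1
    return iteration, consecutive >= required
```

Note that any non-empty result resets `consecutive` to zero, which is what forces the stability requirement of N passes in a row rather than N passes total.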
Termination Conditions by Mode:
Output structured summary:
## Implementation Review Results: <feature-name>
| Category | Status | Issues |
|----------|--------|--------|
| Spec Compliance | PASSED/FAILED | N errors, M warnings |
| Code Quality | PASSED/FAILED | N errors, M warnings |
| Task Completion | PASSED/FAILED | N errors, M warnings |
| Security | PASSED/FAILED | N errors, M warnings |
### Iterations Summary
| Iteration | Issues Found | Consecutive Passes |
|-----------|--------------|-------------------|
| 1 | 5 errors, 3 warnings | 0 |
| 2 | 0 | 1 |
| 3 | 0 | 2 |
| 4 | 0 | 3 |
### Fixed Issues (per iteration)
- Iteration 1: [list of fixes]
### Final Status: PASSED (3 consecutive)
Review stability confirmed with 3 consecutive passes.
| Level | Meaning | Action |
|---|---|---|
| error | Must fix before commit | Blocks merge |
| warning | Should fix for quality | Also fixed in feedback loop |
For each requirement in the spec:
Template:
| Spec Requirement | Status | Implementation Location | Notes |
|-----------------|--------|------------------------|-------|
| [Requirement 1] | PASS/FAIL/PARTIAL | `file.ts:line` | [Notes] |
| [Requirement 2] | PASS/FAIL/PARTIAL | `file.ts:line` | [Notes] |
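A trivial helper (hypothetical, not part of the skill) could emit these matrix rows:

```python
# Sketch: render one traceability-matrix row as a Markdown table line.
def matrix_row(requirement, status, location, notes=""):
    assert status in {"PASS", "FAIL", "PARTIAL"}
    return f"| {requirement} | {status} | `{location}` | {notes} |"
```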
When an issue is found after a previous PASS iteration, it indicates a review oversight. Record these oversights in a CLAUDE.md file in the directory that contains the problematic file.
Example: If the issue is in `src/features/auth/login.ts`, record it in `src/features/auth/CLAUDE.md`.
When an oversight occurs (PASS then FAIL), append to the CLAUDE.md in that directory:
```markdown
# Overlooked Issues for This Directory

## [Issue Pattern Name]
**Category**: SPEC_COMPLIANCE/CODE_QUALITY/TASK_COMPLETION/SECURITY
**File**: filename.ts
**Missed in iteration**: N
**Found in iteration**: M
**Description**: What was missed
**Check instruction**: Specific verification step

---
```
When reviewing, subagents should:
Additional checks:
Additional checks:
Additional checks:
Stop the loop when:
Don't stop if:
User: "I've finished implementing the feature"
Skill activates:
1. Finds related spec
2. Reviews implementation
3. Reports gaps
4. Applies fixes
5. Loops until complete (3 consecutive passes)
6. Reports final status
After implementation review passes, DO NOT automatically proceed to next steps.
When the implementation review passes (ALL REQUIREMENTS MET):
Rationale: The user should maintain control over when to proceed.