npx claudepluginhub cahaseler/cc-track-marketplace --plugin cc-track
Quickly add an item to the backlog without disrupting current work
Code review a pull request
Complete the current active task (Phase 2 of task completion workflow)
Update the configuration file at `.cc-track/track.config.json` based on the user's request.
---
Clean up and reduce bloat in context files
Triage code review issues one-by-one with Fix/Defer/Dismiss/Discuss options
Migrate from old task structure to spec-driven workflow format
---
Prepare the current active task for completion (Phase 1 of task completion workflow)
Guide the user through setting up cc-track in their project. This is the primary entry point for new users after installing the plugin.
Multi-agent spec-focused code review for task completion
---
Scoped dead code analysis & cleanup with parallel subagent investigation
---
Use this agent when reviewing AI-generated code changes to identify potential bugs, silent failures, inadequate error handling, and security issues. This agent should be invoked PROACTIVELY after completing a logical chunk of AI-generated work, especially code involving error handling, data validation, async operations, or external integrations. <example> Context: Claude has implemented error handling for an API client. user: "Let's review the error handling in the API client" assistant: "I'll use the bug-scanner agent to thoroughly examine the AI-generated error handling for potential issues and silent failures." <Task tool invocation to launch bug-scanner agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch bug-scanner to rigorously check for potential bugs and silent failures in the AI-generated code." <Task tool invocation to launch bug-scanner agent> </example> <example> Context: Claude has implemented data validation logic. user: "Let's check the input validation for security issues" assistant: "I'll use the bug-scanner agent to analyze the AI-generated validation logic for edge cases and security issues." <Task tool invocation to launch bug-scanner agent> </example>
Use this agent when analyzing AI-generated code comments for accuracy, completeness, and long-term maintainability. This agent should be invoked PROACTIVELY during task completion review to catch misleading comments, stale documentation, unresolved TODOs, and comments that don't match the code they describe. <example> Context: Claude has added documentation comments to new functions. user: "Let's verify the documentation comments are accurate" assistant: "I'll use the comment-compliance-reviewer agent to rigorously verify the AI-generated comments match the actual code." <Task tool invocation to launch comment-compliance-reviewer agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch comment-compliance-reviewer to check for stale or misleading comments in the AI-generated code." <Task tool invocation to launch comment-compliance-reviewer agent> </example> <example> Context: Claude has refactored code that had existing comments. user: "Let's check if the comments need updating after that refactor" assistant: "I'll use the comment-compliance-reviewer agent to identify AI-modified comments that may be stale or inaccurate." <Task tool invocation to launch comment-compliance-reviewer agent> </example>
Validates a single code review issue reported by another agent. Scores the issue 0-100 based on whether it's a real problem or false positive. Used by prepare-completion to filter review findings. This agent should NOT be invoked directly by users. It is spawned by the prepare-completion orchestrator, once per issue found by review agents.
Use this agent when checking for deprecated, orphaned, or dead code after AI-generated changes. This agent should be invoked PROACTIVELY during task completion review to catch code that should have been cleaned up but wasn't: stale imports, unused files, orphaned tests, and deprecated modules. <example> Context: Claude has refactored or replaced existing code. user: "Let's check if there's any leftover code that should be cleaned up" assistant: "I'll use the dead-code-detector agent to find deprecated or orphaned code from the AI-generated changes." <Task tool invocation to launch dead-code-detector agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch dead-code-detector to check for dead code and cleanup needed after the AI's changes." <Task tool invocation to launch dead-code-detector agent> </example> <example> Context: Claude has moved or reorganized code. user: "Did we leave any dead code behind after that reorganization?" assistant: "I'll use the dead-code-detector agent to find orphaned files or imports from the AI-generated reorganization." <Task tool invocation to launch dead-code-detector agent> </example>
Use this agent when checking AI-generated code for duplicate implementations of existing functionality. This agent should be invoked PROACTIVELY during task completion review to catch cases where the AI reimplemented something that already exists in the codebase or dependencies. <example> Context: Claude has implemented a new utility function. user: "Let's check if we already had something like this" assistant: "I'll use the duplication-detector agent to check if this AI-generated code duplicates existing functionality in the codebase." <Task tool invocation to launch duplication-detector agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch duplication-detector to check for redundant implementations in the AI-generated code." <Task tool invocation to launch duplication-detector agent> </example> <example> Context: Claude has added a new helper or service. user: "Does this duplicate anything we already have?" assistant: "I'll use the duplication-detector agent to search for existing implementations that the AI-generated code may be duplicating." <Task tool invocation to launch duplication-detector agent> </example>
Use this agent when reviewing AI-generated code for adherence to project guidelines in CLAUDE.md and constitution.md. This agent should be invoked PROACTIVELY during task completion review to ensure AI-generated code follows established patterns, coding standards, and project-specific guardrails. <example> Context: Claude has implemented a feature and user wants to check it follows project standards. user: "Does this code follow our project conventions?" assistant: "I'll use the guidelines-reviewer agent to rigorously check the AI-generated code against CLAUDE.md and constitution.md." <Task tool invocation to launch guidelines-reviewer agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch guidelines-reviewer to verify the AI-generated code meets project standards and guardrails." <Task tool invocation to launch guidelines-reviewer agent> </example> <example> Context: Checking if AI-generated code matches existing patterns. user: "Does this follow our established patterns?" assistant: "I'll use the guidelines-reviewer agent to compare AI-generated code against system_patterns.md and CLAUDE.md." <Task tool invocation to launch guidelines-reviewer agent> </example>
Use this agent to implement production code that makes failing tests pass. This agent is part of the TDD orchestration system and runs as the third step in each phase. In phase-based orchestration, this agent is Step 3 of 4 in each phase: 1. Stub Writer → Creates exports so imports resolve 2. Test Writer → Writes failing tests 3. **Implementer (this agent)** → Makes tests pass, runs them, reports output 4. Validator → Verifies requirements met <example> Context: Phase-based TDD orchestration. orchestrator: "Phase 3 tests are failing as expected. Implement email validation to make them pass." <Task tool invocation to launch implementer agent for Phase 3> </example> <example> Context: Tests exist but implementation is incomplete. orchestrator: "The stub for parseConfig throws 'Not implemented'. Write the actual logic." <Task tool invocation to launch implementer agent> </example>
Use this agent when verifying that AI-generated code follows the technical design in plan.md. This agent should be invoked PROACTIVELY during task completion review to ensure the planned architecture, data model, and technical approach were actually followed in the AI-generated implementation. <example> Context: Claude has implemented a feature and user wants to verify it matches the technical plan. user: "Let's check if the caching layer matches what we planned" assistant: "I'll use the plan-adherence-reviewer agent to rigorously verify the AI-generated implementation follows plan.md's technical design." <Task tool invocation to launch plan-adherence-reviewer agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch plan-adherence-reviewer to check if AI-generated code matches the planned architecture." <Task tool invocation to launch plan-adherence-reviewer agent> </example> <example> Context: Architectural review of AI-generated code changes. user: "Did we stick to the technical approach we documented?" assistant: "I'll use the plan-adherence-reviewer agent to compare AI-generated implementation against plan.md." <Task tool invocation to launch plan-adherence-reviewer agent> </example>
Use this agent when you need to research technical topics, library documentation, best practices, or implementation approaches. This agent synthesizes findings from documentation, web sources, and codebases into actionable recommendations. <example> Context: Need to understand how to implement a feature using a library. user: "How should we handle authentication in Next.js App Router?" assistant: "I'll use the researcher agent to investigate authentication patterns for Next.js App Router." <Task tool invocation to launch researcher agent> </example> <example> Context: Comparing different approaches to solve a problem. user: "What's the best way to implement rate limiting in Bun?" assistant: "I'll use the researcher agent to compare rate limiting options for Bun." <Task tool invocation to launch researcher agent> </example>
Use this agent when verifying that AI-generated code implementation matches the requirements in spec.md. This agent should be invoked PROACTIVELY during task completion review to ensure all acceptance scenarios, functional requirements, and success criteria from the specification are properly implemented. <example> Context: Claude has been implementing a feature and the user wants to verify completion. user: "Let's verify the authentication feature is ready for completion" assistant: "I'll use the spec-compliance-reviewer agent to rigorously verify all requirements from spec.md are implemented in the AI-generated code." <Task tool invocation to launch spec-compliance-reviewer agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch spec-compliance-reviewer to cross-check the AI-generated implementation against spec.md requirements." <Task tool invocation to launch spec-compliance-reviewer agent> </example> <example> Context: Mid-implementation check on AI-generated code. user: "Are we on track with the spec?" assistant: "I'll use the spec-compliance-reviewer agent to audit current AI-generated code against spec.md." <Task tool invocation to launch spec-compliance-reviewer agent> </example>
Use this agent to create minimal file stubs with exported interfaces, types, and function signatures so that imports resolve before tests are written. This agent is part of the TDD orchestration system and runs as the first step in each phase. <example> Context: Starting a new phase that needs a new module. orchestrator: "Phase 2 requires a new validation module. Create stubs first." <Task tool invocation to launch stub-writer agent for Phase 2> </example> <example> Context: Tests are failing due to missing exports. orchestrator: "Test writer reports import errors. Stub writer needs to add missing exports." <Task tool invocation to launch stub-writer agent to fix exports> </example>
Use this agent when verifying that all tasks in tasks.md are actually completed by the AI. This agent should be invoked PROACTIVELY during task completion review to catch partial implementations, incomplete work, and tasks that were skipped or deferred without documentation. <example> Context: Claude has been working on tasks and user wants to verify completion. user: "Let's verify all the tasks are actually done" assistant: "I'll use the task-completion-reviewer agent to rigorously verify each task in tasks.md has concrete evidence of completion in the AI-generated code." <Task tool invocation to launch task-completion-reviewer agent> </example> <example> Context: Running /prepare-completion to validate AI-generated work before PR. user: "/prepare-completion" assistant: "I'll launch task-completion-reviewer to verify all tasks are actually complete in the AI-generated code." <Task tool invocation to launch task-completion-reviewer agent> </example> <example> Context: Checking progress on AI implementation mid-task. user: "How many tasks do we have left?" assistant: "I'll use the task-completion-reviewer agent to audit task completion status against the AI-generated code." <Task tool invocation to launch task-completion-reviewer agent> </example>
Use this agent when you need to generate comprehensive test suites for code that lacks tests or needs better coverage. This agent follows TDD principles with proper mocking and behavior verification. In phase-based orchestration, this agent is Step 2 of 4 in each phase: 1. Stub Writer → Creates exports so imports resolve 2. **Test Writer (this agent)** → Writes failing tests, runs them, reports output 3. Implementer → Makes tests pass 4. Validator → Verifies requirements met <example> Context: Phase-based TDD orchestration. orchestrator: "Phase 3 stubs are ready. Write tests for email validation." <Task tool invocation to launch test-generation agent for Phase 3> </example> <example> Context: New code has been written without tests. user: "We need tests for the authentication module" assistant: "I'll use the test-generation agent to create comprehensive tests for the authentication module." <Task tool invocation to launch test-generation agent> </example> <example> Context: Expanding test coverage for undertested code. user: "The validation utils have poor test coverage" assistant: "I'll use the test-generation agent to expand test coverage for the validation utilities." <Task tool invocation to launch test-generation agent> </example>
Use this agent to verify that a phase accomplished its stated requirements. This agent has read-only access and performs verification, not implementation. It is the final step in each TDD phase. In phase-based orchestration, this agent is Step 4 of 4 in each phase: 1. Stub Writer → Creates exports so imports resolve 2. Test Writer → Writes failing tests 3. Implementer → Makes tests pass 4. **Validator (this agent)** → Verifies requirements met, reports pass/fail <example> Context: Phase-based TDD orchestration. orchestrator: "Phase 3 implementation complete. Validate that email validation meets requirements." <Task tool invocation to launch validator agent for Phase 3> </example> <example> Context: Verifying a completed phase before moving on. orchestrator: "Tests pass for user authentication. Verify the phase requirements are satisfied." <Task tool invocation to launch validator agent> </example>
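The four-step phase loop that the stub-writer, test-writer, implementer, and validator agents describe can be sketched in miniature. This is an illustrative example only; the function and test names are hypothetical, chosen to mirror the email-validation scenario used in the agent descriptions, and the simple regex stands in for whatever logic a real phase would implement:

```python
import re

# Step 1 - Stub Writer: exports a signature so imports resolve, e.g.
#   def is_valid_email(address: str) -> bool: raise NotImplementedError
# Step 3 - Implementer: replaces the stub so the failing tests pass.
def is_valid_email(address: str) -> bool:
    """Return True for a plausibly well-formed email address."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Step 2 - Test Writer: failing tests, written against the stub.
def test_accepts_simple_address():
    assert is_valid_email("user@example.com")

def test_rejects_missing_domain():
    assert not is_valid_email("user@")

# Step 4 - Validator: read-only check that the phase requirements hold.
if __name__ == "__main__":
    test_accepts_simple_address()
    test_rejects_missing_domain()
    print("phase requirements met")
```

In the orchestrated flow each step is performed by a separate agent, so the tests exist and fail before any implementation is written.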
Conductor: Context-driven development for Claude Code - Measure twice, code once
Matches all tools
Hooks run on every tool call, not just specific ones
Executes bash commands
Hook triggers when Bash tool is used
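The two matcher labels above correspond to how Claude Code hooks are scoped in settings. A minimal sketch of the distinction (the command paths are placeholders, not part of any real plugin):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "./check-bash.sh" }]
      },
      {
        "matcher": "*",
        "hooks": [{ "type": "command", "command": "./log-every-tool.sh" }]
      }
    ]
  }
}
```

The first entry fires only when the Bash tool is used; the `*` matcher fires on every tool call, which is why broad matchers are flagged in the analysis.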
Share bugs, ideas, or general feedback.
GitHub Spec-Kit integration for Specification-Driven Development - define WHAT and HOW before coding
Meta-prompting and spec-driven development system for Claude Code. Productivity framework for structured AI-assisted development.
Get Shit Done -- a structured workflow plugin for Claude Code that adds planning, execution, and verification commands with MCP-backed project state
Agent Alchemy Dev Tools — dev utilities, debugging, and workflow enhancements
Complete developer toolkit for Claude Code
Modifies files
Hook triggers on file write and edit operations
Uses power tools
Uses Bash, Write, or Edit tools
Runs pre-commands
Contains inline bash commands via ! syntax
Bash prerequisite issue
Uses bash pre-commands but Bash not in allowed tools
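The pre-command labels and the prerequisite warning above both concern the `!` syntax in Claude Code slash command files. A hypothetical command file showing the pairing the warning checks for, namely that `allowed-tools` in the frontmatter must grant Bash for the inline pre-command to run:

```markdown
---
allowed-tools: Bash(git status:*)
description: Summarize working tree state
---

## Context

- Current status: !`git status`

Summarize the changes shown above.
```

Without the `Bash(...)` entry in `allowed-tools`, the inline `` !`git status` `` pre-command cannot execute, which is exactly the "Bash prerequisite issue" flagged here.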