Workflow automation with session management and code review
npx claudepluginhub kastheco/claude-plugins --plugin kas

Complete the current session by ensuring all work is committed and pushed.
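If you prefer to install from inside a Claude Code session rather than via `npx`, the same plugin can typically be added with Claude Code's built-in `/plugin` commands. This is a sketch only: the marketplace and plugin names (`kastheco/claude-plugins`, `kas`) are inferred from the install line above and may need adjusting for your setup.

```
# Inside a Claude Code session (names assumed from the npx line above):
/plugin marketplace add kastheco/claude-plugins
/plugin install kas@claude-plugins
```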
Finalize and merge a pull request after work is complete and approved.
Run a code review using the kas-code-reviewer agent.
Review an implementation plan using the plan-reviewer agent.
Run a reality assessment using the project-reality-manager agent.
Save session progress for continuation across multiple sessions. Unlike `/kas:done`, this doesn't close out all work; it creates a snapshot and generates a continuation prompt.
Validate prerequisites and verify the environment is ready.
Entry point for creating implementation plans with proper workflow enforcement.
Tiered verification with smart agent selection.
Browser automation specialist for web testing, scraping, and interaction. Use this agent when you need to interact with web pages, test UIs, capture screenshots, or automate browser workflows. Handles all Claude-in-Chrome MCP operations.
Use this agent when you need a ruthless code review channeling Linus Torvalds' philosophy. This agent performs rigorous review with zero tolerance for mediocrity, focusing on correctness, simplicity, performance, readability, and robust error handling. <example> Context: User has made code changes and wants review. user: "Review my recent changes" assistant: "I'll launch the kas-code-reviewer agent for a thorough review." <commentary> User explicitly requested code review. Trigger kas-code-reviewer agent. </commentary> </example> <example> Context: User is about to commit code. user: "Check this code before I commit" assistant: "Let me run the kas-code-reviewer agent to catch issues." <commentary> Pre-commit review request. Proactively trigger code review. </commentary> </example> <example> Context: User wants staged files reviewed. user: "Code review the staged files" assistant: "I'll use the kas-code-reviewer agent to review the staged changes." <commentary> Staged files review request. Invoke kas-code-reviewer. </commentary> </example>
Use this agent when you need to review an implementation plan for security gaps, design flaws, or missing tests before execution. This agent should be invoked PROACTIVELY whenever a plan is ready for user approval: run the review BEFORE asking the user to approve the plan. <example> Context: User just created an implementation plan. user: "Review this plan for security gaps and design issues" assistant: "I'll launch the plan-reviewer agent to evaluate the plan." <commentary> User explicitly requested plan review. Invoke plan-reviewer agent. </commentary> </example> <example> Context: Assistant just finished writing a plan to a plan file in plan mode. user: [No new message - assistant proactively reviews] assistant: "I've written the plan. Let me run the plan-reviewer agent to check for issues before asking for your approval." <commentary> Plan file was just written. PROACTIVELY invoke plan-reviewer BEFORE calling ExitPlanMode or asking user to approve. This ensures plans are reviewed automatically. </commentary> </example> <example> Context: Assistant is about to call ExitPlanMode to request plan approval. user: [No new message - assistant proactively reviews] assistant: "Before requesting approval, let me run the plan-reviewer agent to ensure the plan is solid." <commentary> About to exit plan mode. MUST invoke plan-reviewer proactively before ExitPlanMode. Never ask for plan approval without running review first. </commentary> </example>
Use this agent when you need to validate claimed task completions, assess the actual functional state of implementations, or create pragmatic plans to complete work that may have been marked done prematurely. This agent excels at cutting through optimistic status reports to identify what truly works versus what merely exists in code. <example> Context: User suspects claimed work might be incomplete. user: "Verify if this feature actually works" assistant: "I'll use the project-reality-manager to validate the actual state." <commentary> Completion validation request. Trigger project-reality-manager. </commentary> </example> <example> Context: Task was marked complete but seems broken. user: "Check if this is really done" assistant: "Let me run the project-reality-manager to assess what truly works." <commentary> Skeptical completion check. Invoke project-reality-manager. </commentary> </example> <example> Context: User wants to audit progress on a feature. user: "What's the actual state of the authentication feature?" assistant: "I'll use the project-reality-manager to cut through status reports and assess reality." <commentary> Reality assessment request. Trigger project-reality-manager. </commentary> </example>
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, and 11+ AI coding assistants.
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.
AI-powered development tools for code review, research, design, and workflow automation.