Execute Human-on-the-Loop (HOTL) workflows for AI-native development: brainstorm features into contracts and Markdown plans, run tasks linearly or in loops with git branches/worktrees, verification gates, agent dispatching, code/PR reviews, TDD cycles, systematic debugging, and resuming interrupted runs.
npx claudepluginhub yimwoo/hotl-plugin --plugin hotl
Design a feature with HOTL contracts before writing any code
Check if a newer HOTL version is available
Execute plan linearly with explicit human checkpoints
Execute hotl-workflow-*.md with loop execution and auto-approve
Review a PR across multiple dimensions (description, code, scan, tests)
Resume an interrupted HOTL workflow run
Generate HOTL adapter files for your team's code assistants (Codex, Cline, Cursor, Copilot)
Execute a reviewed hotl-workflow-*.md in this session with delegated subagent steps and controller-owned verification
Create a hotl-workflow-<slug>.md implementation plan with loops and gates
Use before any feature work — explores intent, requirements, and design. Produces HOTL contracts (intent, verification, governance) before implementation.
Use after completing implementation steps and before merging — reviews against plan and HOTL contracts.
Use when you have 2+ independent tasks with no shared state — dispatches parallel subagents for each task.
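The dispatch pattern above can be sketched in a few lines. This is a conceptual illustration only — `run_subagent` and `dispatch_parallel` are hypothetical names, not the plugin's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task):
    # Stand-in for launching one subagent on one task; here it just echoes.
    return f"done: {task}"

def dispatch_parallel(tasks):
    # Safe only because the tasks share no state; each runs in isolation.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_subagent, tasks))

results = dispatch_parallel(["refactor-auth", "update-docs"])
# results == ["done: refactor-auth", "done: update-docs"]
```

Because `pool.map` preserves input order, results line up with their tasks even though execution is concurrent.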
Optional utility for reviewing existing docs, external specs, hand-authored plans, or non-HOTL documents. HOTL design docs and workflow plans get structural lint + AI review; other documents get AI-only review with a generic rubric.
Use when executing an implementation plan linearly with explicit human checkpoints between batches of tasks.
Use when executing a hotl-workflow-*.md — reads steps, loops until success criteria met, auto-approves low-risk gates, pauses at high-risk gates.
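The loop-and-gate semantics can be sketched as follows. The `step` dict fields (`run`, `success`, `risk`) are assumptions for illustration, not the real hotl-workflow schema:

```python
def run_gated_loop(step, approve_high_risk):
    """Loop a step until its success criterion holds, then apply its gate."""
    attempts = 0
    while not step["success"]() and attempts < 3:   # bounded retry loop
        step["run"]()
        attempts += 1
    if not step["success"]():
        return "failed"
    if step["risk"] == "low":
        return "auto-approved"                      # no human checkpoint
    return "approved" if approve_high_risk(step) else "paused"

# Demo: a step that succeeds on the second attempt, gated as high risk.
state = {"n": 0}
step = {
    "run": lambda: state.update(n=state["n"] + 1),
    "success": lambda: state["n"] >= 2,
    "risk": "high",
}
result = run_gated_loop(step, approve_high_risk=lambda s: False)
# result == "paused": success criterion met, but the high-risk gate waits for a human
```

The key property is that retrying and gating are separate: the loop exits only on success or exhaustion, and the gate decision happens afterward.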
Review a PR across multiple dimensions — description, code changes, code scan, unit tests — using parallel subagents. Supports GitHub, GitLab, and enterprise platforms.
Use when review findings arrive — verify each claim against the codebase and HOTL contracts before making changes. Governs how agents respond to review feedback.
Use at executor review checkpoints to dispatch the code-reviewer agent with structured context — git range, workflow contracts, and verification evidence.
Resume an interrupted workflow run with verify-first strategy — loads sidecar state, verifies the last step, and continues execution.
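The verify-first strategy can be sketched like this; the sidecar file format (`last_completed` index) is a hypothetical placeholder, not the plugin's real state schema:

```python
import json
import pathlib
import tempfile

def resume(sidecar_path, verify, steps):
    """Verify-first resume sketch: trust the sidecar only after re-checking."""
    state = json.loads(pathlib.Path(sidecar_path).read_text())
    last = state["last_completed"]        # index of last step marked done
    if not verify(steps[last]):           # re-verify before trusting state
        last -= 1                         # roll back one step if stale
    return steps[last + 1:]               # remaining work to execute

# Demo with a throwaway sidecar file.
steps = ["write-tests", "implement", "refactor"]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"last_completed": 0}, f)
remaining = resume(f.name, verify=lambda s: True, steps=steps)
# remaining == ["implement", "refactor"]
```

Re-verifying the last recorded step before continuing is what makes resumption safe after an abrupt interruption: the sidecar may claim more progress than actually landed on disk.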
Use to generate HOTL adapter files for the current project — creates AGENTS.md, .clinerules, cursor rules, or copilot instructions depending on tools the team uses.
Delegated step runner over the HOTL execution state machine — delegates eligible steps to fresh subagents while the controller keeps governance, verification, and stop conditions.
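The controller/subagent split can be sketched as below. `delegate` and `verify` are hypothetical stand-ins: the real skill spawns fresh subagents per step, while verification never leaves the controller:

```python
def controller_run(steps, delegate, verify):
    """Controller delegates execution but owns verification and stopping."""
    evidence = []
    for step in steps:
        output = delegate(step)          # fresh subagent does the work
        if not verify(step, output):     # controller, not the subagent, verifies
            return evidence, f"stopped at {step}"   # stop condition stays central
        evidence.append((step, output))
    return evidence, "complete"

# Demo: each delegated step reports back; the controller checks the report.
evidence, status = controller_run(
    ["lint", "test"],
    delegate=lambda s: f"{s}-ok",
    verify=lambda s, out: out.endswith("ok"),
)
# status == "complete"
```

The design point is that a subagent's self-reported success is never enough; the controller re-checks every output before advancing the state machine.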
Use when encountering any bug, test failure, or unexpected behavior — before proposing fixes.
Use before writing any implementation code — enforces RED-GREEN-REFACTOR cycle.
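The RED-GREEN-REFACTOR cycle the skill enforces looks like this in miniature; `slugify` is a made-up example function, not part of the plugin:

```python
import unittest

# RED: write the failing test first, before any implementation exists.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("HOTL Plugin"), "hotl-plugin")

# GREEN: the minimal implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: with the test green, restructure freely; the test guards behavior.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# result.wasSuccessful() is True once the GREEN step is in place
```

Running the suite before writing `slugify` would fail with a `NameError` — that failing run is the point of the RED step, proving the test can actually fail.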
Use when starting any conversation — establishes how to find and use HOTL skills for implementation tasks.
Use before claiming any work is complete, fixed, or passing — runs verification commands and confirms output before making success claims.
Use after design approval to create a hotl-workflow-<slug>.md implementation plan with bite-sized tasks, exact file paths, and loop/gate definitions.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Uses power tools
Uses Bash, Write, or Edit tools
No model invocation
Executes directly as bash, bypassing the AI model
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, and 16+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.