Guides invocation of HOTL skills for code-changing tasks like features, refactors, debugging, and reviews. Establishes workflow selection at conversation start.
```shell
npx claudepluginhub yimwoo/hotl-plugin --plugin hotl
```

This skill uses the workspace's default tool permissions.
HOTL skills are for **code-changing tasks that require planning** — new features, refactors, and significant changes. Not every task needs a skill.
Answer directly without invoking a skill:
Use hotl:systematic-debugging (no brainstorm/plan needed):
Use the full HOTL workflow (brainstorm → plan → execute):
Use the Skill tool to invoke any of these when appropriate:
| Skill | When to Use |
|---|---|
| hotl:brainstorming | Before any feature work — design with HOTL contracts |
| hotl:writing-plans | After design approval — produces hotl-workflow-<slug>.md |
| hotl:executing-plans | Linear execution with human checkpoints |
| hotl:loop-execution | Execute a hotl-workflow-*.md with loops + auto-approve |
| hotl:subagent-execution | Delegated step runner over the loop execution engine — delegates eligible steps to fresh subagents |
| hotl:dispatch-agents | 2+ independent tasks that can run in parallel |
| hotl:tdd | Before writing any implementation code |
| hotl:systematic-debugging | When encountering any bug or unexpected behavior |
| hotl:document-review | Optional — review existing docs, external specs, or hand-authored plans |
| hotl:requesting-code-review | Dispatched by executors at review checkpoints — standardizes what context the reviewer receives |
| hotl:receiving-code-review | Invoked when review findings arrive — verify, evaluate against contracts, then implement |
| hotl:code-review | After completing implementation, before merging |
| hotl:pr-reviewing | Review a PR across multiple dimensions — description, code, scan, tests |
| hotl:resuming | Resume an interrupted workflow run with verify-first strategy |
| hotl:verification-before-completion | Before claiming work is done |
| hotl:setup-project | To generate adapter files for Codex, Cline, Cursor, Copilot |
Human-on-the-Loop: Set intent + constraints upfront. AI executes autonomously within guardrails. Human reviews final output.
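The Human-on-the-Loop pattern above can be sketched as a small control loop. This is an illustrative sketch only; `Guardrails`, `hotl_run`, and the step callables are hypothetical names, not part of the plugin:

```python
from dataclasses import dataclass, field

@dataclass
class Guardrails:
    # Constraints the human sets upfront, before autonomous execution begins.
    max_steps: int = 10
    forbidden_paths: list = field(default_factory=list)

def hotl_run(intent, steps, guardrails):
    """Run steps autonomously within guardrails; the human reviews only the
    collected final output, not each intermediate step."""
    results = []
    for i, step in enumerate(steps):
        if i >= guardrails.max_steps:
            results.append(("halted", "max_steps reached"))
            break
        results.append(("ok", step(intent)))
    return results  # handed back to the human for final review

# Usage: two illustrative steps executed under the default guardrails.
out = hotl_run(
    "add input validation",
    [lambda intent: f"planned: {intent}",
     lambda intent: f"implemented: {intent}"],
    Guardrails(),
)
```

The key design point is that human attention sits at the boundaries (intent and constraints going in, review coming out) rather than inside the loop.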
Three contracts every implementation workflow should define: