Use when executing implementation plans — choose mode: batch execution with checkpoints, subagent-per-task, or parallel dispatch for independent problems.
From claudepluginhub: `npx claudepluginhub tody-agent/codymaster --plugin cm`. This skill uses the workspace's default tool permissions.
Role: Lead Developer — You execute implementation plans systematically with quality gates at every checkpoint.
Multiple modes, one skill. Choose based on task structure.
Per _shared/helpers.md#Load-Working-Memory
After EACH completed task: Per _shared/helpers.md#Update-Continuity
Before choosing execution mode, scan plan tasks for technology keywords:
1. Extract technologies/frameworks/tools from ALL task descriptions
2. Cross-reference with cm-skill-index Layer 1 triggers
3. Check installed external skills: npx skills list
4. If gap found → trigger Discovery Loop (cm-skill-mastery Part C)
→ npx skills find "{keyword}" → review → ask user → install
5. Log any installations to .cm-skills-log.json
6. Code Intelligence Context (cm-codeintell):
→ IF codegraph available: codegraph_context(task) for each task
→ IF modifying shared code: codegraph_impact(symbol, depth=2)
→ IF impact > 10 files: WARN "High impact change"
→ Inject context into agent prompts → agents skip grep/glob
7. Only proceed to Mode Selection after all gaps resolved
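The keyword scan in steps 1–4 can be sketched as a simple cross-reference; the `triggerIndex` mapping and the task shape below are illustrative assumptions, not the actual cm-skill-index format:

```javascript
// Sketch of the pre-execution skill-gap scan: match technology keywords
// in task descriptions against a trigger index, flag missing skills.
function findSkillGaps(tasks, triggerIndex, installedSkills) {
  const installed = new Set(installedSkills);
  const gaps = [];
  for (const task of tasks) {
    const text = task.description.toLowerCase();
    for (const [keyword, skill] of Object.entries(triggerIndex)) {
      // A keyword match with no installed skill is a gap → Discovery Loop
      if (text.includes(keyword) && !installed.has(skill)) {
        gaps.push({ task: task.id, keyword, skill });
      }
    }
  }
  return gaps;
}
```

Each returned gap would then feed the Discovery Loop (`npx skills find "{keyword}"` → review → ask user → install).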
Have a plan with independent tasks?
├── Need SPEED + QUALITY on 3+ tasks?
│ └── YES → Mode E: TRIZ-Parallel ⚡ (recommended)
├── Stay in this session?
│ ├── YES → Mode B: Subagent-Driven
│ └── NO → Mode A: Batch Execution
└── Multiple independent failures/problems?
└── YES → Mode C: Parallel Dispatch
| Mode | When | Strategy |
|---|---|---|
| A: Batch | Plan with checkpoints | Execute 3 tasks → report → feedback → next batch |
| B: Subagent | Plan with independent tasks, same session | Fresh subagent per task + 2-stage review |
| C: Parallel | 2+ independent problems | One agent per problem domain |
| E: TRIZ-Parallel ⚡ | 3+ independent tasks, need speed + quality | Dependency-aware parallel dispatch with per-agent quality gates |
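The decision tree above can be expressed as a small function; this is an illustrative sketch, not part of the skill's API:

```javascript
// Mode selection per the decision tree: E for 3+ independent tasks
// needing speed + quality, C for multiple independent problems,
// B to stay in-session, A (batch execution) otherwise.
function chooseMode({ independentTasks = 0, needSpeedAndQuality = false,
                      staySameSession = false, independentProblems = 0 }) {
  if (independentTasks >= 3 && needSpeedAndQuality) return "E"; // TRIZ-Parallel
  if (independentProblems >= 2) return "C";                     // Parallel Dispatch
  if (independentTasks > 0 && staySameSession) return "B";      // Subagent-Driven
  return "A";                                                   // Batch Execution
}
```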
Mode A (Batch): Read the plan (openspec/changes/[initiative-name]/tasks.md and design.md) → review critically → raise concerns. Execute in batches with checkpoints; run cm-code-review to finish, then archive the change to openspec/changes/archive/[date]-[name]/.

Mode B (Subagent-Driven): Read openspec/changes/[initiative-name]/tasks.md → extract ALL tasks with full text. Dispatch a fresh subagent per task, then cm-code-review. Subagent prompt template:

Implement [TASK_NAME]:
[Full task text from plan]
Context: [Where this fits in the project]
Rules:
- Follow TDD (cm-tdd)
- Commit when done
- Self-review before reporting
- Ask questions if unclear
Return: Summary of what you did + test results
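Filling that template per task can be sketched as follows; the `{ name, text, context }` task shape is an assumption for illustration:

```javascript
// Build the Mode B subagent prompt from a task record.
function buildSubagentPrompt(task) {
  return [
    `Implement ${task.name}:`,
    task.text,
    `Context: ${task.context}`,
    "Rules:",
    "- Follow TDD (cm-tdd)",
    "- Commit when done",
    "- Self-review before reporting",
    "- Ask questions if unclear",
    "Return: Summary of what you did + test results",
  ].join("\n");
}
```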
Self-driving execution. Tasks flow through Reason → Act → Reflect → Verify automatically.
Trigger: /cm-start with a goal. Precondition: cm-tasks.json exists with backlog items. LOOP until backlog empty or user interrupts:
1. REASON → Read cm-tasks.json → pick highest-priority backlog task
Update task status to "in_progress"
Log: { phase: "REASON", message: "Selected: <title>" }
2. ACT → Execute using the task's assigned CM skill
(cm-tdd, cm-debugging, cm-safe-deploy, etc.)
Log: { phase: "ACT", message: "<what was done>" }
3. REFLECT → Update cm-tasks.json with results
Log: { phase: "REFLECT", message: "<outcome summary>" }
4. VERIFY → Run tests/checks (cm-quality-gate)
If PASS → status = "done", completed_at = now()
If FAIL → rarv_cycles++, log error, retry from REASON
If rarv_cycles >= 2 → attempt Skill Discovery Fallback:
→ npx skills find "{task keywords}"
→ If skill found + user approves → install, reset rarv_cycles = 0, retry
→ If NOT found → rarv_cycles >= 3 → status = "blocked"
Log: { phase: "VERIFY", message: "✅ passed" or "❌ <error>" }
5. NEXT → Recalculate stats, pick next task
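The loop above can be sketched with a synchronous in-memory task list; file I/O, skill execution, and the Skill Discovery Fallback are stubbed out here, with `act` and `verify` injected as callbacks (real runs would invoke the assigned CM skill and cm-quality-gate):

```javascript
// RARV driver sketch: Reason → Act → Reflect → Verify over a backlog.
function runRarv(tasks, { act, verify, maxCycles = 3 }) {
  const byPriority = (a, b) => b.priority - a.priority;
  for (;;) {
    const task = tasks.filter(t => t.status === "backlog").sort(byPriority)[0];
    if (!task) break;                       // backlog empty → stop
    task.status = "in_progress";            // REASON: highest-priority task selected
    task.rarv_cycles = task.rarv_cycles || 0;
    while (task.status === "in_progress") {
      act(task);                            // ACT: execute via assigned skill
      // REFLECT + VERIFY: run checks, then update status accordingly
      if (verify(task)) {
        task.status = "done";
        task.completed_at = new Date().toISOString();
      } else if (++task.rarv_cycles >= maxCycles) {
        task.status = "blocked";            // repeated failures → blocked
      }
    }
  }
  return tasks;
}
```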
After EVERY phase, you MUST:
- cm-tasks.json and openspec/changes/[initiative-name]/tasks.md (keep the human-readable MD and the AI-executable JSON in parallel sync)
- The task's id, status, logs[], and timestamps
- The stats object:
```javascript
stats.total = tasks.length
stats.done = tasks.filter(t => t.status === 'done').length
stats.in_progress = tasks.filter(t => t.status === 'in_progress').length
stats.blocked = tasks.filter(t => t.status === 'blocked').length
stats.backlog = tasks.filter(t => t.status === 'backlog').length
stats.rarv_cycles_total = tasks.reduce((sum, t) => sum + (t.rarv_cycles || 0), 0)
```
- updated refreshed to the current ISO timestamp in cm-tasks.json

Mode E (TRIZ-Parallel ⚡): Speed AND quality. Six TRIZ principles resolve the contradiction.
| # | Principle | How Applied |
|---|---|---|
| #1 | Segmentation | Tasks split by file-dependency graph → only truly independent tasks run together |
| #3 | Local Quality | Each agent runs its own mini quality gate (syntax + tests) before reporting |
| #10 | Prior Action | Pre-flight check scans for file overlaps BEFORE dispatch |
| #15 | Dynamicity | Batch size adapts: starts at 2, scales up after clean runs, down after conflicts |
| #18 | Feedback | Real-time conflict detection via shared ledger of modified files |
| #40 | Composite | Each agent = implementer + tester + reviewer (3 roles in 1) |
1. ANALYZE → Extract file dependencies from task descriptions
2. GRAPH → Build dependency graph, group into independent batches
3. ADAPT → Read parallel history, compute optimal batch size
4. PRE-FLIGHT → Check conflict ledger for overlaps with running agents
5. DISPATCH → Send batch to agents with quality contracts
6. MONITOR → Each agent reports modified files → detect conflicts
7. VERIFY → Each agent runs mini quality gate before reporting done
8. RECORD → Update parallel history for future batch sizing
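Steps 1–2 (ANALYZE/GRAPH) can be sketched as a greedy grouping that never places two tasks touching the same file in one batch; the `files` field per task is an assumed shape:

```javascript
// Group tasks into batches of at most `batchSize` such that no two
// tasks in a batch modify the same file (TRIZ #1: Segmentation).
function planBatches(tasks, batchSize = 2) {
  const batches = [];
  for (const task of tasks) {
    const fits = batches.find(b =>
      b.length < batchSize &&
      // no file overlap with anything already placed in this batch
      !b.some(t => t.files.some(f => task.files.includes(f))));
    if (fits) fits.push(task); else batches.push([task]);
  }
  return batches;
}
```

Step 3 (ADAPT) would then raise or lower `batchSize` based on the recorded parallel history.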
Code that touches files, subprocesses, or the DOM MUST follow these rules. No exceptions.
| Pattern | Risk | Fix |
|---|---|---|
| ``innerHTML = `...${data}...` `` | DOM XSS | ``innerHTML = `...${esc(data)}...` `` |
| `innerHTML = variable` | DOM XSS | `textContent = variable` |
| `eval(input)` / `new Function(input)` | Code injection | Avoid entirely |
| `document.write(data)` | DOM XSS | Use DOM API |
| `el.setAttribute('on*', data)` | Event injection | `el.addEventListener()` |
Always: Escape before innerHTML, prefer textContent, validate URLs via allowlist.
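A minimal `esc()` for the innerHTML rule above, as one common shape (adapt to whatever escaping helper the codebase already uses):

```javascript
// HTML-escape untrusted text before interpolating into innerHTML.
function esc(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```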
| Pattern | Risk | Fix |
|---|---|---|
| `Path(user_input) / "file"` | Path Traversal | `safe_resolve(base, user_input)` |
| `subprocess.run(f"cmd {arg}", shell=True)` | Command Injection | `subprocess.run(["cmd", arg])` |
| `open(config["path"])` | Path Traversal | `safe_open(base, config["path"])` |
| `json.load()` → paths unvalidated | Path Traversal | Validate ALL paths from config via `safe_resolve()` |
Always: Import safe_path, validate EVERY path from CLI/config/API against a base directory.
| Pattern | Risk | Fix |
|---|---|---|
| Missing `app.disable('x-powered-by')` | Info leak | Add after `express()` |
| No body size limit | DoS | `express.json({ limit: '1mb' })` |
| `path.resolve(userInput)` without validation | Path Traversal | Reject null bytes + verify the result stays under `baseDir` |
| `Object.assign(config, userInput)` | Prototype Pollution | Filter `__proto__`, `constructor` keys |
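A safe-merge sketch for the `Object.assign` row above:

```javascript
// Merge untrusted input into a config object while dropping the
// prototype-polluting keys (__proto__, constructor, prototype).
const FORBIDDEN = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (FORBIDDEN.has(key)) continue; // skip pollution vectors
    target[key] = source[key];
  }
  return target;
}
```

Note that `JSON.parse` creates `__proto__` as an own property, so the `Object.keys` filter above catches it.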
| Skill | When |
|---|---|
| `cm-git-worktrees` | REQUIRED: isolated workspace before starting |
| `cm-planning` | Creates the plan this skill executes |
| `cm-code-review` | Complete development after all tasks |
| `cm-tdd` | Subagents follow TDD for each task |
| `cm-quality-gate` | VERIFY phase uses this for validation |
| `cm-ui-preview` | RECOMMENDED: preview UI on Google Stitch before implementing frontend tasks |
| Command | Purpose |
|---|---|
| `/cm-start` | Create tasks + launch RARV + open dashboard |
| `/cm-status` | Quick terminal progress summary |
| `/cm-dashboard` | Open browser dashboard |
Choose your mode. Execute systematically. Review at every checkpoint.