npx claudepluginhub tody-agent/codymaster --plugin cm

This skill uses the workspace's default tool permissions.
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, and charts for web and mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, and Flutter. Aids planning, building, and reviewing interfaces.
Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, Prisma. Use for setup questions, API references, and code examples.
Implements distributed tracing with Jaeger/Tempo for microservices, including Kubernetes/Docker setup and OpenTelemetry instrumentation (Python/Flask). Use for debugging latency, dependencies, and request flows.
/cm-start [your objective]

Role: Workflow Orchestrator — you assess complexity, select the right workflow depth, and drive execution from objective to production code.
When this workflow is called, the AI Assistant should execute the following action sequence in the spirit of the CodyMaster Kit:
Load Working Memory:
Per _shared/helpers.md#Load-Working-Memory — use Smart Spine order:
1. .cm/context-bus.json → any active pipeline? any prior skill output to reuse?
2. learnings-index.md (~100 tok) + skeleton-index.md (~500 tok)
   - If OpenViking backend active: skip step 2 — the engine auto-serves L0/L1 via cm_resolve.
3. cm_query — only load what matches the current objective
   - If OpenViking: cm_query uses vector semantic search — broader recall, fewer missed learnings.
4. CONTINUITY.md → set Active Goal to the new objective
5. cm continuity budget → confirm no category is over its soft limit

⚡ Total context load: ~700 tokens (the full load used to be ~3,200). Only escalate to L2 (full files) if the L0 index explicitly flags a match. With OpenViking: L0 is auto-maintained — no stale index risk.
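The schema of .cm/context-bus.json is not documented here; as a purely hypothetical sketch of the fields the Smart Spine order implies (an active pipeline marker and reusable skill outputs — every field name below is an assumption, not the kit's actual format):

```json
{
  "active_pipeline": "cm-start",
  "last_updated": "2025-01-01T00:00:00Z",
  "skill_outputs": [
    {
      "skill": "cm-planning",
      "artifact": "openspec/changes/example-initiative/tasks.md"
    }
  ]
}
```

Whatever the real shape is, the point of reading it first is cheap reuse: if a prior pipeline already produced an artifact, the assistant can load that instead of regenerating it.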
0.5. Skill Coverage Check (Adaptive Discovery):
- Scan the objective for technologies, frameworks, or patterns mentioned
- Cross-reference with cm-skill-index Layer 1 triggers
- If gap detected → trigger Discovery Loop from cm-skill-mastery Part C:
npx skills find "{keyword}" → review → ask user → install if approved
- Log any discovered skills to .cm-skills-log.json
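The on-disk format of .cm-skills-log.json is not specified above; one possible sketch of the logging step (the one-object-per-line layout and the field names "skill", "keyword", and "installed_at" are illustrative assumptions, not the kit's actual schema):

```shell
#!/bin/sh
# Append a record of a discovered-and-approved skill to .cm-skills-log.json.
# NOTE: the line-per-object format and field names are assumptions.
log_discovered_skill() {
  skill_name="$1"
  keyword="$2"
  printf '{"skill": "%s", "keyword": "%s", "installed_at": "%s"}\n' \
    "$skill_name" "$keyword" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    >> .cm-skills-log.json
}

# Example: after `npx skills find "tracing"` surfaces a candidate and the
# user approves the install, record it:
log_discovered_skill "cm-distributed-tracing" "tracing"
```

Keeping the log append-only makes it easy to audit which skills entered the workspace through the Discovery Loop rather than the base kit.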
0.7. Code Intelligence Setup (cm-codeintell):
- ALWAYS: Run skeleton indexer → bash scripts/index-codebase.sh → .cm/skeleton.md
- Read .cm/skeleton.md (~5K tokens) → instant codebase understanding
- Count source files → determine intelligence level (MINIMAL/LITE/STANDARD/FULL)
- IF level >= LITE: generate architecture diagram → .cm/architecture.mmd
- IF level >= STANDARD: check CodeGraph → codegraph status → index if needed
- IF level >= STANDARD: also check qmd (cm-deep-search) for existing semantic vector databases and initialize/update if needed.
- Log intelligence level to CONTINUITY.md
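The file-count cutoffs behind MINIMAL/LITE/STANDARD/FULL are not given above; a minimal sketch with made-up thresholds (10/50/200 are illustrative assumptions, not cm-codeintell's real values):

```shell
#!/bin/sh
# Map a source-file count to an intelligence level.
# NOTE: the threshold values (10/50/200) are assumptions for illustration.
intelligence_level() {
  count="$1"
  if   [ "$count" -lt 10 ];  then echo "MINIMAL"
  elif [ "$count" -lt 50 ];  then echo "LITE"
  elif [ "$count" -lt 200 ]; then echo "STANDARD"
  else                            echo "FULL"
  fi
}

# Count source files in the current tree (extensions are examples) and classify.
n=$(find . -type f \( -name '*.py' -o -name '*.ts' -o -name '*.go' \) | wc -l)
intelligence_level "$n"
```

The classification only gates how much tooling spins up — e.g. skipping CodeGraph and semantic indexing entirely on a ten-file repo.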
Understand Requirements (Planning & JTBD):
Use the objective passed to the /cm-start command (detailed requirement analysis is delegated to cm-planning).

Detect Project Level:
Per _shared/helpers.md#Project-Level-Detection
Execute Based on Level:

L0 (Micro): Code + Test only
- cm-tdd directly → cm-quality-gate

L1 (Small): Planning lite → Code → Deploy
- cm-planning (lightweight implementation plan)
- cm-tdd + cm-execution → cm-quality-gate

L2 (Medium): Full analysis flow
- (create the openspec/changes/[initiative-name]/ folder and artifacts manually)
- cm-brainstorm-idea if the problem is ambiguous
- cm-planning (full implementation plan with OpenSpec tasks.md)
- cm-tasks.json from tasks.md → launch RARV autonomous execution
- cm-quality-gate → cm-safe-deploy

L3 (Large): Full + PRD + Architecture + Sprint
- (create the openspec/changes/[initiative-name]/ folder and artifacts manually)
- cm-brainstorm-idea (mandatory)
- cm-planning with FR/NFR requirement tracing
- openspec/changes/[objective]/tasks.md sync with cm-tasks.json
- cm-execution (Mode E: TRIZ-Parallel for speed)
- cm-quality-gate → cm-safe-deploy

Track Progress:
- openspec/changes/[objective]/tasks.md (for standardized spec tracking)
- cm-tasks.json (for autonomous agent execution)
- /cm-dashboard for visual tracking
- /cm-status for quick terminal summary

Complete:
Per _shared/helpers.md#Update-Continuity
- cm continuity bus → verify the context bus reflects the completed step
- cm continuity index (auto-runs on addLearning; manual refresh here)
If OpenViking: Skip manual index refresh — engine maintains L0/L1 automatically.
Note for AI: If this is a brand new project, suggest running cm-project-bootstrap first. If the working environment has a risk of accidentally switching accounts/projects, remind about cm-identity-guard (per _shared/helpers.md#Identity-Check).

OpenViking tip: If the project uses many learnings/decisions (>100 entries) or needs semantic search beyond keyword matching, suggest switching to the Viking backend: set storage.backend: viking in .cm/config.yaml, then pip install openviking && openviking start.
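Assuming .cm/config.yaml nests keys the way the dotted path storage.backend suggests (the nesting itself is an inference, and any sibling keys are placeholders), the switch might look like:

```yaml
# .cm/config.yaml — route the learnings store through the Viking backend.
# Nesting inferred from the dotted key "storage.backend".
storage:
  backend: viking
```

After editing the config, install and start the engine with pip install openviking && openviking start so the L0/L1 indexes are served automatically.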