From klair-legacy
Explores a problem space, gathers context, designs approach, and decomposes into implementable specs. Use when starting new work, investigating issues, planning features, or decomposing large changes. This is the "fuzzy front end" where exploration and human dialog happen.
Install:

```shell
npx claudepluginhub ai-builder-team/ai-builder-plugin-marketplace --plugin klair-legacy
```

This skill is limited to using the following tools:
The first phase of development: understand the problem, explore the codebase, design the approach, and decompose into spec-able units.
```
Understand Request
        │
        ▼
Explore Codebase ──────► Dialog with User
        │                       │
        ▼                       ▼
Architectural Design ◄───► Scope Decisions
        │
        ▼
Interface Contracts ───► Test Planning (from contracts)
        │
        ▼
Decompose into Specs
        │
        ▼
Create FEATURE.md
        │
        ▼
Create ALL spec-research.md files
        │
        ▼
QC Validation (research-qc agent)
        │
        ├─ FAIL → Address issues, re-run QC
        │
        ▼
Commit to main
        │
        ▼
Create Spec Tasks
        │
        ▼
Output: Research complete, ready for spec
```
Ask if unclear. Use AskUserQuestion if the request is ambiguous:
- "What's the primary goal here?"
- "Are there constraints I should know about?"
Use the built-in Explore agent (via Task tool) for thorough investigation.
Launch exploration:
```
subagent_type: "Explore"
description: "Explore [area] for [purpose]"
prompt: "I need to understand [specific question].
  Look for: [patterns, files, implementations].
  Report: key files, existing patterns, integration points."
```
What to explore:
Thoroughness levels:
- quick - Basic file search, known locations
- medium - Moderate exploration, multiple areas
- very thorough - Comprehensive analysis

When exploring backend endpoints, document the full response schema:
```
Find the backend endpoints for {feature} data.

For each endpoint discovered:
1. Document the URL, method, and parameters
2. List ALL fields in the response schema (read actual code/types)
3. Note any computed or derived fields
4. Flag if response doesn't include fields the UI/feature seems to need

Output format:

## Endpoint: {path}
**Method:** GET/POST
**Parameters:** {list}
**Response Schema:**
| Field | Type | Notes |
|-------|------|-------|
**Potential Gaps:** {fields the UI/feature needs but aren't in the response}
```
When finding reference patterns to follow, trace the complete data flow:
```
Find how {similar feature} implements {capability}.

Document:
1. Key files and their responsibilities
2. Data flow: where does data come from?
3. Props/interfaces: what does each component expect?
4. For each data input, trace to its source (API call, computed, parent prop)

Output format:

## Pattern: {name}
**Key Files:** {list}
**Data Flow:**
{component}
  ← receives: {props}
  ← from: {parent}
  ← which gets it from: {API/hook/context}
```
Use AskUserQuestion at key decision points. Present findings first, then ask.
After initial exploration, confirm scope:
"Based on exploration, this involves:
- [Component A] - [what needs to change]
- [Component B] - [what needs to change]
- [Component C] - [what needs to change]
Should we include all of these, or narrow the scope?"
Options:
- Include all (comprehensive)
- Focus on [subset] first
- Other
When multiple patterns or approaches exist:
"Found two approaches in the codebase:
1. **[Pattern A]** - Used in [X, Y]
- Pros: [...]
- Cons: [...]
2. **[Pattern B]** - Used in [Z]
- Pros: [...]
- Cons: [...]
Which approach fits better for this change?"
For testing approach:
"Existing tests use [pattern/framework].
Options:
- Follow existing pattern
- Use [alternative] because [reason]
- Mix: [specific suggestion]"
Document the chosen approach:
Define the interfaces before implementation:
This enables clear boundaries for decomposition AND test planning.
For each field in each interface, document its source:
TBD items must be resolved before decomposition:
Example formats:
Inline annotation:
```typescript
interface SupportMetrics {
  total_tickets: number; // ← API: /get-metrics.total_tickets
  solved_count: number;  // ← ⚠️ TBD - needs resolution
}
```
Or companion table:
### SupportMetrics Sources
| Field | Source |
|-------|--------|
| total_tickets | /get-metrics.total_tickets |
| solved_count | ⚠️ TBD |
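The "TBD items must be resolved before decomposition" rule can be enforced mechanically. The sketch below is a minimal, hypothetical helper (names like `unresolvedFields` and the sample map are illustrative, not part of this skill's tooling) that scans a field→source map and reports entries still marked TBD:

```typescript
// Hypothetical helper: given a field → source map (as in the companion
// table above), return the fields whose source is still marked TBD.
type SourceMap = Record<string, string>;

function unresolvedFields(sources: SourceMap): string[] {
  return Object.entries(sources)
    .filter(([, src]) => src.includes("TBD"))
    .map(([field]) => field);
}

// Illustrative data mirroring the SupportMetrics table above.
const supportMetricsSources: SourceMap = {
  total_tickets: "/get-metrics.total_tickets",
  solved_count: "⚠️ TBD",
};

console.log(unresolvedFields(supportMetricsSources)); // → ["solved_count"]
```

If the returned list is non-empty, resolve those sources (via dialog or further exploration) before moving on to decomposition.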
After Interface Contracts are defined, derive test cases from them.
For each function contract, identify:
| Contract Element | Test Cases to Derive |
|---|---|
| Function signature | Happy path with valid inputs |
| Input types | Edge cases (empty, null, boundaries) |
| Return type | Expected output verification |
| Dependencies | Mock strategies |
| Error conditions | Error case handling |
Output format:
```
## Test Plan

### {function_name}
**Signature:** `{signature}`

**Happy Path:**
- [ ] {test case description}

**Edge Cases:**
- [ ] Empty input: {expected behavior}
- [ ] Null handling: {expected behavior}
- [ ] Boundary: {expected behavior}

**Error Cases:**
- [ ] Network failure: {expected behavior}
- [ ] Invalid input: {expected behavior}

**Mocks Needed:**
- {dependency}: {mock strategy}
```
This test plan feeds directly into Phase X.0 of implement.
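To make the contract→test derivation concrete, here is a minimal sketch. The function `parseCount` and its contract are invented for illustration; the point is how the happy-path, boundary, and error cases fall directly out of the signature and error conditions in the table above:

```typescript
// Illustrative contract: parseCount(raw: string): number
// Error condition: throws on empty or non-numeric input.
function parseCount(raw: string): number {
  const n = Number(raw);
  if (raw.trim() === "" || Number.isNaN(n)) {
    throw new Error(`invalid count: ${JSON.stringify(raw)}`);
  }
  return n;
}

// Happy path (derived from the signature)
if (parseCount("42") !== 42) throw new Error("happy path failed");

// Boundary (derived from the input type)
if (parseCount("0") !== 0) throw new Error("boundary failed");

// Error case: empty input must throw (derived from the error conditions)
let threw = false;
try { parseCount(""); } catch { threw = true; }
if (!threw) throw new Error("empty input should throw");
```

Each checklist item in the test plan maps to one such assertion, so the plan can be executed mechanically during implementation.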
Break work into implementable units:
"This work can be decomposed as:
Option A: By layer
1. Data layer changes
2. Business logic
3. UI updates
Option B: By feature slice
1. Core functionality
2. Edge cases
3. Polish/optimization
Option C: By dependency order
1. [Foundation piece]
2. [Depends on 1]
3. [Depends on 2]
Which decomposition makes more sense?"
For each decomposed piece, create a dependency-aware table:
| Spec | Description | Blocks | Blocked By | Complexity |
|---|---|---|---|---|
| 01-foundation | Core types and interfaces | 02, 03 | - | M |
| 02-data-layer | API integration | 03 | 01 | M |
| 03-ui | Component implementation | - | 01, 02 | L |
Column definitions:
This table enables Task-based parallel execution after research completes.
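As a sketch of how that parallel execution follows from the table, the hypothetical function below (the `SpecRow` shape and `readySpecs` name are assumptions, not an actual scheduler API) computes which specs are unblocked given a set of completed specs:

```typescript
// Hypothetical model of one row in the decomposition table above.
interface SpecRow {
  id: string;
  blockedBy: string[]; // the "Blocked By" column
}

// A spec is ready when it is not done and every blocker is done.
function readySpecs(rows: SpecRow[], done: Set<string>): string[] {
  return rows
    .filter((r) => !done.has(r.id))
    .filter((r) => r.blockedBy.every((b) => done.has(b)))
    .map((r) => r.id);
}

// The example table from above.
const table: SpecRow[] = [
  { id: "01-foundation", blockedBy: [] },
  { id: "02-data-layer", blockedBy: ["01-foundation"] },
  { id: "03-ui", blockedBy: ["01-foundation", "02-data-layer"] },
];

console.log(readySpecs(table, new Set()));                  // → ["01-foundation"]
console.log(readySpecs(table, new Set(["01-foundation"]))); // → ["02-data-layer"]
```

This is the same logic the task system applies via `blockedBy` when spec tasks are created later in this workflow.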
Before finalizing decomposition, verify coverage:
For each data source identified in interface contracts:
For each "Deferred to X" in Out of Scope:
Checklist before finalizing:
If new feature:
Create `features/{domain}/{feature}/FEATURE.md`

If existing feature:
Create folders and spec-research.md for ALL decomposed specs:
```shell
# Create all spec folders
mkdir -p features/{domain}/{feature}/specs/01-{spec-name}
mkdir -p features/{domain}/{feature}/specs/02-{spec-name}
mkdir -p features/{domain}/{feature}/specs/03-{spec-name}
# ... for each decomposed spec
```
Output spec-research.md in EACH spec folder:
Use spec-research-template.md.
Each document captures (for THAT spec):
Before committing, spawn the research-qc agent to validate outputs.
```
subagent_type: "research-qc"
description: "Validate research outputs"
prompt: "Validate the research outputs for feature at:
  - FEATURE.md: features/{domain}/{feature}/FEATURE.md
  - Specs directory: features/{domain}/{feature}/specs/
  Read FEATURE.md and all spec-research.md files. Run all checks and report PASS or FAIL."
```
If PASS:
If FAIL:
If stuck after 2-3 iterations:
Stage specific files and commit:
```shell
git add features/{domain}/{feature}/FEATURE.md
git add features/{domain}/{feature}/specs/*/spec-research.md
git commit -m "$(cat <<'EOF'
research({feature}): add FEATURE.md and spec-research for {N} specs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
EOF
)"
```

Important: Stage specific files, not `git add -A`.
After research is committed, create Tasks for parallel spec execution:
For each spec in the decomposition table:
Example:
```
TaskCreate:
  subject: "Spec + Implement 01-foundation"
  description: "Run spec and implement skills for features/{domain}/{feature}/specs/01-foundation"

TaskCreate:
  subject: "Spec + Implement 02-data-layer"
  description: "Run spec and implement skills for features/{domain}/{feature}/specs/02-data-layer"
  blockedBy: [task-id-of-01]

TaskCreate:
  subject: "Spec + Implement 03-ui"
  description: "Run spec and implement skills for features/{domain}/{feature}/specs/03-ui"
  blockedBy: [task-id-of-01, task-id-of-02]
```
Report to user:
Run `/orchestrate` in a fresh session. Example output:
```
"Created {N} spec tasks with dependencies:

Task IDs:
- #1: Spec + Implement 01-foundation (no blockers - ready)
- #2: Spec + Implement 02-data-layer (blocked by: #1)
- #3: Spec + Implement 03-ui (blocked by: #1, #2)

Dependency Graph:
01-foundation
├── 02-data-layer
│   └── 03-ui
└── 03-ui

Ready to execute: 01-foundation

NEXT STEP:
Start a fresh session and run:
  /orchestrate features/{domain}/{feature}

This will spawn spec-executor agents for all unblocked specs."
```