From attune
Guides project ideation via Socratic questioning, constraint discovery, and structured phases to generate actionable project briefs comparing alternative approaches. Use for new projects without clear requirements.
Install:

```
npx claudepluginhub athola/claude-night-market --plugin attune
```

This skill uses the workspace's default tool permissions.
Guide project ideation through Socratic questioning, constraint analysis, and structured exploration.
Use the matching skill instead when one fits better:
- Skill(attune:project-planning)
- Skill(attune:project-specification)
- /attune:project-init
- Skill(attune:war-room) for strategic decisions

With superpowers: Skill(superpowers:brainstorming) for Socratic method
Without superpowers:

War Room Integration (REQUIRED): Skill(attune:war-room)

Socratic Questions:
Output: Problem statement in docs/project-brief.md
Template:

```markdown
## Problem Statement

**Who**: [Target users/stakeholders]
**What**: [The problem they face]
**Where**: [Context where problem occurs]
**When**: [Frequency/timing of problem]
**Why**: [Impact of the problem]
**Current State**: [Existing solutions and limitations]
```
Verification: Run the command with the --help flag to verify availability.
Questions:
Output: Constraints matrix
Template:

```markdown
## Constraints

### Technical
- [Constraint 1 with rationale]
- [Constraint 2 with rationale]

### Resources
- **Timeline**: [Duration with milestones]
- **Team**: [Size and skills]
- **Budget**: [If applicable]

### Integration
- [Required system 1]
- [Required system 2]

### Compliance
- [Requirement 1]
- [Requirement 2]

### Success Criteria
- [ ] [Measurable criterion 1]
- [ ] [Measurable criterion 2]
```
Technique: Generate 3-5 distinct approaches
For each approach:
Template:

```markdown
## Approach [N]: [Name]

**Description**: [Clear 1-2 sentence description]
**Stack**: [Technologies and tools]

**Pros**:
- [Advantage 1]
- [Advantage 2]
- [Advantage 3]

**Cons**:
- [Disadvantage 1]
- [Disadvantage 2]
- [Disadvantage 3]

**Risks**:
- [Risk 1 with likelihood]
- [Risk 2 with likelihood]

**Effort**: [S/M/L/XL or time estimate]

**Trade-offs**:
- [Trade-off 1 with mitigation]
- [Trade-off 2 with mitigation]
```
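Where helpful, the approach template above can be captured as a small data structure for comparison tooling. The dataclass shape below is an assumption for illustration, not part of the skill.

```python
from dataclasses import dataclass, field

@dataclass
class Approach:
    """One candidate approach, mirroring the template fields above."""
    name: str
    description: str          # clear 1-2 sentence description
    stack: list[str]          # technologies and tools
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)  # each with likelihood
    effort: str = "M"         # S/M/L/XL or a time estimate
```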
Design for Isolation:
When generating approaches, evaluate each against two isolation tests:
File size as design signal: Files exceeding 500 lines (Python/Go) or 300 lines (JavaScript/TypeScript) often indicate a unit is doing too much. This is a design smell, not just a style issue. When flagging large files, suggest extracting specific concerns (e.g., "Extract validation logic into a separate module to improve testability").
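As a sketch of that heuristic, a helper could count lines per file against the thresholds stated above. The code below is illustrative only; the thresholds come from the text, everything else is an assumption.

```python
from pathlib import Path

# Thresholds from the design-signal heuristic above.
THRESHOLDS = {".py": 500, ".go": 500, ".js": 300, ".ts": 300}

def oversized_files(root="."):
    """Yield (path, line_count, limit) for files over their threshold."""
    for path in Path(root).rglob("*"):
        limit = THRESHOLDS.get(path.suffix)
        if limit is None or not path.is_file():
            continue
        lines = sum(1 for _ in path.open(errors="ignore"))
        if lines > limit:
            yield str(path), lines, limit
```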
Automatic Trigger: After generating approaches, MUST invoke Skill(attune:war-room) for expert deliberation
When War Room is invoked:
Command:

```shell
# Automatically invoked from brainstorm - DO NOT SKIP
/attune:war-room --from-brainstorm
```
War Room Output:
Bypass Conditions (ONLY skip war room if ALL true):
Proceed to Phase 4 only after War Room completes
Comparison Matrix:
| Criterion | Approach 1 | Approach 2 | Approach 3 | Approach 4 |
|---|---|---|---|---|
| Technical Fit | 🟢 High | 🟡 Medium | 🟡 Medium | 🔴 Low |
| Resource Efficiency | 🟡 Medium | 🟢 High | 🔴 Low | 🟡 Medium |
| Time to Value | 🟢 Fast | 🟡 Medium | 🔴 Slow | 🟢 Fast |
| Risk Level | 🟡 Medium | 🟢 Low | 🔴 High | 🟡 Medium |
| Maintainability | 🟢 High | 🟡 Medium | 🟢 High | 🔴 Low |
Scoring: 🟢 = Good, 🟡 = Acceptable, 🔴 = Concern
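As an illustration only, the tier emojis in the matrix can be mapped to numeric scores to produce a rough ranking. The scoring values and function below are assumptions, not part of the skill, and equal weighting of criteria is a simplification.

```python
# Map the matrix tiers to numbers: Good=2, Acceptable=1, Concern=0.
SCORES = {"🟢": 2, "🟡": 1, "🔴": 0}

def rank(matrix: dict[str, dict[str, str]]) -> list[tuple[str, int]]:
    """matrix: {approach: {criterion: emoji}} -> approaches by total score."""
    totals = {name: sum(SCORES[tier] for tier in criteria.values())
              for name, criteria in matrix.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```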
Selection Criteria:
Template:

```markdown
## Selected Approach: [Approach Name] ⭐

### Rationale
[2-3 paragraphs explaining why this approach was selected]

Key decision factors:
- [Factor 1]
- [Factor 2]
- [Factor 3]

### Trade-offs Accepted
- **Trade-off 1**: [Description] → Mitigation: [Strategy]
- **Trade-off 2**: [Description] → Mitigation: [Strategy]

### Rejected Approaches
- **Approach X**: Rejected because [reason]
- **Approach Y**: Rejected because [reason]
```
Final output saved to docs/project-brief.md:

```markdown
# [Project Name] - Project Brief

**Date**: [YYYY-MM-DD]
**Author**: [Name]
**Status**: Draft | Approved

## Problem Statement
[From Phase 1]

## Goals
1. [Primary goal]
2. [Secondary goal]
3. [Tertiary goal]

## Constraints
[From Phase 2]

## Approach Comparison
[From Phase 3 & 4]

## War Room Decision
[From Phase 3.5 - includes RS assessment, Red Team challenges, premortem]

## Selected Approach
[From Phase 5, informed by War Room synthesis]

## Next Steps
1. `/attune:specify` - Create detailed specification
2. `/attune:blueprint` - Plan architecture and tasks
3. `/attune:project-init` - Initialize project structure
```
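A minimal sketch of assembling that brief skeleton programmatically, assuming the skill supplies each section's content. The `write_brief` helper and its signature are assumptions, not part of the skill.

```python
from datetime import date
from pathlib import Path

def write_brief(name: str, author: str, sections: dict[str, str]) -> Path:
    """Render a draft brief to docs/project-brief.md from section bodies."""
    out = Path("docs/project-brief.md")
    out.parent.mkdir(parents=True, exist_ok=True)
    parts = [
        f"# {name} - Project Brief",
        f"**Date**: {date.today().isoformat()}",
        f"**Author**: {author}",
        "**Status**: Draft",
    ]
    for heading, body in sections.items():
        parts += [f"## {heading}", body]
    out.write_text("\n\n".join(parts) + "\n")
    return out
```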
Clarification:
Probing Assumptions:
Probing Reasoning:
Questioning Viewpoints:
Probing Implications:
Must Have (Non-negotiable):
Should Have (Important):
Could Have (Nice to have):
Won't Have (Explicit exclusions):
During brainstorming, watch for:
Save session to .attune/brainstorm-session.json:
```json
{
  "session_id": "20260102-143022",
  "started_at": "2026-01-02T14:30:22Z",
  "current_phase": "approach-selection",
  "problem": {
    "statement": "...",
    "stakeholders": ["..."]
  },
  "constraints": {
    "technical": ["..."],
    "resources": {"timeline": "...", "team": "..."}
  },
  "approaches": [
    {
      "name": "...",
      "pros": ["..."],
      "cons": ["..."]
    }
  ],
  "selected_approach": null,
  "decisions": {}
}
```
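Persisting and resuming that session state can be sketched as plain JSON round-tripping; the helper names below are illustrative, only the .attune/brainstorm-session.json path comes from the text.

```python
import json
from pathlib import Path
from typing import Optional

SESSION_PATH = Path(".attune/brainstorm-session.json")

def save_session(state: dict) -> None:
    """Write the session state, creating .attune/ if needed."""
    SESSION_PATH.parent.mkdir(parents=True, exist_ok=True)
    SESSION_PATH.write_text(json.dumps(state, indent=2))

def load_session() -> Optional[dict]:
    """Return the saved state, or None if no session exists yet."""
    if not SESSION_PATH.exists():
        return None
    return json.loads(SESSION_PATH.read_text())
```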
Automatic Trigger: After Phase 5 (Decision & Rationale) completes and docs/project-brief.md is saved, MUST auto-invoke the next phase.
When continuation is invoked:
- Precondition: docs/project-brief.md exists and is non-empty
- Announce: "Brainstorming complete. Project brief saved to docs/project-brief.md. Proceeding to specification phase..."
- Invoke: Skill(attune:project-specification)
Bypass Conditions (ONLY skip continuation if ANY true):
- --standalone flag was provided by the user
- docs/project-brief.md does not exist or is empty (phase failed)

Do NOT prompt the user for confirmation: this is a lightweight checkpoint, not an interactive gate. The user can always interrupt if needed.
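The continuation bypass logic above can be sketched as a simple precondition check; the function name and flag handling are assumptions, while the file path and conditions come from the text.

```python
from pathlib import Path

def should_continue(standalone: bool = False) -> bool:
    """Auto-continue to the specification phase unless a bypass holds."""
    if standalone:
        return False  # --standalone flag was provided by the user
    brief = Path("docs/project-brief.md")
    if not brief.exists() or brief.stat().st_size == 0:
        return False  # phase failed: brief missing or empty
    return True
```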
Automatic Trigger: After Phase 6 saves the project brief, and before invoking the next phase, run the spec review loop.
Procedure:
Use modules/spec-review-loop.md for the review prompt template.

Bypass Conditions:
- --standalone flag was provided
- --skip-review flag was provided

Related Skills:
- Skill(superpowers:brainstorming) - Socratic method (if available)
- Skill(attune:war-room) - REQUIRED AUTOMATIC INTEGRATION - invoked after Phase 3 for multi-LLM deliberation
- Skill(imbue:scope-guard) - Scope creep prevention
- Skill(attune:project-specification) - AUTO-INVOKED next phase after brainstorming
- Skill(attune:mission-orchestrator) - Full lifecycle orchestration

Commands:
- /attune:brainstorm - Invoke this skill
- /attune:specify - Next step in workflow
- /imbue:feature-review - Worthiness assessment

See /attune:brainstorm command documentation for complete examples.
- **Command not found**: Ensure all dependencies are installed and in PATH.
- **Permission errors**: Check file permissions and run with appropriate privileges.
- **Unexpected behavior**: Enable verbose logging with the --verbose flag.