Walk through QA-based planning workflow to build complete product vision and spec-graph
From the know plugin: npx claudepluginhub eighteyes/know-cli --plugin know
Interactive QA-based planning to build product vision and populate spec-graph.
Main Objective
Guide user through interactive QA sessions to build a complete product vision with technical decisions, generating both documentation artifacts and populating spec-graph.json.
Prerequisites
Graph Operations (CRITICAL)
All spec-graph modifications MUST use know CLI commands. Never edit spec-graph.json directly.
Adding Features - Use /know:add:
/know:add <feature-name>
This triggers the full feature workflow: duplicate check → HITL clarification → scaffolding → registration → graph linking.
Scaffolding Code-Link Placeholders
After adding each feature and component to the spec-graph, create placeholder code-link refs:
# Feature placeholder (status: planned — AI fills in during /know:build)
know -g .ai/know/spec-graph.json add code-link <feature>-code '{"modules":[],"classes":[],"packages":[],"status":"planned"}'
know -g .ai/know/spec-graph.json link feature:<name> code-link:<feature>-code
# Component placeholder (repeat for each component)
know -g .ai/know/spec-graph.json add code-link <component>-code '{"modules":[],"classes":[],"packages":[],"status":"planned"}'
know -g .ai/know/spec-graph.json link component:<name> code-link:<component>-code
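When several features are added in one session, the two placeholder commands can be wrapped in a small loop. The sketch below is illustrative only: the KNOW override exists purely so the loop can be previewed with KNOW=echo, and the feature names in the usage comment are hypothetical.

```shell
# Dry-runnable sketch: scaffold code-link placeholders for several features.
# KNOW is overridable (KNOW=echo previews the commands without running them);
# the GRAPH path matches the examples in this document.
KNOW=${KNOW:-know}
GRAPH=${GRAPH:-.ai/know/spec-graph.json}

scaffold_placeholders() {
  for f in "$@"; do
    $KNOW -g "$GRAPH" add code-link "$f-code" \
      '{"modules":[],"classes":[],"packages":[],"status":"planned"}'
    $KNOW -g "$GRAPH" link "feature:$f" "code-link:$f-code"
  done
}

# Usage: scaffold_placeholders data-import user-auth
```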
These placeholders ensure know graph cross coverage reports 0% for the new entities instead of listing them as missing.
Track coverage: know graph cross coverage --spec-only
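For orientation, a freshly scaffolded placeholder carries exactly the fields passed on the command line above; how know nests it inside spec-graph.json is an assumption, not schema documentation:

```json
"code-link": {
  "data-import-code": {
    "modules": [],
    "classes": [],
    "packages": [],
    "status": "planned"
  }
}
```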
Adding Other Entities - Use know add:
know -g .ai/know/spec-graph.json add user <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add objective <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add component <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add action <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add operation <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add interface <key> '{"name":"...","description":"..."}'
know -g .ai/know/spec-graph.json add requirement <key> '{"name":"...","description":"..."}'
Linking Dependencies - Use know link:
know -g .ai/know/spec-graph.json link user:developer objective:manage-data
know -g .ai/know/spec-graph.json link objective:manage-data feature:data-import
know -g .ai/know/spec-graph.json link feature:data-import action:upload-file
know -g .ai/know/spec-graph.json link action:upload-file component:file-processor
Validation - Always validate after modifications:
know -g .ai/know/spec-graph.json validate
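To make "always validate" hard to forget, mutations can be routed through a tiny wrapper that chains validation onto every command. This is a sketch under the assumption that validate exits non-zero on failure; KNOW=echo gives a dry run.

```shell
# Sketch: run any graph mutation, then validate immediately.
# KNOW is overridable for a dry run; GRAPH matches the examples above.
KNOW=${KNOW:-know}
GRAPH=${GRAPH:-.ai/know/spec-graph.json}

graph_do() {
  # && ensures validation only runs if the mutation itself succeeded
  $KNOW -g "$GRAPH" "$@" && $KNOW -g "$GRAPH" validate
}

# Usage: graph_do add user developer '{"name":"Developer","description":"..."}'
```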
Exploration Strategy

When existing code is found, use parallel exploration to understand the codebase: launch exploration agents (thoroughness: "medium"). The /know:prepare command should leverage parallel exploration extensively.

Maturity Assessment
Before starting, assess project maturity to determine which modes to run:
Decision Tree:
IF code exists AND no spec-graph:
→ Inform: "Found code but no spec-graph"
→ Ask: "Run /know:prepare first to analyze code and create graphs?"
→ If YES: Delegate to /know:prepare, then continue with QA refinement
→ If NO: Start from Discovery mode
IF code exists AND sparse spec-graph (< 5 total entities OR missing user/objective):
→ Ask: "Spec-graph exists but incomplete. Enrich from code first?"
→ If YES: Delegate to /know:prepare
→ If NO: Continue with QA sessions
IF no code AND no spec-graph:
→ Start from ALL modes (greenfield planning)
IF no code AND has spec-graph:
→ Skip to Architect → PM modes (planned but not built)
IF code AND complete spec-graph (>10 entities, has users/objectives/features):
→ Ask: "What do you want to improve/add?"
→ Run specific modes or use /know:add for new features
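The presence checks in the tree above can be sketched as a small shell function. This is a simplified illustration: it assumes code lives under src/ and the graph at .ai/know/spec-graph.json, and it omits the sparse-vs-complete entity-count branches.

```shell
# Simplified sketch of the maturity decision tree.
# Assumes code under src/; entity-count branches omitted for brevity.
assess_maturity() {
  dir=$1
  graph="$dir/.ai/know/spec-graph.json"
  if [ -d "$dir/src" ] && [ ! -f "$graph" ]; then
    echo "code-no-graph"       # recommend /know:prepare first
  elif [ ! -d "$dir/src" ] && [ ! -f "$graph" ]; then
    echo "greenfield"          # run all modes
  elif [ ! -d "$dir/src" ]; then
    echo "planned-not-built"   # skip to Architect, then PM
  else
    echo "code-and-graph"      # check entity counts, then enrich or refine
  fi
}
```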
After the maturity assessment, generate a deep question bank before running any modes. This front-loads the discovery work and avoids drip-feeding 5 questions at a time across 10 sessions.
Target: 35+ questions covering all software dimensions.
Step 1 — Gather context for agents:
# If spec-graph exists:
know -g .ai/know/spec-graph.json list --type user
know -g .ai/know/spec-graph.json list --type objective
know -g .ai/know/spec-graph.json list --type feature
know -g .ai/know/spec-graph.json check stats
Also read .ai/know/input.md and any existing README/docs.
Step 2 — Launch 8 Task agents in a SINGLE message (parallel):
Agent 1 — Users & Objectives (→ user:*, objective:* entities)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`user` and `objective` graph entities. A `user` entity is a distinct persona with a name like `user:developer`, `user:admin`, or `user:end-user`. An `objective` is what that user wants to accomplish, named like `objective:manage-data` or `objective:monitor-status`. Ask about: who are the distinct types of people using this system, what does each user type want to accomplish (their top 1-2 objectives), how do their goals or access levels differ, what does success look like for each user type, and are there secondary or system-level actors (e.g. cron jobs, webhooks) that need modeling. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 2 — Features (→ feature:* entities)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`feature` entities. A `feature` is a named system capability that serves user objectives, like `feature:user-auth`, `feature:data-import`, or `feature:report-generation`. Ask about: what are the top-level capabilities this system must provide, which capabilities are distinct enough to be separate named features, which features are essential vs optional for v1, which objectives does each feature serve, and are any features prerequisites for others. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 3 — Actions (→ action:* entities)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`action` entities. An `action` is a discrete, named thing a user does within a feature, like `action:login`, `action:upload-file`, `action:approve-request`, or `action:export-report`. Ask about: for each major feature, what specific things does a user do step by step, what triggers each action (button click, form submit, schedule), what is the primary action of each feature, what are the supporting or secondary actions, and are any actions shared across multiple features. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 4 — Components (→ component:* entities)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`component` entities. A `component` is a distinct implementation responsibility, like `component:auth-handler`, `component:file-processor`, `component:report-builder`, or `component:notification-sender`. Ask about: what are the distinct implementation responsibilities this system needs, which responsibilities are isolated enough to be named components, what does each component receive as input and produce as output, which components are shared infrastructure vs feature-specific, and which components have side effects (external calls, writes, notifications). Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 5 — Data Models (→ data-model:* references)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`data-model` reference entries. Ask about: what are the primary data entities (name each and list 3-5 key fields), what are the relationships between those entities (one-to-one, one-to-many, many-to-many), what data must be persisted vs can be computed on the fly, which data entity is the most central and what is its lifecycle (created → updated → deleted/archived), and what data is owned by one feature vs shared across features. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 6 — Interfaces & API Contracts (→ interface:*, api_contract:* references)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`interface` and `api_contract` reference entries. Ask about: what are the main screens or views (name each and describe its primary content and user goal), what are the main API endpoints (path, method, key request/response fields), what data does each screen display and where does it come from, what forms exist and what fields do they contain, and how does the system expose or consume any external APIs. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 7 — Business Logic & Security (→ business_logic:*, security-spec:* references)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`business_logic` and `security-spec` reference entries. Ask about: what are the non-obvious domain rules that govern system behavior (validation rules, approval gates, state machine transitions), who can access each feature and under what conditions (role-based, ownership-based), what data is sensitive and how must it be handled or protected, what audit trail or activity log is required, and what are the edge cases in the most complex user workflow. Format as a numbered list. Do NOT ask about scale, load, or performance."
Agent 8 — Configuration & Constraints (→ configuration:*, constraint:*, acceptance_criterion:* references)
"You are helping build a spec-graph for a project described as: '[project description]'. Generate 5 questions whose answers will directly become
`configuration`, `constraint`, and `acceptance_criterion` reference entries. Ask about: what runtime settings or environment variables does the system need, what feature flags or toggles are anticipated, what are the hard invariants that must never be violated (data integrity rules, required fields, state preconditions), what does a working v1 look like from each user's perspective (acceptance criteria), and what are the deployment or environment assumptions. Format as a numbered list. Do NOT ask about scale, load, or performance."
Step 3 — Collect and write to .ai/know/qa/plan-questions.md:
# Plan QA: [project name]
_Each answer maps to a graph entity or reference. See type hints per section._
## 1. Users & Objectives [→ user:*, objective:*]
1. ...
## 2. Features [→ feature:*]
6. ...
## 3. Actions [→ action:*]
11. ...
## 4. Components [→ component:*]
16. ...
## 5. Data Models [→ data-model:*]
21. ...
## 6. Interfaces & API Contracts [→ interface:*, api_contract:*]
26. ...
## 7. Business Logic & Security [→ business_logic:*, security-spec:*]
31. ...
## 8. Configuration & Constraints [→ configuration:*, constraint:*, acceptance_criterion:*]
36. ...
---
_Answers:_
Step 4 — Present to user:
"I've generated [N] questions about your project across 8 domains. Please answer as many as you can — paste answers after each question in the file, or answer in chat. The more you answer, the less I'll need to guess later."
Show the full file contents in chat.
Step 5 — Iterate: collect the user's answers, ask targeted follow-ups where an answer is ambiguous, and record the final answers in the _Answers_ section of the file before running any modes.
Planning Modes
Modes now focus on graph-building and artifact generation — not question-asking. QA is complete before modes run. Each mode reads its answers from .ai/know/qa/plan-questions.md.

When to run: Greenfield projects, no existing documentation
Questions to ask:
Outputs:
.ai/know/input.md - Initial user prompt
.ai/know/revised-input.md - Refined vision

When to run: New projects, or when user/objective entities missing
Questions to ask (5-10 questions):
Surface Assumptions: Before proceeding to Architect mode, state any assumptions about scope, technical approach, or existing system behavior. For each assumption: confidence ≥95% → state and proceed. <95% → ask user. Assumption economics: -5 if wrong, +1 if right, 0 if ask.
Outputs:
Files:
.ai/know/qa/discovery.md - QA session log
.ai/know/product/user-stories.md
.ai/know/product/requirements.md
.ai/know/product/features.md
.ai/know/product/critical-path.md
.ai/know/references.md - Research papers, specs, API docs, prior art (if provided)
.ai/know/flows/user.md - User journey diagram
.ai/know/flows/system.md - System interaction diagram
.ai/know/flows/biz.md - Business process diagram

Spec-graph entities (WITH CONFIRMATION):
user:* entities (e.g., user:developer, user:end-user)
objective:* entities (e.g., objective:manage-data, objective:generate-reports)
feature:* entities (high-level features)
Links: user → objective, objective → feature
references.documentation entries for research papers, specs, API docs (if provided)

Graph Commands to Execute:
# Add users
know -g .ai/know/spec-graph.json add user developer '{"name":"Developer","description":"..."}'
know -g .ai/know/spec-graph.json add user end-user '{"name":"End User","description":"..."}'
# Add objectives
know -g .ai/know/spec-graph.json add objective manage-data '{"name":"Manage Data","description":"..."}'
# Link users to objectives
know -g .ai/know/spec-graph.json link user:developer objective:manage-data
# Add features via /know:add (triggers full workflow)
/know:add data-import
/know:add user-auth
# Validate
know -g .ai/know/spec-graph.json validate
When to run: After Discovery, or when component entities missing
Questions to ask (5-10 questions):
Outputs:
Files:
.ai/know/qa/architect.md - QA session log
.ai/know/tech-ideas.md - Architecture options
.ai/know/models/[model-name].md - Data model specs
.ai/know/components/[component-name].md - Component specs
.ai/know/flows/control.md - Execution flow
.ai/know/flows/data.md - Data flow
.ai/know/flows/error.md - Error handling
.ai/know/flows/auth.md - Security flow
.ai/know/flows/event.md - Event flow
.ai/know/flows/integration.md - Integration flow
.ai/know/flows/state.md - State management
.ai/know/flows/logic.md - Business logic
.ai/know/stack.md - Final tech stack decisions

Spec-graph entities (WITH CONFIRMATION):
component:* entities (e.g., component:auth-handler, component:data-processor)
action:* entities (e.g., action:login, action:export-report)
operation:* entities (e.g., operation:validate-token, operation:parse-csv)
interface:* entities (e.g., interface:api-gateway)
requirement:* entities (e.g., requirement:auth, requirement:audit)
Links: feature → action → component → operation

Graph Commands to Execute:
# Add components
know -g .ai/know/spec-graph.json add component auth-handler '{"name":"Auth Handler","description":"..."}'
know -g .ai/know/spec-graph.json add component data-processor '{"name":"Data Processor","description":"..."}'
# Add actions
know -g .ai/know/spec-graph.json add action login '{"name":"Login","description":"..."}'
know -g .ai/know/spec-graph.json add action export-report '{"name":"Export Report","description":"..."}'
# Add operations
know -g .ai/know/spec-graph.json add operation validate-token '{"name":"Validate Token","description":"..."}'
# Add interfaces
know -g .ai/know/spec-graph.json add interface api-gateway '{"name":"API Gateway","description":"..."}'
# Add requirements
know -g .ai/know/spec-graph.json add requirement auth '{"name":"Authentication","description":"..."}'
# Link the chain: feature → action → component → operation
know -g .ai/know/spec-graph.json link feature:user-auth action:login
know -g .ai/know/spec-graph.json link action:login component:auth-handler
know -g .ai/know/spec-graph.json link component:auth-handler operation:validate-token
# Validate
know -g .ai/know/spec-graph.json validate
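Because every feature follows the same feature → action → component → operation chain, the three link calls can be captured in a helper. A dry-runnable sketch (KNOW=echo previews the commands; the names in the usage comment are the examples above):

```shell
# Sketch: link one feature chain in a single call.
# KNOW is overridable for a dry run; GRAPH matches the examples above.
KNOW=${KNOW:-know}
GRAPH=${GRAPH:-.ai/know/spec-graph.json}

link_chain() {
  # args: feature action component operation
  $KNOW -g "$GRAPH" link "feature:$1" "action:$2"
  $KNOW -g "$GRAPH" link "action:$2" "component:$3"
  $KNOW -g "$GRAPH" link "component:$3" "operation:$4"
}

# Usage: link_chain user-auth login auth-handler validate-token
```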
Spec-graph references:
business_logic:* - Business rules
data-models:* - Core data structures
tech-stack:* - Technology choices

When to run: Web services, REST/GraphQL APIs, or when interface entities sparse
Questions to ask:
Outputs:
Files:
.ai/know/qa/api.md - QA session log
.ai/know/api/[segment-name].md - API specs per segment

Spec-graph entities (WITH CONFIRMATION):
interface:* entities (e.g., interface:rest-api, interface:graphql-endpoint)
Links: interface → action

Graph Commands to Execute:
# Add API interfaces
know -g .ai/know/spec-graph.json add interface rest-api '{"name":"REST API","description":"..."}'
know -g .ai/know/spec-graph.json add interface graphql-endpoint '{"name":"GraphQL Endpoint","description":"..."}'
# Link interfaces to actions
know -g .ai/know/spec-graph.json link interface:rest-api action:login
know -g .ai/know/spec-graph.json link interface:rest-api action:export-report
# Validate
know -g .ai/know/spec-graph.json validate
When to run: User-facing applications, or when UI navigation unclear
Questions to ask:
Outputs:
Files:
.ai/know/qa/ui.md - QA session log
.ai/know/ui.md - UI specification

Spec-graph entities (WITH CONFIRMATION):
interface:* entities for screens (e.g., interface:dashboard, interface:settings)
Links: interface → action

Graph Commands to Execute:
# Add UI interfaces (screens)
know -g .ai/know/spec-graph.json add interface dashboard '{"name":"Dashboard","description":"Main overview screen"}'
know -g .ai/know/spec-graph.json add interface settings '{"name":"Settings","description":"User preferences screen"}'
# Link screens to actions they enable
know -g .ai/know/spec-graph.json link interface:dashboard action:view-reports
know -g .ai/know/spec-graph.json link interface:settings action:update-profile
# Validate
know -g .ai/know/spec-graph.json validate
When to run: Novel/risky requirements need validation
Questions to ask:
Outputs:
Files:
.ai/know/qa/prototyping.md - QA session log
.ai/know/experiments.md - Validation experiments

Spec-graph: Updates to requirement:* with risk notes
When to run: After architecture defined, before PM
Questions to ask:
Outputs:
Files:
.ai/know/qa/quality.md - QA session log
.ai/know/testing.md - Test plan

Spec-graph references:
test-strategy:* - Testing approach per component

When to run: After architecture and PM complete
Questions to ask:
Outputs:
Files:
.ai/know/qa/documentation.md - QA session log
.ai/know/docs/INSTALL.md
.ai/know/docs/DEVELOPMENT.md
.ai/know/docs/DEPLOY.md
.ai/know/docs/[other].md

Spec-graph: None directly
When to run: Final mode, after all technical decisions made
Questions to ask:
Outputs:
Files:
.ai/know/qa/pm.md - QA session log
.ai/know/plan/1-[task].md, .ai/know/plan/2-[task].md, etc. - Granular implementation tasks
.ai/know/todo.md - Checklist with links
.ai/know/file-index.md - Proposed file structure

Spec-graph updates (WITH CONFIRMATION):
meta.phases_metadata with phase definitions:
"phases_metadata": {
"I": {"name": "Foundation", "description": "Core architecture"},
"II": {"name": "Features", "description": "Main functionality"},
"III": {"name": "Polish", "description": "Optimizations"}
}
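Together with meta.phases, the resulting meta block might look like the fragment below; the exact nesting is an assumption about the spec-graph schema, not documented behavior:

```json
"meta": {
  "phases": ["I", "II", "III"],
  "phases_metadata": {
    "I": {"name": "Foundation", "description": "Core architecture"},
    "II": {"name": "Features", "description": "Main functionality"},
    "III": {"name": "Polish", "description": "Optimizations"}
  }
}
```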
meta.phases with I, II, III phases

When to run: After all modes complete
Workflow:
Review .ai/know/flows/ files for accuracy

Outputs:
.ai/know/project.md - Derived from spec-graph queries

For each mode:
a. Read answers from .ai/know/qa/plan-questions.md
b. Generate mode artifacts (markdown files in .ai/know/)
c. Prepare graph commands (see "Graph Commands to Execute" in each mode)
d. Show user the exact commands to be executed
e. Ask for confirmation before executing
f. Execute commands:
Features: /know:add <feature-name> (full workflow; qa.md already answered)
Other entities: know -g .ai/know/spec-graph.json add <type> <key> '{...}'
Links: know -g .ai/know/spec-graph.json link <from> <to>
g. Validate graph: know -g .ai/know/spec-graph.json validate

Example:

User: /know:plan
Assistant: Assessing project maturity...
Found code in src/ but no spec-graph.json
I recommend running /know:prepare first to analyze your code
and create initial graphs. Then we can refine with QA sessions.
Run /know:prepare first? [Yes/No]
.ai/know/ directory structure

r2 - QA Batch Generation phase: 8 parallel Task agents → 35+ questions → plan-questions.md → iterate; modes now consume answers instead of re-asking; Workflow Execution updated
r1 - Added explicit Graph Operations section with know CLI commands; added "Graph Commands to Execute" examples to Modes 2-5; updated Workflow Execution to specify /know:add for features vs know CLI for other entities