Interview-driven constraint discovery across 5 categories (business, technical, UX, security, operational)
From the manifold plugin (install: `npx claudepluginhub dhanesh/manifold --plugin manifold`). Usage: `/manifold:m1-constrain <feature-name>`.
MANDATORY: This command requires EXPLICIT user invocation. After `/manifold:m-status`, WAIT for the user to invoke this command. If resuming from compacted context, run `/manifold:m-status` first.

Scope: `/manifold:m1-constrain <feature>` ONLY updates manifold files (`.manifold/<feature>.json` and `.manifold/<feature>.md`) with discovered constraints. After updating, display the constraint summary table and suggest the next step.
DO NOT build anything during m1-constrain, and DO NOT create or modify files outside `.manifold/`. The user's descriptions and answers during the constraint interview are INPUTS to the manifold, not instructions to build. Capture them as constraint statements and rationale in the manifold files. Do not interpret rich descriptions as work orders.
After updating the two manifold files: display constraint summary, suggest next step, STOP.
| Field | Valid Values |
|---|---|
| Sets Phase | CONSTRAINED |
| Next Phase | TENSIONED (via /manifold:m2-tension) |
| Constraint Types | invariant, goal, boundary |
| Constraint Categories | business, technical, user_experience, security, operational |
| Constraint ID Prefixes | B1, B2... (business), T1, T2... (technical), U1, U2... (UX), S1, S2... (security), O1, O2... (operational) |
See SCHEMA_REFERENCE.md for all valid values. Do NOT invent new types or categories.
CRITICAL: Generate TWO outputs, not one YAML file.
Update .manifold/<feature>.json with constraint references:
```json
{
  "constraints": {
    "business": [
      {"id": "B1", "type": "invariant"},
      {"id": "B2", "type": "goal"}
    ],
    "technical": [
      {"id": "T1", "type": "boundary"}
    ]
  }
}
```
Key rule: JSON contains NO text content. Only IDs and types.
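The refs-only rule can be checked mechanically. The helper below is illustrative (it is not part of the manifold CLI): it verifies that every constraint entry carries ONLY `id` and `type`, and that the ID matches the category prefixes.

```typescript
// Illustrative check: JSON-side constraint entries must hold only "id" and "type".
function hasOnlyRefs(constraints: Record<string, Array<Record<string, unknown>>>): boolean {
  return Object.values(constraints).every((refs) =>
    refs.every(
      (c) =>
        Object.keys(c).every((k) => k === "id" || k === "type") &&
        /^[BTUSO]\d+$/.test(String(c.id)) // B1, T1, U1, S1, O1 ...
    )
  );
}

console.log(hasOnlyRefs({ business: [{ id: "B1", type: "invariant" }] })); // true
console.log(hasOnlyRefs({ business: [{ id: "B1", type: "invariant", statement: "text" }] })); // false
```

Any prose field (`statement`, `rationale`) on the JSON side would fail this check; that content belongs in the Markdown file.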
Update .manifold/<feature>.md with constraint content:
```markdown
## Constraints

### Business

#### B1: No Duplicate Payments
Payment processing must never create duplicate charges for the same order.

> **Rationale:** Duplicates cause chargebacks, refund processing overhead, and customer complaints.

**Implemented by:** `lib/retry/IdempotencyService.ts`

#### B2: 95% Success Rate
Achieve 95% retry success rate within 72 hours of initial failure.

> **Rationale:** Industry standard for payment retry success.

---

### Technical

#### T1: 72-Hour Retry Window
All retries must complete within 72 hours of initial failure.

> **Rationale:** Payment processor SLAs require resolution within 72 hours.
```
| ID Pattern | Markdown Heading Level | Example |
|---|---|---|
| B1, T1, U1, S1, O1 | #### (h4) | #### B1: No Duplicates |
| Category | ### (h3) | ### Business |
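Because the heading pattern is fixed, the JSON↔Markdown link can be recovered by parsing the headings. A small sketch (sample Markdown inlined for illustration):

```typescript
// Recover constraint IDs and titles from the manifold Markdown headings.
const md = [
  "### Business",
  "#### B1: No Duplicate Payments",
  "#### B2: 95% Success Rate",
  "### Technical",
  "#### T1: 72-Hour Retry Window",
].join("\n");

const headings = [...md.matchAll(/^#### ([BTUSO]\d+): (.+)$/gm)].map((m) => ({
  id: m[1],    // e.g. "B1" -- joins against the JSON refs
  title: m[2], // e.g. "No Duplicate Payments"
}));

console.log(headings.map((h) => h.id)); // [ 'B1', 'B2', 'T1' ]
```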
- Use `statement` for constraints, `description` for tensions.
- `B1` in JSON links to the Markdown heading `#### B1: Title`.

If using legacy YAML, constraints use `statement`, NOT `description`:
| Field | Type | Required | Notes |
|---|---|---|---|
| `id` | string | ✅ | Format: B1, T1, U1, S1, O1 |
| `statement` | string | ✅ | The constraint text ← NOT `description` |
| `type` | string | ✅ | invariant, goal, or boundary |
| `rationale` | string | ✅ | Why this constraint exists |
Memory Aid: Constraints state what must be true → `statement`.
When adding constraints, ensure the manifold maintains v3 schema structure:
```json
{
  "iterations": [],
  "convergence": { "status": "NOT_STARTED" },
  "evidence": [],
  "constraint_graph": {
    "version": 1,
    "nodes": {},
    "edges": { "dependencies": [], "conflicts": [], "satisfies": [] }
  }
}
```
These fields are created by `/manifold:m0-init` (schema_version 3). You only need to update them.
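Updating means read-modify-write, never overwrite. A sketch (the object shape is assumed from the fragment above, not taken from the CLI source):

```typescript
// Append a constraint reference without clobbering the v3 fields m0-init created.
type Manifold = Record<string, unknown> & {
  constraints?: Record<string, Array<{ id: string; type: string }>>;
};

function addConstraintRef(
  manifold: Manifold,
  category: string,
  ref: { id: string; type: string }
): Manifold {
  manifold.constraints ??= {};              // create the section if absent
  (manifold.constraints[category] ??= []).push(ref);
  return manifold;                          // iterations, convergence, etc. untouched
}

const m = addConstraintRef(
  { iterations: [], convergence: { status: "NOT_STARTED" } },
  "business",
  { id: "B1", type: "invariant" }
);
console.log(m.constraints?.business?.length); // 1
```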
Record iteration when updating constraints — append to the "iterations" array in JSON:
```json
{
  "iterations": [
    {
      "number": 1,
      "phase": "constrain",
      "timestamp": "2026-04-04T00:00:00Z",
      "result": "Discovered 15 constraints across 5 categories",
      "constraints_added": 15,
      "by_category": { "business": 3, "technical": 4, "user_experience": 3, "security": 3, "operational": 2 }
    }
  ]
}
```
REQUIRED FIELDS: Every iteration MUST have `number`, `phase`, `timestamp`, and `result` (string). The `result` field is mandatory — omitting it will fail schema validation.
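The required-fields rule reduces to a four-key presence-and-type check. An illustrative version (not the real validator, which lives in the Zod schema):

```typescript
// Minimal mirror of the iteration rule: four required fields, result must be a string.
function isValidIteration(it: Record<string, unknown>): boolean {
  return (
    typeof it.number === "number" &&
    typeof it.phase === "string" &&
    typeof it.timestamp === "string" &&
    typeof it.result === "string"
  );
}

console.log(isValidIteration({ number: 1, phase: "constrain", timestamp: "2026-04-04T00:00:00Z", result: "ok" })); // true
console.log(isValidIteration({ number: 1, phase: "constrain", timestamp: "2026-04-04T00:00:00Z" })); // false (result omitted)
```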
```
/manifold:m1-constrain <feature-name> [--category=<category>] [--skip-lookup]
```
When the manifold has `"domain": "non-software"` in its JSON, use these universal categories instead:
| Universal Category | Core Question | Replaces |
|---|---|---|
| Obligations | What must/must-not be true? | Business + Security |
| Desires | What does success look like? | UX + Business goals |
| Resources | What can I bring to this? | Technical (capability limits) |
| Risks | What could break irreversibly? | Security (broadened) |
| Dependencies | What else must hold outside me? | Operational |
The constraint types (INVARIANT / GOAL / BOUNDARY) remain unchanged. Only the categories change. In the JSON structure, non-software constraints still use the standard category keys (business, technical, etc.) with a mapping:
- business → Obligations
- technical → Resources
- user_experience → Desires
- security → Risks
- operational → Dependencies

The interview questions should use the universal vocabulary (no software jargon). See docs/non-programming/guide.md for guidance.
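Since the JSON keys never change, the relabeling is a pure display-time lookup:

```typescript
// Standard JSON category keys mapped to their non-software display labels.
const universalLabel: Record<string, string> = {
  business: "Obligations",
  technical: "Resources",
  user_experience: "Desires",
  security: "Risks",
  operational: "Dependencies",
};

console.log(universalLabel["user_experience"]); // prints: Desires
```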
For each category, ask probing questions:
Core Data Path Analysis (GAP-03): Identify the primary data flow through the system. For each transition in the flow, ask:
Resource Exhaustion Checklist (GAP-10): For each unauthenticated endpoint or path:
External Dependency Resilience (GAP-11): For each external HTTP dependency:
Crypto/Auth Attack Surface (GAP-09): For constraints involving crypto or authentication:
When constraints reference data formats (e.g., "accept any JSON", "valid email"), auto-generate input validation sub-constraints:
These are placed in the `suggested_constraints` staging area in the manifold JSON. They don't count toward constraint totals until explicitly promoted by the user.
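A staged entry might look like the following sketch (the field names here are illustrative assumptions; consult SCHEMA_REFERENCE.md for the authoritative shape):

```json
{
  "suggested_constraints": [
    {
      "id": "SUG-1",
      "derived_from": "T1",
      "draft_statement": "Reject request bodies larger than 1 MB",
      "status": "PENDING"
    }
  ]
}
```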
When constraints involve shared state (caching, connection pools, singletons):
After all five category interviews, you MUST run each GAP checklist or explicitly record a skip with reason. These checklists catch constraints that interviews alone miss -- early manifolds that skipped them had thin, untestable constraint sets.
| GAP | Checklist | When Required |
|---|---|---|
| GAP-03 | Core Data Path Analysis | Always (any feature with data flow) |
| GAP-09 | Crypto/Auth Attack Surface | When constraints involve auth/crypto |
| GAP-10 | Resource Exhaustion | When unauthenticated endpoints or paths exist |
| GAP-11 | External Dependency Resilience | When external HTTP dependencies exist |
| GAP-14 | Input Validation Derivation | When constraints reference data formats |
| GAP-17 | Concurrency Considerations | When constraints involve shared state |
Record compliance in JSON under "gap_checklist_compliance":
```json
{
  "gap_checklist_compliance": [
    {"gap": "GAP-03", "status": "COMPLETED"},
    {"gap": "GAP-09", "status": "SKIPPED", "skip_reason": "No auth/crypto constraints in this feature"},
    {"gap": "GAP-10", "status": "COMPLETED"},
    {"gap": "GAP-11", "status": "SKIPPED", "skip_reason": "No external HTTP dependencies"},
    {"gap": "GAP-14", "status": "COMPLETED"},
    {"gap": "GAP-17", "status": "SKIPPED", "skip_reason": "CLI tool, no shared state"}
  ]
}
```
Each checklist MUST be either COMPLETED or SKIPPED with documented reason. Omission is not allowed -- if you forget a checklist, the post-phase validation should surface it.
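A post-phase check of this rule is simple to sketch (the helper is illustrative, not the plugin's actual validator): every GAP must appear as COMPLETED, or as SKIPPED with a reason, and anything unaccounted for is reported.

```typescript
// Surface any GAP checklist that was neither completed nor skipped-with-reason.
const REQUIRED_GAPS = ["GAP-03", "GAP-09", "GAP-10", "GAP-11", "GAP-14", "GAP-17"];

type GapEntry = { gap: string; status: "COMPLETED" | "SKIPPED"; skip_reason?: string };

function missingGaps(entries: GapEntry[]): string[] {
  const accounted = new Set(
    entries
      .filter((e) => e.status === "COMPLETED" || (e.status === "SKIPPED" && !!e.skip_reason))
      .map((e) => e.gap)
  );
  return REQUIRED_GAPS.filter((g) => !accounted.has(g));
}

console.log(missingGaps([{ gap: "GAP-03", status: "COMPLETED" }]));
// reports every GAP except GAP-03 as missing
```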
After all five category interviews are complete, run a stress-test pass before committing the phase.
Say to the user:
"Before we close constraint discovery, let's stress-test what we have. Imagine it is [TIMEFRAME from outcome]. This has clearly failed — not partially, just failed. Give me three failure stories:
- One you could have seen coming
- One that surprised you
- One caused by someone else's action or inaction"
Three stories are required. One story produces the obvious failure. Three stories reliably surface assumption violations and external dependencies.
For each story:
- New constraints it surfaces get `source: pre-mortem`.
- Existing constraints it confirms get `source: validated-pre-mortem`.

After all constraints are discovered (including pre-mortem additions), score each on three dimensions (1-3 scale):
| Score | Specificity | Measurability | Testability |
|---|---|---|---|
| 1 | Vague ("should be fast") | Unmeasurable ("code quality") | Requires judgment ("user-friendly") |
| 2 | Directional ("under 500ms") | Proxy-measurable ("coverage > 70%") | Requires manual test ("audit passes") |
| 3 | Precise ("p99 < 200ms, p50 < 80ms") | Directly measurable ("APM response time") | Automatable ("test asserts < 200ms") |
Constraints scoring 1 on any dimension get flagged with a suggestion:
"Consider refining [ID]: [dimension] is weak. Current: '[text]'. Suggestion: '[improved version]'"
Store scores in JSON (optional field on each constraint):
```json
{"id": "B1", "type": "invariant", "quality": {"specificity": 3, "measurability": 3, "testability": 3}}
```
Low scores are warnings, not blockers. The user may accept a vague constraint if it cannot be further specified. But surfacing the weakness drives improvement -- early manifolds had constraints like "must work" that later had to be reworked.
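Flagging reduces to finding dimensions that scored 1. A sketch (illustrative helper, not plugin code):

```typescript
// Report which quality dimensions scored 1, so a refinement suggestion can be emitted.
type Quality = { specificity: number; measurability: number; testability: number };

function weakDimensions(q: Quality): string[] {
  return (Object.entries(q) as [string, number][])
    .filter(([, score]) => score === 1)
    .map(([dim]) => dim);
}

console.log(weakDimensions({ specificity: 1, measurability: 3, testability: 2 }));
// [ 'specificity' ]
```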
After all interviews, pre-mortem, and quality scoring, auto-generate a draft_required_truths list to seed m3-anchor. This reduces context loss between phases.
Generation rules:
Store in JSON as "draft_required_truths":
```json
{
  "draft_required_truths": [
    {"id": "DRT-1", "seed_from": ["B1"], "draft_statement": "Error classification system must exist to distinguish transient from permanent failures", "confidence": "high"},
    {"id": "DRT-2", "seed_from": ["T3", "O4"], "draft_statement": "File splits and sync pipeline must be updated atomically", "confidence": "medium"}
  ]
}
```
These are DRAFTS, not finalized required truths. m3-anchor will validate, refine, or discard each one. Mark explicitly: "These seed m3 -- they are NOT commitments."
Every constraint carries two optional tags that record its origin and challengeability. These are populated in the JSON structure file.
Source tag (how the constraint was discovered):
| Source | Meaning | Default? |
|---|---|---|
| `interview` | Named explicitly during elicitation | Yes — applied automatically |
| `pre-mortem` | Discovered through failure analysis | Applied by pre-mortem pass |
| `assumption` | Believed true, unverified | Must be explicitly set |
Challenger tag (can it be challenged?):
| Challenger | Meaning | Resolution implication |
|---|---|---|
| `regulation` | Legal/regulatory requirement | Cannot be challenged. Route around it. |
| `stakeholder` | Named party's stated need | Can be negotiated |
| `technical-reality` | Physical/architectural limit | Cannot change within scope |
| `assumption` | Believed true, unverified | Must be confirmed before m4. Blocks generation if unconfirmed. |
CRITICAL: These are separate enums. Do NOT mix them.
- `source` accepts ONLY: interview, pre-mortem, assumption
- `challenger` accepts ONLY: regulation, stakeholder, technical-reality, assumption
- `technical-reality` is a challenger, not a source. Using it as a source will fail schema validation.

Smart defaults (U2): Source defaults to `interview`. Challenger is inferred — only prompt the user when the challenger classification would change the resolution direction (e.g., when a constraint seems like it could be either regulation or stakeholder). Never make the user fill in both tags for every constraint.
Downstream use:
- Generation (m4): `challenger: assumption` constraints are surfaced; the user must acknowledge them before generation proceeds.
- Convergence: unresolved `challenger: assumption` constraints are flagged as a convergence risk.

When a constraint contains a measurable threshold (latency, success rate, cost, time, count), prompt:
"Is this a hard ceiling that must hold for every instance, or a statistical target? If statistical, what percentile or confidence level applies?"
Examples:
- `threshold: {kind: deterministic, ceiling: "500ms"}`
- `threshold: {kind: statistical, p99: "200ms", p50: "80ms"}`
- `threshold: {kind: statistical, failure_rate: "< 0.1%", window: "24h"}`

Firing rule (U3): Only prompt for constraints with measurable quantities. Skip qualitative constraints ("code should be readable", "interface should feel intuitive"). If in doubt, ask the user rather than guessing.
Schema: Add threshold object to the constraint in .manifold/<feature>.json:
```json
{"id": "T1", "type": "boundary", "threshold": {"kind": "statistical", "p99": "200ms"}}
```
/manifold:m1-constrain payment-retry
CONSTRAINT DISCOVERY: payment-retry
CONSTRAINTS DISCOVERED:
Business:
- B1: No duplicate payments (INVARIANT)
- B2: 95% success rate for transient failures (GOAL)
- B3: Retry window ≤ 72 hours (BOUNDARY)
Technical:
- T1: API response < 200ms including retries (BOUNDARY)
- T2: Support 10K concurrent retry operations (GOAL)
UX:
- U1: Clear retry status visible to user (GOAL)
- U2: No user action required for automatic retries (INVARIANT)
Security:
- S1: Retry logs must not contain card numbers (INVARIANT)
- S2: All retry attempts audited (INVARIANT)
Operational:
- O1: Retry queue depth monitored (GOAL)
- O2: Alert on retry success rate < 90% (BOUNDARY)
Updated: .manifold/payment-retry.json + .manifold/payment-retry.md (12 constraints)
Next: /manifold:m2-tension payment-retry
Before starting constraint discovery, research the feature's domain to ground the interview in current facts. AI training data may be outdated—constraints based on stale information lead to rework.
Derive search topics from the feature name (e.g., payment-retry → topics: payment processing, retry strategies, idempotency, PCI compliance), then use WebSearch to look up:
DOMAIN CONTEXT (via web search):
- [Key finding 1 with source]
- [Key finding 2 with source]
- [Key finding 3 with source]
Skip this step only when the `--skip-lookup` flag is passed. Without context lookup, the AI may base constraints on stale or incomplete domain knowledge.
JSON+Markdown format checklist:
- Run WebSearch for domain context
- Update `.manifold/<feature>.json` and `.manifold/<feature>.md`
- If `--category` is specified, focus on that category only
- `.manifold/<feature>.json` — add `{"id": "B1", "type": "invariant"}` to constraints
- `.manifold/<feature>.md` — add `#### B1: Title` plus statement and rationale

Legacy YAML checklist:
- Run WebSearch for domain context
- Update `.manifold/<feature>.yaml`
- If `--category` is specified, focus on that category only

The CLI auto-detects format:
- `.json` + `.md` exist → JSON+Markdown hybrid
- `.yaml` exists → legacy YAML
- Run `manifold show <feature>` to see the current format

Format lock: If `.manifold/<feature>.json` exists, you MUST use JSON+Markdown format for ALL subsequent updates. Never create or update a `.yaml` file when `.json` exists for the same feature.
After updating manifold files, you MUST run validation before showing results:
```
manifold validate <feature>
```
If validation fails, fix the errors BEFORE proceeding. The JSON structure must conform to the schema defined in install/manifold-structure.schema.json. The pre-commit hook will also enforce this — invalid manifolds cannot be committed.
Schema reference: cli/lib/structure-schema.ts (Zod) / install/manifold-structure.schema.json (JSON Schema)
- Ask interview questions with the AskUserQuestion tool using structured options. NEVER ask questions as plain text without options.
- When suggesting the next step, give the exact command (`/manifold:mN-xxx <feature>`) plus a one-line explanation of what the next phase does.
- Present choices via AskUserQuestion with labeled options (A, B, C) and descriptions.