From agent-workflows
Infers draft specifications from examples (API calls, test cases, user stories, input/output pairs) using N parallel agents to derive rules, constraints, invariants, edge cases. Outputs synthesized spec with confidence annotations.
Install: `npx claudepluginhub sjarmak/agent-workflows`

This skill uses the workspace's default tool permissions.
Specification Generation from Examples. Takes a set of examples (API calls, test cases, user stories, input/output pairs) and spawns N agents to independently INFER the specification that would produce those examples. Each agent works bottom-up: what rules, constraints, invariants, and edge cases does this behavior imply? Agents don't see each other's inferences. Synthesize by comparing: where agents infer the same rule, it's likely correct; where they diverge, the examples are ambiguous or underconstrained. Output is a draft specification with confidence annotations.
$ARGUMENTS — format: `[N] [path/to/examples or inline examples]`, where N is an optional agent count (default: 3, min: 2, max: 5)
Extract the arguments as follows:
If the input is a file path (contains / or ends in a common extension), read the file. Otherwise, treat the entire remaining argument as inline examples.
If no input is provided, ask the user to provide examples before proceeding.
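The extraction rules above can be sketched as follows; the function name and the extension list are illustrative, not part of the skill:

```python
COMMON_EXTENSIONS = (".md", ".txt", ".json", ".yaml", ".yml")  # illustrative, not exhaustive

def parse_arguments(raw: str, default_n: int = 3):
    """Split $ARGUMENTS into (agent_count, ("file" | "inline", source))."""
    tokens = raw.strip().split(maxsplit=1)
    n, rest = default_n, raw.strip()
    if tokens and tokens[0].isdigit():
        # Clamp the optional leading agent count to the documented 2..5 range.
        n = min(5, max(2, int(tokens[0])))
        rest = tokens[1] if len(tokens) > 1 else ""
    # A path contains "/" or ends in a common extension; otherwise treat as inline.
    is_path = "/" in rest or rest.endswith(COMMON_EXTENSIONS)
    return n, ("file" if is_path else "inline", rest)
```

If `rest` comes back empty, that is the "no input" case where the skill asks the user for examples.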
Launch all N agents in parallel using the Agent tool. Each agent receives the same example set and a unique inference methodology -- a distinct approach to extracting specifications from behavior.
Assign each agent a different methodology. Select N from the following (fill slots in order):
Each agent MUST:
subagent_type: "general-purpose" (agents may need web search for domain knowledge)

Agent prompt template (customize the methodology per agent):
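The launch step can be sketched with a hypothetical `run_agent` wrapper standing in for the Agent tool; the methodology names below are placeholders, not the skill's actual list:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder methodology slots; the skill fills its own list in order.
METHODOLOGIES = [
    ("behavioral", "Generalize rules directly from observed input/output pairs."),
    ("invariant-driven", "Express the behavior as invariants and forbidden states."),
    ("adversarial", "Hunt for edge cases and underconstrained behavior."),
    ("domain-model", "Reconstruct the entities and relationships the examples imply."),
    ("boundary", "Focus on limits, ranges, and error handling."),
]

def run_agent(methodology: tuple[str, str], examples: str) -> str:
    # Hypothetical stand-in for the Agent tool (subagent_type="general-purpose").
    name, description = methodology
    return f"### Inference Report: {name}"

def launch_agents(examples: str, n: int) -> list[str]:
    """Launch n agents concurrently, each with a distinct methodology."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(run_agent, m, examples) for m in METHODOLOGIES[:n]]
        return [f.result() for f in futures]  # collect every report before synthesis
```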
You are a specification inference agent. Your job is to reverse-engineer a specification from a set of examples.
## Examples
{classified_examples}
## Your Methodology: {methodology_name}
{methodology_description}
## Instructions
Study these examples carefully. Your goal is to infer the SPECIFICATION that would produce this behavior. Work bottom-up: start from what you observe, then generalize.
Critical distinction:
- OBSERVED: directly demonstrated by an example
- INFERRED: logically implied by the examples but not directly shown
- ASSUMED: requires an assumption beyond what examples support (flag these clearly)
Do NOT invent requirements that the examples don't support. If the examples are ambiguous on a point, say so explicitly rather than guessing.
## Output Format
### Inference Report: {methodology_name}
#### Inferred Rules / Invariants
For each rule:
- **Rule**: [statement of the rule]
- **Evidence**: [which examples support this]
- **Status**: Observed / Inferred / Assumed
- **Confidence**: High / Medium / Low
#### Inferred Constraints / Boundaries
- [What values, states, or behaviors are forbidden or limited]
- [Evidence and confidence for each]
#### Edge Cases Implied
- [Boundary conditions the examples hint at but don't fully cover]
- [What behavior would you EXPECT at these edges, based on the observed patterns?]
#### Gaps: What the Examples Don't Cover
- [Areas where the examples are silent]
- [Questions that cannot be answered from these examples alone]
- [Specific examples that SHOULD be added to resolve ambiguity]
#### Draft Spec Fragment
Write a structured specification fragment covering everything you inferred:
- Use declarative language ("The system SHALL...", "When X, the system MUST...")
- Mark each requirement with its confidence level
- Group by functional area
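For illustration, a fragment in this style might look like the following (the requirements themselves are invented):

```
#### Functional Area: Authentication
- The system SHALL return 401 when the auth token is absent. [Confidence: High, Observed]
- When a token is expired, the system MUST reject the request. [Confidence: Medium, Inferred]
```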
After ALL agents return, compare the inferred specifications across all agents. Produce the following synthesis:

**Consensus** -- rules or invariants that all agents independently inferred. These are the most reliable parts of the specification.
For each consensus rule:

**Contested** -- rules where agents disagreed: different agents inferred different or conflicting specifications for the same behavior. These indicate that the examples are ambiguous or underconstrained.
For each contested rule:

**Unique** -- rules that only one agent caught. These may be genuine insights from that methodology, or they may be over-inferences.
For each unique inference:

**Gaps** -- aggregate all gaps identified by all agents:

Compile the final draft specification:
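The comparison step can be sketched as a tally over rule statements; the string normalization here is naive (a real synthesis would match rules semantically, not textually):

```python
from collections import defaultdict

def classify_rules(reports: list[list[str]]):
    """Bucket inferred rules by how many agents independently produced them."""
    counts: dict[str, int] = defaultdict(int)
    for rules in reports:
        for rule in {r.strip().lower() for r in rules}:  # dedupe within one agent
            counts[rule] += 1
    n = len(reports)
    consensus = [r for r, c in counts.items() if c == n]     # all agents agree
    contested = [r for r, c in counts.items() if 1 < c < n]  # partial agreement
    unique = [r for r, c in counts.items() if c == 1]        # single-agent inference
    return consensus, contested, unique
```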
Save the full output to `contract_{slugified_topic}.md` in the working directory.
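The slug rule is not specified by the skill; a minimal version might be:

```python
import re

def contract_filename(topic: str) -> str:
    """Lowercase the topic and collapse non-alphanumeric runs to underscores."""
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    return f"contract_{slug}.md"
```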
Present the draft specification to the user. Then:
Ask whether to run /replicate for validation.

Sits before /replicate as a spec-generation step:
examples -> /contract (infer spec) -> draft spec -> /replicate (validate spec) -> validated spec
Use /contract when you have behavior examples but no formal specification. Use /replicate after /contract to validate the generated spec by having agents implement it independently and checking for convergence.
Can also follow /brainstorm or /diverge when those workflows produce example-heavy output that needs to be formalized into a specification.