Use when building features, creating projects, or implementing anything where multiple approaches could work. Brings diverse perspectives to both design exploration and evaluation, then synthesizes learnings from all variants into the winner. Triggers on "jam on", "jam on this", "let's jam", "can we jam", "jam this", "jam session", "build", "create", "implement", "diverse approaches", "explore options".
```shell
npx claudepluginhub 2389-research/claude-plugins --plugin jam
```

This skill uses the workspace's default tool permissions.
Parallel exploration framework powered by diverse perspectives. Instead of one mind generating options and picking a winner, Jam dispatches independent agents with different worldviews to both **propose approaches** and **evaluate implementations**, then **synthesizes the best of everything** into the final result.
The jam was all of us together.
A single agent generating "multiple approaches" is still one mind imagining what different people would think. The perspectives cluster, biases leak through, and the agent converges to its own preference. Jam makes diversity real by dispatching independent agents who reason separately.
```dot
digraph jam {
  rankdir=TB;
  "Build/Create request" -> "Quick context (1-2 questions)";
  "Quick context (1-2 questions)" -> "Identify architectural slots";
  "Identify architectural slots" -> "Generate diverse perspective panel";
  "Generate diverse perspective panel" -> "Dispatch panel agents to propose slot fills";
  "Dispatch panel agents to propose slot fills" -> "Synthesize into 3-5 distinct variants";
  "Synthesize into 3-5 distinct variants" -> "Present variants, user approves";
  "Present variants, user approves" -> "Implement all variants in parallel (worktrees)";
  "Implement all variants in parallel (worktrees)" -> "Generate review panel for this domain";
  "Generate review panel for this domain" -> "Review panel evaluates ALL variants";
  "Review panel evaluates ALL variants" -> "Pick winner based on panel findings";
  "Pick winner based on panel findings" -> "Synthesize: fold best insights from ALL variants into winner";
  "Synthesize: fold best insights from ALL variants into winner" -> "Ship the improved winner";
}
```
Gather just enough context to understand the problem space. Ask 1-2 questions max.
Identify architectural slots — decisions where multiple approaches are genuinely viable:
| Type | Examples | Worth exploring? |
|---|---|---|
| Architectural | Storage engine, framework, auth method, API style, rendering strategy | Yes |
| Trivial | File location, naming conventions, config format | No |
Only architectural decisions become slots. Cap at 2-3 slots to avoid combinatorial explosion.
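The cap is not arbitrary: with k viable options per slot, s slots produce k^s candidate combinations, which quickly dwarfs a 3-5 variant budget. A quick sketch of the growth (k = 3 options per slot is an assumed illustration):

```shell
k=3        # assumed options per slot, for illustration
total=1
for s in 1 2 3 4; do
  total=$((total * k))     # k^s combinations after s slots
  echo "$s slot(s) x $k options = $total combinations"
done
```

Even at three slots the space (27 combinations) already exceeds what a 3-5 variant jam can explore, which is why only genuinely architectural decisions earn a slot.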
This is what makes Jam different from plain brainstorming.
Analyze the domain and generate 3-6 personas with genuinely different worldviews about the problem. These are NOT pre-defined templates — they emerge from the specific problem being solved.
Rules for good panels:
- Generate personas from this problem's domain, never from pre-defined templates.
- Span beyond developer archetypes: include end-user, ops, business, or domain-expert viewpoints where relevant.
- Give each persona something genuinely different to optimize for, so their proposals don't cluster.
Example — building a bookmark CLI tool: the panel might include a terminal power user who lives in the shell, a casual user who just wants saving to work, and an ops-minded reviewer worried about sync and data loss. (Illustrative only; real panels emerge from the actual problem.)
Example — writing a blog post about a developer tool: the panel might include a skeptical senior engineer, a newcomer encountering the tool for the first time, and an editor focused on clarity and reach. (Again illustrative.)
Present the panel to the user for approval. They can add, remove, or adjust personas.
Then dispatch ALL panel agents in a single message using background agents:
```
Agent(persona-1, run_in_background: true, prompt="You are [NAME]... propose how to approach [SLOTS]...")
Agent(persona-2, run_in_background: true, prompt="You are [NAME]... propose how to approach [SLOTS]...")
...all in one message block...
```
Each agent receives:
```
You are [NAME], [DESCRIPTION].
[1-2 sentences about your worldview and what you optimize for.]

A user wants to [PROBLEM DESCRIPTION].

The key architectural decisions are:
- [SLOT 1]: [options or open-ended]
- [SLOT 2]: [options or open-ended]

Propose YOUR preferred approach. For each decision:
1. What you'd choose and why (from YOUR perspective)
2. What risks you see with other approaches
3. What you'd want to verify before committing

Be opinionated. Don't hedge. Advocate for what YOU believe is right.
```
After all panel agents return, synthesize their proposals into 3-5 distinct variants:
For each approved variant:
```shell
git worktree add .worktrees/variant-<slug> -b jam/<feature>/variant-<slug>
```

`.worktrees/` is in `.gitignore`.

Dispatch ALL implementation agents in a single message for true parallelism:
```
Agent(variant-1, run_in_background: true, isolation: "worktree", prompt="Implement variant-1...")
Agent(variant-2, run_in_background: true, isolation: "worktree", prompt="Implement variant-2...")
...all in one message block...
```
Each agent receives:
```
docs/plans/<feature>/
  context.md          # Problem description and slots
  panel/
    personas.md       # The perspective panel used
    proposals/        # Raw proposals from each persona
  variants/
    variant-<slug>/
      approach.md     # Variant philosophy and design
  result.md           # Final comparison and winner
.worktrees/
  variant-<slug>/     # Implementation worktrees
```
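That layout can be scaffolded with plain `mkdir`/`touch`; a minimal sketch in a throwaway directory (the `demo` feature name and `variant-a` slug are illustrative, not prescribed by Jam):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

feature=demo   # illustrative feature name
mkdir -p "docs/plans/$feature/panel/proposals" \
         "docs/plans/$feature/variants/variant-a"
touch "docs/plans/$feature/context.md" \
      "docs/plans/$feature/panel/personas.md" \
      "docs/plans/$feature/variants/variant-a/approach.md" \
      "docs/plans/$feature/result.md"

# Show what was created
find "docs/plans" | sort
```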
Do NOT evaluate from a single perspective. Generate a new review panel appropriate for the domain.
Just like Phase 2, the review panel is domain-specific and dynamically generated. Analyze what matters for THIS project and create reviewers accordingly.
Rules:
Example review panel for a CLI tool:
Example review panel for a blog post:
Present the review panel to the user before dispatching — same as Phase 2. User can add, remove, or adjust reviewers. Do NOT skip this step.
Dispatch all reviewers against each variant. If reviewers need browser access, run them sequentially per variant. Code-only reviewers can run in parallel.
Each reviewer reports findings from their own vantage point: strengths worth preserving, weaknesses or risks, and anything that should disqualify a variant.
Compile findings into a cross-variant comparison:
```markdown
## Jam Evaluation: <feature>

### Variant Scorecard

| Criterion | variant-a | variant-b | variant-c |
|-----------|-----------|-----------|-----------|
| [Reviewer 1 focus] | findings | findings | findings |
| [Reviewer 2 focus] | findings | findings | findings |
| Tests passing | Y/N | Y/N | Y/N |

### Per-Variant Strengths (PRESERVE THESE FOR SYNTHESIS)

**variant-a:** [what reviewers loved]
**variant-b:** [what reviewers loved]
**variant-c:** [what reviewers loved]

### Per-Variant Weaknesses

**variant-a:** [what reviewers flagged]
**variant-b:** [what reviewers flagged]
**variant-c:** [what reviewers flagged]

### Winner: variant-X

[Why, based on panel findings]
```
Elimination rules:
- A variant whose tests fail is eliminated (the scorecard's "Tests passing" row).
- Eliminated variants still keep their recorded strengths; those feed the synthesis phase.
This is the phase that makes Jam more than a competition.
The losing variants are not waste — they are learning. The review panels identified what EACH variant did well. Now fold the best insights into the winner.
Go through every "strength" flagged by reviewers for losing variants: decide whether it belongs in the winner, port it when the benefit is clear, and record each decision (incorporated or declined, and why) in the results document.
```markdown
# Jam Results: <feature>

## Perspective Panel
[Who proposed approaches and why]

## Variants Explored

| Variant | Philosophy | Tests | Result |
|---------|-----------|-------|--------|
| variant-a | ... | PASS | WINNER |
| variant-b | ... | PASS | Insights incorporated |
| variant-c | ... | FAIL | Eliminated |

## Review Panel
[Who evaluated and their key findings]

## Winner: variant-a
[Why it won]

## Synthesis: What We Learned From Everyone

| Source | Insight | Incorporated? | How |
|--------|---------|---------------|-----|
| variant-b | Better error messages | Yes | Ported error handling pattern |
| variant-b | GraphQL subscriptions | No | Over-complex for current needs |
| variant-c | Single-binary deploy | Yes | Adopted static linking approach |
| Reviewer X | Missing input validation | Yes | Added to all endpoints |

## The Jam Was All of Us Together
[Brief narrative of how the final result is better than any single variant]
```
```shell
git worktree remove .worktrees/variant-<slug>
git branch -D jam/<feature>/variant-<slug>
```
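End to end, the worktree lifecycle can be exercised in a throwaway repository; the `demo` feature name and `a`/`b` slugs below are illustrative (in a real jam you run the add/remove commands from the project root):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=jam@example.com -c user.name=jam \
    commit -q --allow-empty -m "init"

# Phase 3: one worktree and branch per variant
for slug in a b; do
  git worktree add -q ".worktrees/variant-$slug" -b "jam/demo/variant-$slug"
done

# After shipping the winner: remove worktrees, then delete the jam branches
for wt in .worktrees/variant-*; do
  git worktree remove "$wt"
done
git branch --list 'jam/demo/variant-*' | xargs git branch -D
```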
| Mistake | Why it's wrong | Fix |
|---|---|---|
| One mind imagining multiple perspectives | Perspectives cluster, biases leak through, agent converges to its own preference | Dispatch independent agents who can't see each other |
| Pre-defined persona templates | Generic personas miss domain-specific insights | Generate personas from the problem domain |
| Developer-only perspectives | Misses end-user, ops, business, and domain expert viewpoints | Span beyond developer archetypes |
| Single-perspective evaluation | One evaluator can't escape its own biases | Review panel with independent reviewers |
| Discarding loser insights | Competition produced learning, not just a winner | Active synthesis phase |
| Documenting insights "for later" | "Later" never comes — insights rot in backlogs | Incorporate improvements NOW in the synthesis phase |
| Over-synthesizing | Frankensteining the winner into a mess | Each incorporation must be justified and approved |
| Skipping user approval | User should see panels and synthesis plan before execution | Present and get approval at every gate |
| All-measurable review panels | Panels skew toward checkable criteria (correctness, compliance) and miss subjective qualities (taste, feel, delight) | Ensure the panel covers both measurable and subjective dimensions |
If you catch yourself doing any of these, you're bypassing Jam's value.