Generates tailored explanations of concepts in physics, AI/ML, statistics, math, or papers using parallel Gemini/Codex MAGI exploration, synthesized by Claude.
```
npx claudepluginhub axect/magi-researchers --plugin magi-researchers
```

This skill uses the workspace's default tool permissions.
Explains concepts via Socratic dialogue with reflective questions, step-by-step reasoning, and one simple analogy. Activates on requests to explain, teach, or understand ideas.
Teaches concepts adaptively: assesses learner level, scaffolds from known to unknown using Zone of Proximal Development, employs Socratic questioning, adapts to feedback. For 'how does X work?' queries revealing gaps or failed prior explanations.
Creates self-explanation prompts that deepen understanding of worked examples, texts, or diagrams. Use when students read material passively without engaging underlying principles.
Generates high-quality explanations of concepts using Gemini and Codex in parallel (Phase 1: MAGI strategy exploration), then synthesizes a single-voice explanation with Claude (Phase 2: convergent generation).
/research-explain "concept" [--domain physics|ai_ml|statistics|mathematics|paper] [--audience general-public|high-school|undergraduate|phd-student|researcher|expert|"free text"] [--weights '{"clarity":0.2,"accuracy":0.2}'] [--depth low|medium|high|max] [--personas N] [--claude-only] [--substitute "Gemini -> Opus"]
`$ARGUMENTS` — The concept to explain and optional flags:

- `--domain` — Knowledge domain (`physics`, `ai_ml`, `statistics`, `mathematics`, `paper`). Auto-inferred if omitted.
- `--audience` — Target audience (default: `phd-student`):
  - `general-public` — No assumed technical background
  - `high-school` — Basic math/science literacy
  - `undergraduate` — Introductory college-level knowledge in the domain
  - `phd-student` — Graduate-level domain knowledge (default)
  - `researcher` — Active researcher familiar with the field
  - `expert` — Deep specialist in the exact sub-field
  - `"free text"` — Any custom audience description (e.g., "medical doctors learning ML")
- `--weights` — JSON object of scoring weights for explanation quality ranking. Keys: `clarity`, `accuracy`, `depth`, `accessibility`, `completeness`, `engagement`. Values must sum to 1.0. If omitted, Claude analyzes the prompt and audience to recommend adaptive weights for user confirmation (see Step 0a).
- `--depth` — Controls explanation pipeline depth (default: `medium`):
  - `low` — Skip Phase 1 entirely; Claude generates the explanation directly
  - `medium` — Full MAGI (parallel brainstorm + cross-review) → explanation
  - `high` — MAGI + adversarial debate → explanation with misconceptions section
  - `max` — Hierarchical MAGI-in-MAGI: N persona subagents → meta-review + debate → multi-perspective deep dive
- `--personas N|auto` — Number of explanation-specialist subagents for `--depth max` (default: `auto`, range: 2-4). When `auto`, Claude analyzes the concept to determine the optimal persona count. Ignored for other depth levels.
- `--claude-only` — Replace all Gemini/Codex MCP calls with Claude Agent subagents. Use when external model endpoints are unavailable or for a Claude-only workflow. Two subagents with distinct cognitive styles (Creative-Divergent and Analytical-Convergent) ensure perspective diversity.
- `--substitute "Agent -> Opus"` — Replace a specific MAGI agent with Claude (Opus). Accepted: `"Gemini -> Opus"`, `"Codex -> Opus"`. Can be specified multiple times. If both agents are substituted, this is equivalent to `--claude-only`.

Shared rules: Read `${CLAUDE_PLUGIN_ROOT}/shared/rules.md` before starting. §MCP, §Claude-Only, §LaTeX, and §Substitute apply to this skill. Inline fallback (if shared rules are unavailable): Gemini models: `gemini-3.1-pro-preview` → `gemini-2.5-pro` → Claude. Codex: `gpt-5.4`. All math in LaTeX only (no Unicode: σ₁ → `$\sigma_1$`). Use `@filepath` for MCP file refs; subagents use the `Read` tool.
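For example, a hypothetical invocation (the concept string is illustrative):

```
/research-explain "attention mechanisms in transformers" --audience undergraduate --depth high
```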
See §MCP in shared rules. Additionally: `mcp__codex-cli__ask-codex` for analysis/review.
See §Claude-Only and §Substitute in shared rules. This skill uses asymmetric Teacher/Critic roles (see Step 0b).
See §LaTeX in shared rules.
| ID | Name | Purpose |
|---|---|---|
| T1 | Audience Weight Defaults | Per-audience baseline weights table |
| T2 | Step 0a Procedure | Full adaptive weight recommendation logic (signal table, normalization, save format) |
| T3 | explanation.md Template | Section structure, word count targets, quality checklist |
| T4 | Output File Trees | Expected artifact layout per depth level |
Read references/templates.md for full definitions.
When this skill is invoked, follow these steps exactly:
**Step 0: Setup**

Parse `$ARGUMENTS`. If a `--domain` flag is provided, note the domain (`physics`, `ai_ml`, `statistics`, `mathematics`, `paper`). Otherwise, infer the domain from the concept.

If `{output_dir}` was provided by the calling context and `.workspace.json` already exists at the output root, skip directory creation and write to `{output_dir}/explain/` instead of creating a new versioned directory. Otherwise, create `outputs/{sanitized_concept}_{YYYYMMDD}_v{N}/explain/`: scan existing `outputs/{sanitized_concept}_{YYYYMMDD}_v*/` directories and set N = max existing + 1 (start at v1).

Write `.workspace.json` at the output directory root:
```json
{
  "output_dir": "{absolute_path}",
  "skill": "research-explain",
  "concept": "{original_concept}",
  "domain": "{domain}",
  "audience": "{audience}",
  "depth": "{depth}",
  "created_at": "{ISO-8601}"
}
```
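A minimal sketch of the versioned-directory selection described above. The sanitization rule (lowercasing, replacing non-alphanumerics with underscores) is an assumption; this spec does not define it:

```python
import re
from datetime import date
from pathlib import Path

def make_output_dir(concept: str, root: Path = Path("outputs")) -> Path:
    # Hypothetical sanitizer: the spec does not define the exact rule.
    slug = re.sub(r"[^a-z0-9]+", "_", concept.lower()).strip("_")
    stamp = date.today().strftime("%Y%m%d")
    pattern = re.compile(rf"{re.escape(slug)}_{stamp}_v(\d+)$")
    # N = max existing version + 1, starting at v1.
    existing = [int(m.group(1)) for p in root.glob(f"{slug}_{stamp}_v*")
                if (m := pattern.match(p.name))]
    n = max(existing, default=0) + 1
    out = root / f"{slug}_{stamp}_v{n}" / "explain"
    out.mkdir(parents=True, exist_ok=True)
    return out
```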
If a domain template exists at `${CLAUDE_PLUGIN_ROOT}/templates/domains/{domain}.md`, read it for context.

`--audience`: Accept `general-public`, `high-school`, `undergraduate`, `phd-student`, `researcher`, `expert`, or any quoted free-text string (default: `phd-student`). The audience propagates into every prompt, persona casting, weight defaults, and the final explanation.

`--weights`:
If `--weights` is explicitly provided: Validate that keys are a subset of {`clarity`, `accuracy`, `depth`, `accessibility`, `completeness`, `engagement`} and values sum to 1.0. Save immediately to `explain/weights.json` with metadata:
```json
{
  "weights": { "<user-provided weights>" },
  "_meta": {
    "method": "explicit",
    "domain": "<detected domain>",
    "audience": "<detected audience>"
  }
}
```
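A minimal sketch of this validation, assuming an absolute tolerance of 1e-6 on the sum (the spec does not state one):

```python
import json
from pathlib import Path

ALLOWED = {"clarity", "accuracy", "depth", "accessibility", "completeness", "engagement"}

def save_explicit_weights(weights: dict[str, float], out_dir: Path,
                          domain: str, audience: str) -> None:
    unknown = set(weights) - ALLOWED
    if unknown:
        raise ValueError(f"unknown weight keys: {sorted(unknown)}")
    total = sum(weights.values())
    if abs(total - 1.0) > 1e-6:  # tolerance is an assumption, not from the spec
        raise ValueError(f"weights must sum to 1.0, got {total}")
    payload = {
        "weights": weights,
        "_meta": {"method": "explicit", "domain": domain, "audience": audience},
    }
    (out_dir / "weights.json").write_text(json.dumps(payload, indent=2))
```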
Then skip Step 0a entirely.

If `--weights` is not provided: Load the audience baseline from T1 in `references/templates.md` as a reference only (do not save yet — Step 0a handles saving after user confirmation).

`--depth`: Accept `low`, `medium`, `high`, or `max` (default: `medium`):
- `low` — Skip Phase 1 entirely; Claude generates the explanation directly (jump to Step 2)
- `medium` — Full MAGI + one-shot cross-review → strategy synthesis → explanation
- `high` — Full MAGI + cross-review + adversarial debate → strategy synthesis → explanation with misconceptions
- `max` — Hierarchical MAGI-in-MAGI pipeline (Steps 1-max-a through 1-max-d replace Steps 1a/1b/1b+/1c)

`--personas N|auto`: Accept an integer 2-4 or the string `auto` (default: `auto`). Only used when `--depth max`; ignored otherwise.
- `auto` — Defer the persona count to Step 0b, where Claude analyzes the concept's complexity, the number of distinct pedagogical angles, and audience needs to select the optimal N (2-4).

`--claude-only`: Boolean flag (default: false). When present, all Gemini/Codex MCP calls are replaced with Claude Agent subagents. See the Claude-Only Mode section above for the replacement table and cognitive style definitions.

**Step 0a: Adaptive Weight Recommendation**

If `--weights` was explicitly provided: Skip this step entirely (weights were already saved in Step 0). Full procedure: Read T2 in `references/templates.md` for the signal detection table, normalization rules, user confirmation flow, and `weights.json` save format.
When --weights is omitted: detect concept/audience signals → adjust baseline weights → present comparison table → ask user for confirmation (Accept recommended / Use audience defaults / Custom JSON) → save to explain/weights.json.
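The renormalization at the heart of that adjustment might look like this sketch. Additive deltas are an assumption here; T2 in `references/templates.md` defines the actual rules:

```python
def recommend_weights(baseline: dict[str, float],
                      deltas: dict[str, float]) -> dict[str, float]:
    """Apply signal-driven deltas to the audience baseline, then renormalize
    so the result sums to 1.0."""
    raw = {k: max(baseline[k] + deltas.get(k, 0.0), 0.0) for k in baseline}
    total = sum(raw.values()) or 1.0  # guard against an all-zero result
    return {k: v / total for k, v in raw.items()}
```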
**Step 0b: Persona Casting**

After setup, Claude analyzes the concept, domain, and audience to assign specialist personas:
For --depth low: Skip this step entirely.
For --depth medium|high (2 personas — asymmetric Teacher/Critic roles):
If `--claude-only`: Relabel the personas in `explain/personas.md`. Save the persona definitions to `explain/personas.md`.

For `--depth max` (N personas):
Read `references/depth_max.md` — the "Step 0b (depth max)" section — for the full N-persona casting procedure (N selection heuristic, persona archetypes for N=2/3/4, definition requirements, claude-only relabeling).
**Step 1a: Parallel Brainstorm (`--depth medium|high|max`)**

If `--depth low`: Skip this step entirely and proceed to Step 2. If `--depth max`: Skip this step — use Steps 1-max-a through 1-max-d instead (read `references/depth_max.md`).
Execute these two calls simultaneously (in the same message). Prepend the assigned persona from explain/personas.md to each prompt.
Agent A — Teacher Draft (Gemini):
```
mcp__gemini-cli__ask-gemini(
  prompt: "[Persona: {teacher_persona_name} — {teacher_persona_expertise}]
  Guiding question: {teacher_guiding_question}
  Target audience: {audience}
  Domain context: @{domain_template_path}
  Concept to explain: {concept}

  You are a master explainer. Generate a comprehensive draft explanation of this concept for the specified audience. Your explanation should:
  1. **Core Explanation**: Build understanding from first principles appropriate to the audience level. Use analogies, examples, and progressive complexity.
  2. **Key Intuitions**: What are the 2-3 most important intuitions the audience must grasp?
  3. **Mathematical Formalism** (if applicable): Include relevant equations with clear notation explanations. Follow LaTeX formatting rules: inline math with $...$ and display equations with $$ on separate lines.
  4. **Concrete Examples**: Provide 2-3 worked examples or real-world applications.
  5. **Connections**: How does this concept relate to concepts the audience likely already knows?

  Write for maximum clarity and understanding. Use the persona's communication style.",
  model: "gemini-3.1-pro-preview"  // fallback: "gemini-2.5-pro" → Claude
)
```
Note: Omit the `Domain context: @{domain_template_path}` line from the prompt when no domain template exists.
Agent B — Critic Analysis (Codex):
```
mcp__codex-cli__ask-codex(
  prompt: "[Persona: {critic_persona_name} — {critic_persona_expertise}]
  Guiding question: {critic_guiding_question}
  Target audience: {audience}
  Domain context: @{domain_template_path}
  Concept to explain: {concept}

  You are a pedagogical analyst and explanation critic. Generate a comprehensive critical analysis covering:
  1. **Prerequisites Map**: What concepts must the audience understand before this one? List in dependency order. For each, note whether the audience level likely already has it.
  2. **Common Misconceptions** (at least 5): For each misconception:
     - State the misconception clearly
     - Explain why it is plausible (what leads people to believe it)
     - Explain precisely why it is wrong
     - Provide a corrective reframing
  3. **Confusion Neighbors**: Concepts that are commonly confused with this one. For each pair:
     - This concept IS NOT [confused concept]
     - Key distinguishing feature
  4. **Precision-Accessibility Tradeoffs**: Where must an explanation sacrifice precision for accessibility at this audience level? What simplifications are acceptable vs. dangerous?
  5. **Calibration Questions** (5-10): Questions that test genuine understanding (not just recall). Include expected correct answers and common wrong answers with explanations of what each wrong answer reveals about the student's misunderstanding.",
  model: "gpt-5.4"
)
```
Note: Omit the `Domain context: @{domain_template_path}` line from the prompt when no domain template exists.
Note: If Codex MCP is unavailable, fall back to `mcp__gemini-cli__ask-gemini` with the Gemini fallback chain and critic-focused framing.
If `--claude-only`: Per §SubagentExec, spawn simultaneously:
- A (CD, Teacher): Draft explanation of {concept} for {audience} using the persona. Read the domain template. Deliverables: 1. Core Explanation (first principles, analogies, progressive complexity), 2. Key Intuitions (2-3), 3. Mathematical Formalism (LaTeX), 4. Concrete Examples (2-3 worked), 5. Connections to the audience's existing knowledge. Use the persona's communication style. Save to `explain/gemini_ideas.md`.
- B (AC, Critic): Critical analysis for {concept}/{audience} using the persona. Deliverables: 1. Prerequisites Map (dependency order; note which the audience likely has), 2. Common Misconceptions (≥5; each: statement → why plausible → why wrong → corrective reframing), 3. Confusion Neighbors (per pair: "IS NOT" + key distinguishing feature), 4. Precision-Accessibility Tradeoffs, 5. Calibration Questions (5-10; each: question + correct answer + common wrong answers + what each wrong answer reveals). Save to `explain/codex_ideas.md`.
Save results to:
- `explain/gemini_ideas.md` — Teacher's (or Subagent A's) draft explanation with a header noting source, persona, and timestamp
- `explain/codex_ideas.md` — Critic's (or Subagent B's) analysis with a header noting source, persona, and timestamp

**Step 1b: Cross-Review (`--depth medium` or `--depth high` only)**

If `--depth low`: Skip this step entirely and proceed to Step 2.
Pre-check: Verify gemini_ideas.md and codex_ideas.md both exist and are non-empty before proceeding. If either is missing, re-run only the failed agent from Step 1a.
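A minimal pre-check sketch (the `explain_dir` handle is assumed to come from the output directory recorded in `.workspace.json`):

```python
from pathlib import Path

def missing_phase1a_outputs(explain_dir: Path) -> list[str]:
    """Return Step 1a artifacts that are missing or empty, so only the
    corresponding agent needs to be re-run."""
    required = ["gemini_ideas.md", "codex_ideas.md"]
    return [name for name in required
            if not (explain_dir / name).is_file()
            or (explain_dir / name).stat().st_size == 0]
```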
After both Phase 1a results are saved, execute these two calls simultaneously. Prepend the assigned persona to each review prompt:
Teacher reviews Critic's analysis (Round 1):
```
mcp__gemini-cli__ask-gemini(
  prompt: "[Persona: {teacher_persona_name} — {teacher_persona_expertise}]
  Target audience: {audience}

  Review the following critical analysis of an explanation for: {concept}
  @{output_dir}/explain/codex_ideas.md

  For each item in the Critic's analysis:
  1. **Misconceptions**: Are these real misconceptions at this audience level? Are any missing? Would your explanation actually trigger any of these?
  2. **Prerequisites**: Agree/disagree with the prerequisite ordering? Are any prerequisites overestimated or underestimated for this audience?
  3. **Confusion Neighbors**: Are these the right confusion neighbors? Suggest additions or removals.
  4. **Precision-Accessibility Tradeoffs**: Are the identified tradeoffs fair? Where would you push back?
  5. **Calibration Questions**: Would your explanation enable the audience to answer these correctly? Flag any questions that are unfair for the audience level.

  Also note: What aspects of the Critic's analysis should change your draft explanation?",
  model: "gemini-3.1-pro-preview"  // fallback: "gemini-2.5-pro" → Claude
)
```
Critic reviews Teacher's draft (Round 1):
```
mcp__codex-cli__ask-codex(
  prompt: "[Persona: {critic_persona_name} — {critic_persona_expertise}]
  Target audience: {audience}

  Review the following draft explanation of: {concept}
  @{output_dir}/explain/gemini_ideas.md

  Evaluate the Teacher's explanation on these dimensions:
  1. **Accuracy**: Are there any incorrect statements, oversimplifications that cross into inaccuracy, or misleading analogies?
  2. **Completeness**: Does it cover all essential aspects? What critical gaps exist?
  3. **Audience Calibration**: Is the language, depth, and assumed knowledge appropriate for {audience}?
  4. **Misconception Risk**: Does any part of the explanation inadvertently reinforce common misconceptions?
  5. **Analogy Fidelity**: Do the analogies accurately map to the concept? Where do they break down, and are those breakdown points acknowledged?
  6. **Progressive Structure**: Does the explanation build understanding in the right order? Are there logical jumps?

  For each issue found, provide:
  - The specific problematic passage
  - Why it's problematic
  - A suggested fix",
  model: "gpt-5.4"
)
```
Note: If Codex MCP is unavailable, fall back to `mcp__gemini-cli__ask-gemini` with the Gemini fallback chain.
If `--claude-only`: Per §SubagentExec, spawn simultaneously:
- A (CD, Teacher reviewing Critic): Read `codex_ideas.md`. Review all 5 sections: misconceptions (real at this level? missing?), prerequisites (ordering? over/underestimated?), confusion neighbors (additions/removals?), precision-accessibility tradeoffs (fair?), calibration questions (answerable from your explanation?). Also: what should change in your draft? Save to `explain/gemini_review_of_codex.md`.
- B (AC, Critic reviewing Teacher): Read `gemini_ideas.md`. Evaluate 6 dimensions: accuracy, completeness, audience calibration, misconception risk, analogy fidelity (where do the analogies break?), progressive structure (logical jumps?). Per issue: specific passage + why problematic + suggested fix. Save to `explain/codex_review_of_gemini.md`.
Save results to:
- `explain/gemini_review_of_codex.md`
- `explain/codex_review_of_gemini.md`

**Step 1b+: Adversarial Debate (`--depth high` only)**

If `--depth low` or `--depth medium`: Skip this step entirely.
After Round 1 cross-review, Claude identifies the top 3 points of disagreement between the Teacher and Critic, typically places where a pedagogical choice clashes with an accuracy or misconception-risk objection. Save the disagreement summary to `explain/disagreements.md` before the debate calls.
Execute Round 2 simultaneously:
Teacher Round 2 — Defend/Concede/Revise:
```
mcp__gemini-cli__ask-gemini(
  prompt: "[Persona: {teacher_persona_name}]
  Target audience: {audience}

  You (Teacher) and the Critic reviewed each other's work on explaining: {concept}
  Here are the top 3 points of disagreement:
  @{output_dir}/explain/disagreements.md

  For each disagreement:
  1. **Defend** your pedagogical choice if you believe it best serves understanding for this audience, providing evidence from learning science or teaching experience
  2. **Concede** if the Critic's objection reveals a genuine accuracy or misconception risk, explaining why
  3. **Revise** your approach to a new position that balances clarity and accuracy if appropriate

  Your original review:
  @{output_dir}/explain/gemini_review_of_codex.md
  Critic's review of your draft:
  @{output_dir}/explain/codex_review_of_gemini.md",
  model: "gemini-3.1-pro-preview"  // fallback chain applies
)
```
Critic Round 2 — Defend/Concede/Revise:
```
mcp__codex-cli__ask-codex(
  prompt: "[Persona: {critic_persona_name}]
  Target audience: {audience}

  You (Critic) and the Teacher reviewed each other's work on explaining: {concept}
  Here are the top 3 points of disagreement:
  @{output_dir}/explain/disagreements.md

  For each disagreement:
  1. **Defend** your objection if you believe the accuracy/misconception risk is genuine, providing specific examples of how learners are misled
  2. **Concede** if the Teacher's pedagogical choice genuinely aids understanding without significant accuracy cost, explaining why
  3. **Revise** your assessment to a new position that respects both rigor and accessibility if appropriate

  Your original review:
  @{output_dir}/explain/codex_review_of_gemini.md
  Teacher's review of your analysis:
  @{output_dir}/explain/gemini_review_of_codex.md",
  model: "gpt-5.4"
)
```
If `--claude-only`: Per §SubagentExec, spawn simultaneously:
- A (CD, Teacher Round 2): Read `disagreements.md` + `gemini_review_of_codex.md` + `codex_review_of_gemini.md`. Per disagreement: Defend (with learning-science evidence) / Concede (if a genuine accuracy or misconception risk) / Revise (balance clarity + accuracy). Save to `explain/debate_round2_gemini.md`.
- B (AC, Critic Round 2): Read the same 3 files. Per disagreement: Defend (with specific examples of how learners are misled) / Concede (if the pedagogical choice genuinely aids understanding without accuracy cost) / Revise (balance rigor + accessibility). Save to `explain/debate_round2_codex.md`.
Save results to:
- `explain/debate_round2_gemini.md`
- `explain/debate_round2_codex.md`

**Steps 1-max-a through 1-max-d (`--depth max` only)**

Skip unless `--depth max`. Read `references/depth_max.md` for all four steps. These steps replace Steps 1a/1b/1b+/1c entirely when `--depth max` is active.
Summary of what runs:
- N persona subagents run in parallel, producing a consolidated `all_conclusions.md` file
- Meta-review + debate across the persona outputs
- `explain/synthesis.md` with a 9-section structure including a traceability table

**Step 1c: Strategy Synthesis**

If `--depth max`: Skip — synthesis is produced by Step 1-max-d (read `references/depth_max.md`). If `--depth low`: Skip — proceed directly to Step 2.
gemini_ideas.md, codex_ideas.md--depth medium or high: gemini_review_of_codex.md, codex_review_of_gemini.md--depth high: debate_round2_gemini.md, debate_round2_codex.mdweights.json, personas.mdweights.json and extract the "weights" object. Use the weights to compute a weighted score for each explanation strategy:
The synthesis must record the applied weights (taken from `weights.json`), including `_meta.method`. For `--depth high` only: for each of the 3 debated disagreements, document the final resolution: who conceded, what was revised, and the synthesized position. Save the complete synthesis to `explain/synthesis.md`.

**Step 2: Convergent Generation**

Claude reads all Phase 1 artifacts and generates the final explanation. This step is always executed by Claude directly (never delegated to MCP tools) to ensure a single authoritative voice.
For `--depth low` (no Phase 1 artifacts):
- Read `explain/weights.json` (if available).
- Write `explain/explanation.md` directly based on the concept, audience, and domain.

For `--depth medium|high|max` (Phase 1 artifacts available):
- Read `explain/synthesis.md` and all referenced artifacts.
- Read `explain/weights.json` to understand the quality priorities.
- Generate `explain/explanation.md`: Read T3 in `references/templates.md` for the full section structure, word count targets per depth, and quality checklist.
Save to explain/explanation.md.
Present the explanation to the user, then wait for user input before proceeding.
**Output Files**

See T4 in `references/templates.md` for full artifact trees per depth level (`low`, `medium`, `high`, `max`).
Quick summary:
- `--depth low`: `weights.json` + `explanation.md`
- `--depth medium`: adds `personas.md`, `gemini_ideas.md`, `codex_ideas.md`, cross-reviews, `synthesis.md`
- `--depth high`: adds `disagreements.md`, `debate_round2_*.md`
- `--depth max`: adds `persona_N/` subdirs, `all_conclusions.md`, meta-reviews, meta-debate files
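For instance, a `--depth medium` run should leave roughly this layout (assembled from the filenames above; T4 in `references/templates.md` is authoritative):

```
outputs/{sanitized_concept}_{YYYYMMDD}_v1/
├── .workspace.json
└── explain/
    ├── weights.json
    ├── personas.md
    ├── gemini_ideas.md
    ├── codex_ideas.md
    ├── gemini_review_of_codex.md
    ├── codex_review_of_gemini.md
    ├── synthesis.md
    └── explanation.md
```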