Solve coding problems using a multi-agent retrieval, planning, coding, and debugging pipeline. Use when solving algorithmic problems, implementing features from specifications, or when code needs iterative refinement.
From mapcoder in NewJerseyStyle/Claude-plugins-marketplace. Install: npx claudepluginhub NewJerseyStyle/Claude-plugins-marketplace --plugin mapcoder. This skill uses the workspace's default tool permissions.
Templates: templates/problem-template.md, templates/solution-template.md
You are orchestrating the MapCoder pipeline, a multi-agent system that replicates the human programming cycle through four specialized agents: retrieval, planning, coding, and debugging.
Parse $ARGUMENTS to extract:

- --lang <language>: Target programming language (default: Python)
- --sandbox: Use Docker sandbox for code execution (safer)

Example inputs:

- /mapcoder implement binary search → Python, direct execution
- /mapcoder --lang javascript implement binary search → JavaScript
- /mapcoder --sandbox --lang rust implement linked list → Rust, sandboxed

First, analyze the input:
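The flag handling described above can be sketched in Python. This is an illustrative sketch only: parse_args and its return shape are assumptions for clarity, not part of the skill's files.

```python
import shlex


def parse_args(arguments: str):
    """Split a /mapcoder invocation into (language, sandbox, problem text)."""
    tokens = shlex.split(arguments)
    lang = "python"   # default target language per the spec above
    sandbox = False   # direct execution unless --sandbox is present
    problem = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "--lang" and i + 1 < len(tokens):
            lang = tokens[i + 1].lower()
            i += 2
        elif tok == "--sandbox":
            sandbox = True
            i += 1
        else:
            problem.append(tok)  # everything else is the problem description
            i += 1
    return lang, sandbox, " ".join(problem)
```

For example, parse_args("--sandbox --lang rust implement linked list") yields ("rust", True, "implement linked list").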
Use the Task tool to spawn the retrieval agent:
Spawn a retrieval agent to generate 3-5 similar problems for:
[problem description]
The agent should return:
- Similar problem descriptions
- Solution patterns/approaches used
- Key algorithmic concepts
Use subagent_type: "general-purpose" with the retrieval-agent.md system prompt.
Use the Task tool to spawn the planning agent with the retrieved examples:
Spawn a planning agent to create step-by-step plans for:
[problem description]
Using these similar problems as reference:
[retrieved examples]
Generate 2-3 alternative algorithmic plans.
Use the Task tool to spawn the coding agent:
Spawn a coding agent to implement the solution in [language]:
[problem description]
Following this plan:
[selected plan]
Test against these sample cases:
[test cases]
If the --sandbox flag is set, instruct the agent to use scripts/sandbox-runner.sh instead of direct execution.
After coding completes:
- If tests pass: Return the solution with explanation.
- If tests fail: Enter the debugging loop (max 3 iterations):
Spawn a debugging agent to fix the failing code:
Original problem: [problem]
Current code: [code]
Error output: [errors]
Original plan: [plan]
Identify the bug and generate corrected code.
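The pass/fail control flow above can be sketched as follows. The spawn_coding_agent, spawn_debug_agent, and run_tests callables are hypothetical stand-ins for the Task-tool invocations and test execution described in this document, not real APIs.

```python
MAX_DEBUG_ITERATIONS = 3  # cap on debugging attempts, per the pipeline spec


def run_pipeline(problem, plan, run_tests, spawn_coding_agent, spawn_debug_agent):
    """Run coding once, then up to three debug iterations on failure.

    Returns (code, passed, debug_attempts)."""
    code = spawn_coding_agent(problem, plan)
    passed, errors = run_tests(code)
    attempts = 0
    while not passed and attempts < MAX_DEBUG_ITERATIONS:
        attempts += 1
        # Each debug iteration sees the problem, current code, errors, and plan,
        # mirroring the debugging prompt above.
        code = spawn_debug_agent(problem, code, errors, plan)
        passed, errors = run_tests(code)
    return code, passed, attempts
```

If the loop exhausts all three iterations without a pass, the result is returned with passed=False, which maps to the "Failed after N attempts" status in the solution template.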
When successful, output using the solution template:
## Solution
**Language**: [language]
**Status**: [Passed/Failed after N attempts]
### Code
[final code]
### Explanation
[step-by-step explanation of the approach]
### Test Results
[test output]
The pipeline supports adaptive routing: