Internal guidance for composing effective prompts for OpenCode models
From `apoapps/swarm-code-plugin` (install: `npx claudepluginhub apoapps/swarm-code-plugin --plugin opencode`). This skill uses the workspace's default tool permissions.
Use this skill when Claude needs to compose a prompt to delegate to OpenCode via the configured model.
The configured model varies by user setup. Prompts should be structured, direct, and focused to get the best results regardless of model.
Use clear sections with markdown headers:
## Task
[What to do — one clear sentence]
## Context
[Relevant code, file paths, error messages]
## Constraints
[Output format, length limits, what NOT to do]
## Expected Output
[Exact shape of the answer you want]
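A minimal sketch of assembling a prompt in this shape as a helper function; the function name, parameter names, and example values are illustrative, not part of the plugin:

```javascript
// Illustrative helper: assemble a delegation prompt from the four
// recommended sections. Names here are assumptions, not plugin API.
function buildPrompt({ task, context, constraints, expectedOutput }) {
  return [
    "## Task",
    task,
    "## Context",
    context,
    "## Constraints",
    constraints,
    "## Expected Output",
    expectedOutput,
  ].join("\n");
}

// Hypothetical usage with made-up content:
const prompt = buildPrompt({
  task: "Explain why this regex backtracks catastrophically.",
  context: "const re = /(a+)+$/;",
  constraints: "Under 100 words. No alternative regex libraries.",
  expectedOutput: "One paragraph plus a fixed regex.",
});
```

Keeping the four headers in a fixed order makes the resulting prompts predictable regardless of which model receives them.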
**Example — code review prompt:**

## Task
Review this code diff for bugs, security issues, and code quality problems.
## Code
{diff}
## Output Format
List findings as:
- **[SEVERITY]** file:line — description
Where SEVERITY is CRITICAL, HIGH, MEDIUM, or LOW.
Max 10 findings, ordered by severity.
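A finding line in this format can be checked mechanically; a hedged sketch (the regex and function name are mine, not part of the skill):

```javascript
// Parse one review finding of the form:
//   - **[SEVERITY]** file:line — description
// The regex is illustrative, derived only from the format shown above.
const FINDING_RE = /^- \*\*\[(CRITICAL|HIGH|MEDIUM|LOW)\]\*\* (\S+):(\d+) — (.+)$/;

function parseFinding(line) {
  const m = FINDING_RE.exec(line);
  if (!m) return null; // line does not match the documented shape
  return { severity: m[1], file: m[2], line: Number(m[3]), description: m[4] };
}

// Hypothetical finding line for illustration:
const f = parseFinding("- **[HIGH]** src/auth.js:42 — token compared with ==");
```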
**Example — planning prompt:**

## Task
Create an implementation plan for: {description}
## Current State
{relevant file structure or code}
## Output Format
1. Files to create/modify (with paths)
2. Key decisions and tradeoffs
3. Step-by-step implementation order
4. Risks and mitigations
Keep under 800 words.
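The word cap can be enforced before accepting a plan; a rough sketch (whitespace-split counting is an approximation, and the function name is illustrative):

```javascript
// Rough word count: split on whitespace runs. Good enough to check
// the "under 800 words" constraint on a returned plan.
function withinWordLimit(text, limit = 800) {
  const words = text.trim().split(/\s+/).filter(Boolean);
  return words.length < limit;
}
```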
**Example — debugging prompt:**

## Task
{question}
## Context
{error message, stack trace, relevant code}
## Output Format
1. Root cause (one sentence)
2. Fix (code snippet)
3. Why this works (one sentence)
Claude has access to 50+ models through OpenCode. Before delegating, Claude should pick the right model for the task. Use `--model <id>` to override the default.
To check available models cheaply (uses 5-min cache, minimal tokens):
`node "${CLAUDE_PLUGIN_ROOT}/scripts/opencode-runner.mjs" models`
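The runner is described as caching the model list for five minutes; a sketch of the freshness check that behavior implies (not the actual `opencode-runner.mjs` internals):

```javascript
// Illustrative 5-minute TTL check, mirroring the described model-list
// cache. This is an assumption, not the real runner implementation.
const FIVE_MINUTES_MS = 5 * 60 * 1000;

function cacheIsFresh(cachedAtMs, nowMs = Date.now()) {
  return nowMs - cachedAtMs < FIVE_MINUTES_MS;
}
```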
| Task | Best model tier | Why | Example models |
|---|---|---|---|
| Quick question, explanation | Fast/cheap | Speed over depth | minimax/MiniMax-M2.7-highspeed, free variants |
| Code review (small diff) | Fast/cheap | Structured output, speed | minimax/MiniMax-M2.7, minimax/MiniMax-M2.5 |
| Code review (large diff) | Medium | Needs more context window | openai/gpt-5.2-codex, github-copilot/gpt-5.1-codex |
| Architecture planning | Medium-heavy | Needs reasoning depth | openai/gpt-5-codex, github-copilot/gpt-5.4 |
| Deep debugging | Heavy | Complex multi-step reasoning | openai/gpt-5.1-codex-max, github-copilot/gpt-5.4 |
| Security audit | Heavy | Must not miss vulnerabilities | openai/gpt-5.1-codex, github-copilot/gpt-5.2-codex |
| Boilerplate/scaffolding | Fast/free | Simple pattern matching | opencode/*-free variants |
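The table above can be sketched as a lookup; the task keys and default tier here are illustrative assumptions, and real model ids should come from the `models` command:

```javascript
// Illustrative task → tier mapping based on the table above.
// Keys and the fallback tier are assumptions, not plugin config.
const TIER_FOR_TASK = {
  "quick-question": "fast",
  "code-review-small": "fast",
  "code-review-large": "medium",
  "architecture-planning": "medium-heavy",
  "deep-debugging": "heavy",
  "security-audit": "heavy",
  "boilerplate": "free",
};

function tierForTask(task) {
  return TIER_FOR_TASK[task] ?? "medium"; // unknown tasks get a middle tier
}
```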
For most tasks the default model is fine (no `--model` needed). For heavy tasks, pass `--model openai/gpt-5.1-codex` or a similar heavy model; for simple scaffolding, `opencode/*-free` variants save even more.

Every response from the plugin includes a header:
---
**opencode** | ask | model: `minimax/MiniMax-M2.7` | attempts: 1/3 | OK
---
Claude MUST read this header to know which model produced the output and adjust validation depth accordingly; see the opencode-result-handling skill.
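A sketch of parsing the result header shown above; the field layout is taken from that example, and the regex is my own assumption:

```javascript
// Parse the plugin's result header line, e.g.:
//   **opencode** | ask | model: `minimax/MiniMax-M2.7` | attempts: 1/3 | OK
// The regex is illustrative, based only on the example shown above.
const HEADER_RE =
  /^\*\*opencode\*\* \| (\w+) \| model: `([^`]+)` \| attempts: (\d+)\/(\d+) \| (\w+)$/;

function parseResultHeader(line) {
  const m = HEADER_RE.exec(line);
  if (!m) return null; // not a recognizable header
  return {
    mode: m[1],
    model: m[2],
    attempt: Number(m[3]),
    maxAttempts: Number(m[4]),
    status: m[5],
  };
}

const h = parseResultHeader(
  "**opencode** | ask | model: `minimax/MiniMax-M2.7` | attempts: 1/3 | OK"
);
```

Extracting the model id this way lets validation depth be chosen programmatically rather than by eyeballing the header.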