From recursive-reasoning
Run a multi-model battle with rotating writer and judge models via OpenAI-compatible endpoints. Use when users ask to battle or compare models, run a multi-LLM critique, or iteratively improve an answer across models.
```shell
npx claudepluginhub lollipopkit/cc-plugins --plugin recursive-reasoning
```

This skill is limited to using the following tools:
Use the bundled runner:
```shell
python3 "${CLAUDE_PLUGIN_ROOT}/skills/multi-model/scripts/multi_model.py"
```
The script searches upward from the current working directory for .env.
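The upward search can be pictured with this minimal sketch (the function name `find_dotenv` is illustrative, not the script's actual API):

```python
from pathlib import Path
from typing import Optional

def find_dotenv(start: Optional[Path] = None) -> Optional[Path]:
    """Return the first .env found walking from `start` up to the filesystem root."""
    here = (start or Path.cwd()).resolve()
    # Check the starting directory, then each ancestor in order.
    for directory in [here, *here.parents]:
        candidate = directory / ".env"
        if candidate.is_file():
            return candidate
    return None  # no .env anywhere on the path to the root
```

This means you can keep one `.env` at the project root and run the script from any subdirectory.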
Required variables:
- `ARENA_MODELS`
- `ARENA_OPENAI_BASE_URL` (single endpoint) or `ARENA_PROVIDER_<NAME>_BASE_URL` (multi-provider)
- `ARENA_OPENAI_API_KEY` or `ARENA_PROVIDER_<NAME>_API_KEY`

Single-endpoint example:
```shell
ARENA_MODELS=qwen3:8b,deepseek-r1:14b
python3 "${CLAUDE_PLUGIN_ROOT}/skills/multi-model/scripts/multi_model.py" \
  --prompt "<task>" --iters 5 --max-judges 3 --json
```
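A multi-provider `.env` might look like the sketch below. The provider names (`OLLAMA`, `OPENROUTER`) and the endpoint URLs are illustrative assumptions; check the script for how models are mapped to providers.

```shell
# Hypothetical multi-provider setup; names and URLs are examples only.
ARENA_MODELS=qwen3:8b,deepseek-r1:14b
ARENA_PROVIDER_OLLAMA_BASE_URL=http://localhost:11434/v1
ARENA_PROVIDER_OLLAMA_API_KEY=unused
ARENA_PROVIDER_OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
ARENA_PROVIDER_OPENROUTER_API_KEY=sk-example
```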
Useful flags: `--out`, `--temperature`, `--max-tokens`, `--timeout`.
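A fuller invocation combining these flags might look like this (the prompt and flag values are illustrative):

```shell
python3 "${CLAUDE_PLUGIN_ROOT}/skills/multi-model/scripts/multi_model.py" \
  --prompt "Explain CRDTs to a backend engineer" \
  --iters 3 --max-judges 2 \
  --temperature 0.7 --max-tokens 2048 --timeout 120 \
  --json --out arena_result.json
```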
Notes: `--json` emits machine-readable output; credentials are read from `.env`; models are referred to by index (`Model 0`, `Model 1`, ...).