npx claudepluginhub bengous/claude-code-plugins --plugin agents-bridge
Invoke OpenAI Codex CLI for cross-model collaboration.
!"${CLAUDE_PLUGIN_ROOT}/scripts/codex" --info 2>&1 || echo "(codex CLI not found -- install with: npm install -g @openai/codex)"
Use the configuration above to determine the current default model and available overrides. Do NOT guess model names.
# Run with prompt
"${CLAUDE_PLUGIN_ROOT}/scripts/codex" exec "$ARGUMENTS"
# Or with overrides (use values from Current Configuration above)
CODEX_MODEL=<model> CODEX_REASONING=<level> "${CLAUDE_PLUGIN_ROOT}/scripts/codex" exec "$ARGUMENTS"
# Resume a previous conversation
"${CLAUDE_PLUGIN_ROOT}/scripts/codex" exec resume <SESSION_ID> "Follow up prompt..."
Codex returns a session ID after each run. To continue that conversation:
"${CLAUDE_PLUGIN_ROOT}/scripts/codex" exec resume <SESSION_ID> "Follow up prompt..."
When to resume: whenever a follow-up prompt depends on context from an earlier run, resume that session instead of starting a fresh one.

Important: capture the session ID from the previous run's output; without it the conversation cannot be resumed.
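A minimal sketch of pulling the ID out of a run's output, assuming the CLI prints a line of the form `session id: <uuid>` (the exact wording may differ between Codex versions, so adjust the pattern to what your version actually prints):

```shell
# Hypothetical output line from a previous run
line="session id: 0199a213-81ba-7812-a35d-0d8e58bb1b00"

# Strip the label with shell parameter expansion, keeping only the UUID
session_id="${line#session id: }"
echo "$session_id"
```

The captured value can then be passed directly to `exec resume "$session_id" "Follow up prompt..."`.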
| Variable | Description |
|---|---|
| `CODEX_MODEL` | Model override |
| `CODEX_REASONING` | Reasoning effort: `low`, `medium`, `high`, `xhigh` |
| `CODEX_SANDBOX` | Sandbox mode: `read-only`, `workspace-write`, `danger-full-access` (`exec` only) |
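These overrides are ordinary per-invocation environment variables: prefixing a single command sets them only for that run, so they do not leak into the rest of the session. Demonstrated below with `env` standing in for the codex wrapper:

```shell
# Prefix-style assignment scopes the variables to this one command only
CODEX_REASONING=high CODEX_SANDBOX=read-only \
  env | grep -E '^CODEX_(REASONING|SANDBOX)=' | sort
# → CODEX_REASONING=high
# → CODEX_SANDBOX=read-only

# The surrounding shell is unaffected afterwards
echo "${CODEX_REASONING:-unset}"
# → unset
```

The same prefix pattern applies verbatim to `"${CLAUDE_PLUGIN_ROOT}/scripts/codex" exec ...`.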