Use when code needs review via OpenAI Codex: delegates a full or scoped review to the Codex MCP server and normalizes the output into shipit's report format.
Install: `npx claudepluginhub jugrajsingh/skillgarden --plugin shipit`
Delegate code review to OpenAI Codex via MCP, then normalize output into shipit's standard report format.
This skill requires the Codex MCP server. The shipit plugin ships a `.mcp.json` that registers it automatically. If the `mcp__codex__codex` tool is not available, stop and ask the user to install the Codex CLI and re-register the MCP server before retrying.
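If manual registration is ever needed, the `.mcp.json` entry looks roughly like this (a minimal sketch; the server name and launch command are assumptions, so check the Codex CLI documentation for the exact invocation):

```json
{
  "mcpServers": {
    "codex": {
      "command": "codex",
      "args": ["mcp-server"]
    }
  }
}
```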
$ARGUMENTS may contain flags and paths: a scope (full, diff, or files), file paths, and the optional `--model` and `--effort` flags.
If $ARGUMENTS is empty, ask via AskUserQuestion with the scope options: full review, diff against develop, or specific files.
Parse $ARGUMENTS for flags and apply defaults for any that are not set.
Based on scope:
- full: No diff needed. The cwd is the project root; Codex will explore on its own.
- diff: Run git commands to gather the changes:
  - `git diff develop...HEAD`
  - `git diff develop...HEAD --stat`
  If the diff is empty, report "No changes found" and stop.
- files: Validate that the provided paths exist via Read, then build a file list.
Derive a slug for the report from the current date and a short topic name (for example, the branch name). Example slug: `2026-02-25-add-auth`. Write path: `docs/reviews/{slug}-codex-review.md`.
Construct the `mcp__codex__codex` tool call:
- prompt: Varies by scope: project-wide review instructions for full, the gathered diff for diff, or the file list for files.
- developer-instructions: Always include:
  You are a senior code reviewer. Produce a structured review. For each finding, give:
  - Severity: minor, major, or critical
  - File and line number (exact)
  - Description of the issue
  - Suggested fix (if applicable)
  - Category: architecture, correctness, security, performance, quality, testing
  End with an overall verdict: APPROVE, REQUEST CHANGES, or COMMENT. Group findings by severity (critical first, then major, then minor).
- sandbox: "read-only"
- approval-policy: "never"
- model: Only include if the user specified --model.
- config: If the user specified --effort, include model_reasoning_effort set to the effort value.
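For a diff review with `--effort high` and no `--model` flag, the assembled arguments would look roughly like this (a sketch; the parameter names come from the list above, while the prompt text and effort value are illustrative):

```json
{
  "prompt": "Review the following changes:\n\n<output of git diff develop...HEAD>",
  "developer-instructions": "You are a senior code reviewer. Produce a structured review. For each finding, give severity (minor, major, or critical), exact file and line, a description, a suggested fix if applicable, and a category (architecture, correctness, security, performance, quality, testing). End with an overall verdict: APPROVE, REQUEST CHANGES, or COMMENT, grouping findings by severity.",
  "sandbox": "read-only",
  "approval-policy": "never",
  "config": {
    "model_reasoning_effort": "high"
  }
}
```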
Call the tool and capture the response. Store the threadId from the response for potential follow-up.
If the tool call fails or returns an error, surface the error message to the user and stop; do not fabricate findings.
Parse Codex's free-form response and normalize it into shipit's report format.
Map severity terms: normalize whatever wording Codex uses (e.g., "blocker", "high", "medium", "nit") onto critical, major, and minor.
Extract for each finding: file and line, description, suggested fix, and category, plus the overall verdict and a summary.
If Codex output cannot be parsed (no clear findings structure), present raw output under a "Raw Review" heading instead.
Critically evaluate Codex's claims. If any finding contradicts your own knowledge, add a note: "Claude note: {disagreement with evidence}".
Run `mkdir -p docs/reviews`, then write `docs/reviews/{slug}-codex-review.md` with this structure:
# Code Review: {slug}
**Reviewer:** Codex (via OpenAI)
**Date:** {YYYY-MM-DD}
**Scope:** {full | diff | files: path1, path2}
**Verdict:** {APPROVE | REQUEST CHANGES | COMMENT}
## Findings
### ◆◆ Critical
- file:line — description
Suggestion: fix
### ◆ Major
- file:line — description
### ◇ Minor
- file:line — description
## Summary
{normalized summary}
Group by severity — omit sections with zero findings.
Output a summary block:
## Review Summary
Findings: {total}
Critical: {N}
Major: {N}
Minor: {N}
Top Issues:
1. {severity_symbol} {file}:{line} — {description}
2. {severity_symbol} {file}:{line} — {description}
3. {severity_symbol} {file}:{line} — {description}
Full report: docs/reviews/{slug}-codex-review.md
Then ask via AskUserQuestion whether the user wants to ask Codex a follow-up question or finish.
If the user chooses follow-up, call `mcp__codex__codex-reply` with the stored threadId and the user's question. Present the response and offer the same choices again.
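The reply call would look roughly like this (a sketch; the threadId value comes from the initial response, and the question field name `prompt` is an assumption about the tool's schema, as is the example question):

```json
{
  "threadId": "<threadId captured from the initial codex call>",
  "prompt": "Can you expand on the critical finding and show a concrete fix?"
}
```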