From chipdev-method
Routes a methodology compliance review of the user's current changes to Codex via the codex-plugin-cc adversarial-review command. Activate when the user explicitly invokes /chipdev-method:audit, or asks "audit my diff", "审一下我这版改动", "review my methodology compliance", "did I violate any of the principles", or wants an independent adversarial check that their work follows the plugin's six invariants and contract-first discipline.
npx claudepluginhub curryfromuestc/dev-guide --plugin chipdev-method

This skill is limited to using the following tools:
Use this skill when the user wants an adversarial methodology review of
a code change. The skill builds a structured prompt from the plugin's
methodology and routes it to Codex through codex-plugin-cc.
This skill does not run the audit itself. It assembles context, then hands off to Codex. Codex does the actual review.
Do not autoload — only respond when the user invokes
/chipdev-method:audit or matches the trigger phrases in the description.
The user must have:

- codex CLI installed and on PATH.
- codex-plugin-cc installed (see https://github.com/openai/codex-plugin-cc).

If any prerequisite is missing, surface a clear remediation step before proceeding.
When triggered, perform the following steps in order:
```shell
command -v codex >/dev/null || echo "MISSING: codex CLI"
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || echo "MISSING: not a git repo"
```
If codex is missing, instruct the user to install the codex CLI (and codex-plugin-cc) and ensure codex is on PATH. Stop.
If not in a git repo, tell the user to invoke the audit from the project's git working tree. Stop.
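The two checks above can be wrapped in a small preflight helper. This is a sketch, not part of the skill: the `preflight` function name and `MISSING:` message format are illustrative, and the caller passes whichever commands the skill needs (here, `codex`).

```shell
# Sketch: generic preflight check that stops at the first missing
# prerequisite and names it so the user gets a clear remediation target.
preflight() {
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "MISSING: $cmd"
      return 1
    fi
  done
  echo "preflight ok"
}

# Example: preflight codex git
```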
Default target: the current diff against main (or master).
If the user passed an argument:

- A single ref (e.g., `HEAD~3`) → audit the diff between that ref and HEAD.
- An explicit range (e.g., `main..feature/foo`) → audit that range.

Ask the user to confirm the resolved target before proceeding.
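The target-resolution rules above can be sketched as a small function. This is an illustration only: the `resolve_target` name is hypothetical, and for brevity it hardcodes `main` as the default base rather than also probing for `master` as the skill text allows.

```shell
# Sketch: map an optional user argument to a git diff range.
# ""            -> default diff against main (master fallback omitted here)
# "A..B"        -> explicit range, used as-is
# single ref    -> that ref up to HEAD
resolve_target() {
  case "$1" in
    "")   echo "main..HEAD" ;;
    *..*) echo "$1" ;;
    *)    echo "$1..HEAD" ;;
  esac
}
```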
Gather, in order:

- The diff (`git diff <range>`), capped at ~2000 lines; if larger, summarize per-file change counts and ask the user to narrow.
- The checklist in references/audit-checklist.md, used as the audit rubric.
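One way to apply the ~2000-line cap is to count the diff on stdin before deciding whether to include it or fall back to a per-file summary. The `cap_diff` helper below is a sketch (the name and output format are invented for illustration); in the real flow the input would be `git diff "$range"` and the fallback would be `git diff --stat`.

```shell
# Sketch: read a diff on stdin and report whether it fits the line cap.
cap_diff() {
  max="$1"
  lines=$(wc -l | tr -d ' ')   # count stdin; strip wc's padding
  if [ "$lines" -gt "$max" ]; then
    echo "TOO_LARGE:$lines"    # caller should summarize and ask to narrow
  else
    echo "OK:$lines"
  fi
}

# Example: git diff "$range" | cap_diff 2000
```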
Build a structured prompt for Codex with:

- the plugin methodology index (`INDEX.md`);
- the contracts the change is audited against (`define-contracts`).

Save the prompt to a temporary file at `.humanize/skill/<id>/audit-prompt.md`, or pipe it directly. Either way, do not put a multi-thousand-line diff into a single shell argument.
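A minimal sketch of the file-based route, keeping the diff out of argv by streaming it from a file. The `write_audit_prompt` name and section headers are illustrative; the real template lives in references/audit-prompt-template.md.

```shell
# Sketch: assemble the prompt in a file so the diff is streamed with cat,
# never interpolated into a single shell argument.
write_audit_prompt() {
  diff_file="$1"
  prompt_file="$2"
  mkdir -p "$(dirname "$prompt_file")"
  {
    printf '# Methodology audit\n\n'
    printf '## Diff under review\n\n'
    cat "$diff_file"
  } > "$prompt_file"
}

# Example: write_audit_prompt /tmp/audit.diff .humanize/skill/<id>/audit-prompt.md
```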
Recommended invocation, via the codex-plugin-cc shipped command:
```shell
# Through the codex-plugin-cc /codex:adversarial-review command,
# which the user must have installed.
# Equivalent direct codex call:
codex exec -m gpt-5.4 \
  -c model_reasoning_effort=high \
  --full-auto \
  -C "$(git rev-parse --show-toplevel)" \
  - < "$AUDIT_PROMPT_FILE"
```
If codex-plugin-cc exposes a slash command (e.g.,
/codex:adversarial-review), prefer using it through the parent Claude
Code session rather than calling codex exec directly — that gives the
user the standard codex result-handling UX.
When Codex returns:

- Do not restate the entire Codex output verbatim; compress.
- If Codex returned a clean review (no violations), say so plainly, including the review's confidence note.
The audit covers the rubric's checks in order of priority; the full rubric is in references/audit-checklist.md.
Keep `build/` or generator outputs out of the diff before invoking.

- diagnose — the sister skill, for build / sim / test failure triage.
- define-contracts — the contracts the audit checks against.
- references/audit-checklist.md — the full audit rubric.
- references/audit-prompt-template.md — the Codex prompt template.