From critic
Review the full manuscript using Claude (with a rejection pass), Codex, Grok, and an adversarial model. Cross-review, synthesis, save. Use when the user wants honest, agency-level feedback on the whole book.
npx claudepluginhub jdpedrie/critic --plugin critic

This skill uses the workspace's default tool permissions.
The vault path for all tool calls is: ${user_config.vault_path}
Run the full manuscript review pipeline non-interactively. Three primary reviewers (Claude with rejection pass, Codex, Grok), one adversarial reviewer, cross-review, synthesis, save.
$ARGUMENTS is optional. If provided, it can specify focus areas.
Run all steps without stopping for user input.
Call these in parallel:
- summarize-review with prefix "manuscript-critic" and the vault path
- next-review-number with the vault path

Hold the prior review synthesis (if any) and the review number for subsequent steps.
Call ALL available review tools in parallel:
- review-manuscript-claude-rejection with the vault path (and prior_review_summary if available)
- review-manuscript-codex with the vault path (and prior_review_summary if available)
- review-manuscript-grok with the vault path (and prior_review_summary if available)
- review-manuscript-adversarial with the vault path (no prior_review_summary — it gets the raw text only)

Claude rejection tool returns JSON with review, rejection, and session_id. Parse all three.
Codex tool returns JSON with review and session_id.
Grok tool returns JSON with review and session_id.
Adversarial tool returns prose directly (no session — one-shot rejection framing).
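The differing output shapes above can be normalized with a small helper. This is an illustrative sketch only, not part of the plugin's actual API; the function name and the tuple shape are assumptions:

```python
import json

def parse_reviewer_output(name, raw):
    """Normalize each tool's raw output to (review, rejection, session_id).

    The adversarial tool returns prose directly (one-shot, no session);
    the others return JSON. Claude additionally includes a rejection pass.
    """
    if name == "adversarial":
        return raw, None, None
    data = json.loads(raw)
    # Only the Claude rejection tool has a "rejection" key; .get() leaves
    # it as None for Codex and Grok.
    return data["review"], data.get("rejection"), data["session_id"]
```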
If any tool errors or is unavailable, proceed with the others. At least two of the three primary reviewers (Claude, Codex, Grok) must succeed to continue. The adversarial rejection pass is valuable but optional.
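The quorum rule can be sketched as follows (a minimal illustration; the function and the shape of `results` are assumptions, not part of the plugin):

```python
PRIMARY = {"claude", "codex", "grok"}

def can_continue(results):
    """results maps reviewer name -> review text, or a falsy value on failure.

    At least two of the three primary reviewers must succeed; the
    adversarial rejection pass never counts toward the quorum.
    """
    succeeded = {name for name in PRIMARY if results.get(name)}
    return len(succeeded) >= 2
```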
Call cross-review-manuscript with:
- claude_review: Claude's review text (not the rejection pass)
- codex_review: Codex's review text
- gemini_review: Grok's review text (pass via the gemini_review param — it's the third reviewer slot)
- claude_session_id: Claude's session ID
- codex_session_id: Codex's session ID
- gemini_session_id: Grok's session ID (pass via the gemini_session_id param)

Do NOT include the adversarial rejection in cross-review. It feeds directly into synthesis as a one-way input.
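The parameter mapping above, including Grok riding in the gemini_* slots, can be sketched as (an assumed helper for illustration; only the payload keys come from the tool's interface):

```python
def cross_review_args(reviews, sessions):
    """Build the cross-review payload from per-reviewer texts and session IDs.

    Grok is passed via the gemini_* parameters, which serve as the
    generic third-reviewer slot. The adversarial rejection is deliberately
    excluded: it feeds synthesis directly as a one-way input.
    """
    return {
        "claude_review": reviews["claude"],
        "codex_review": reviews["codex"],
        "gemini_review": reviews["grok"],          # third-reviewer slot
        "claude_session_id": sessions["claude"],
        "codex_session_id": sessions["codex"],
        "gemini_session_id": sessions["grok"],     # third-reviewer slot
    }
```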
Call the synthesize tool with:
- reviews: JSON object with "claude", "codex", "grok" keys mapping to their review text. Include the adversarial rejection as "adversarial". Include Claude's rejection pass as "claude_rejection".
- rebuttals: JSON object mapping model names to their rebuttal text from the cross-review
- review_number: the number from step 0

Call save-review with:

- vault: the vault path
- prefix: "manuscript-critic"
- content: a single markdown file structured as:

[synthesis output here]
<!-- RAW AGENT OUTPUTS BELOW — NOT INCLUDED IN FUTURE REVIEW CONTEXT -->
# Claude Review
[Claude's raw review]
---
# Claude Rejection Pass
[Claude's rejection pass output]
---
# Codex Review
[Codex's raw review]
---
# Grok Review
[Grok's constructive review, if available]
---
# Adversarial Rejection
[Adversarial rejection output, if available]
---
# Cross-Review
[full cross-review output]
Use H1 (#) for agent section headings. The sentinel MUST be included exactly as shown.
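The file layout above can be sketched as follows. This is an illustrative assembly helper, not part of the plugin; only the sentinel string and the H1 headings are taken from the spec:

```python
# Must match the spec exactly; text after it is excluded from future review context.
SENTINEL = "<!-- RAW AGENT OUTPUTS BELOW — NOT INCLUDED IN FUTURE REVIEW CONTEXT -->"

def build_review_file(synthesis, raw_sections):
    """Assemble the saved markdown file.

    raw_sections is an ordered list of (heading, text) pairs; sections
    whose text is None (e.g. Grok or adversarial unavailable) are skipped.
    """
    bodies = [f"# {heading}\n\n{text}" for heading, text in raw_sections if text]
    parts = [synthesis, "", SENTINEL, "", "\n\n---\n\n".join(bodies)]
    return "\n".join(parts)
```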
Tell the user where the file was saved and the review number. Present the synthesis in conversation. After the synthesis, separately highlight the rejection pass findings — these are the most important corrective signal.