Runs RLCR loops for AI implementation planning, iterative code review with Codex, and automates GitHub PR fixes from Claude/Codex bot reviews.
```sh
npx claudepluginhub polyarch/humanize --plugin humanize
```

This skill uses the workspace's default tool permissions.
Humanize creates a feedback loop where AI implements your plan while another AI independently reviews the work, ensuring quality through continuous refinement.
The installer hydrates this skill with an absolute runtime root path:
{{HUMANIZE_RUNTIME_ROOT}}
All command examples below use {{HUMANIZE_RUNTIME_ROOT}}.
Iteration over Perfection: instead of expecting perfect output in one shot, Humanize leverages an iterative feedback loop.
The RLCR (Ralph-Loop with Codex Review) loop has two phases:
Phase 1: Implementation
Phase 2: Code Review
- codex review --base <branch> checks code quality
- Review findings carry [P0-9] severity markers
- {{HUMANIZE_RUNTIME_ROOT}}/scripts/rlcr-stop-gate.sh enforces hook-equivalent transitions and blocking

PR Fix Loop: automates handling of GitHub PR reviews from remote bots (--claude and/or --codex).

Plan Generation: transforms a rough draft document into a structured implementation plan.
```sh
# With a plan file
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/setup-rlcr-loop.sh" path/to/plan.md

# Or without plan (review-only mode)
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/setup-rlcr-loop.sh" --skip-impl

# For each round, run the RLCR gate (required)
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/rlcr-stop-gate.sh"
```
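The round structure these scripts drive can be sketched as a plain loop. This is a stubbed illustration, not the real scripts: implement_phase, review_phase, and the findings counter are hypothetical stand-ins for the implementation phase, the Codex review, and its open findings.

```sh
# Stubbed sketch of an RLCR round: implement, review, repeat until the
# review comes back clean or the round limit is hit.
max_rounds=3
round=1
findings=2   # stub: pretend early reviews report open findings

implement_phase() { echo "round $round: implementing plan items"; }
review_phase()    { findings=$((findings - 1)); }  # stub: findings shrink each round

while [ "$round" -le "$max_rounds" ]; do
  implement_phase
  review_phase
  if [ "$findings" -le 0 ]; then
    echo "clean review after round $round"
    break
  fi
  round=$((round + 1))
done
```

In the real loop, rlcr-stop-gate.sh plays the role of the loop condition: it decides whether another round runs.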
Common Options:
- --max N - Maximum iterations before auto-stop (default: 42)
- --codex-model MODEL:EFFORT - Codex model and reasoning effort for codex exec (default: gpt-5.4:high); codex review uses gpt-5.4:high
- --codex-timeout SECONDS - Timeout for each Codex review (default: 5400)
- --base-branch BRANCH - Base branch for code review (auto-detects if not specified)
- --full-review-round N - Interval for full alignment checks (default: 5)
- --skip-impl - Skip implementation phase, go directly to code review
- --track-plan-file - Enforce plan-file immutability when tracked in git
- --push-every-round - Require git push after each round
- --claude-answer-codex - Let Claude answer Codex Open Questions directly (default is AskUserQuestion)
- --agent-teams - Enable Agent Teams mode
- --yolo - Skip Plan Understanding Quiz and enable --claude-answer-codex
- --skip-quiz - Skip the Plan Understanding Quiz only

To cancel a running RLCR loop:

```sh
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/cancel-rlcr-loop.sh"

# or force cancel during finalize phase
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/cancel-rlcr-loop.sh" --force
```
```sh
# Monitor claude[bot] reviews
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/setup-pr-loop.sh" --claude

# Monitor chatgpt-codex-connector[bot] reviews
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/setup-pr-loop.sh" --codex

# Monitor both
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/setup-pr-loop.sh" --claude --codex
```
Common Options:
- --max N - Maximum iterations (default: 42)
- --codex-model MODEL:EFFORT - Codex model for validation (default: gpt-5.4:medium)
- --codex-timeout SECONDS - Timeout for Codex validation (default: 900)

To cancel a running PR loop:

```sh
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/cancel-pr-loop.sh"
```
```sh
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/validate-gen-plan-io.sh" --input path/to/draft.md --output path/to/plan.md
```
Then follow the workflow in this skill to generate the structured plan content.
```sh
"{{HUMANIZE_RUNTIME_ROOT}}/scripts/ask-codex.sh" [--codex-model MODEL:EFFORT] [--codex-timeout SECONDS] "your question"
```
A good plan file should include:
```markdown
# Plan Title

## Goal Description
Clear description of what needs to be accomplished

## Acceptance Criteria
- AC-1: First criterion
  - Positive Tests (expected to PASS):
    - Test case that should succeed
  - Negative Tests (expected to FAIL):
    - Test case that should fail

## Path Boundaries

### Upper Bound (Maximum Scope)
Most comprehensive acceptable implementation

### Lower Bound (Minimum Scope)
Minimum viable implementation

### Allowed Choices
- Can use: technologies, approaches allowed
- Cannot use: prohibited technologies

## Dependencies and Sequence

### Milestones
1. Milestone 1: Description
   - Phase A: ...
   - Phase B: ...

## Implementation Notes
- Code should NOT contain plan terminology like "AC-", "Milestone", "Step"
```
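The implementation-notes rule can be checked mechanically before a round ends. A minimal grep sketch; the temp directory, sample file, and exact pattern are illustrative, not part of Humanize:

```sh
# Scan a source tree for leaked plan terminology (AC-, Milestone, Step N).
# Demo builds a throwaway directory; point src= at your real code instead.
src=$(mktemp -d)
printf 'def run():  # implements AC-3\n    pass\n' > "$src/app.py"

leaks=$(grep -rEl 'AC-[0-9]+|Milestone|Step [0-9]+' "$src" | wc -l | tr -d ' ')
echo "files with plan terminology: $leaks"
rm -rf "$src"
```

A nonzero count means plan vocabulary made it into the code and should be reworded.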
The RLCR loop uses a Goal Tracker to prevent goal drift. Always use scripts/rlcr-stop-gate.sh instead of manual phase control.

Dependencies:
- codex - OpenAI Codex CLI (for review)
- gh - GitHub CLI (for PR loop)

Humanize stores all data in .humanize/:
```
.humanize/
├── rlcr/                  # RLCR loop data
│   └── <timestamp>/
│       ├── state.md
│       ├── goal-tracker.md
│       ├── round-N-summary.md
│       ├── round-N-review-result.md
│       ├── finalize-state.md
│       ├── finalize-summary.md
│       └── complete-state.md
├── pr-loop/               # PR loop data
│   └── <timestamp>/
│       ├── state.md
│       └── resolution-N.md
└── skill/                 # One-shot skill results
    └── <timestamp>/
        ├── input.md
        ├── output.md
        └── metadata.md
```
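Because each run lives under a timestamped directory, the most recent run can be located by sorting directory names. A minimal sketch; the fake layout below is built in a temp dir purely for illustration, while in a real repository you would run the last two commands from the repo root:

```sh
# Locate the newest RLCR run directory and read its state file.
root=$(mktemp -d)   # stand-in for a repo root with a .humanize/ tree
mkdir -p "$root/.humanize/rlcr/20240101-120000" "$root/.humanize/rlcr/20240102-090000"
echo "phase: review" > "$root/.humanize/rlcr/20240102-090000/state.md"

latest=$(ls -1 "$root/.humanize/rlcr" | sort | tail -n 1)
cat "$root/.humanize/rlcr/$latest/state.md"
rm -rf "$root"
```

Sorting works because the timestamped names order lexicographically by creation time.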
Use the monitor script to track loop progress:
```sh
source "{{HUMANIZE_RUNTIME_ROOT}}/scripts/humanize.sh"
humanize monitor rlcr   # Monitor RLCR loop
humanize monitor pr     # Monitor PR loop
```
Exit codes for ask-codex.sh:
- 0 - Success
- 1 - Validation error
- 124 - Timeout

Exit codes for validate-gen-plan-io.sh:
- 0 - Success
- 1 - Input file not found
- 2 - Input file is empty
- 3 - Output directory does not exist
- 4 - Output file already exists
- 5 - No write permission
- 6 - Invalid arguments
- 7 - Plan template file not found
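Callers can branch on these codes instead of parsing output. A stubbed sketch, where the validate function stands in for the real validate-gen-plan-io.sh script:

```sh
# Map plan-validation exit codes to messages. The stub returns 4 so the
# sketch is self-contained; swap in the real script invocation.
validate() { return 4; }   # stub: pretend the output file already exists

validate
code=$?
case "$code" in
  0) msg="ok" ;;
  1) msg="input file not found" ;;
  2) msg="input file is empty" ;;
  3) msg="output directory does not exist" ;;
  4) msg="output file already exists" ;;
  5) msg="no write permission" ;;
  *) msg="other error ($code)" ;;
esac
echo "$msg"
```

Capturing `$?` immediately after the call matters: any intervening command overwrites it.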