Check a branch's diff against project rules. Mechanical-first — runs hygiene tests + grep ratchet before falling back to LLM. Emits suggested tests for any semantic finding so the LLM surface shrinks over time.
Enforce the project's `.claude/rules/*.md` conventions on a branch or PR. Prefer tests
over LLM checks: if a rule can be mechanized, the skill's job is to run the test or grep.
An LLM subagent is the fallback for semantic residue only.

Every time the semantic fallback flags a violation, the skill emits a `suggested_test`
entry proposing how to mechanize that rule. Over time the LLM surface shrinks: new test →
rule is permanently enforced → never needs the LLM again.
One argument: a branch name or PR number. Resolve PR → branch via `gh pr view --json headRefName`. Also run `/verify` as part of phase 2.
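Argument resolution can be sketched in a few lines of Python. This is a hypothetical helper, not part of the skill itself; it assumes the `gh` CLI is on PATH and wraps the `gh pr view --json headRefName` call named above:

```python
import json
import re
import subprocess

def resolve_branch(arg: str) -> str:
    """Resolve the skill's single argument: a bare branch name passes
    through unchanged; a numeric PR number is resolved to its head branch
    via `gh pr view --json headRefName`."""
    if re.fullmatch(r"\d+", arg):
        out = subprocess.run(
            ["gh", "pr", "view", arg, "--json", "headRefName"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["headRefName"]
    return arg
```

Branch names pass straight through, so the skill only shells out to `gh` when it actually received a PR number.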
**Phase 1.** Run the hygiene + discipline tests. These are cheap and definitive:

```sh
uv run python -m pytest \
  tests/test_script_hygiene.py \
  tests/test_io_discipline.py \
  tests/test_schema_contracts.py \
  -q
```

Plus any other test files whose names start with `test_hygiene_` or `test_discipline_`.
Failures here are blocking. Record each as `{test_id, rule_ref, file:line}`.
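A hygiene test in this style is typically just a scan plus an assertion. A hypothetical sketch (the file name `test_hygiene_savefig.py`, the helper `find_violations`, and the default `scripts/` root are illustrative, not taken from the repo):

```python
# tests/test_hygiene_savefig.py (hypothetical) -- mechanizes
# "fig.savefig( forbidden, use save_figure" (architecture.md rule 5).
import pathlib
import re

FORBIDDEN = re.compile(r"fig\.savefig\(")

def find_violations(root: str = "scripts") -> list[str]:
    """Return file:line records for every direct fig.savefig( call."""
    return [
        f"{path}:{lineno}"
        for path in pathlib.Path(root).rglob("*.py")
        for lineno, line in enumerate(path.read_text().splitlines(), 1)
        if FORBIDDEN.search(line)
    ]

def test_no_direct_savefig():
    offenders = find_violations()
    # Failure message carries the rule ref and file:line, matching the
    # {test_id, rule_ref, file:line} record format above.
    assert not offenders, f"architecture.md rule 5 violated at: {offenders}"
```

Once a rule lives in a file like this, phase 1 catches it and the LLM never sees it again.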
**Phase 2.** Run a bank of `rg` patterns. Each pattern corresponds to a single rule and carries the
rule reference explicitly. Run against the diff (not the whole repo) via:

```sh
git diff main...HEAD --name-only | xargs rg -n PATTERN
```

Minimum starting bank (extend as rules get mechanized):
| Rule | Pattern | Rule ref |
|---|---|---|
| `fig.savefig(` forbidden (use `save_figure`) | `rg 'fig\.savefig\('` | architecture.md rule 5 |
| Direct corpus reads forbidden | `rg 'pd\.read_(csv\|feather).*refined_(works\|embeddings\|citations)'` | architecture.md rule 9 |
| Hardcoded random seed | `rg '(seed\s*=\s*42\|RandomState\(42\))'` | architecture.md rule 7 |
| `print(` in non-script modules | `rg 'print\(' scripts/*` | coding.md logging rule |
| Bare `logging.getLogger` | `rg 'logging\.getLogger'` | coding.md logging rule |
| Hardcoded `"data/catalogs"` paths | `rg '"data/catalogs/'` | architecture.md data location |
Failures here are blocking. Record as `{pattern, rule_ref, file:line}`.
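Turning `rg -n` output into those records is mechanical. A minimal sketch, assuming `rg`'s default `path:line:text` output format (the function names and the two-entry bank are illustrative):

```python
import subprocess

# (pattern, rule_ref) pairs -- a slice of the bank above.
BANK = [
    (r"fig\.savefig\(", "architecture.md rule 5"),
    (r"logging\.getLogger", "coding.md logging rule"),
]

def changed_files(base: str = "main") -> list[str]:
    """Files touched by the branch, per `git diff main...HEAD --name-only`."""
    out = subprocess.run(
        ["git", "diff", f"{base}...HEAD", "--name-only"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [f for f in out.splitlines() if f]

def parse_rg(pattern: str, rule_ref: str, rg_output: str) -> list[dict]:
    """Turn `rg -n` lines (path:line:text) into the record format above."""
    records = []
    for line in rg_output.splitlines():
        path, lineno, _text = line.split(":", 2)
        records.append({
            "pattern": pattern,
            "rule_ref": rule_ref,
            "file:line": f"{path}:{lineno}",
        })
    return records
```

Keeping the rule ref attached to every record means a failure report never needs a second lookup to say which rule was violated.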
**Phase 3.** Runs only if any `.claude/rules/*.md` file changed, or if the diff touches architectural
concerns not covered by phases 1–2. Spin up one subagent with the `.claude/rules/*.md` files.
Output per finding: `{rule_section, concern, file:line, severity, suggested_test}`.
Hard constraint on the subagent: it must emit a `suggested_test` for every semantic finding, no exceptions. Its output schema:

```yaml
adherence: PASS | FAIL
mechanical_failures:
  - test_or_grep: <id>
    rule_ref: <.claude/rules/foo.md#section>
    file: <path>
    line: <n>
semantic_findings:
  - rule_section: <.claude/rules/architecture.md#phase-2-rule-4>
    concern: <one sentence>
    file: <path>
    line: <n>
    severity: blocking | nit
    suggested_test: <grep pattern or pytest snippet>
untested_rules:
  - rule: <.claude/rules/foo.md#bar>
    suggested_test: <code>
```
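Enforcing that schema can itself be mechanical. A hypothetical validator sketch (the function name and the exact problem strings are illustrative; it checks the required keys named in the schema above):

```python
def validate_report(report: dict) -> list[str]:
    """Check a parsed subagent report against the schema above.
    Returns a list of problems; an empty list means structurally valid."""
    problems = []
    if report.get("adherence") not in ("PASS", "FAIL"):
        problems.append("adherence must be PASS or FAIL")
    required = ("rule_section", "concern", "file", "line",
                "severity", "suggested_test")
    for finding in report.get("semantic_findings", []):
        for key in required:
            if not finding.get(key):
                problems.append(f"semantic finding missing {key}")
    return problems
```

Run it on every subagent reply; a non-empty result is grounds to reject the output rather than trust a malformed report.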
After each run, if `semantic_findings` is non-empty, the runner of `/verify` (or the author)
opens a small follow-up ticket: "Mechanize adherence rule X per `suggested_test`." The ticket
lands as a new test, e.g. `tests/test_hygiene_<rule>.py`; on the next `/verify-adherence` run
the rule is caught by phase 1 instead of phase 3. LLM surface shrinks permanently.

This ratchet is the whole point. Do not accept `semantic_findings` as a steady state.
If a semantic finding arrives without a `file:line` or `suggested_test`, reject the output and
flag it as an adherence-infrastructure bug. Don't silently accept. (Prose-style checks are
handled separately; see `/review-pr-prose` and `config/ai-tells.yml`.)