From grimoire
Reviews existing security findings against best practices for title clarity, description completeness, recommendation objectivity, severity accuracy, and reference validity. Validates structure with a bash script.

`npx claudepluginhub joranhonig/grimoire`

This skill uses the workspace's default tool permissions.
Review and harden existing security findings against best practices.
Before starting, read skills/finding/SKILL.md to understand finding structure, best
practices, and conventions. That skill defines the format, quality standards, and principles
that this workflow evaluates against.
When this skill is activated, create a todo list from the following steps. Mark each task `in_progress` before starting it and `completed` when done.
- [ ] 0. Load finding knowledge (read skills/finding/SKILL.md)
- [ ] 1. Load and validate finding
- [ ] 2. Analyze content
- [ ] 3. Present review and offer updates
Read skills/finding/SKILL.md to internalize finding structure, best practices, and
conventions. This is required before proceeding — the base skill defines the standards you
will evaluate against.
Read the target finding file. Parse frontmatter and sections.
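For orientation, here is a hypothetical skeleton of the shape being parsed — the field names, severity value, and section headings are illustrative only; skills/finding/SKILL.md defines the authoritative format:

```shell
# Hypothetical finding skeleton (illustrative; the finding skill is authoritative).
# The title follows the where/how/what rule checked later in this workflow.
cat <<'EOF' > example-finding.md
---
title: Vault.withdraw sends ETH before updating balances, enabling reentrancy
severity: high
---

## Description

Self-contained explanation of the vulnerability and its impact.

## Recommendation

Objective, actionable guidance for the maintainer.
EOF
```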
Run structural validation:

```bash
bash skills/finding/scripts/validate-finding.sh <path-to-finding>
```
Report any schema violations — missing frontmatter fields, missing required sections, invalid severity values.
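The internals of validate-finding.sh are not shown here; as a minimal sketch, checks of this kind could be written as a small shell function (the field names and severity values below are assumptions, not the script's actual schema):

```shell
# Sketch of structural validation: required frontmatter fields and an
# allowed severity value. Illustrative only — validate-finding.sh is
# the authoritative implementation.
check_finding() {
  finding="$1"

  # Extract the YAML frontmatter block between the first pair of --- lines.
  frontmatter=$(awk '/^---$/{n++; next} n==1{print} n>=2{exit}' "$finding")

  status=0
  for field in title severity; do
    printf '%s\n' "$frontmatter" | grep -q "^${field}:" || {
      echo "missing frontmatter field: $field"
      status=1
    }
  done

  # Severity must be one of a fixed set (values assumed for illustration).
  severity=$(printf '%s\n' "$frontmatter" | sed -n 's/^severity: *//p')
  case "$severity" in
    critical|high|medium|low|informational) ;;
    *) echo "invalid severity: $severity"; status=1 ;;
  esac
  return $status
}
```

A nonzero return signals at least one schema violation, mirroring how the real script's exit status can gate the rest of the workflow.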
Evaluate the finding against the guidelines in the finding skill and
skills/finding/references/finding-best-practices.md:
- Title — does it satisfy the where/how/what rule? If not, propose an improved title.
- Description — is it self-contained? Could a reader unfamiliar with the codebase understand the vulnerability from this section alone? Is impact clearly stated?
- Details — if present, does it add value beyond the description? If absent, should it be added?
- Recommendation — is it objective? Does it avoid non-trivial code suggestions? Is it actionable by a project maintainer?
- Severity — is the estimate reasonable given the described impact, exploitability, and preconditions?
- Preconditions & attacker class — does the finding enumerate every prerequisite needed to exploit, and name the minimum attacker class (any user, privileged role, governance majority, etc.)? Missing or vague preconditions are a review failure — they are what the reader uses to decide whether the finding applies to their deployment.
- PoC reference — does the @-referenced file exist? Is it correctly formatted?
- References — are all cited references real and relevant? Are claims fact-checked?
- Familiar agent check — invoke the familiar agent in finding triage mode (Mode 1) on the finding being reviewed. The familiar produces a triangulated assessment across Impact, Feasibility (with an attacker class and prerequisite predicate), Design Intent, and Scope Cross-Reference. Include the familiar's verdict (Confirmed, Severity Adjusted, Uncertain, Possibly By Design, or Dismissed) and each of these four sections in the review output. Compare the familiar's feasibility analysis against the finding's stated preconditions — discrepancies are the review's most important signal.
- Librarian agent check — use the librarian agent to verify cited references exist and are accurate. Ask the librarian to search for additional relevant references (prior findings for similar flaws, specification clauses, security advisories) that could strengthen the finding.
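The PoC-reference check above is mechanical enough to sketch in shell. Assuming references use an `@path/to/file` convention (taken from the finding skill; the exact pattern may differ), a dangling-reference scan could look like:

```shell
# Sketch: extract @-style file references from a finding and report any
# that do not exist relative to the repository root. The @path pattern
# is an assumption based on the finding skill's convention.
check_poc_refs() {
  finding="$1"
  root="${2:-.}"
  missing=0
  for ref in $(grep -o '@[A-Za-z0-9_./-]*' "$finding" | sed 's/^@//' | sort -u); do
    if [ ! -e "$root/$ref" ]; then
      echo "dangling reference: @$ref"
      missing=1
    fi
  done
  return $missing
}
```

This only proves existence, not correctness — whether the PoC actually demonstrates the vulnerability still requires the familiar agent's analysis.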
Present the review in structured sections. Each item should cite the guideline it checks against.
Ask the user: "Apply recommended changes? [y/n]"
If yes, apply edits to the finding file. Re-run validation. Present the updated finding.
Suggest follow-ups:
- `/finding-dedup` if the project has multiple findings