From ai-brain-starter
Reads `Meta/RESOLVER.md`, parses the rules table and YAML frontmatter, and scores token overlap against the natural-language query to surface matches. Invoke via `/resolver-query <question>`.
Install: `npx claudepluginhub adelaidasofia/ai-brain-starter`

Usage: `/resolver-query <natural-language-question> [--vault-root PATH] [--limit N]`

This skill uses the workspace's default tool permissions.
Look up which rule in `Meta/RESOLVER.md` applies to a natural-language question. The skill itself does NOT call an LLM. It reads RESOLVER.md, parses every rule row, and returns either a decisive match, a ranked candidate list, or "no rule matches." The host Claude session does the natural-language understanding on top of the structured output.
Generates draft hookify rules, skills, and CLAUDE.md from Slack exports, Notion, GDocs, or markdown vaults for ai-brain-starter onboarding.
Scans installed skills to extract cross-cutting principles appearing in 2+ skills and distills them into rules by appending, revising, or creating rule files. Use for periodic rules maintenance.
a. Reads `Meta/RESOLVER.md` from the vault root.
b. Parses the `## Rules` table into a structured record per rule: `rule_id`, source path, skill link, and source-file H1/topic when available.
c. If exactly one rule scores >= the decisive threshold, return that rule directly.
d. Otherwise return up to --limit rules ranked by score.
e. If no rule scores above zero, return "no rule matches this query."

    python3 skills/resolver-query/query.py "How do we handle pricing exceptions?" \
        --vault-root <vault>
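The token-overlap scoring behind steps c–e can be sketched roughly as below. This is an illustrative approximation, not the actual `query.py` implementation: the tokenizer, the `DECISIVE_THRESHOLD` value, and the assumed `{"rule_id", "text"}` rule shape are all our own placeholders.

```python
import re

DECISIVE_THRESHOLD = 0.6  # illustrative; the real threshold lives in query.py


def tokens(text):
    # Lowercase word tokens; the real tokenizer may differ.
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def score(question, rule_text):
    # Fraction of question tokens that also appear in the rule's text.
    q = tokens(question)
    if not q:
        return 0.0
    return len(q & tokens(rule_text)) / len(q)


def match(question, rules, limit=5):
    # rules: list of dicts with "rule_id" and "text" keys (assumed shape).
    scored = sorted(
        ((score(question, r["text"]), r) for r in rules),
        key=lambda pair: pair[0],
        reverse=True,
    )
    # Step c: exactly one rule at or above the decisive threshold wins outright.
    decisive = [(s, r) for s, r in scored if s >= DECISIVE_THRESHOLD]
    if len(decisive) == 1:
        return "decisive", [decisive[0][1]]
    # Step d: otherwise rank everything that scored above zero, capped at limit.
    positive = [(s, r) for s, r in scored if s > 0]
    if not positive:
        return "none", []  # step e
    return "ranked", [r for _, r in positive[:limit]]
```

Overlap scoring like this is deliberately dumb: it has no semantic understanding, which is why the host Claude session does the natural-language interpretation on top of the structured output.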
Output is a JSON document on stdout with shape:
    {
      "question": "...",
      "vault_root": "...",
      "resolver_built_at": "...",
      "rule_count": 17,
      "match_kind": "decisive | ranked | none",
      "matched_rules": [
        {
          "rule_id": "...",
          "type": "decision | workflow | exception | fact",
          "status": "active | stale | superseded | under-review | unknown",
          "last_verified": "...",
          "source_path": "...",
          "skill_link": "...",
          "score": 0.0
        }
      ],
      "summary": "..."
    }
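Downstream code can branch on `match_kind` in the parsed JSON. A minimal sketch of that dispatch, where the `dispatch` helper and its return tuples are our own illustration rather than part of the skill:

```python
import json


def dispatch(result):
    """Route a parsed resolver-query result the way a host session would.

    Returns ("open", source_path) for a decisive match,
    ("pick", [rule_ids]) for a ranked candidate list, and
    ("draft", None) when no rule applies.
    """
    kind = result["match_kind"]
    if kind == "decisive":
        # Open the single winning rule's source file.
        return ("open", result["matched_rules"][0]["source_path"])
    if kind == "ranked":
        # Present candidates for the operator to pick from.
        return ("pick", [r["rule_id"] for r in result["matched_rules"]])
    # "none": no rule matches; offer to draft one.
    return ("draft", None)


# Parsing the script's stdout would look like:
# result = json.loads(stdout_from_query_py)
```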
If the match kind is decisive, the host session opens the rule's source file (source_path). If ranked, the host session presents the top candidates to the operator and lets them pick. If none, the host session tells the operator no rule applies and offers to draft one (which would route to synth-thread-to-sop or synth-pr-to-sop).
The skill never modifies `RESOLVER.md` or any source file; all matching logic lives in `query.py`. When `match_kind == "none"`, the skill returns the empty `matched_rules` list and the host session tells the operator no rule matches; it does not invent one.

Related tooling:
- `/synth-pr-to-sop` and `/synth-thread-to-sop` write new rules.
- `scripts/resolver-build.py` rebuilds `RESOLVER.md`.
- `scripts/resolver-conflict-report.py` surfaces conflicts in JSON.
- `scripts/resolver-branch-merge-prompt.py` drafts merge prompts.