Discovers coding standards, conventions, and best practices from project configs, code patterns, docs, PRs, and CI/CD sources. Aggregates with confidence scores for user approval and application.
Install with `npx claudepluginhub skillpanel/maister --plugin maister`. This skill uses the workspace's default tool permissions.
Analyzes multiple project sources in parallel to discover coding standards, conventions, and best practices. Aggregates findings with confidence scoring, presents for user approval, and applies approved standards via `docs-manager` skill.
| Parameter | Default | Description |
|---|---|---|
| `--scope` | `full` | Discovery scope: `full`, `quick`, or any category name (baseline: `global`, `frontend`, `backend`, `testing`; custom categories also supported) |
| `--confidence` | `60` | Minimum confidence threshold (0-100) for displaying findings |
| `--auto-apply` | `false` | Auto-apply standards with confidence >= 90% without asking |
| `--skip-external` | `false` | Skip GitHub PR analysis and CI/CD sources |
| `--pr-count` | `20` | Number of recent merged PRs to analyze |
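As a sketch of how these flags and their documented defaults combine, assuming a simple `--key=value` syntax (the `parse_args` helper below is illustrative, not the skill's actual implementation):

```python
# Defaults mirror the parameter table above.
DEFAULTS = {"scope": "full", "confidence": 60, "auto_apply": False,
            "skip_external": False, "pr_count": 20}

def parse_args(argv):
    """Parse `--key=value` style flags into a settings dict."""
    settings = dict(DEFAULTS)
    for arg in argv:
        if not arg.startswith("--"):
            continue
        key, _, value = arg[2:].partition("=")
        key = key.replace("-", "_")
        if key not in settings:
            raise ValueError(f"unknown flag: --{key}")
        if isinstance(settings[key], bool):
            # A bare flag (e.g. --auto-apply) means true.
            settings[key] = value in ("", "true")
        elif isinstance(settings[key], int):
            settings[key] = int(value)
        else:
            settings[key] = value
    return settings
```

For example, `parse_args(["--scope=quick", "--auto-apply"])` yields a quick scan with auto-apply enabled and all other defaults intact.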
Scope determines which phases run:
| Scope | Config (P1) | Code (P2) | Docs (P3) | External (P4) |
|---|---|---|---|---|
| `full` | Yes | Yes | Yes | Yes |
| `global` | Yes | Yes (limited) | Yes | Yes |
| `frontend` | FE configs | FE files | Yes | Yes |
| `backend` | BE configs | BE files | Yes | Yes |
| `testing` | Test configs | Test files | Yes | Yes |
| `quick` | Yes | No | No | No |
| `[custom]` | Relevant configs | Filtered files | Yes | Yes |
Custom scope values are matched against existing `.maister/docs/standards/*/` directories and filter the analysis to relevant files.
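A minimal sketch of that matching rule, assuming directory names are compared case-insensitively (the `resolve_scope` helper is hypothetical):

```python
def resolve_scope(scope, standards_dirs):
    """Return the category for a scope value.

    `standards_dirs` are names of existing .maister/docs/standards/*/
    directories; baseline scopes pass through unchanged.
    """
    baseline = {"full", "quick", "global", "frontend", "backend", "testing"}
    if scope in baseline:
        return scope
    # Custom scope: must match an existing standards directory.
    matches = [d for d in standards_dirs if d.lower() == scope.lower()]
    if not matches:
        raise ValueError(f"unknown scope '{scope}': no matching standards directory")
    return matches[0]
```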
| Phase | Subject | activeForm |
|---|---|---|
| 1 | Plan discovery scope | Planning discovery scope |
| 2 | Analyze configuration files | Analyzing configuration files |
| 3 | Mine code patterns | Mining code patterns |
| 4 | Extract documentation standards | Extracting documentation standards |
| 5 | Analyze external sources | Analyzing external sources |
| 6 | Aggregate & deduplicate findings | Aggregating findings |
| 7 | Review findings with user | Reviewing findings |
| 8 | Apply approved standards | Applying standards |
| 9 | Generate summary report | Generating summary |
Task Tracking: At the start of Phase 1, use TaskCreate to register all phases above as pending. Set dependencies: Phases 2-5 are blocked by Phase 1 (they run in parallel after planning); Phase 6 is blocked by Phases 2-5; Phases 7-9 are sequential. At each phase start, TaskUpdate to in_progress; at each phase end, TaskUpdate to completed. For phases skipped due to scope (e.g., Phases 3-4 when --scope=quick), mark them completed with metadata: {skipped: true, reason: "scope=quick"}.
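The dependency rules can be modeled as a small mapping; the `ready_phases` helper below is a sketch of the blocking logic only, not the actual TaskCreate/TaskUpdate tooling:

```python
# Phase number -> phases that must complete first, per the rules above.
BLOCKED_BY = {1: [], 2: [1], 3: [1], 4: [1], 5: [1],
              6: [2, 3, 4, 5], 7: [6], 8: [7], 9: [8]}

def ready_phases(completed):
    """Phases whose blockers are all completed and that are not yet done."""
    done = set(completed)
    return [p for p, deps in BLOCKED_BY.items()
            if p not in done and all(d in done for d in deps)]
```

Note how Phases 2-5 all become ready at once after Phase 1, which is what allows the parallel subagent launch.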
Before launching, verify `.maister/docs/` exists; if not, offer to run `/maister:init` first. Read `.maister/docs/INDEX.md` to identify updates vs creates and avoid duplicates.

CRITICAL: Launch all applicable subagents in ONE message for parallel execution.
Step 1: Determine which phases to run based on scope and flags.
Step 1.5: Create temp output directory — Run `mktemp -d` via Bash to create a unique temp directory for this invocation. Store the path (e.g., `/tmp/abc123`). Each subagent will write its results to a dedicated file in this directory: `{tmpdir}/config.yml`, `{tmpdir}/code.yml`, `{tmpdir}/docs.yml`, `{tmpdir}/external.yml`.
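In Python terms, that step might look like the following sketch (the real step runs `mktemp -d` via the Bash tool; `make_output_dir` is a hypothetical helper):

```python
import tempfile
from pathlib import Path

def make_output_dir():
    """Create a unique temp directory and return per-phase output paths.

    One YAML file per subagent, matching the layout described above.
    """
    tmpdir = Path(tempfile.mkdtemp(prefix="maister-"))
    return {phase: tmpdir / f"{phase}.yml"
            for phase in ("config", "code", "docs", "external")}
```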
Step 2: Read prompt templates
STOP — Do NOT skip this step. Do NOT write prompts from memory.
Use the Read tool to load ONLY the reference files for phases you will execute:
| Phase | Condition | Read This File |
|---|---|---|
| 2: Config Analysis | Always | references/config-analyzer-prompt.md |
| 3: Code Patterns | scope != quick | references/code-pattern-prompt.md |
| 4: Documentation | scope != quick | references/docs-extractor-prompt.md |
| 5: External Sources | --skip-external not set | references/external-analyzer-prompt.md |
SELF-CHECK: Did you read the template files with the Read tool? If not, go back and read them now.
Step 3: Adapt templates — Replace [scope], [confidence], and other placeholders with actual values. Replace the [output_file] placeholder in each template with the actual temp file path for that phase (e.g., {tmpdir}/config.yml).
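A minimal sketch of that substitution, assuming the bracketed `[name]` placeholder style shown above (`fill_template` is an illustrative helper, not the skill's own code):

```python
def fill_template(template, values):
    """Replace [placeholder] tokens with actual values.

    Unknown placeholders are left intact, so a missing value is
    easy to spot in the adapted prompt.
    """
    for key, value in values.items():
        template = template.replace(f"[{key}]", str(value))
    return template
```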
Step 4: Launch subagents in parallel — Use the Task tool with subagent_type: general-purpose for each phase.
❌ WRONG — launching one agent per message, waiting for result, then launching the next. ✅ CORRECT — launching ALL applicable agents (2–4 Task calls) in a SINGLE message.
Step 5: Wait for ALL subagents to complete, then read each temp file using the Read tool to collect findings.
Step 6: Display progress — Show count of findings per phase.
Read references/aggregation-strategy.md for confidence scoring methodology.
- Deduplicate by `category` + `standard_name` — merge evidence and sources.
- Filter out findings below the `--confidence` threshold.
- Display an aggregation summary: total raw findings, unique standards, conflicts detected.
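The merge-and-filter step could be sketched as follows; taking the maximum confidence on merge is an assumption here, since the real scoring methodology lives in `references/aggregation-strategy.md`:

```python
def aggregate(findings, threshold=60):
    """Merge findings sharing category + standard_name, then filter.

    Each finding is a dict with category, standard_name, confidence,
    evidence (list), and sources (list). Merged confidence keeps the
    maximum (an assumption for this sketch).
    """
    merged = {}
    for f in findings:
        key = (f["category"], f["standard_name"])
        if key not in merged:
            merged[key] = {**f, "evidence": list(f["evidence"]),
                           "sources": list(f["sources"])}
        else:
            m = merged[key]
            m["evidence"].extend(f["evidence"])
            m["sources"] += [s for s in f["sources"] if s not in m["sources"]]
            m["confidence"] = max(m["confidence"], f["confidence"])
    return [m for m in merged.values() if m["confidence"] >= threshold]
```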
Step 1: Present full summary table — Before any approval prompts, output ALL findings in a table grouped by confidence level. Each group has a header with count:
### High Confidence (>=80%) — 5 standards
| # | Standard | Category | Score | Sources | Description |
|---|----------|----------|-------|---------|-------------|
| 1 | no-semicolons | global | 92 | config, code, docs | Omit semicolons in all JS/TS files |
| 2 | ... | ... | ... | ... | ... |
### Medium Confidence (60-79%) — 3 standards
...
### Low Confidence (<60%) — 2 standards
...
### Conflicts — 1 detected
| # | Standard | Conflict | Sources A | Sources B |
|---|----------|----------|-----------|-----------|
| 1 | ... | ... | ... | ... |
The Sources column lists all contributing sources for each finding (config, code, docs, PRs, CI, pre-commit). This gives users full visibility before making decisions.
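The grouping into tiers can be sketched as follows; the boundaries follow the headings above, and `bucket_findings` is a hypothetical helper:

```python
def bucket_findings(findings):
    """Group findings into the summary-table confidence tiers."""
    buckets = {"high": [], "medium": [], "low": []}
    for f in findings:
        score = f["confidence"]
        if score >= 80:
            buckets["high"].append(f)
        elif score >= 60:
            buckets["medium"].append(f)
        else:
            buckets["low"].append(f)
    return buckets
```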
Step 2: Approval flow — After the summary table:
High confidence (>= 80%): Use AskUserQuestion offering batch approval ("Apply all N high-confidence standards") or individual drill-down review. For drill-down, show full detail per finding: all evidence items with source attribution, examples (preferred/avoid), and confidence score breakdown (which factors contributed how many points).
Medium confidence (60-79%): Present each individually with full detail (evidence, examples, confidence breakdown). Use AskUserQuestion with Accept/Modify/Skip options per finding.
Low confidence (< threshold): Show the summary table rows only. Offer to expand details or skip all.
Conflicts: Present each conflict showing both sides with their evidence and sources. Use AskUserQuestion to resolve (pick side A, pick side B, skip, or custom).
If --auto-apply is set, automatically approve findings with confidence >= 90% and only prompt for the rest.
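That split might look like this sketch (`partition_for_auto_apply` is illustrative; 90 is the documented cutoff):

```python
def partition_for_auto_apply(findings, auto_apply=False):
    """Split findings into auto-approved vs needing a prompt.

    With --auto-apply, findings at confidence >= 90 skip the prompt;
    when the flag is off, everything goes through review.
    """
    if not auto_apply:
        return [], list(findings)
    auto = [f for f in findings if f["confidence"] >= 90]
    prompt = [f for f in findings if f["confidence"] < 90]
    return auto, prompt
```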
DELEGATION REQUIRED: Do NOT write standard files directly using Write/Edit tools. ALL file operations MUST go through the `docs-operator` subagent (Task tool).

SELF-CHECK before each file operation: "Am I about to write a file directly? STOP — invoke docs-operator via Task tool instead."
For each approved standard:
1. Prepare the content: a `###` heading with a 1-10 line description (excluding code snippets). Group related standards into the same topic file. Add brief code examples only when they clarify the practice.
2. Invoke the `docs-operator` subagent via the Task tool (`subagent_type: maister:docs-operator`) and pass the prepared content. For creates: write a new file. For updates: merge the new findings with the existing file.
3. Wait for completion, then continue with the next standard.
4. After all standards are applied, invoke the `docs-operator` subagent via the Task tool to regenerate INDEX.md. Wait for completion, then continue with step 5.
5. Invoke the `docs-operator` subagent via the Task tool to verify CLAUDE.md integration — ensure the standards directory is referenced. Wait for completion, then display the application summary: created count, updated count, total active.
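The content shape passed to docs-operator could be sketched as follows (the `render_standard` formatter is hypothetical; it only illustrates the heading-plus-description budget):

```python
def render_standard(name, description_lines, example=None):
    """Render one standard as a `###` section string.

    Enforces the 1-10 line description budget; code snippets are
    excluded from the count and appended separately.
    """
    if not 1 <= len(description_lines) <= 10:
        raise ValueError("description must be 1-10 lines")
    parts = [f"### {name}", ""] + list(description_lines)
    if example:
        parts += ["", "```", example, "```"]
    return "\n".join(parts)
```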
Display final results.

Error handling:
| Situation | Strategy |
|---|---|
| `.maister/docs/` missing | Offer `/maister:init`, abort if declined |
| gh CLI unavailable | Skip PR analysis, continue with other sources |
| GitHub API rate limit | Skip PR analysis, note in report |
| Config file parse error | Skip that file, log warning, continue |
| No standards found | Suggest lowering threshold or checking specific scope |
| docs-manager fails | Offer retry/skip/cancel per standard |
| Subagent returns empty | Note in report, proceed with available findings |
| Integrates With | How |
|---|---|
| `docs-manager` skill | Creates/updates standard files, regenerates INDEX.md |
| `implementation-plan-executor` skill | Discovered standards immediately available via INDEX.md |
| `standards-update` command | Complementary: discover = automated bulk, update = manual single |
# Full discovery (default)
/maister:standards-discover
# Quick scan (config files only, ~30-60s)
/maister:standards-discover --scope=quick
# Frontend standards only
/maister:standards-discover --scope=frontend
# High confidence, auto-apply
/maister:standards-discover --confidence=80 --auto-apply
# Skip external analysis (offline/no GitHub)
/maister:standards-discover --skip-external