From blueprint-mode
Check code against documented specs, patterns, anti-patterns, and ADR decisions. Use when the user wants to verify consistency, audit the codebase, check spec compliance, or find violations.
Install: `npx claudepluginhub rickardp/blueprint-mode --plugin blueprint-mode`
Check codebase against documented specs, patterns, anti-patterns, and architectural decisions using parallel sub-agents.
Invoked by: /blueprint:validate
TOOL USAGE: You MUST invoke the AskUserQuestion tool for scope selection if the user has not specified a scope.
When you see JSON examples in this skill, they are parameters for the AskUserQuestion tool — invoke it, don't output the JSON as text.
FIRST ACTION: Enter plan mode by calling the EnterPlanMode tool.
Gather repo structure and blueprint inventory in parallel. Use Glob and Read directly (not sub-agents) — this should be fast.
1a. Blueprint inventory (parallel Glob calls):
- `docs/specs/*.md` and `docs/specs/**/*.md` — specs (tech-stack, product, boundaries, features, NFRs)
- `docs/adrs/*.md` — architecture decision records
- `patterns/bad/**/*.md` and `patterns/good/**/*.md` — documented patterns

If no blueprint files exist, respond "No Blueprint structure found. Run /blueprint:onboard first." and stop.
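The inventory gate above can be sketched in shell — a minimal illustration only, since the skill performs this step through parallel Glob calls rather than a shell command, and `blueprint_exists` is a hypothetical helper name:

```shell
# Sketch of step 1a's gate: does any Blueprint markdown exist?
# Paths mirror the inventory globs above; run from the repo root.
blueprint_exists() {
  # Any .md file under a blueprint directory counts as a hit.
  find docs/specs docs/adrs patterns -name '*.md' 2>/dev/null | grep -q .
}

if blueprint_exists; then
  echo "Blueprint files present — proceed to 1b."
else
  echo "No Blueprint structure found. Run /blueprint:onboard first."
fi
```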
1b. Repo structure (parallel Glob calls):
- `**/*.md` — all markdown files (for documentation drift detection)
- `.github/workflows/*.yml` or `.gitlab-ci.yml` or `Jenkinsfile` or `bitbucket-pipelines.yml` — CI/CD
- `infra/**/*` or `terraform/**/*` or `cdk/**/*` or `pulumi/**/*` or `sst.config.*` or `Dockerfile*` or `docker-compose*` — infrastructure
- `package.json` or `requirements.txt` or `go.mod` or `Cargo.toml` or `pyproject.toml` — dependency manifests
- source directories (`src/`, `lib/`, `app/`, `functions/`, `common/`)

1c. Read blueprint files: Read all discovered spec, ADR, boundary, and pattern files. Extract key validation rules into a structured context block:
=== BLUEPRINT CONTEXT ===
TECH STACK (from docs/specs/tech-stack.md):
- Runtime: [X]
- Framework: [X]
- Database: [X]
- Commands: install=[X], dev=[X], test=[X], lint=[X]
BOUNDARIES (from docs/specs/boundaries.md):
- Always: [rule1], [rule2], ...
- Never: [rule1], [rule2], ...
- Ask First: [rule1], [rule2], ...
ADR DECISIONS:
- ADR-001 [title]: Chose [X], rejected [Y, Z]
- ADR-002 [title]: Chose [X], rejected [Y, Z]
- ...
PATTERNS:
- Anti-patterns: [name1: description], [name2: description], ...
- Good patterns: [name1: key elements], ...
FEATURES (from docs/specs/features/):
- [name] (status: Active|Planned|Deprecated, maturity: Exploring|Building|Hardening|Stable, module: path, ADRs: [...])
- ...
=== END CONTEXT ===
Branch Detection:
Run `git branch --show-current`. If the branch is not main/master, run `git diff --name-only main...HEAD 2>/dev/null || git diff --name-only master...HEAD 2>/dev/null`. If on a feature branch with changes, use AskUserQuestion:
{
"questions": [{
"question": "You're on branch '[branch-name]' with [N] changed files vs main. What should I validate?",
"header": "Scope",
"options": [
{"label": "Branch changes (Recommended)", "description": "Only validate files changed on this branch vs main/master"},
{"label": "All source", "description": "Validate entire codebase excluding node_modules, dist, build"},
{"label": "Specific directory", "description": "I'll specify a path to validate"}
],
"multiSelect": false
}]
}
If on main/master or no branch changes:
{
"questions": [{
"question": "What should I validate?",
"header": "Scope",
"options": [
{"label": "All source (Recommended)", "description": "Validate entire codebase excluding node_modules, dist, build"},
{"label": "Recent changes", "description": "Only files modified in last commit or uncommitted"},
{"label": "Specific directory", "description": "I'll specify a path to validate"}
],
"multiSelect": false
}]
}
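The branch-detection step above amounts to the following shell sketch. It is illustrative only — the skill runs the git commands via its own tools, and `detect_scope` is a hypothetical helper name:

```shell
# Sketch of branch detection: pick a scope default from the current branch.
detect_scope() {
  branch=$(git branch --show-current 2>/dev/null)
  if [ -n "$branch" ] && [ "$branch" != "main" ] && [ "$branch" != "master" ]; then
    # Diff against main, falling back to master, as in the command above.
    changed=$(git diff --name-only main...HEAD 2>/dev/null \
           || git diff --name-only master...HEAD 2>/dev/null)
    n=$(printf '%s\n' "$changed" | grep -c .)
    echo "feature branch '$branch': $n changed files"
  else
    echo "on ${branch:-detached HEAD}: validate all source by default"
  fi
}
```

On a feature branch this yields the `[branch-name]` and `[N]` values interpolated into the first AskUserQuestion prompt.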
Based on discovery results, launch one Task agent per validation domain — all in a single message so they run concurrently. Use subagent_type: "Explore" and run_in_background: true for each.
Every agent prompt MUST include:
Launch when: Source directories exist.
Prompt must instruct the agent to:
`patterns/good/`.

Launch when: `docs/specs/features/*.md` exist.
Prompt must instruct the agent to:
- Exploring with substantial code and tests → suggest advancing to Building/Hardening
- Stable with many open TODOs or missing tests → flag as inconsistent
- `maturity` field → flag as needing update

Launch when: Markdown files exist outside `docs/specs/`, `docs/adrs/`, `patterns/` OR any blueprint files exist.
Prompt must instruct the agent to:
- `.md` files outside the Blueprint structure (CLAUDE.md, README.md, guides, `.claude/*.md`, etc.)
- (e.g. `npm install` when an ADR chose Bun)
- `docs/specs/features/`
- `docs/specs/non-functional/`
- `docs/adrs/`

Launch when: CI/CD config files detected (`.github/workflows/`, `.gitlab-ci.yml`, etc.).
Prompt must instruct the agent to:
Launch when: Infrastructure files detected (infra/, terraform/, Dockerfile, sst.config.*, etc.).
Prompt must instruct the agent to:
Launch when: Source directories exist.
Prompt must instruct the agent to:
`docs/specs/features/`. Code that implements user-facing behavior should trace to a feature requirement, not only to an ADR. ADRs capture why a technical choice was made, but the what (functional behavior) belongs in a feature spec.

- `related_adrs` linking back
- `## User Stories` section
- `## Requirements` section
- `<!-- TODO: -->` or "TBD" markers indicating incomplete sections
- `related_adrs` when the feature clearly depends on architectural decisions
- `docs/specs/non-functional/` exists and covers key categories (performance, security, scalability, reliability). Flag missing categories as Low if the codebase is small, Medium if infrastructure or deployment configs exist.

Read the output from each background agent using the Read tool on their output files. Collect all findings into a unified list.
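One of the incomplete-spec checks above — the `<!-- TODO: -->` / "TBD" marker scan — reduces to a single grep. A sketch only; `scan_incomplete_specs` is a hypothetical name and the agent performs this via Grep:

```shell
# List feature specs that still contain TODO/TBD markers.
scan_incomplete_specs() {
  grep -rlE '<!-- TODO:|TBD' docs/specs/features/ 2>/dev/null
}
```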
Present findings in a unified report ranked by severity:
| Severity | Description |
|---|---|
| Critical | Security vulnerabilities, boundary "Never Do" violations, secrets in config |
| High | Tech stack mismatches, stale agent instructions (CLAUDE.md), ADR violations |
| Medium | Pattern inconsistencies, undeclared dependencies, doc drift in guides, misclassified content (e.g., requirements in ADRs, architectural decisions in feature specs), ADR-only features lacking a feature spec, missing NFR categories (when infra exists) |
| Low | Style preferences, minor terminology drift, missing test coverage, incomplete feature spec sections (TBD markers), missing NFR categories (small codebase) |
Report format:
## Blueprint Validation Report
### Source Code
**Tech Stack:** [findings]
**Boundary Compliance:** [findings]
**Pattern Violations:** [findings]
**ADR Compliance:** [findings]
### Features
**Feature Coverage:**
| Feature | Spec Status | Module | Evidence |
|---------|-------------|--------|----------|
| ... | ... | ... | ... |
**Orphaned Modules:** [findings]
### Documentation Drift
| File | Issue | Severity | Blueprint Source |
|------|-------|----------|-----------------|
| ... | ... | ... | ... |
### Content Classification
| File | Misplaced Content | Should Be In | Severity |
|------|-------------------|--------------|----------|
| ... | ... | ... | ... |
### CI/CD
[findings if agent was launched, otherwise "No CI/CD config detected"]
### Infrastructure
[findings if agent was launched, otherwise "No infrastructure config detected"]
### Requirements Gaps
**Unspecified Implementations:**
| Module | Has Feature Spec | Has ADR Only | Suggested Action |
|--------|-----------------|--------------|------------------|
| ... | ... | ... | ... |
**Incomplete Feature Specs:**
| Feature | Missing Section | Severity |
|---------|----------------|----------|
| ... | ... | ... |
**NFR Coverage:**
| Category | Status |
|----------|--------|
| Performance | Documented / Missing |
| Security | Documented / Missing |
| Scalability | Documented / Missing |
| Reliability | Documented / Missing |
### Summary
- Critical: [N] | High: [N] | Medium: [N] | Low: [N]
- Agents run: [list of domains scanned]
- Domains skipped: [list not applicable to this repo]
If the Requirements Gaps agent found gaps, interview the user using AskUserQuestion.
For each unspecified implementation or ADR-only feature found, present a batch (up to 5 at a time):
{
"questions": [{
"question": "These modules have no feature spec. Which ones should get one?",
"header": "Unspecified Implementations",
"options": [
{"label": "[module1]", "description": "Currently only referenced by ADR-NNN"},
{"label": "[module2]", "description": "No blueprint reference at all"},
{"label": "Skip for now", "description": "I'll handle these later"}
],
"multiSelect": true
}]
}
For selected modules, create feature specs using the template from _templates/TEMPLATES.md with TBD markers for unknown sections. Link any related ADRs in the related_adrs frontmatter field.
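A stub spec of that shape might look like the sketch below. The frontmatter fields follow the context block earlier in this skill, but the authoritative layout is whatever `_templates/TEMPLATES.md` defines; `create_stub_spec` is a hypothetical helper:

```shell
# Write a minimal feature spec with TBD markers and linked ADRs.
create_stub_spec() {  # usage: create_stub_spec <name> <module> [adr ...]
  name=$1; module=$2; shift 2
  mkdir -p docs/specs/features
  {
    echo '---'
    echo 'status: Active'
    echo 'maturity: Exploring'
    echo "module: $module"
    printf 'related_adrs: [%s]\n' "$(IFS=,; echo "$*")"
    echo '---'
    printf '# %s\n\n## User Stories\n\nTBD\n\n## Requirements\n\nTBD\n' "$name"
  } > "docs/specs/features/$name.md"
}
```

For example, `create_stub_spec payments src/payments ADR-003` writes `docs/specs/features/payments.md` with `related_adrs: [ADR-003]` and TBD sections.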
For incomplete feature specs (missing user stories, requirements, or acceptance criteria), ask:
{
"questions": [{
"question": "[Feature] is missing [sections]. Want to fill these in now?",
"header": "Incomplete Feature Specs",
"options": [
{"label": "Yes, interview me", "description": "I'll answer questions to complete the spec"},
{"label": "Add TBD markers", "description": "Mark sections as TODO for later"},
{"label": "Skip", "description": "Leave as-is for now"}
],
"multiSelect": false
}]
}
If the user chooses "Yes, interview me", ask targeted content questions about user stories, requirements, and acceptance criteria for that feature. Create or update the spec with their answers.
For missing NFR categories, ask:
{
"questions": [{
"question": "No NFR specs found for these categories. Which should I create?",
"header": "Missing Non-Functional Requirements",
"options": [
{"label": "Performance", "description": "Latency, throughput, response times"},
{"label": "Security", "description": "Auth, encryption, data protection"},
{"label": "Scalability", "description": "Load handling, growth capacity"},
{"label": "Reliability", "description": "Uptime, recovery, fault tolerance"},
{"label": "Skip for now", "description": "I'll handle NFRs later"}
],
"multiSelect": true
}]
}
For selected NFR categories, create files in docs/specs/non-functional/ using the template with TBD markers.
- `/blueprint:decide` to document the actual choice
- `/blueprint:good-pattern`
- `/blueprint:validate` → Full validation (all domains in parallel)
- `/blueprint:validate specs` → Source code + features agents only
- `/blueprint:validate docs` → Documentation drift agent only
- `/blueprint:validate features` → Features agent only
- `/blueprint:validate adrs` → ADR compliance checks across all domains
- `/blueprint:validate src/auth/` → a specific directory only