Supplies critique dimensions for architecture quality reviews: detecting biases, validating ADRs, quality attributes, feasibility, and priorities in design docs.
npx claudepluginhub nwave-ai/nwave --plugin nw

This skill uses the workspace's default tool permissions.
Provides critique dimensions for architecture quality in ADR peer reviews: bias detection, completeness, feasibility, testability, and priority validation. Delivers systematic reviews across 7 dimensions with scored reports that highlight risks and recommendations. Standardizes the review process with consistent criteria: evaluate systems against principles, quality attributes, and operational readiness. Use when conducting architecture reviews.
Pattern: tech chosen by preference, not requirements. Detection: ADR lacks comparison matrix, choice not mapped to requirements, justified only as "best practice." Severity: HIGH.
Pattern: complex/trendy tech without requirement justification. Examples: microservices for a 3-person team, Kafka for 100 req/day, a service mesh where system complexity doesn't warrant one. Detection: complexity exceeds team size/requirements, tech adds resume value rather than solving the problem. Severity: CRITICAL.
Pattern: unproven tech (<6 months, small community) for production. Detection: check maturity, community, LTS, fallback plan. Severity: HIGH.
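The three bias patterns above can be sketched as a heuristic text check over an ADR. This is a minimal sketch: the keywords and the `detect_bias_signals` name are illustrative assumptions, not part of the skill.

```python
import re

def detect_bias_signals(adr_text: str) -> list[dict]:
    """Flag textual signals of the bias patterns above (heuristic only)."""
    findings = []
    text = adr_text.lower()
    # Preference-driven choice: no comparison matrix or alternatives section
    if "comparison" not in text and "alternatives" not in text:
        findings.append({"pattern": "preference-driven choice", "severity": "HIGH",
                         "signal": "no comparison matrix or alternatives section"})
    # Choice justified only as "best practice", not mapped to requirements
    if "best practice" in text and "requirement" not in text:
        findings.append({"pattern": "preference-driven choice", "severity": "HIGH",
                         "signal": "justified only as best practice"})
    # Unproven technology: no maturity, LTS, or fallback discussion
    if not re.search(r"lts|maturity|fallback", text):
        findings.append({"pattern": "unproven technology", "severity": "HIGH",
                         "signal": "no maturity, LTS, or fallback discussion"})
    return findings
```

A real reviewer would combine such signals with judgment; the heuristics only surface candidates for closer reading.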
ADR lacks business problem, technical constraints, or quality attribute requirements. Future maintainers cannot validate. Severity: HIGH.
No alternatives (min 2 required). Each must be evaluated against requirements with rejection rationale. Severity: HIGH.
Omits positive/negative consequences and trade-offs. Quality attribute impact not analyzed. Severity: MEDIUM.
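The three ADR-quality rules above can be expressed as a small completeness check. The heading names (`Context`, `Alternatives Considered`, `Consequences`, `### Option`) are assumptions about the ADR template in use, not prescribed by the skill.

```python
# Assumed ADR section headings; adapt to the template your team uses.
REQUIRED_SECTIONS = {
    "Context": "business problem, constraints, quality attribute requirements",
    "Alternatives Considered": "minimum 2 options with rejection rationale",
    "Consequences": "positive/negative outcomes and trade-offs",
}

def check_adr_completeness(adr_markdown: str) -> list[str]:
    """Return a list of completeness issues found in an ADR document."""
    issues = []
    for heading, expectation in REQUIRED_SECTIONS.items():
        if f"## {heading}" not in adr_markdown:
            issues.append(f"Missing '{heading}' section ({expectation})")
    # Min-2-alternatives rule: count options listed under Alternatives
    if ("## Alternatives Considered" in adr_markdown
            and adr_markdown.count("### Option") < 2):
        issues.append("Fewer than 2 alternatives evaluated")
    return issues
```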
Architecture doesn't address required attributes. Verify: performance (latency, throughput) | scalability | security (auth, data protection) | maintainability (modularity, testability) | reliability (fault tolerance, recovery) | observability (logging, monitoring, alerting). Severity: CRITICAL.
Performance requirements exist but no optimization strategy (caching, indexing, rate limiting, CDN). Severity: CRITICAL.
Requires expertise team lacks. Verify learning curve reasonable, training plan exists. Severity: HIGH.
Infrastructure costs exceed budget. Verify cost estimate exists and aligns. Severity: HIGH.
Architecture prevents effective testing. Components must enable isolated testing with ports/adapters. Severity: CRITICAL.
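The ports/adapters requirement above can be sketched as follows: domain logic depends on an abstract port, so tests substitute an in-memory adapter for the real one. The `PaymentPort`/`checkout` names are hypothetical examples, not part of the skill.

```python
from abc import ABC, abstractmethod

class PaymentPort(ABC):
    """Port: the interface the domain depends on (hypothetical example)."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class StripeAdapter(PaymentPort):
    """Production adapter (name illustrative); talks to a real gateway."""
    def charge(self, amount_cents: int) -> bool:
        raise NotImplementedError("calls the real payment gateway")

class FakePaymentAdapter(PaymentPort):
    """Test adapter: records charges in memory, no network required."""
    def __init__(self):
        self.charged = []
    def charge(self, amount_cents: int) -> bool:
        self.charged.append(amount_cents)
        return True

def checkout(payments: PaymentPort, total_cents: int) -> bool:
    # Domain logic is testable in isolation via the injected port.
    return payments.charge(total_cents)
```

An architecture fails this dimension when domain code instantiates concrete infrastructure directly, leaving no seam for a test adapter.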
Validate roadmap addresses largest bottleneck.
Q1: Largest bottleneck? (timing data must confirm the primary problem)
Q2: Simpler alternatives considered? (rejected alternatives required)
Q3: Constraint prioritization correct? (quantified by impact, constraint-free first)
Q4: Data-justified? (key decisions backed by quantitative data)
Failure conditions: Q1=NO (wrong problem) | Q2=MISSING (no alternatives) | Q3=INVERTED (>50% of solution effort for <30% of the problem) | Q4=NO_DATA for performance claims
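The Q1-Q4 failure gate above can be written as a single predicate over the report's enumerated assessment values; any listed failure condition blocks approval.

```python
def priority_validation_failed(q1: str, q2: str, q3: str, q4: str) -> bool:
    """True if any Q1-Q4 failure condition from the checklist holds."""
    return (q1 == "NO"          # wrong problem targeted
            or q2 == "MISSING"  # no rejected alternatives documented
            or q3 == "INVERTED" # effort misallocated to a minor constraint
            or q4 == "NO_DATA") # performance claim without measurements
```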
review_id: "arch_rev_{timestamp}"
reviewer: "solution-architect-reviewer"
artifact: "docs/architecture/architecture.md, docs/adrs/*.md"
iteration: {1 or 2}
strengths:
  - "{Positive decision with ADR reference}"
issues_identified:
  architectural_bias:
    - issue: "{pattern detected}"
      severity: "critical|high|medium|low"
      location: "{ADR or section}"
      recommendation: "{actionable fix}"
  decision_quality:
    - issue: "{ADR quality issue}"
      severity: "high"
      location: "ADR-{number}"
      recommendation: "{add missing section}"
  completeness_gaps:
    - issue: "{quality attribute not addressed}"
      severity: "critical"
      recommendation: "{add architecture section}"
  implementation_feasibility:
    - issue: "{capability, budget, or testability concern}"
      severity: "high"
      recommendation: "{simplify or add mitigation}"
priority_validation:
  q1_largest_bottleneck:
    evidence: "{data or NOT PROVIDED}"
    assessment: "YES|NO|UNCLEAR"
  q2_simple_alternatives:
    assessment: "ADEQUATE|INADEQUATE|MISSING"
  q3_constraint_prioritization:
    assessment: "CORRECT|INVERTED|NOT_ANALYZED"
  q4_data_justified:
    assessment: "JUSTIFIED|UNJUSTIFIED|NO_DATA"
approval_status: "approved|rejected_pending_revisions|conditionally_approved"
critical_issues_count: {number}
high_issues_count: {number}
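A plausible mapping from the issue counts to `approval_status` is sketched below. The thresholds are an assumption for illustration (any critical issue blocks approval; high issues yield conditional approval); the skill itself does not define the gating rule.

```python
def approval_status(critical_issues: int, high_issues: int) -> str:
    """Derive approval_status from issue counts (assumed thresholds)."""
    if critical_issues > 0:
        return "rejected_pending_revisions"
    if high_issues > 0:
        return "conditionally_approved"
    return "approved"
```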