From architecture-review
Evaluate architecture proposals using quality attribute analysis. Use this skill whenever the user asks for a design review, wants feedback on an architecture, is assessing trade-offs between approaches, or needs a pre-implementation design check -- even if they just say "take a look at this design" or "what do you think about this approach."
Install with: `npx claudepluginhub crazymeal/claude-architect-marketplace --plugin architecture-review`

This skill is limited to using the following tools:
Provide rigorous, constructive evaluation of architectural proposals.
Ask these questions before forming opinions -- jumping to critique without context produces generic feedback that wastes everyone's time:
Reference shared/core-knowledge.md when discussing quality attribute metrics and trade-off pairs.
Key questions per attribute:
Every architectural decision trades one quality for another. Surface these explicitly:
When quality attributes conflict, document the tension explicitly: state which attribute wins, why it wins in this context, and suggest fitness functions to monitor the attribute that lost.
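As a hypothetical illustration (the decision, numbers, and thresholds below are placeholders, not part of this skill), such a tension record might look like:

```markdown
## Decision: Synchronous writes to the audit store

- Winner: data integrity (auditability)
- Loser: write latency (p99 rises from ~40 ms to ~120 ms)
- Why here: regulatory audit requirements outweigh checkout latency for this workflow
- Fitness function: alert if checkout p99 write latency exceeds 200 ms for 5 minutes
```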
Fitness functions are automated checks that validate architectural characteristics over time -- they catch architectural drift before it becomes a crisis. Suggest concrete, measurable checks like:
Read references/fitness-functions-examples.md for a catalog organized by quality attribute.
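As one concrete sketch (the layer names and banned-prefix rule are assumptions for illustration, not drawn from the catalog), a dependency-direction fitness function can be a few lines of Python run in CI:

```python
import ast

def forbidden_imports(source, banned_prefixes=("infrastructure",)):
    """Return the banned modules imported by the given source code.

    Intended as a CI check that inner layers (e.g. domain code) never
    import outer layers such as infrastructure.
    """
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name.startswith(banned_prefixes)]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.startswith(banned_prefixes):
                hits.append(node.module)
    return hits
```

Run over every file in the protected layer, this fails the build the moment an inner layer reaches outward, long before the coupling calcifies.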
Quick review: Strengths → Concerns → Questions → Suggestions
Detailed review: QA analysis → Trade-offs → Risks → Prioritized recommendations
Write review artifacts to docs/reviews/ following shared/output-conventions.md.
| Anti-Pattern | Why It Matters | How to Spot It |
|---|---|---|
| Distributed monolith | Worst of both worlds: distributed complexity without independent deployability | Services that must deploy together or share databases |
| Missing failure handling | Systems that only work on the happy path collapse under real conditions | No circuit breakers, no retry policies, no fallback behavior |
| Implicit QA assumptions | Unstated assumptions become unpleasant surprises in production | No explicit SLAs, no capacity planning, "it should be fast enough" |
| Undocumented trade-offs | Future teams re-litigate decisions without context | Decisions made but no ADRs explaining why |
| Over-engineering | Complexity without corresponding benefit wastes effort and creates maintenance burden | Abstractions with one implementation, microservices for a 3-person team |
| Missing observability | Can't fix what you can't see | No structured logging, no distributed tracing, no alerting strategy |
| Tight external coupling | External changes break your system | Direct integration without anti-corruption layers |
Read references/anti-patterns-guide.md for detailed detection heuristics and remediation strategies.
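The "missing failure handling" row is often the cheapest to remediate incrementally. A minimal circuit breaker, sketched here with illustrative thresholds (not a production implementation), shows the shape of the fix:

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive errors; retries one trial
    call after `reset_after` seconds. Calls fall back while open."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()      # open: short-circuit to fallback
            self.opened_at = None      # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0              # success closes the breaker
        return result
```

Reviewers can look for this shape (plus retry policies with backoff and explicit fallback behavior) wherever a design calls out to external services.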