From the architect plugin
Evaluate a technology, framework, or tool against weighted criteria. Produces a structured comparison with scored trade-offs and a recommendation. Use when choosing between technologies or assessing fitness of a single option.
`npx claudepluginhub hpsgd/turtlestack --plugin architect`

This skill is limited to using the following tools:
Evaluate $ARGUMENTS for fitness against the project's needs.
Before researching, establish what matters. Start with these defaults and adjust for the specific evaluation:
| Criterion | Weight (1–5) | Why it matters |
|---|---|---|
| Maturity / stability | | Production readiness, breaking change history |
| Community / ecosystem | | Long-term support, plugins, answers on Stack Overflow |
| Team familiarity | | Ramp-up cost, hiring pool |
| Performance | | Meets non-functional requirements under load |
| Maintenance burden | | Ongoing cost of ownership, upgrade friction |
| Lock-in risk | | Can we migrate away? Standard interfaces or proprietary? |
| Cost | | Licensing, infrastructure, operational spend |
| Integration | | Works with existing stack, language, CI/CD |
Rules for criteria:
Output: A completed criteria table with weights and justifications.
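The criteria table can be kept as a small data structure so weights are easy to adjust per evaluation. A minimal sketch, assuming integer weights on the 1–5 scale; the weight values below are illustrative placeholders, not recommended defaults:

```python
# Default evaluation criteria mapped to weights (1-5).
# The specific weight values here are placeholders for illustration.
CRITERIA = {
    "Maturity / stability": 5,
    "Community / ecosystem": 3,
    "Team familiarity": 4,
    "Performance": 3,
    "Maintenance burden": 4,
    "Lock-in risk": 2,
    "Cost": 3,
    "Integration": 4,
}

def validate_criteria(criteria: dict[str, int]) -> None:
    """Raise if any weight is not an integer on the 1-5 scale."""
    for name, weight in criteria.items():
        if not (isinstance(weight, int) and 1 <= weight <= 5):
            raise ValueError(f"{name}: weight {weight!r} must be an int in 1-5")

validate_criteria(CRITERIA)
```

Keeping the weights in one place makes it obvious when a criterion has been silently dropped or double-counted between evaluations.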
For each technology under evaluation, produce a structured research brief:
### [Technology name]
**What:** [One-sentence description — what it is and what problem it solves]
**Version:** [Current stable version, release date]
**License:** [License type — MIT, Apache 2.0, BSL, proprietary, etc.]
**Notable adopters:** [3–5 companies or projects using it in production]
#### Maturity signals
- First stable release: [date]
- Release cadence: [frequency]
- Breaking changes in last 2 major versions: [count and severity]
- Open issues / PRs: [count, trend]
#### Community signals
- GitHub stars: [count] | npm/PyPI weekly downloads: [count]
- Stack Overflow questions: [count, answer rate]
- Active maintainers: [count, bus factor]
- Discord/forum activity: [qualitative — active, moderate, quiet]
#### Known limitations
- [Limitation 1 — sourced from issues, forums, or migration guides]
- [Limitation 2]
- [Search: "[tech] problems", "[tech] limitations", "[tech] vs alternatives"]
Output: One research brief per option.
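Several of the maturity and community signals can be filled in mechanically from the GitHub REST API's repository endpoint (`GET /repos/{owner}/{repo}`). A sketch that extracts them from an already-fetched response body; the payload below is a truncated, made-up example (`example/option-a` is hypothetical), and not every repository exposes every field:

```python
import json

# Truncated, hypothetical example of a GitHub /repos/{owner}/{repo} response.
sample = json.loads("""{
    "full_name": "example/option-a",
    "created_at": "2018-03-14T00:00:00Z",
    "pushed_at": "2024-11-02T12:00:00Z",
    "stargazers_count": 41200,
    "open_issues_count": 312,
    "license": {"spdx_id": "MIT"}
}""")

def repo_signals(repo: dict) -> dict:
    """Map raw GitHub API fields onto the research-brief signal names."""
    return {
        "license": (repo.get("license") or {}).get("spdx_id"),
        "github_stars": repo.get("stargazers_count"),
        "open_issues_and_prs": repo.get("open_issues_count"),
        "first_commit": repo.get("created_at"),
        "last_push": repo.get("pushed_at"),
    }

signals = repo_signals(sample)
```

Signals like bus factor, answer rate, and forum activity still need human judgment; this only automates the countable parts of the brief.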
Rate each option against each criterion on a 1–5 scale. Multiply each raw score by the criterion's weight to get its weighted score, then sum the weighted scores per option for the total.
| Criterion | Weight | Option A raw (1–5) | Option A weighted | Option B raw (1–5) | Option B weighted |
|---|---|---|---|---|---|
| Maturity / stability | 5 | 4 | 20 | 3 | 15 |
| Community / ecosystem | 3 | 5 | 15 | 4 | 12 |
| ... | | | | | |
| **Total** | | | **X** | | **Y** |
Rules for scoring:
Output: Completed scoring matrix with justifications.
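The matrix arithmetic is simple enough to script. A minimal sketch that reproduces the two worked rows from the example table above (weighted = raw × weight, summed per option); the option and criterion names are the table's placeholders, and higher totals are assumed to be better:

```python
# Criterion weights and raw scores taken from the example matrix rows.
weights = {"Maturity / stability": 5, "Community / ecosystem": 3}
raw_scores = {
    "Option A": {"Maturity / stability": 4, "Community / ecosystem": 5},
    "Option B": {"Maturity / stability": 3, "Community / ecosystem": 4},
}

def weighted_totals(weights: dict, raw_scores: dict) -> dict:
    """Sum raw * weight across criteria for each option."""
    return {
        option: sum(weights[c] * scores[c] for c in weights)
        for option, scores in raw_scores.items()
    }

totals = weighted_totals(weights, raw_scores)
# Option A: 4*5 + 5*3 = 35; Option B: 3*5 + 4*3 = 27
```

Scripting the sum keeps the totals honest when weights are revised late in the evaluation; only the justifications need rewriting, not the arithmetic.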
No technology wins on every dimension. Explicitly state:
### Trade-offs
| Choosing Option A means... | Choosing Option B means... |
|---|---|
| [Advantage A has over B] | [Advantage B has over A] |
| [Cost of choosing A] | [Cost of choosing B] |
### Risks to monitor post-adoption
| Risk | Trigger signal | Mitigation |
|---|---|---|
| [Risk 1] | [What you'd see if this risk materialises] | [What you'd do] |
| [Risk 2] | [Trigger] | [Mitigation] |
Output: Trade-off comparison and risk register.
State the recommendation clearly:
### Recommendation
**Choose: [Option]**
**Primary reason:** [The single most important factor]
**Secondary reasons:** [Supporting factors]
**What we sacrifice:** [Explicit trade-off acknowledgement]
**Reconsideration triggers:** [Conditions that would change this recommendation]
If the evaluation warrants a formal record, suggest writing an ADR using `/architect:write-adr`.
If neither option is clearly better, say so — recommend a time-boxed spike or prototype instead of a forced choice.
# Technology Evaluation: [subject]
## Evaluation Criteria
[Weighted criteria table from Step 1]
## Research
### [Option A]
[Research brief]
### [Option B]
[Research brief]
## Scoring Matrix
[Weighted scoring table from Step 3]
## Trade-offs
[Trade-off table from Step 4]
## Risks
[Risk register from Step 4]
## Recommendation
[Recommendation from Step 5]
## Recommended Follow-ups
- [ ] ADR for this decision (if applicable)
- [ ] Spike/prototype for low-confidence areas
- [ ] Re-evaluate on [date or trigger condition]