This skill should be used when the user asks to "explain system tradeoffs", "analyze architecture tradeoffs", "what tradeoffs does this system make", "reverse-engineer design decisions", "audit distributed system design", or "explain the design choices in this codebase". Also triggers when the user mentions a tradeoff axis by name (e.g., "consistency vs availability", "latency vs throughput", "CAP theorem", "PACELC", "sharding tradeoffs", "resilience patterns", "data distribution strategy"). Supports analyzing all six axes at once or focusing on a single axis.
npx claudepluginhub florianbuetow/claude-code --plugin explain-system-tradeoffs

This skill uses the workspace's default tool permissions.
Reverse-engineer distributed system tradeoffs from code, configuration, deployment manifests, and architecture artifacts. Produce an evidence-based report that explains what the system prioritizes, what it sacrifices, where choices appear deliberate versus accidental, and what risks or misalignments deserve attention.
Every distributed system encodes its design tradeoffs in artifacts hiding in plain sight — configuration files, schema definitions, deployment manifests, timeout values, retry policies, and code patterns. This skill reads those artifacts like an architectural blueprint.
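As a sketch of the idea, each artifact can be read as a one-line tradeoff signal. The artifact strings and readings below are hypothetical examples, not drawn from any particular system:

```python
# Hypothetical mapping from config artifacts to the tradeoff each one hints at.
# Keys and readings are illustrative, not taken from any real codebase.
ARTIFACT_HINTS = {
    "replication.ack=all": "favors consistency over write availability",
    "replication.ack=1": "favors write availability over durability",
    "cache.ttl=300s": "accepts up to 5 minutes of staleness for read latency",
    "retry.max_attempts=5": "favors availability; risks retry storms under load",
    "request.deadline=200ms": "bounds tail latency; sheds slow work",
}

def hint_for(artifact: str) -> str:
    """Return the tradeoff hint for a known artifact, or a default."""
    return ARTIFACT_HINTS.get(artifact, "no known tradeoff signature")
```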
When evaluating evidence, use three tiers to weigh confidence:

- **Tier A (hard commitment):** SLA language, quorum rules, schema invariants
- **Tier B (mechanism evidence):** protocols, configs, GC flags, compaction settings
- **Tier C (operational signature):** dashboards, alerts, SLOs, runbooks
When indicators disagree, prefer artifacts closest to runtime behaviour (Tier C and B) over architecture documentation that may be stale (Tier A language in old design docs).
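One way to make that preference concrete is a tier-weighted vote. The numeric weights here are an illustrative assumption, not part of the skill:

```python
# Tier-weighted vote: Tier C (operational signatures) and Tier B (mechanisms)
# outweigh Tier A prose that may be stale. Weights are assumed for illustration.
TIER_WEIGHT = {"C": 3, "B": 2, "A": 1}

def resolve(evidence: list[tuple[str, str]]) -> str:
    """evidence: (tier, position) pairs; returns the position with most weight."""
    scores: dict[str, int] = {}
    for tier, position in evidence:
        scores[position] = scores.get(position, 0) + TIER_WEIGHT[tier]
    return max(scores, key=scores.get)
```

For example, a stale design doc claiming strong consistency (Tier A) is outvoted by mechanism and operational evidence pointing at eventual consistency.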
Request a full analysis or focus on a single tradeoff axis:
| Command Pattern | Axis | Reference |
|---|---|---|
| `explain-system-tradeoffs` | All six axes | All references |
| `explain-system-consistency-tradeoffs` | Consistency & Availability | `references/consistency.md` |
| `explain-system-latency-tradeoffs` | Latency & Throughput | `references/latency.md` |
| `explain-system-data-tradeoffs` | Data Distribution | `references/data-distribution.md` |
| `explain-system-transaction-tradeoffs` | Transaction Boundaries & Coordination | `references/transactions.md` |
| `explain-system-resilience-tradeoffs` | Resilience & Failure Isolation | `references/resilience.md` |
| `explain-system-operations-tradeoffs` | Observability, Security & Cost | `references/operations.md` |
When no subcommand is specified, default to analyzing all six axes. When a tradeoff axis is mentioned by name or concept (even without the command prefix), match it to the appropriate subcommand.
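A minimal sketch of that matching, assuming naive substring keywords (the keyword lists are illustrative and incomplete):

```python
# Illustrative router from user phrasing to the matching subcommand.
# Keyword lists are assumptions; substring matching is deliberately naive.
AXIS_KEYWORDS = {
    "explain-system-consistency-tradeoffs": ["consistency", "availability", "cap", "pacelc"],
    "explain-system-latency-tradeoffs": ["latency", "throughput"],
    "explain-system-data-tradeoffs": ["sharding", "partition", "data distribution"],
    "explain-system-transaction-tradeoffs": ["transaction", "saga", "outbox"],
    "explain-system-resilience-tradeoffs": ["resilience", "circuit breaker", "retry"],
    "explain-system-operations-tradeoffs": ["observability", "security", "cost"],
}

def route(request: str) -> str:
    text = request.lower()
    for command, keywords in AXIS_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return command
    return "explain-system-tradeoffs"  # default: analyze all six axes
```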
When a single axis is requested (e.g., explain-system-consistency-tradeoffs),
execute the analysis directly in the main agent:
When all six axes are requested (explain-system-tradeoffs), use parallel
subagents to analyze each axis concurrently. This is faster and produces
better results because each subagent can focus deeply on one axis.
CRITICAL — How parallel execution works: The Task tool runs subagents in parallel ONLY when multiple Task tool calls appear in the SAME response message. If you emit them across separate messages, they run sequentially. You MUST include all six Task tool calls in a single response to get concurrency.
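The same submit-together-then-wait shape can be pictured with an ordinary thread pool. This is only an analogy in Python, not the Task tool's API; the analyze function is a stand-in for a subagent:

```python
from concurrent.futures import ThreadPoolExecutor

AXES = ["consistency", "latency", "data", "transactions", "resilience", "operations"]

def analyze(axis: str) -> str:
    # Stand-in for one subagent's work on a single axis.
    return f"{axis}: report"

# Submit all six before collecting any result -- the analogue of putting
# all six Task calls in one response message. Awaiting each result before
# submitting the next would serialize the work.
with ThreadPoolExecutor(max_workers=6) as pool:
    futures = [pool.submit(analyze, axis) for axis in AXES]
    reports = [f.result() for f in futures]
```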
Determine what code, configuration, or architecture to analyze:
Resolve the target to a concrete set of paths before launching subagents. This MUST be done before Step 2 — subagents get their own isolated context window and cannot see the conversation history or resolve ambiguous targets.
Emit exactly six Task tool calls in a single response message. This is what triggers concurrent execution. Do NOT emit them one at a time.
Technical requirements for each Task call:
- `subagent_type`: "general-purpose"
- `description`: Short label (e.g., "Analyze consistency tradeoffs")
- `prompt`: A fully self-contained prompt (see template below). Each subagent gets its own 200k context window and cannot see the main conversation, so the prompt must include everything it needs.

Each subagent prompt must include:
The six subagents and their reference files:
| Subagent | Reference to read | Focus |
|---|---|---|
| Consistency & Availability | references/consistency.md | CAP/PACELC position, replication, quorum, cache freshness, conflict resolution |
| Latency & Throughput | references/latency.md | GC tuning, thread pools, batching, deadlines, hedging, storage engines, rate limiting |
| Data Distribution | references/data-distribution.md | Shard keys, partition strategies, replication topology, data sovereignty |
| Transaction Boundaries | references/transactions.md | Monolith vs microservices, sagas, outbox, schema evolution, API contracts, dependencies |
| Resilience & Failure Isolation | references/resilience.md | Circuit breakers, retries, bulkheads, chaos engineering, progressive delivery, service mesh |
| Observability, Security & Cost | references/operations.md | Tracing, SLOs, mTLS, audit trails, compliance, cost/reliability topology |
Subagent prompt template (adapt the axis name, reference path, and focus for each of the six — but keep the structure identical):
Analyze the distributed system tradeoffs for the CONSISTENCY & AVAILABILITY axis
in the codebase at: <TARGET_PATHS>
STEP 1: Read the reference file at:
<ABSOLUTE_PATH_TO_SKILL_DIR>/references/consistency.md
STEP 2: Scan the target codebase for indicators described in the reference.
Search configuration files, code patterns, deployment manifests, and schema
definitions. Use Glob, Grep, and Read tools to find evidence.
STEP 3: For each piece of evidence found, classify it:
- What: The specific artifact (file path, config key, code pattern)
- Tier: A (hard commitment — SLA language, quorum rules, schema invariants),
B (mechanism evidence — protocols, configs, GC flags, compaction),
or C (operational signature — dashboards, alerts, SLOs, runbooks)
- Reveals: Which end of the tradeoff spectrum the system leans toward
- Deliberate vs Default: Whether intentional (asymmetric config, tuned values)
or accidental (framework defaults, copy-pasted settings)
STEP 4: Produce your findings in EXACTLY this format:
## Consistency & Availability
**Position:** [Where the system sits on the consistency/availability spectrum]
**Confidence:** HIGH | MEDIUM | LOW
### Evidence
[Numbered list of evidence items with Tier, File, and Detail for each]
### Assessment
[1-2 paragraphs on the tradeoff position and whether it appears deliberate]
### Risks & Recommendations
[Any risks found, each with: Severity (HIGH/MEDIUM/LOW), Location, Issue,
Recommendation. If no risks found, state "No significant risks identified."]
IMPORTANT: Return ONLY the per-axis report above. Do NOT produce a cross-axis
summary or tradeoff profile — the main agent handles cross-axis synthesis.
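Steps 2-3 of the template amount to a scan-and-classify loop. A rough standard-library sketch, where the indicator patterns are made-up examples and a real scan would use the Glob/Grep tools instead:

```python
import re
from pathlib import Path

# Illustrative indicators: pattern -> (tier, what it reveals). Made-up examples,
# not the actual contents of references/consistency.md.
INDICATORS = {
    r"acks\s*=\s*all": ("A", "writes require full acknowledgement (consistency-leaning)"),
    r"read_repair": ("B", "eventual consistency with repair (availability-leaning)"),
}

def classify(text: str) -> list[tuple[str, str]]:
    """Return (tier, reveals) for every indicator matched in the text."""
    return [hit for pattern, hit in INDICATORS.items() if re.search(pattern, text)]

def scan(root: str) -> list[dict]:
    """Walk the target tree and collect classified evidence items."""
    evidence = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            for tier, reveals in classify(path.read_text(errors="ignore")):
                evidence.append({"file": str(path), "tier": tier, "reveals": reveals})
    return evidence
```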
CRITICAL — Do NOT continue analysis while subagents are running. After launching the six subagents, your ONLY job is to wait for their results.
The subagents are doing the analysis. You are the synthesizer. Wait for all six to return before proceeding to Step 4.
If a subagent fails or returns an error, note the failure and proceed with the remaining results. Do NOT redo the failed subagent's work yourself — report that the axis could not be analyzed and suggest re-running it as a single-axis command.
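Sketched as code, this tolerant collection step might look like the following; the shape of the results mapping is an assumption for illustration:

```python
# Note failures and keep the rest, rather than redoing a failed axis inline.
# `results` maps axis name -> report text, or an exception for a failed subagent.
def synthesize(results: dict[str, object]) -> tuple[list[str], list[str]]:
    reports, failed = [], []
    for axis, outcome in results.items():
        if isinstance(outcome, Exception):
            failed.append(f"{axis}: not analyzed -- re-run explain-system-{axis}-tradeoffs")
        else:
            reports.append(str(outcome))
    return reports, failed
```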
After all six subagents return their results, the main agent:
Look across the four planes where distributed system tradeoffs surface:
For each tradeoff axis, collect concrete evidence from the codebase using the indicators in the reference files. For each piece of evidence, note:
## [AXIS NAME]
**Position:** Where the system sits on this tradeoff spectrum.
**Confidence:** HIGH | MEDIUM | LOW (based on evidence tier and consistency)
### Evidence
1. **[Artifact]** — [What it reveals]
Tier: A/B/C | File: `path/to/file`, lines ~XX-YY
Detail: Specific explanation of what this artifact tells us.
### Assessment
[1-2 paragraphs explaining the tradeoff position, whether it appears deliberate,
and how it interacts with other axes.]
After presenting each axis, flag issues using this structure:
**Risk — Severity: HIGH | MEDIUM | LOW**
Location: `filename` or `service/module`, lines ~XX-YY
Issue: What appears accidental, misaligned, or risky about this tradeoff position.
Recommendation: Concrete change to align the configuration with the system's
stated or inferred goals.
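Risk entries structured this way can be tallied per severity for the cross-axis summary. The dataclass below mirrors the fields above but is an illustrative sketch, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    severity: str        # "HIGH" | "MEDIUM" | "LOW"
    location: str        # file or service/module, with approximate lines
    issue: str           # what appears accidental, misaligned, or risky
    recommendation: str  # concrete change to align config with system goals

def tally(risks: list[Risk]) -> dict[str, int]:
    """Count risks per severity, e.g. for one row of the summary table."""
    counts = {"HIGH": 0, "MEDIUM": 0, "LOW": 0}
    for r in risks:
        counts[r.severity] += 1
    return counts
```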
Severity guidelines:
After all axes, the main agent produces a cross-axis synthesis:
| Axis | HIGH | MEDIUM | LOW |
|---|---|---|---|

When the user asks to "explain this tradeoff further", "what should we change", or "how do we fix this", provide:
Tradeoffs are decisions, not violations. Apply judgment:
User: explain-system-consistency-tradeoffs (with a codebase directory)
Claude:
references/consistency.md

User: explain-system-tradeoffs (with a full system)
Claude: