From thinking-frameworks-skills
Structures reasoning across 30K ft strategic, 3K ft tactical, and 300 ft operational levels using top-down decomposition, bottom-up aggregation, cross-layer translation, and constraint propagation. For hierarchical system design and multi-abstraction explanations.
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills

This skill uses the workspace's default tool permissions.
---
When: Starting from vision/principles, deriving concrete actions
Structure:
Example: Product strategy
Process: (1) Define strategic layer invariants, (2) Derive tactical options that satisfy invariants, (3) Select tactics, (4) Design operational procedures implementing tactics, (5) Validate operational layer doesn't violate strategic constraints
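The five-step process above can be sketched as a minimal data model. This is an illustrative sketch only: the `Layer` class, the keyword-based `violates` helper, and the example layer contents are assumptions, not part of this skill's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str  # e.g. "strategic", "tactical", "operational"
    statements: list[str] = field(default_factory=list)

def violates(operational_choice: str, strategic_invariants: list[str]) -> bool:
    # Toy check: an operational choice that explicitly "skips" something
    # a strategic invariant "requires" is a violation.
    for inv in strategic_invariants:
        if inv.startswith("require "):
            needed = inv.removeprefix("require ")
            if f"skip {needed}" in operational_choice:
                return True
    return False

# (1) Define strategic layer invariants
strategy = Layer("strategic", ["require encryption", "require audit logs"])
# (2)-(3) Derive and select tactics that satisfy the invariants
tactics = Layer("tactical", ["encrypt data at rest", "centralized audit logging"])
# (4) Design operational procedures implementing the tactics
operations = Layer("operational", ["use AES-256", "skip encryption for the cache"])
# (5) Validate: the operational layer must not violate strategic constraints
bad = [op for op in operations.statements if violates(op, strategy.statements)]
print(bad)  # the cache decision conflicts with "require encryption"
```

A real validation step replaces the string matching with human review; the point is that validation is an explicit pass over the bottom layer against the top layer's invariants.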
When: Starting from observations/data, building up to principles
Structure:
Example: Engineering postmortem
Process: (1) Collect operational data, (2) Identify patterns and group, (3) Formulate hypotheses at tactical layer, (4) Validate with more data, (5) Distill strategic principles
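The aggregation steps above can be sketched as a frequency count over incident records. The sample data and cause tags are hypothetical, chosen only to show the bottom-up flow.

```python
from collections import Counter

# (1) Collect operational data: incident records tagged with a cause
incidents = [
    {"id": 1, "cause": "config drift"},
    {"id": 2, "cause": "config drift"},
    {"id": 3, "cause": "missing test"},
    {"id": 4, "cause": "config drift"},
]

# (2) Identify patterns and group the observations
patterns = Counter(i["cause"] for i in incidents)

# (3) Formulate a tactical hypothesis from the dominant pattern
dominant, count = patterns.most_common(1)[0]
hypothesis = f"Most incidents ({count}/{len(incidents)}) trace to {dominant}"
print(hypothesis)

# (4)-(5) With more data confirming the hypothesis, distill a strategic
# principle, e.g. "infrastructure must be declarative and continuously
# reconciled".
```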
When: Explaining the same concept to different audiences (CEO, manager, engineer)
Technique: Translate, preserving the core meaning while adjusting the abstraction level
Example: Explaining tech debt
Process: (1) Identify audience's layer, (2) Extract core message, (3) Translate using concepts/metrics relevant to that layer, (4) Maintain causal links across layers
When: High-level constraints must guide low-level decisions
Mechanism: Strategic constraints flow down, narrowing options at each layer
Example: Healthcare app design
Guardrail: Lower layers cannot violate upper constraints (e.g., operational decision to skip encryption violates strategic constraint)
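The narrowing mechanism can be sketched as successive filtering: each layer keeps only the options that cover the constraints flowing down from above. A minimal sketch; the option names, property sets, and subset-based filter are illustrative assumptions.

```python
def narrow(options: dict[str, set[str]], required: set[str]) -> dict[str, set[str]]:
    """Keep only options whose declared properties cover every required one."""
    return {name: props for name, props in options.items() if required <= props}

# Strategic constraint flows down and narrows the tactical options.
strategic_constraints = {"hipaa"}
tactical_options = {
    "plaintext storage": {"fast"},
    "encrypted storage": {"hipaa", "auditable"},
}
allowed_tactics = narrow(tactical_options, strategic_constraints)
print(sorted(allowed_tactics))  # only the HIPAA-compliant tactic survives
```

The same `narrow` call applies again one layer down: the chosen tactic's requirements become the `required` set for operational options.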
When: Lower-layer interactions create unexpected upper-layer behavior
Example: Team structure
Process: (1) Observe operational behavior, (2) Identify emerging patterns at tactical layer, (3) Recognize strategic implications, (4) Adjust strategy if needed
When: Validating that all layers align (no contradictions)
Check types:
Example inconsistency: Strategy says "Move fast," tactics say "Extensive approval process," operations say "3-week release cycle" → Contradiction
Fix: Align layers. Either (1) change strategy ("Move carefully"), (2) change tactics ("Lightweight approvals"), or (3) change operations ("Daily releases")
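The inconsistency example above can be sketched as a pairwise check against an explicitly declared conflict table. A toy sketch: real consistency checks need human judgment, and the conflict pairs below are assumptions made for illustration.

```python
# Contradictions are declared explicitly; detection is then mechanical.
conflicts = {
    ("move fast", "extensive approvals"),
    ("move fast", "3-week releases"),
}

def inconsistencies(layers: dict[str, str]) -> list[tuple[str, str]]:
    """Return every declared contradiction between any two layer statements."""
    items = list(layers.values())
    return [(a, b) for i, a in enumerate(items) for b in items[i + 1:]
            if (a, b) in conflicts or (b, a) in conflicts]

layers = {
    "strategy": "move fast",
    "tactics": "extensive approvals",
    "operations": "3-week releases",
}
print(inconsistencies(layers))  # two contradictions to resolve
```

Resolving either contradiction means editing one of the three layer statements, exactly as the fix options above describe.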
Use this structured approach when applying layered reasoning:
□ Step 1: Identify relevant layers and abstraction levels
□ Step 2: Define strategic layer (principles, invariants, constraints)
□ Step 3: Derive tactical layer (approaches that satisfy strategy)
□ Step 4: Design operational layer (concrete actions implementing tactics)
□ Step 5: Validate consistency across all layers
□ Step 6: Translate between layers for different audiences
□ Step 7: Iterate based on feedback from any layer
□ Step 8: Document reasoning at each layer
Step 1: Identify relevant layers and abstraction levels. Determine how many layers are needed (typically 3-5). Map layers to domains: business (vision/strategy/execution), technical (architecture/design/code), organizational (mission/goals/tasks).
Step 2: Define the strategic layer. Establish high-level principles, invariants, and constraints that must hold. These are non-negotiable and guide all lower layers.
Step 3: Derive the tactical layer. Generate approaches/policies/architectures that satisfy the strategic constraints. Multiple tactical options may exist; choose based on tradeoffs.
Step 4: Design the operational layer. Create specific procedures, implementations, or actions that realize the tactical choices. This is where execution happens.
Step 5: Validate consistency across all layers. Check upward (do ops implement tactics?), downward (can strategy be executed?), and lateral (do parallel choices conflict?) consistency.
Step 6: Translate between layers for different audiences. Communicate at the appropriate abstraction level for each stakeholder: the CEO needs the strategic view, engineers need operational detail.
Step 7: Iterate based on feedback from any layer. If operational constraints make tactics infeasible, adjust tactics or strategy. If a strategic shift occurs, propagate changes downward.
Step 8: Document reasoning at each layer. Write explicit rationale at each layer explaining how it relates to the layers above and below. This makes assumptions visible and aids future iteration.
Danger: Strategic goals contradict operational reality, or implementation violates principles
Guardrail: Regularly check upward, downward, and lateral consistency. Propagate changes bidirectionally (strategy changes → update tactics/ops; operational constraints → update tactics/strategy).
Red flag: "Our strategy is X but we actually do Y" signals layer mismatch
Danger: Jumping straight from 30K ft to 300 ft confuses audiences and loses context
Guardrail: Move through layers sequentially. If explaining to executive, start 30K → 3K (stop there unless asked). If explaining to engineer, provide 30K context first, then dive to 300 ft.
Test: Can listener answer "why does this matter?" (links to upper layer) and "how do we do this?" (links to lower layer)
Danger: Layers that only make sense when combined, not standalone
Guardrail: Strategic layer should guide decisions even without seeing operations. Tactical layer should be understandable without code. Operational layer should be executable without re-deriving strategy.
Principle: Good layers can be consumed independently by different audiences
Danger: Too many layers create overhead; too few lose nuance
Guardrail: For most domains, three layers are sufficient (strategy/tactics/operations or architecture/design/code). Complex domains may need 4-5 but rarely more.
Rule of thumb: Can you name each layer clearly? If not, you have too many.
Danger: Treating layers as independent rather than hierarchical
Guardrail: Strategic layer sets constraints ("must be HIPAA compliant"). Tactical layer chooses approaches within constraints ("encryption + audit logs"). Operational layer implements ("AES-256 + CloudTrail"). Cannot violate upward.
Anti-pattern: Operational decision ("skip encryption for speed") violating strategic constraint ("HIPAA compliance")
Danger: Strategic shift without updating tactics/ops, or operational constraint discovered but strategy unchanged
Guardrail: Top-down: Strategy changes → re-evaluate tactics → adjust operations. Bottom-up: Operational constraint → re-evaluate tactics → potentially adjust strategy.
Example: Strategy shift to "privacy-first" → Update tactics (end-to-end encryption) → Update ops (implement encryption). Or: Operational constraint (performance) → Tactical adjustment (different approach) → Strategic clarification ("privacy-first within performance constraints")
Danger: Implicit assumptions lead to inconsistency when assumptions violated
Guardrail: Document assumptions at each layer. Strategic: "Assuming competitive market." Tactical: "Assuming cloud infrastructure." Operational: "Assuming Python 3.9+."
Benefit: When assumptions change, know which layers need updating
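The guardrail above can be sketched as a per-layer assumption registry, so a violated assumption can be traced to the layers that depend on it. A minimal sketch with hypothetical assumption strings taken from the examples above.

```python
# Record assumptions per layer so a change can be traced to affected layers.
assumptions = {
    "strategic":   ["competitive market"],
    "tactical":    ["cloud infrastructure"],
    "operational": ["Python 3.9+"],
}

def affected_layers(changed: str) -> list[str]:
    """Which layers documented the now-violated assumption?"""
    return [layer for layer, items in assumptions.items() if changed in items]

print(affected_layers("cloud infrastructure"))  # prints ['tactical']
```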
Danger: Focusing only on designed properties, missing unintended consequences
Guardrail: Regularly observe bottom layer, look for emerging patterns at middle layer, consider strategic implications. Emergent properties can invalidate strategic assumptions.
Example: Microservices (operational) → Coordination overhead (tactical emergence) → Slower feature delivery (strategic failure if goal was speed)
| Domain | Layer 1 (30K ft) | Layer 2 (3K ft) | Layer 3 (300 ft) |
|---|---|---|---|
| Business | Vision, mission | Strategy, objectives | Tactics, tasks |
| Product | Market positioning | Feature roadmap | User stories |
| Technical | Architecture principles | System design | Code implementation |
| Organizational | Culture, values | Policies, processes | Daily procedures |
| Check Type | Question |
|---|---|
| Upward | Do these operations implement the tactics? Do tactics achieve strategy? |
| Downward | Can this strategy be executed with available tactics? Can tactics be implemented operationally? |
| Lateral | Do parallel tactical choices contradict each other? Do operational procedures conflict? |
| Audience | Layer | Focus | Metrics |
|---|---|---|---|
| CEO / Board | 30K ft | Why, outcomes, risk | Revenue, market share, strategic risk |
| VP / Director | 3K ft | What, approach, resources | Team velocity, roadmap, budget |
| Manager / Lead | 300-3K ft | How, execution, timeline | Sprint velocity, milestones, quality |
| Engineer | 300 ft | Implementation, details | Code quality, test coverage, performance |
Example: SaaS CRM product
30K (Strategic): "Become the easiest CRM for small businesses" (positioning)
3K (Tactical): "Simple UI, 5-minute setup, mobile-first, $20/user pricing, self-serve onboarding"
300 ft (Operational): "React app, OAuth for auth, Stripe for billing, onboarding flow: signup → import contacts → send first email"
Consistency check: Does the $20 pricing support "easiest" (yes, a low barrier)? Does the 5-minute setup hold with the current implementation (measure in practice)? Does mobile-first align with the React architecture (yes)?
Example: Technical architecture
30K: "Highly available system with <1% downtime, supports 10× traffic growth"
3K: "Multi-region deployment, auto-scaling, circuit breakers, blue-green deployments"
300 ft: "AWS multi-AZ, ECS Fargate with target tracking, Istio circuit breakers, CodeDeploy blue-green"
Emergence: Observed cross-region latency of 200 ms → Tactical adjustment: regional data replication → Strategic clarification: "High availability within regions, eventual consistency across regions"
Example: Organizational culture
30K: "Build a customer-centric culture where customer feedback drives decisions"
3K: "Monthly customer advisory board, NPS surveys after each interaction, customer support KPIs in exec dashboards"
300 ft: "Schedule CAB meetings first Monday monthly, automated NPS via Delighted after ticket close, Looker dashboard with CS CSAT by rep"
Consistency: Does monthly CAB support "customer-centric" (or too infrequent)? Do support KPIs incentivize right behavior (check for gaming)? Does automation reduce personal touch (potential conflict)?