8-dimensional quantitative complexity analysis with domain detection. Analyzes specifications across structural, cognitive, coordination, temporal, technical, scale, uncertainty, and dependency dimensions, producing 0.0-1.0 scores. Detects domains (Frontend, Backend, Database, etc.) with percentages. Use when: analyzing specifications, starting projects, planning implementations, or assessing complexity quantitatively.
Install: `npx claudepluginhub krzemienski/shannon-framework --plugin shannon`
**Purpose**: Shannon's signature 8-dimensional quantitative complexity analysis transforms unstructured specifications into actionable, quantitative project intelligence with objective scoring, domain detection, MCP recommendations, and 5-phase execution plans.
CRITICAL: Agents systematically rationalize skipping or adjusting the spec-analysis algorithm. Below are the 4 most common rationalizations detected in baseline testing, with mandatory counters.
Example: User says "I think this is 60/100 complexity" → Agent responds "Your assessment of 60/100 seems reasonable..."
COUNTER: Apply the algorithm. User intuition doesn't override calculation.
Example: User says "Just a simple CRUD app, no need for formal analysis" → Agent proceeds directly to implementation
COUNTER: Even "simple" specs get analyzed. It takes 30 seconds. No exceptions.
Example: User says "It's a React app, so probably 80% frontend, 20% backend" → Agent accepts without validation
COUNTER: Count indicators. Never guess domain percentages.
Example: Agent calculates 0.72, thinks "that seems too high", adjusts to 0.60
COUNTER: The algorithm is objective. If the score feels wrong, the algorithm is right.
If you're tempted to accept a user's estimate, skip analysis for a "simple" spec, guess domain percentages, or adjust a calculated score, then you are rationalizing. Stop. Run the algorithm. Report the results objectively.
Use this skill when: analyzing specifications, starting projects, planning implementations, or assessing complexity quantitatively.
DO NOT use when:
Required:
- `specification` (string): Specification text (3+ paragraphs, 5+ features, or attached spec file)

Optional:
- `include_mcps` (boolean): Include MCP recommendations (default: true)
- `depth` (string): Analysis depth - "standard" or "deep" for complex specs (default: "standard")
- `save_to_serena` (boolean): Save analysis to Serena MCP (default: true)

Output: Structured analysis object:
{
"complexity_score": 0.68,
"interpretation": "Complex",
"dimension_scores": {
"structural": 0.55,
"cognitive": 0.65,
"coordination": 0.75,
"temporal": 0.40,
"technical": 0.80,
"scale": 0.50,
"uncertainty": 0.15,
"dependencies": 0.30
},
"domain_percentages": {
"Frontend": 34,
"Backend": 29,
"Database": 20,
"DevOps": 17
},
"mcp_recommendations": [
{
"tier": 1,
"name": "Serena MCP",
"purpose": "Context preservation",
"priority": "MANDATORY"
}
],
"phase_plan": [...],
"execution_strategy": "wave-based",
"timeline_estimate": "10-12 days",
"analysis_id": "spec_analysis_20250103_144530"
}
When to Use:
Expected Outcomes:
Duration: 3-8 minutes for analysis, immediate results
Output: Weighted total score (0.0-1.0) mapping to interpretation bands:
Input: User message containing specification text
Processing:
Output: Boolean decision (activate spec analysis mode or standard conversation)
Duration: Instant (automatic pattern matching)
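For illustration, the activation decision can be sketched as a small heuristic. The three signals below (paragraph count, feature bullets, trigger keywords) follow the checks shown in the worked example's Step 1; the keyword list and the way the signals are combined are assumptions, not Shannon's exact rules:

```python
import re

# Assumed trigger keywords; Shannon's actual list may differ
TRIGGER_KEYWORDS = ("build", "implement", "create", "requirements", "specification")

def should_activate(message: str) -> bool:
    """Return True when the message looks like a specification rather than a normal chat turn."""
    paragraphs = [p for p in message.split("\n\n") if p.strip()]
    feature_bullets = len(re.findall(r"^\s*[-*]\s+", message, flags=re.MULTILINE))
    has_keyword = any(kw in message.lower() for kw in TRIGGER_KEYWORDS)
    return has_keyword and (len(paragraphs) >= 3 or feature_bullets >= 5)
```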
Input: Specification text (string)
Processing:
Structural Score:
- Extract counts via regex: `\b(\d+)\s+(files?|components?)\b` and `\b(\d+)\s+(services?|microservices?|APIs?)\b`
- file_factor = log10(file_count + 1) / 3
- structural_score = (file_factor × 0.40) + (service_factor × 0.30) + (module_factor × 0.20) + (component_factor × 0.10)

Cognitive Score:
Coordination Score:
- coordination_score = (team_count × 0.25) + (integration_keyword_count × 0.15) + (stakeholder_count × 0.10)

Temporal Score:
- Extract deadlines via regex: `\d+ (hours?|days?|weeks?)`

Technical Score:
Scale Score:
Uncertainty Score:
Dependencies Score:
Weighted Total:
total = (0.20 × structural) + (0.15 × cognitive) + (0.15 × coordination) +
(0.10 × temporal) + (0.15 × technical) + (0.10 × scale) +
(0.10 × uncertainty) + (0.05 × dependencies)
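A minimal sketch of the weighted total in Python; the dimension scores are assumed to be pre-computed floats in [0.0, 1.0], and the weights mirror the formula above:

```python
# Weights mirror the weighted-total formula above
WEIGHTS = {
    "structural": 0.20, "cognitive": 0.15, "coordination": 0.15, "temporal": 0.10,
    "technical": 0.15, "scale": 0.10, "uncertainty": 0.10, "dependencies": 0.05,
}

def weighted_total(dimension_scores: dict) -> float:
    """Combine the eight 0.0-1.0 dimension scores into the overall complexity score."""
    return sum(WEIGHTS[dim] * dimension_scores[dim] for dim in WEIGHTS)

# Dimension scores from the HFT example later in this document
hft = {"structural": 0.90, "cognitive": 0.85, "coordination": 1.00, "temporal": 0.90,
       "technical": 1.00, "scale": 1.00, "uncertainty": 0.80, "dependencies": 0.75}
print(weighted_total(hft))  # ≈ 0.915, reported as 0.92 (CRITICAL)
```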
Output:
Duration: 1-2 minutes
Input: Specification text (string)
Processing:
Count Keywords per Domain:
Calculate Raw Percentages:
total_keywords = sum(all domain counts)
frontend_percentage = (frontend_count / total_keywords) × 100
backend_percentage = (backend_count / total_keywords) × 100
...
Round and Normalize:
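A minimal sketch of the round-and-normalize step, assuming a largest-remainder adjustment so the rounded percentages always sum to exactly 100 (the tie-breaking rule Shannon actually uses is not specified here):

```python
def normalize_percentages(counts: dict) -> dict:
    """Convert raw keyword counts to integer percentages that sum to exactly 100."""
    total = sum(counts.values())
    raw = {domain: count / total * 100 for domain, count in counts.items()}
    rounded = {domain: int(pct) for domain, pct in raw.items()}
    # Distribute the leftover points to the domains with the largest remainders
    leftover = 100 - sum(rounded.values())
    for domain in sorted(raw, key=lambda d: raw[d] - rounded[d], reverse=True)[:leftover]:
        rounded[domain] += 1
    return rounded

# Ticketing-system walkthrough: 6/11/7/3 keywords -> 22/41/26/11
print(normalize_percentages({"Frontend": 6, "Backend": 11, "Database": 7, "DevOps": 3}))
```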
Generate Domain Descriptions:
Output:
Duration: 30 seconds
Input: Domain percentages, specification text
Processing:
Tier 1 (MANDATORY):
Tier 2 (PRIMARY) - Domain-based (>=20% threshold):
Tier 3 (SECONDARY) - Supporting:
Tier 4 (OPTIONAL) - Keyword-triggered:
Generate Rationale:
Output:
Duration: 1 minute
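A minimal sketch of the Tier 2 domain-threshold rule: any domain at or above 20% pulls in its associated MCPs. The domain-to-MCP mapping below is illustrative only (drawn from the examples in this document), not the complete Shannon mapping:

```python
# Illustrative mapping only; the full domain -> MCP table lives in references/
DOMAIN_MCPS = {
    "Frontend": ["Magic MCP", "Puppeteer MCP"],
    "Backend": ["Context7 MCP"],
    "Database": ["PostgreSQL MCP"],
    "DevOps": ["Kubernetes MCP"],
}

def tier2_mcps(domain_percentages: dict, threshold: int = 20) -> list:
    """Select PRIMARY (Tier 2) MCPs for every domain meeting the percentage threshold."""
    selected = []
    for domain, pct in domain_percentages.items():
        if pct >= threshold:
            selected.extend(DOMAIN_MCPS.get(domain, []))
    return selected

# Ticketing-system walkthrough: Backend 41%, Database 26%, Frontend 22%, DevOps 11%
print(tier2_mcps({"Backend": 41, "Database": 26, "Frontend": 22, "DevOps": 11}))
# -> ['Context7 MCP', 'PostgreSQL MCP', 'Magic MCP', 'Puppeteer MCP']
```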
Input: Complexity score, domain percentages, timeline estimate
Processing:
Estimate Base Timeline:
Generate Phase 1 (Analysis & Planning - 15%):
Generate Phase 2 (Architecture & Design - 20%):
Generate Phase 3 (Implementation - 40%):
Generate Phase 4 (Integration & Testing - 15%):
Generate Phase 5 (Deployment & Documentation - 10%):
Output:
Duration: 2-3 minutes
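A minimal sketch of how the five phase durations fall out of a base timeline using the fixed 15/20/40/15/10 split above; the base-timeline estimate itself (from complexity band and the Temporal score) is assumed here as an input:

```python
PHASE_SPLIT = [
    ("Analysis & Planning", 0.15),
    ("Architecture & Design", 0.20),
    ("Implementation", 0.40),
    ("Integration & Testing", 0.15),
    ("Deployment & Documentation", 0.10),
]

def phase_plan(base_hours: float) -> list:
    """Split an estimated base timeline (in hours) across the five Shannon phases."""
    return [(name, round(base_hours * share, 1)) for name, share in PHASE_SPLIT]

# Ticketing-system walkthrough: 16-hour base timeline
for name, hours in phase_plan(16):
    print(f"{name}: {hours} h")
# Analysis & Planning: 2.4 h, Architecture & Design: 3.2 h, Implementation: 6.4 h, ...
```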
Input: Complete analysis results
Processing:
Memory key: `spec_analysis_{timestamp}`

{
"analysis_id": "spec_analysis_20250103_143022",
"complexity_score": 0.68,
"interpretation": "Complex",
"dimension_scores": {...},
"domain_percentages": {...},
"mcp_recommendations": [...],
"phase_plan": [...],
"execution_strategy": "wave-based (3-7 agents)"
}
write_memory(analysis_id, analysis_json)

Output:
Duration: 30 seconds
Input: Complete analysis with all components
Processing:
Output: Formatted markdown report ready for user presentation
Duration: 30 seconds
Input: Complexity score, domain percentages
Processing:
Output: Sub-skill activation recommendations
Duration: Instant
Input: Complete analysis
Processing:
Output:
Duration: 10 seconds
Agent: SPEC_ANALYZER (optional, for complex specs)
Context Provided:
Activation Trigger:
Behavior:
Serena MCP (MANDATORY)
// Save analysis
write_memory("spec_analysis_20250103_143022", {
complexity_score: 0.68,
domain_percentages: {...},
mcp_recommendations: [...],
phase_plan: [...]
})
// Retrieve later
const analysis = read_memory("spec_analysis_20250103_143022")
list_memories() - should return saved analyses

Sequential MCP (for complexity >=0.60)
// Activate for deep analysis
invoke_sequential({
thought: "Analyzing structural complexity dimension...",
thoughtNumber: 1,
totalThoughts: 120,
nextThoughtNeeded: true
})
Input:
Build a simple todo app with React. Users can:
- Add new tasks with title and description
- Mark tasks as complete
- Delete tasks
- Filter by complete/incomplete
Store data in localStorage. Responsive design for mobile.
Execution:
Step 1: Detect activation → ✅ (5 feature items, explicit keywords "Build", "users can")
Step 2: 8D Scoring:
- Structural: 1 file (React app) → 0.10
- Cognitive: "design" mentioned → 0.20
- Coordination: No teams → 0.10
- Temporal: No deadline → 0.10
- Technical: React (standard) → 0.20
- Scale: localStorage (small) → 0.10
- Uncertainty: Clear requirements → 0.10
- Dependencies: None → 0.10
- Weighted Total: 0.33
Step 3: Domain Detection:
- Frontend: React(1), design(1), responsive(1), mobile(1) = 4 keywords
- Backend: 0
- Database: localStorage(1) = 1 keyword (counts as Database)
- Total: 5 keywords
- Frontend: 80%, Database: 20%
Step 4: MCP Recommendations:
- Tier 1: Serena MCP (mandatory)
- Tier 2: Magic MCP (Frontend 80%), Puppeteer MCP (Frontend testing)
- Tier 3: GitHub MCP (version control)
Step 5: 5-Phase Plan:
- Timeline: 4-6 hours (MODERATE)
- Phase 1: 36 min (spec analysis)
- Phase 2: 48 min (component design)
- Phase 3: 2.4 hrs (implementation)
- Phase 4: 36 min (Puppeteer tests)
- Phase 5: 24 min (deployment)
Step 6: Save to Serena → spec_analysis_20250103_143022
Step 7: Output report generated
Step 8: Sub-skills: phase-planning invoked
Step 9: Validation: ✅ All checks passed, quality score: 1.0
Output:
# Specification Analysis ✅
**Complexity**: 0.33 / 1.0 (MODERATE)
**Execution Strategy**: Sequential (no waves needed)
**Timeline**: 4-6 hours
**Analysis ID**: spec_analysis_20250103_143022
## Domain Breakdown
- Frontend (80%): React UI, component-based architecture, responsive design
- Database (20%): localStorage persistence
## Recommended MCPs
1. Serena MCP (Tier 1 - MANDATORY)
2. Magic MCP (Tier 2 - Frontend 80%)
3. Puppeteer MCP (Tier 2 - Frontend testing)
4. GitHub MCP (Tier 3 - Version control)
## 5-Phase Plan (4-6 hours)
[Detailed phase breakdown with validation gates...]
## Next Steps
1. Review complexity score → Confirmed: MODERATE (sequential execution)
2. Configure MCPs → Install Magic, Puppeteer, ensure Serena connected
3. Begin Phase 1 → Analysis & Planning (36 minutes)
Input:
Build a real-time collaborative document editor like Google Docs. Requirements:
Frontend:
- React with TypeScript
- Rich text editing (Slate.js or ProseMirror)
- Real-time cursor tracking showing all active users
- Presence indicators
- Commenting system
- Version history viewer
- Responsive design
Backend:
- Node.js with Express
- WebSocket server for real-time sync
- Yjs CRDT for conflict-free collaborative editing
- Authentication with JWT
- Authorization (owner, editor, viewer roles)
- Document API (CRUD operations)
Database:
- PostgreSQL for document metadata, users, permissions
- Redis for session management and presence
- S3 for document snapshots
Infrastructure:
- Docker containers
- Kubernetes deployment
- CI/CD pipeline with GitHub Actions
- Monitoring with Prometheus/Grafana
Timeline: 2 weeks, high performance required (< 100ms latency for edits)
Execution:
Step 1: Detect → ✅ (Multi-paragraph, extensive feature list, technical keywords)
Step 2: 8D Scoring:
- Structural: Multiple services (frontend, backend, WebSocket, database) = 0.55
- Cognitive: "design", "architecture", real-time system = 0.65
- Coordination: Frontend team, backend team, DevOps team = 0.75
- Temporal: 2 weeks deadline + "high performance" = 0.40
- Technical: Real-time WebSocket, CRDT, complex algorithms = 0.80
- Scale: Performance requirements (<100ms) = 0.50
- Uncertainty: Well-defined requirements = 0.15
- Dependencies: Multiple integrations = 0.30
- Weighted Total: 0.72 (HIGH)
Step 3: Domain Detection:
- Frontend: React, TypeScript, Slate, rich text, cursor, presence, responsive = 12 keywords
- Backend: Express, WebSocket, Yjs, CRDT, auth, API, JWT = 10 keywords
- Database: PostgreSQL, Redis, S3, metadata, session = 7 keywords
- DevOps: Docker, Kubernetes, CI/CD, monitoring, Prometheus = 6 keywords
- Total: 35 keywords
- Frontend: 34%, Backend: 29%, Database: 20%, DevOps: 17%
Step 4: MCP Recommendations:
- Tier 1: Serena MCP (mandatory)
- Tier 2: Magic MCP (Frontend 34%), Puppeteer MCP (testing), Context7 MCP (React/Express/Postgres), Sequential MCP (complex real-time architecture), PostgreSQL MCP (Database 20%), Redis MCP
- Tier 3: GitHub MCP, AWS MCP (S3), Prometheus MCP
- Tier 4: Tavily MCP (research Yjs CRDT patterns)
Step 5: 5-Phase Plan:
- Timeline: 10-12 days (HIGH complexity)
- Phase 1: 1.5 days (deep analysis of CRDT requirements)
- Phase 2: 2 days (architecture design for real-time sync)
- Phase 3: 4 days (implementation across domains)
- Phase 4: 1.5 days (integration + functional testing)
- Phase 5: 1 day (deployment + documentation)
Step 6: Save to Serena → spec_analysis_20250103_144530
Step 7: Output report generated
Step 8: Sub-skills:
- Sequential MCP recommended for architecture design
- wave-orchestration REQUIRED (complexity 0.72 >= 0.50)
- phase-planning invoked
Step 9: Validation: ✅ Quality score: 1.0
Output:
# Specification Analysis ✅
**Complexity**: 0.72 / 1.0 (HIGH)
**Execution Strategy**: WAVE-BASED (8-15 agents recommended)
**Recommended Waves**: 3-5 waves
**Timeline**: 10-12 days
**Analysis ID**: spec_analysis_20250103_144530
## Complexity Breakdown
| Dimension | Score | Weight | Contribution |
|-----------|-------|--------|--------------|
| Structural | 0.55 | 20% | 0.11 |
| Cognitive | 0.65 | 15% | 0.10 |
| Coordination | 0.75 | 15% | 0.11 |
| Temporal | 0.40 | 10% | 0.04 |
| Technical | 0.80 | 15% | 0.12 |
| Scale | 0.50 | 10% | 0.05 |
| Uncertainty | 0.15 | 10% | 0.02 |
| Dependencies | 0.30 | 5% | 0.02 |
| **TOTAL** | **0.72** | | **HIGH** |
## Domain Breakdown
- Frontend (34%): React + TypeScript, real-time UI, rich text editing, presence system
- Backend (29%): Express API, WebSocket real-time sync, Yjs CRDT, authentication
- Database (20%): PostgreSQL metadata, Redis sessions, S3 snapshots
- DevOps (17%): Docker containers, Kubernetes orchestration, CI/CD, monitoring
## Recommended MCPs (9 total)
### Tier 1: MANDATORY
1. **Serena MCP** - Context preservation across waves
### Tier 2: PRIMARY
2. **Magic MCP** - React component generation (Frontend 34%)
3. **Puppeteer MCP** - Real browser testing (NO MOCKS)
4. **Context7 MCP** - React/Express/PostgreSQL documentation
5. **Sequential MCP** - Complex real-time architecture analysis (100-500 steps)
6. **PostgreSQL MCP** - Database schema operations (Database 20%)
7. **Redis MCP** - Session and presence management
### Tier 3: SECONDARY
8. **GitHub MCP** - CI/CD automation, version control
9. **AWS MCP** - S3 integration for document snapshots
### Tier 4: OPTIONAL
10. **Tavily MCP** - Research Yjs CRDT best practices
11. **Prometheus MCP** - Monitoring setup
## Next Steps
1. ⚠️ **CRITICAL**: Use wave-based execution (complexity 0.72 >= 0.50)
2. Configure 9 recommended MCPs (prioritize Tier 1-2)
3. Run /shannon:wave to generate wave plan (expect 3-5 waves, 8-15 agents)
4. Use Sequential MCP for Phase 2 architecture design
5. Enforce functional testing (Puppeteer for real-time sync, NO MOCKS)
Input:
URGENT: Build high-frequency trading system for cryptocurrency exchange.
REQUIREMENTS:
- Sub-millisecond latency for order matching (<500 microseconds p99)
- Handle 1M orders/second peak load
- 99.999% uptime (5 nines availability)
- Real-time risk management engine
- Machine learning price prediction (LSTM model)
- Multi-exchange aggregation (Binance, Coinbase, Kraken APIs)
- Regulatory compliance (SEC, FINRA reporting)
- Distributed across 5+ data centers for fault tolerance
TECHNICAL CONSTRAINTS:
- Rust backend for performance-critical paths
- C++ for ultra-low-latency matching engine
- Kafka for event streaming (millions of events/sec)
- TimescaleDB for time-series data (billions of records)
- Redis cluster for in-memory order book
- Kubernetes with custom scheduler for latency optimization
TIMELINE: Production deployment in 4 weeks (CRITICAL DEADLINE)
UNKNOWNS:
- Market data feed integration approach TBD
- Optimal ML model architecture unclear (needs research)
- Disaster recovery strategy undefined
- Security audit requirements (pending legal review)
DEPENDENCIES:
- Blocked by exchange API approval (Binance, Coinbase - 1 week)
- Requires external security audit (vendor, 2 weeks)
- Waiting on legal compliance framework
Execution:
Step 1: Detect → ✅ (URGENT keyword, extensive multi-paragraph, critical requirements)
Step 2: 8D Scoring:
- Structural: 5+ services, distributed system, multiple languages = 0.90
- Cognitive: ML design, research needed, complex architecture = 0.85
- Coordination: Backend, ML, DevOps, Security, Legal teams = 1.00 (5+ teams)
- Temporal: URGENT + 4 weeks CRITICAL deadline = 0.90
- Technical: HFT, sub-millisecond latency, ML, distributed consensus = 1.00
- Scale: 1M orders/sec, 99.999% uptime, billions of records = 1.00
- Uncertainty: Multiple TBDs, "unclear", "undefined", "pending" = 0.80
- Dependencies: Blocked by vendors, external audits, legal = 0.75
- Weighted Total: 0.92 (CRITICAL)
Step 3: Domain Detection:
- Backend: Rust, C++, matching engine, APIs, Kafka, event streaming = 18 keywords
- Database: TimescaleDB, Redis, order book, time-series, billions = 10 keywords
- DevOps: Distributed, Kubernetes, 5+ data centers, fault tolerance = 8 keywords
- ML: Machine learning, LSTM, price prediction, model = 5 keywords
- Security: Regulatory, compliance, SEC, FINRA, security audit = 6 keywords
- Total: 47 keywords
- Backend: 38%, Database: 21%, DevOps: 17%, ML: 11%, Security: 13%
Step 4: MCP Recommendations:
- Tier 1: Serena MCP (mandatory)
- Tier 2: Context7 MCP (Rust/Kafka/TimescaleDB), Sequential MCP (critical for HFT architecture), PostgreSQL MCP (TimescaleDB), Redis MCP, Kubernetes MCP, AWS/GCP MCP (multi-region)
- Tier 3: GitHub MCP, Tavily MCP (research ML models, HFT patterns), Binance/Coinbase API MCPs
- Tier 4: Monitoring MCPs (Prometheus, DataDog), Sentry MCP (error tracking)
Step 5: 5-Phase Plan:
- Timeline: 20-25 days (CRITICAL, pushing limits)
- Phase 1: 3 days (deep analysis, risk assessment, dependency mapping)
- Phase 2: 5 days (HFT architecture, ML model selection, disaster recovery design)
- Phase 3: 8 days (implementation with parallel waves)
- Phase 4: 3 days (integration, stress testing to 1M orders/sec)
- Phase 5: 2 days (multi-region deployment, compliance docs)
Step 6: Save to Serena → spec_analysis_20250103_150015
Step 7: Output report generated
Step 8: Sub-skills:
- Sequential MCP CRITICAL (architecture decisions have massive impact)
- wave-orchestration MANDATORY (complexity 0.92, expect 15-25 agents, 5-8 waves)
- risk-mitigation skill recommended (high uncertainty + dependencies)
Step 9: Validation: ✅ Quality score: 1.0
Output:
# Specification Analysis ✅
🚨 **CRITICAL COMPLEXITY DETECTED** 🚨
**Complexity**: 0.92 / 1.0 (CRITICAL)
**Execution Strategy**: WAVE-BASED (15-25 agents REQUIRED)
**Recommended Waves**: 5-8 waves
**Timeline**: 20-25 days (EXTREMELY AGGRESSIVE - HIGH RISK)
**Analysis ID**: spec_analysis_20250103_150015
⚠️ **WARNINGS**:
- Complexity 0.92 is in CRITICAL band (top 5% of all projects)
- 4-week deadline with 0.92 complexity = VERY HIGH RISK
- Uncertainty score 0.80 + Dependencies 0.75 = significant blockers
- Consider timeline extension or scope reduction
## Complexity Breakdown
| Dimension | Score | Weight | Contribution | RISK LEVEL |
|-----------|-------|--------|--------------|------------|
| Structural | 0.90 | 20% | 0.18 | 🔴 CRITICAL |
| Cognitive | 0.85 | 15% | 0.13 | 🔴 HIGH |
| Coordination | 1.00 | 15% | 0.15 | 🔴 CRITICAL |
| Temporal | 0.90 | 10% | 0.09 | 🔴 HIGH |
| Technical | 1.00 | 15% | 0.15 | 🔴 CRITICAL |
| Scale | 1.00 | 10% | 0.10 | 🔴 CRITICAL |
| Uncertainty | 0.80 | 10% | 0.08 | 🔴 HIGH |
| Dependencies | 0.75 | 5% | 0.04 | 🔴 HIGH |
| **TOTAL** | **0.92** | | | 🔴 **CRITICAL** |
## Domain Breakdown
- Backend (38%): Rust/C++ HFT engine, sub-millisecond latency, Kafka streaming, multi-exchange APIs
- Database (21%): TimescaleDB time-series (billions), Redis cluster order book, in-memory operations
- DevOps (17%): Distributed 5+ data centers, Kubernetes custom scheduling, 99.999% uptime
- Security (13%): SEC/FINRA compliance, regulatory reporting, security audit requirements
- ML (11%): LSTM price prediction, model architecture research, real-time inference
## Recommended MCPs (13 total)
### Tier 1: MANDATORY
1. **Serena MCP** - CRITICAL for context preservation across 5-8 waves
### Tier 2: PRIMARY (8 MCPs)
2. **Context7 MCP** - Rust/Kafka/TimescaleDB/Redis documentation
3. **Sequential MCP** - CRITICAL 200-500 step analysis for HFT architecture decisions
4. **PostgreSQL MCP** - TimescaleDB operations (time-series optimizations)
5. **Redis MCP** - Redis cluster for order book management
6. **Kubernetes MCP** - Custom scheduler, latency optimization
7. **AWS MCP** - Multi-region deployment, data center orchestration
8. **GitHub MCP** - CI/CD for high-velocity development
9. **ML Framework MCP** - LSTM model development (TensorFlow/PyTorch)
### Tier 3: SECONDARY (4 MCPs)
10. **Tavily MCP** - Research HFT patterns, ML architectures, disaster recovery
11. **Binance API MCP** - Exchange integration
12. **Coinbase API MCP** - Exchange integration
13. **Prometheus MCP** - Monitoring for 99.999% uptime
## 5-Phase Plan (20-25 days - CRITICAL TIMELINE)
[Phases with RISK MITIGATION strategies for each...]
## Risk Assessment
🔴 **CRITICAL RISKS**:
1. **Timeline Risk**: 0.92 complexity with 4-week deadline = 60% probability of delay
2. **Dependency Risk**: Blocked by vendor approvals (1 week) + security audit (2 weeks) = 3 weeks consumed before implementation
3. **Uncertainty Risk**: ML model unclear, disaster recovery undefined = potential architecture rework
4. **Technical Risk**: Sub-millisecond latency requirement with distributed system = extremely challenging
5. **Coordination Risk**: 5+ teams (Backend, ML, DevOps, Security, Legal) = high communication overhead
**RECOMMENDATION**:
- Extend deadline to 6-8 weeks (50% timeline buffer)
- Parallel-track vendor approvals NOW (don't wait)
- Dedicate 1-2 agents to ML research in Phase 1
- Use Sequential MCP for ALL critical architecture decisions
- Daily SITREP coordination meetings
## Next Steps
1. 🚨 **IMMEDIATE**: Escalate timeline risk to stakeholders
2. Configure ALL 13 recommended MCPs (no optional - all needed for CRITICAL)
3. Run /shannon:wave with --critical flag (expect 15-25 agents, 5-8 waves)
4. Use Sequential MCP for Phase 1 risk analysis (200-500 reasoning steps)
5. Establish SITREP daily coordination protocol
6. Begin vendor approval process IMMEDIATELY (parallel to analysis)
Successful when:
Validation:
def validate_spec_analysis(result):
assert 0.10 <= result["complexity_score"] <= 0.95
assert sum(result["domain_percentages"].values()) == 100
assert any(mcp["tier"] == 1 and mcp["name"] == "Serena MCP"
for mcp in result["mcp_recommendations"])
assert len(result["phase_plan"]) == 5
assert all(phase.get("validation_gate") for phase in result["phase_plan"])
assert result["analysis_id"].startswith("spec_analysis_")
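As a usage illustration, the validator can be exercised against a result shaped like the output object earlier in this document; the dictionary below is a trimmed, assumed-valid example, not real analysis output:

```python
example_result = {
    "complexity_score": 0.32,
    "domain_percentages": {"Backend": 41, "Database": 26, "Frontend": 22, "DevOps": 11},
    "mcp_recommendations": [{"tier": 1, "name": "Serena MCP", "priority": "MANDATORY"}],
    "phase_plan": [{"name": f"Phase {i}", "validation_gate": "defined"} for i in range(1, 6)],
    "analysis_id": "spec_analysis_20251108_194500",
}
validate_spec_analysis(example_result)  # raises AssertionError if any check fails
```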
Fails if:
Problem: Developer looks at specification, thinks "this is simple," skips 8D analysis, estimates 2-3 hours
Why It Fails:
Solution: ALWAYS run spec-analysis, let quantitative scoring decide. 3-5 minutes investment prevents hours of rework.
Prevention: the using-shannon skill makes spec-analysis the mandatory first step
Problem: Analysis shows "Frontend 100%, Backend 0%" for full-stack app → Frontend MCP overload, no Backend MCPs suggested
Why It Fails:
Solution: Normalize keyword counts by section, not raw occurrences. If spec mentions multiple domains explicitly, ensure each >=15%.
Prevention: Validation step checks for domain imbalance (warn if single domain >80%)
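A minimal sketch of the imbalance guard described above, assuming it only emits a warning rather than altering the percentages:

```python
import warnings

def check_domain_balance(domain_percentages: dict, limit: int = 80) -> None:
    """Warn when one domain dominates the breakdown, which usually signals miscounted keywords."""
    for domain, pct in domain_percentages.items():
        if pct > limit:
            warnings.warn(f"{domain} is {pct}% of the spec; re-check keyword counts for the other domains")
```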
Problem: Suggests 15 MCPs for Simple (0.28) project → User overwhelmed, setup takes longer than implementation
Why It Fails:
Solution:
Prevention: Validation checks MCP count vs complexity (warn if >10 MCPs for Simple project)
Problem: Spec shows "URGENT", "ASAP", "critical deadline" → Temporal score calculated → Ignored when estimating timeline
Why It Fails:
Solution: High Temporal score → Recommend more parallel agents (2x normal), tighter wave coordination
Prevention: Timeline estimation formula includes Temporal as multiplier
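The document does not spell out the exact multiplier, so the sketch below assumes a simple linear scaling of parallel agents by the Temporal score; treat the constants as placeholders:

```python
def apply_temporal_pressure(base_agents: int, temporal_score: float) -> int:
    """Assumed heuristic: high urgency roughly doubles the recommended parallel agents."""
    return max(base_agents, round(base_agents * (1 + temporal_score)))

print(apply_temporal_pressure(4, 0.90))  # URGENT spec (temporal 0.90): 4 -> 8 agents
```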
Problem: Generated 5-phase plan identical for Frontend-heavy (Frontend 80%) vs Backend-heavy (Backend 70%) projects
Why It Fails:
Solution: Phase 2/3/4 objectives dynamically customized based on domain percentages >=40%
Prevention: Validation checks if Phase 2 includes domain-specific objectives (e.g., "component hierarchy" for Frontend >=40%)
Problem: Phase 4 says "Write tests" without specifying functional testing, NO MOCKS enforcement
Why It Fails:
Solution: Phase 4 testing requirements explicitly state:
Prevention: functional-testing skill enforces NO MOCKS, but spec-analysis should set expectations upfront
Problem: Analysis completes, report generated, but Serena MCP write_memory() fails → No error shown → Context lost on next wave
Why It Fails:
Solution: Verify Serena save with read-back test:
write_memory(analysis_id, analysis_data)
const verify = read_memory(analysis_id)
if (!verify) throw Error("Serena save failed - analysis not persisted")
Prevention: Step 6 includes save verification, reports error if save fails
Problem: Specification is trivial (change button color) → Score calculated as 0.05 → Reported as 0.00
Why It Fails:
Solution:
Prevention: Validation rejects scores <0.10 or >0.95 with warning
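One possible implementation of that prevention rule, sketched below: out-of-range scores are flagged and clipped into [0.10, 0.95] rather than reported as 0.00 or 1.00 (whether Shannon clamps or rejects outright is an assumption here):

```python
def clamp_score(score: float, low: float = 0.10, high: float = 0.95) -> float:
    """Keep the reported complexity inside the meaningful band and warn on out-of-range values."""
    if score < low or score > high:
        print(f"WARNING: raw score {score:.2f} outside [{low}, {high}]; clamping")
    return min(max(score, low), high)

print(clamp_score(0.05))  # trivial change-button-color spec -> 0.10, with warning
```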
How to verify spec-analysis executed correctly:
Check Complexity Score:
Check Domain Percentages:
Check MCP Recommendations:
Check 5-Phase Plan:
Check Serena Save:
spec_analysis_{timestamp} ✅

Check Execution Strategy:
Run Validation Script (if available):
python3 shannon-plugin/tests/validate_skills.py
# Expected: ✅ spec-analysis: All validation checks passed
SKILL.md (This file): ~500 lines
references/SPEC_ANALYSIS.md: 1787 lines (full algorithm)
references/domain-patterns.md: ~300 lines
Claude loads references/ when:
Expected Performance (measured on Claude 3.5 Sonnet):
| Specification Size | Analysis Time | Quality Score | Recommended Depth |
|---|---|---|---|
| <500 words | 30-60 seconds | 0.95+ | standard |
| 500-2000 words | 1-3 minutes | 0.90+ | standard |
| 2000-5000 words | 3-8 minutes | 0.85+ | deep (if complexity >=0.60) |
| >5000 words | 8-15 minutes | 0.80+ | deep (mandatory) |
Performance Indicators:
Quality Validation:
When Analysis Takes Longer:
Scenario: User provides moderate complexity specification
Specification Text:
Build a customer support ticketing system. Requirements:
Frontend:
- React dashboard for support agents
- Ticket list with filtering (status, priority, assignee)
- Ticket detail view with rich text editor for responses
- Real-time updates when tickets change
Backend:
- Express API with TypeScript
- Authentication for agents and admins
- REST endpoints for tickets (CRUD)
- WebSocket for real-time updates
- Email integration (send/receive via SendGrid)
Database:
- PostgreSQL for tickets, users, responses
- Schema with tickets, users, responses tables
- Full-text search on ticket content
Deployment:
- Docker container
- Deploy to AWS
Timeline: 1 week
Execution Process (showing actual Claude workflow):
Step 1: Activation Detection
Input: [Specification text above]
Process:
- Check: Multi-paragraph? ✅ (3 paragraphs)
- Check: >5 features? ✅ (12+ distinct features)
- Check: Primary keywords? ✅ ("Build", "Requirements", "Deployment")
- Check: Attached files? ❌
Decision: ACTIVATE spec-analysis
Step 2: Calculate 8D Scores
Structural Complexity (20% weight):
- Extract: "React dashboard" → 1 component mentioned
- Extract: "Express API" → 1 service
- Calculate: file_factor = log10(8+1)/3 ≈ 0.32 (estimate 8 files)
- Calculate: service_factor = log10(2+1)/3 ≈ 0.16
- Score: (0.32 × 0.40) + (0.16 × 0.30) = 0.18, then 0.18 × 1.5 (qualifier: multiple features) = 0.27
- Result: structural = 0.27
Cognitive Complexity (15% weight):
- Count: "design" (0), "analyze" (0), "architect" (0)
- Count: No complex design requirements
- Score: 0.15 (baseline for any non-trivial spec)
- Result: cognitive = 0.15
Coordination Complexity (15% weight):
- Teams: Frontend team, Backend team (implied 2 teams)
- Integration: "real-time updates" → 1 integration keyword
- Score: (2 × 0.25) + (1 × 0.15) = 0.65
- Result: coordination = 0.65
Temporal Complexity (10% weight):
- Deadline: "1 week" → <7 days factor
- Score: 0.30 (1 week deadline)
- Result: temporal = 0.30
Technical Complexity (15% weight):
- Advanced tech: "WebSocket" (real-time) → +0.20, "full-text search" → +0.15
- Integrations: "SendGrid" → +0.15
- Score: 0.20 + 0.15 + 0.15 = 0.50
- Result: technical = 0.50
Scale Complexity (10% weight):
- Users: Not mentioned (assume <10K)
- Performance: Not mentioned
- Score: 0.15 (moderate scale implied)
- Result: scale = 0.15
Uncertainty Complexity (10% weight):
- No "TBD", "unclear", "research" keywords
- Well-defined requirements
- Score: 0.10 (minimal uncertainty)
- Result: uncertainty = 0.10
Dependencies Complexity (5% weight):
- External: SendGrid API → 1 dependency
- No blocking dependencies mentioned
- Score: 0.20
- Result: dependencies = 0.20
Weighted Total:
(0.20 × 0.27) + (0.15 × 0.15) + (0.15 × 0.65) + (0.10 × 0.30) +
(0.15 × 0.50) + (0.10 × 0.15) + (0.10 × 0.10) + (0.05 × 0.20)
= 0.054 + 0.023 + 0.098 + 0.030 + 0.075 + 0.015 + 0.010 + 0.010
= 0.315
→ Round to 0.32
Interpretation: 0.32 → MODERATE (0.30-0.50 band)
Step 3: Domain Detection
Count Keywords:
Frontend: React(1), dashboard(1), filtering(1), view(1), real-time(1), updates(1) = 6
Backend: Express(1), API(1), TypeScript(1), Authentication(1), REST(1), endpoints(1),
CRUD(1), WebSocket(1), email(1), integration(1), SendGrid(1) = 11
Database: PostgreSQL(1), Schema(1), tables(3), full-text(1), search(1) = 7
DevOps: Docker(1), Deploy(1), AWS(1) = 3
Total: 27 keywords
Calculate Percentages:
Frontend: (6/27) × 100 = 22.2% → round to 22%
Backend: (11/27) × 100 = 40.7% → round to 41%
Database: (7/27) × 100 = 25.9% → round to 26%
DevOps: (3/27) × 100 = 11.1% → round to 11%
Normalize (sum = 100%):
22 + 41 + 26 + 11 = 100% ✅
Result:
- Backend: 41%
- Database: 26%
- Frontend: 22%
- DevOps: 11%
Step 4: MCP Recommendations
Tier 1 (MANDATORY):
- Serena MCP → Always required
Tier 2 (PRIMARY) - Domain >=20%:
- Backend 41% >=20% → Context7 MCP (Express/TypeScript docs)
- Database 26% >=20% → PostgreSQL MCP
- Frontend 22% >=20% → Magic MCP (component generation), Puppeteer MCP (testing)
Tier 3 (SECONDARY):
- GitHub MCP (version control)
- AWS MCP (deployment)
- SendGrid MCP (email integration)
Tier 4 (OPTIONAL):
- None (project well-defined, no research needed)
Result: 8 MCPs recommended
Step 5: 5-Phase Plan
Base Timeline: 1-2 days (MODERATE 0.32, user requested 1 week → achievable)
Phase 1 (15% = 2.4 hours):
- Objectives: Complete spec analysis, task breakdown, risk assessment
- Deliverables: Task list, complexity validated, MCP configuration
- Validation Gate: All requirements understood, no ambiguities
Phase 2 (20% = 3.2 hours):
- Objectives: System design, API design (Backend 41%), Database schema (Database 26%)
- Deliverables: API spec, database schema, component hierarchy
- Validation Gate: Design approved, patterns established
Phase 3 (40% = 6.4 hours):
- Objectives: Backend API implementation (41%), Database setup (26%), Frontend components (22%)
- Deliverables: REST endpoints, PostgreSQL schema, React dashboard
- Validation Gate: Features complete, NO MOCKS tests passing
Phase 4 (15% = 2.4 hours):
- Objectives: Integration testing (Puppeteer for Frontend, real HTTP for Backend, real DB)
- Deliverables: Functional test suite (NO MOCKS), integration validated
- Validation Gate: All tests passing (Puppeteer, API, Database)
Phase 5 (10% = 1.6 hours):
- Objectives: Docker containerization, AWS deployment, documentation
- Deliverables: Deployed system, technical docs
- Validation Gate: System deployed, docs complete
Total: 16 hours ≈ 2 days (fits 1 week timeline comfortably)
Step 6: Save to Serena MCP
const analysis_id = "spec_analysis_20251108_194500"
const analysis = {
analysis_id: "spec_analysis_20251108_194500",
complexity_score: 0.32,
interpretation: "Moderate",
dimension_scores: {
structural: 0.27,
cognitive: 0.15,
coordination: 0.65,
temporal: 0.30,
technical: 0.50,
scale: 0.15,
uncertainty: 0.10,
dependencies: 0.20
},
domain_percentages: {
Backend: 41,
Database: 26,
Frontend: 22,
DevOps: 11
},
mcp_recommendations: [...],
phase_plan: [...],
execution_strategy: "sequential",
timeline_estimate: "1-2 days"
}
// Save
mcp__serena__write_memory(analysis_id, JSON.stringify(analysis))
// Verify
const verify = mcp__serena__read_memory(analysis_id)
if (!verify) throw Error("Serena save failed")
Step 7: Format Output
# Specification Analysis ✅
**Complexity**: 0.32 / 1.0 (MODERATE)
**Execution Strategy**: Sequential (no waves needed)
**Timeline**: 1-2 days (16 hours)
**Analysis ID**: spec_analysis_20251108_194500
## Complexity Breakdown
[8D table as shown in examples]
## Domain Breakdown
- Backend (41%): Express API, TypeScript, REST endpoints, WebSocket real-time, SendGrid integration
- Database (26%): PostgreSQL schema, full-text search, relational data
- Frontend (22%): React dashboard, filtering, rich text editor
- DevOps (11%): Docker, AWS deployment
## Recommended MCPs (8 total)
[Tiered list with rationale]
## 5-Phase Plan (1-2 days)
[Detailed phase breakdown]
## Next Steps
1. Configure 8 MCPs (prioritize Serena, Context7, PostgreSQL)
2. Begin Phase 1 (2.4 hours)
3. Sequential execution (no waves for MODERATE)
Step 8: Chain to phase-planning
Invoke phase-planning skill with analysis as input
Step 9: Validation
✅ Complexity: 0.32 ∈ [0.10, 0.95]
✅ Domains: 41 + 26 + 22 + 11 = 100%
✅ Serena in Tier 1
✅ Domain MCPs: PostgreSQL (26%), Context7 (41%), Magic (22%)
✅ 5 phases with gates
Quality Score: 1.0 (5/5 checks passed)
This walkthrough demonstrates the complete process Claude should follow when executing spec-analysis, showing the actual calculations, MCP selections, and validation steps.
Version: 4.0.0 Last Updated: 2025-11-03 Author: Shannon Framework Team License: MIT Status: Core (Quantitative skill, mandatory for Shannon workflows)