Quantitative 5-check validation ensuring ≥90% confidence before implementation. Prevents wrong-direction work through systematic verification: duplicate check (25%), architecture compliance (25%), official docs (20%), working OSS (15%), root cause (15%). Thresholds: ≥90% proceed, ≥70% clarify, <70% STOP. Triggers on any implementation request. Proven 25-250x token ROI from SuperClaude.
Installation:

```
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework
```

Bundled resources:
- COMPLETION_REPORT.md
- examples/85_PERCENT_CLARIFY.md
- examples/BASELINE_TEST.md
- examples/PRESSURE_SCENARIOS.md

Purpose: Shannon's quantitative 5-check validation algorithm prevents wrong-direction work by ensuring ≥90% confidence before implementation. Each check contributes weighted points (total 100%) across duplicate verification, architecture compliance, official documentation, working OSS references, and root cause identification.
Critical Role: This skill prevents the most expensive failure mode in software development - building the right thing wrong, or building the wrong thing right. Proven 25-250x token ROI in SuperClaude production use.
Required:
- specification (string): Implementation request or feature description from user
- context (object): Optional context from spec-analysis skill (8D complexity scores, phase plan)

Optional:
- skip_checks (array): List of checks to skip (e.g., ["oss", "root_cause"] for simple tasks)
- confidence_threshold (float): Override default 0.90 threshold (e.g., 0.85 for fast iterations)

WARNING: Agents systematically rationalize skipping confidence checks. Below are the most dangerous rationalizations detected in production, each with a mandatory counter.
Example: User says "I'm 75% sure this is right" → Agent responds "Let's proceed..."
COUNTER: Algorithm score overrides stated confidence. Always calculate objectively.
Example: "Just add a button" → Agent proceeds without checking existing buttons
COUNTER: No task is too simple to validate. A 30-second check prevents 2-hour rework.
Example: Agent uses Redis API from memory without checking current documentation
COUNTER: Always verify official docs. Knowledge cutoff and API changes require fresh verification.
Example: "I'll design a real-time sync protocol" without checking Yjs, Automerge, ShareDB
COUNTER: Learn from production code. OSS research is mandatory for complex features.
Example: "API slow → Add caching" without profiling actual bottleneck
COUNTER: Diagnosis before prescription. No solutions without an identified root cause.
Example: Score 85% → Agent thinks "Close enough, let's proceed"
COUNTER: 90% means 90.0%. Not 89.9%, not 85%, not "close enough". Exact threshold enforcement.
Example: Senior engineer says "Trust me, I've done this 100 times, skip the checks"
COUNTER: No authority exceptions. The algorithm applies universally, from junior to principal.
Example: "Production down! No time for confidence checks, implement OAuth2 now!"
COUNTER: Emergencies require faster checks, not skipped checks. Root cause remains MANDATORY.
Example: "88% is close to 90%, within margin of error, let's proceed"
COUNTER: Use `if (score >= 0.90)`, not `if (score > 0.88)`. Thresholds are exact: 89.9% = CLARIFY, not PROCEED. No rounding.
Example: Found 50-star unmaintained repo, claims 15/15 OSS check passed
COUNTER: OSS quality matters. Production-grade (15/15), active lower-quality (8/15), or fail (0/15).
Example: "'Add caching' is a new feature, so root cause check = N/A → 15/15"
COUNTER: Root cause is MANDATORY for any fix/improvement. Keyword detection is enforced.
Example: User provides syntax snippet, agent accepts without verifying against official docs
COUNTER: Official docs verification is MANDATORY. User input is verified, not trusted blindly.
If you're tempted to invoke any of the rationalizations above, you are rationalizing. Stop. Run the 5-check algorithm. Report the score objectively.
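This anti-rationalization stance can be made mechanical. A minimal JavaScript sketch, assuming a hypothetical `runFiveChecks` helper that executes the five checks and returns their point totals:

```javascript
// Stated confidence is accepted as input but deliberately never consulted.
function assessConfidence(specification, statedConfidence) {
  const checks = runFiveChecks(specification); // hypothetical helper
  const score = checks.reduce((sum, c) => sum + c.points, 0) / 100.0;
  // The decision depends only on the computed score.
  if (score >= 0.90) return { score, decision: "PROCEED" };
  if (score >= 0.70) return { score, decision: "CLARIFY" };
  return { score, decision: "STOP" };
}
```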
Use the confidence-check skill for any implementation request. Do not use it for requests that involve no implementation (pure research, discussion, or read-only analysis).
Objective scoring across five validation dimensions, each contributing weighted points to the total confidence score (0.00-1.00):

- Duplicate check: 25 points
- Architecture compliance: 25 points
- Official docs: 20 points
- Working OSS: 15 points
- Root cause: 15 points
Input: Implementation request from user
Processing:
```javascript
confidence_score = 0.00

checks = {
  duplicate_check:    { passed: null, points: 0, max: 25 },
  architecture_check: { passed: null, points: 0, max: 25 },
  docs_check:         { passed: null, points: 0, max: 20 },
  oss_check:          { passed: null, points: 0, max: 15 },
  root_cause_check:   { passed: null, points: 0, max: 15 }
}
```
Output: Initialized assessment structure
Duration: Instant
Purpose: Prevent reimplementing existing functionality
Processing:
1. Search codebase for similar implementations:

   ```
   grep -r "LoginButton" src/
   grep -rE "authenticateUser|authenticate.*user" src/
   ```

2. Check package.json for existing libraries:

   ```
   grep -iE "jsonwebtoken|jwt" package.json
   ```

3. Review existing architecture (e.g., src/middleware/auth.js, src/routes/auth.js)

Scoring:
Output:
- duplicate_check.passed: true | false
- duplicate_check.points: 0 | 15 | 25
- duplicate_check.evidence: File paths and code snippets showing existing implementation (if found)

Duration: 1-3 minutes
Example:
User: "Build authentication middleware"
Search: grep -r "auth.*middleware" src/
Found: src/middleware/authenticate.js (active, exports authenticateUser)
Result: FAIL (0/25) - Duplicate implementation exists
Evidence: "src/middleware/authenticate.js already implements JWT authentication"
Purpose: Ensure proposed approach aligns with system architecture patterns
Processing:
1. Identify architecture patterns
2. Locate architecture documentation:
   - ARCHITECTURE.md, CONTRIBUTING.md, docs/architecture/
   - src/ directory structure
3. Verify proposed approach matches patterns:
   - Does it follow the /components/atoms/ structure?
   - Does it respect the /routes/ → /controllers/ → /services/ layers?

Scoring:
Output:
- architecture_check.passed: true | false
- architecture_check.points: 0 | 15 | 25
- architecture_check.rationale: Explanation of alignment or violation

Duration: 2-4 minutes
Example:
User: "Add getUserById() in routes/users.js"
Architecture: Project uses MVC (routes → controllers → services → models)
Proposed: Adding business logic (getUserById) directly in routes
Result: FAIL (0/25) - Violates MVC pattern
Rationale: "getUserById should be in services/userService.js, routes should only handle HTTP"
Purpose: Ensure implementation uses current, official API syntax and patterns
Processing:
1. Identify required documentation
2. Access official documentation:

   ```
   tavily_search("Redis client.connect() API current syntax")
   get_library_docs("/redis/redis", topic: "client connection")
   ```

3. Verify current API syntax:
   - Check for deprecated APIs (e.g., componentWillMount in React)

Scoring:
Output:
- docs_check.passed: true | false
- docs_check.points: 0 | 10 | 20
- docs_check.source: URL or doc reference consulted
- docs_check.verification: Specific API syntax confirmed

Duration: 2-5 minutes (depending on MCP availability)
Example:
User: "Use Redis client.connect()"
Action: Search redis.io documentation
Found: Redis 4.x requires: await client.connect() (async)
Redis 3.x used: client.connect(callback)
Verification: Project uses Redis 4.x (package.json: "redis": "^4.6.0")
Result: PASS (20/20) - Correct async syntax for Redis 4.x
Source: https://redis.io/docs/latest/develop/connect/clients/nodejs/
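The version-pinning step above can be sketched as code. This assumes a Node.js project whose package.json declares redis as a dependency; the version parsing is deliberately naive:

```javascript
const fs = require("node:fs");

// Picks the connect syntax matching the installed redis major version.
function redisConnectSyntax(pkgPath = "package.json") {
  const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
  const spec = (pkg.dependencies && pkg.dependencies.redis) || ""; // e.g. "^4.6.0"
  const major = parseInt(spec.replace(/^[^\d]*/, ""), 10);
  return major >= 4
    ? "await client.connect()   // 4.x is promise-based"
    : "client.connect(callback) // 3.x is callback-based";
}
```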
Purpose: Learn from production-proven implementations, avoid reinventing solved problems
Processing:
1. Identify OSS research need
2. Search for working implementations:

   ```
   github_search_repos("real-time collaborative editing", language: "javascript")
   // Filter: stars > 1000, recently updated
   tavily_search("production WebSocket real-time sync implementation")
   ```

3. Evaluate OSS quality
4. Extract learnings

Scoring:
Output:
- oss_check.passed: true | false | null (N/A)
- oss_check.points: 0 | 8 | 15
- oss_check.examples: List of OSS repositories with URLs and star counts
- oss_check.learnings: Key design patterns extracted from OSS

Duration: 5-10 minutes (research intensive)
Example:
User: "Build real-time collaborative editing"
Action: Search GitHub for "collaborative editing CRDT"
Found:
1. Yjs (github.com/yjs/yjs) - 13.2k stars, active, used by Google, Microsoft
2. Automerge (github.com/automerge/automerge) - 3.5k stars, active, research-backed
3. ShareDB (github.com/share/sharedb) - 6.1k stars, active, Operational Transforms
Learnings:
- Yjs uses CRDT (Conflict-free Replicated Data Types) for automatic conflict resolution
- WebSocket for real-time sync, with offline support and eventual consistency
- State vector compression reduces bandwidth (only send deltas)
Result: PASS (15/15) - Production OSS researched, design patterns identified
Examples: ["yjs/yjs (13.2k stars)", "automerge/automerge (3.5k stars)"]
Purpose: For fixes/improvements, verify diagnostic evidence of actual problem before implementing solution
Processing:
1. Determine whether the root cause check applies (fixes and improvements; N/A for new features)
2. If applicable, gather diagnostic evidence
3. Verify the evidence identifies a root cause, not a symptom:
   - ❌ Symptom: "API is slow" → ✅ Root cause: "Database query takes 2.4s due to missing index on users.email"
   - ❌ Symptom: "Memory leak" → ✅ Root cause: "EventEmitter listeners not removed in componentWillUnmount, accumulating 1000+ listeners"
   - ❌ Symptom: "App crashes" → ✅ Root cause: "Uncaught promise rejection in async fetchData() when API returns 404"
4. Validate that the proposed solution addresses the root cause
Scoring:
Output:
- root_cause_check.passed: true | false | null (N/A)
- root_cause_check.points: 0 | 8 | 15
- root_cause_check.evidence: Diagnostic data (logs, profiler, metrics)
- root_cause_check.cause: Identified root cause
- root_cause_check.solution_alignment: Does the solution address the cause?

Duration: 3-8 minutes (depending on diagnostic complexity)
Example:
User: "API is slow, add caching"
Action: Request diagnostic evidence
User provides: "Logs show /api/users taking 3.2s average"
Investigation:
- Check: Database query logs
- Found: SELECT * FROM users WHERE email = ? (no index on email column)
- Profiler: 95% of time spent in database query
Root Cause: Missing database index on users.email column (full table scan on every lookup)
Proposed Solution: "Add caching"
Alignment: MISMATCH - Caching treats symptom, doesn't fix root cause
Better Solution: "Add index on users.email column"
Result: FAIL (0/15) - Solution doesn't address root cause
Evidence: "Database profiler shows 3.1s query time on unindexed email column"
Cause: "Missing index on users.email"
Solution Alignment: "Proposed caching, should add database index instead"
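The keyword-detection rule earlier (the counter to the "new feature" rationalization) can be sketched as follows; the keyword list is an assumption, and the real trigger set may be broader:

```javascript
// Root cause check is mandatory for fixes/improvements, N/A for new features.
const FIX_KEYWORDS = /\b(fix|bug|slow|leak|crash|error|improve|optimi[sz]e)\b/i;

function rootCauseApplies(specification) {
  return FIX_KEYWORDS.test(specification);
}

// rootCauseApplies("API is slow, add caching")       // → true: evidence required
// rootCauseApplies("Add error logging with Winston") // → false: scored N/A
```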
Input: All 5 check results
Processing:
1. Sum points from all checks:

   ```javascript
   total_points = duplicate_check.points +
                  architecture_check.points +
                  docs_check.points +
                  oss_check.points +
                  root_cause_check.points
   ```

2. Calculate confidence score (0.00-1.00):

   ```javascript
   confidence_score = total_points / 100.0
   ```

3. Determine threshold band:

   ```javascript
   if (confidence_score >= 0.90) {
     decision = "PROCEED"
     action = "Begin implementation"
   } else if (confidence_score >= 0.70) {
     decision = "CLARIFY"
     action = "Request missing information before proceeding"
   } else {
     decision = "STOP"
     action = "Too many unknowns, requires deeper analysis or spec revision"
   }
   ```

4. Identify missing checks (if <90%):

   ```javascript
   missing_checks = checks.filter(c => c.points < c.max)
   // Example: [{name: "docs", missing: 20}, {name: "oss", missing: 15}]
   ```
Output:
- confidence_score: 0.00-1.00 (e.g., 0.85)
- decision: "PROCEED" | "CLARIFY" | "STOP"
- action: Recommended next step
- missing_checks: List of incomplete checks with missing points

Duration: Instant (calculation)
Example:
Results:
duplicate_check: 25/25 ✅
architecture_check: 25/25 ✅
docs_check: 20/20 ✅
oss_check: 0/15 ❌ (no OSS researched)
root_cause_check: 15/15 ✅ (N/A, new feature)
Total: 85/100
Confidence: 0.85 (85%)
Decision: CLARIFY
Action: "Request OSS examples before proceeding"
Missing: ["OSS reference (0/15)"]
Input: Complete assessment with decision
Processing:
1. Format assessment report:

   ```markdown
   # Confidence Check: [Feature Name]

   **Total Confidence**: X.XX (XX%)
   **Decision**: PROCEED | CLARIFY | STOP

   ## 5-Check Results

   | Check | Points | Status | Evidence |
   |-------|--------|--------|----------|
   | Duplicate | XX/25 | ✅/❌ | [Details] |
   | Architecture | XX/25 | ✅/❌ | [Details] |
   | Docs | XX/20 | ✅/❌ | [Details] |
   | OSS | XX/15 | ✅/❌ | [Details] |
   | Root Cause | XX/15 | ✅/❌ | [Details] |

   ## Decision: [PROCEED/CLARIFY/STOP]
   [Action description]

   ## Next Steps
   [Specific actions based on decision]
   ```
2. Save to Serena MCP (if available and complexity >= 0.50):

   ```javascript
   serena_write_memory(`confidence_check_${feature_name}_${timestamp}`, {
     feature: feature_name,
     confidence_score: 0.85,
     decision: "CLARIFY",
     checks: [...],
     missing_checks: [...]
   })
   ```

3. Integrate with spec-analysis (see the integration section below)
Output: Formatted markdown report with decision and next steps
Duration: 1 minute
Input: Decision (PROCEED | CLARIFY | STOP)
Processing:
If PROCEED (≥90%): begin implementation immediately.
If CLARIFY (70-89%): present the missing checks to the user, request the missing information, and re-run the check.
If STOP (<70%): halt; too many unknowns. Recommend deeper analysis or spec revision (e.g., /shannon:spec).
Output: Executed decision with user feedback or implementation start
Duration: Depends on decision path
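As a sketch, the dispatch on the decision band; the three handlers are hypothetical hooks into the host workflow:

```javascript
function executeDecision(result) {
  switch (result.decision) {
    case "PROCEED": return beginImplementation(result);          // ≥90%
    case "CLARIFY": return requestClarification(result.missing); // 70-89%
    case "STOP":    return recommendDeeperAnalysis(result);      // <70%
  }
}
```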
Confidence score informs spec-analysis dimensions:
```javascript
// In spec-analysis workflow
const confidence_result = run_confidence_check(feature_request)

// Update Uncertainty dimension (10% weight in 8D)
if (confidence_result.score < 0.70) {
  uncertainty_score += 0.30  // Major unknowns
} else if (confidence_result.score < 0.90) {
  uncertainty_score += 0.15  // Minor clarifications needed
}

// Update Cognitive dimension (15% weight) if architecture unclear
if (confidence_result.architecture_check.points < 15) {
  cognitive_score += 0.20  // Need deeper architectural thinking
}

// Update Technical dimension (15% weight) if no OSS reference
if (confidence_result.oss_check.points === 0) {
  technical_score += 0.15  // Increased technical risk without proven patterns
}

// Recalculate total complexity with confidence-adjusted dimensions
total_complexity = calculate_8d_weighted_total()
```
Result: Confidence check directly impacts project complexity assessment and resource planning.
Structured confidence assessment:
```json
{
  "feature": "authentication_middleware",
  "timestamp": "2025-11-04T10:30:00Z",
  "confidence_score": 0.85,
  "decision": "CLARIFY",
  "checks": [
    {
      "name": "duplicate",
      "points": 25,
      "max": 25,
      "passed": true,
      "evidence": "No existing auth middleware found in src/"
    },
    {
      "name": "architecture",
      "points": 25,
      "max": 25,
      "passed": true,
      "rationale": "Follows MVC pattern: middleware/ directory exists"
    },
    {
      "name": "docs",
      "points": 20,
      "max": 20,
      "passed": true,
      "source": "https://expressjs.com/en/guide/writing-middleware.html",
      "verification": "Confirmed Express 4.x middleware syntax"
    },
    {
      "name": "oss",
      "points": 0,
      "max": 15,
      "passed": false,
      "examples": [],
      "reason": "No OSS authentication middleware researched"
    },
    {
      "name": "root_cause",
      "points": 15,
      "max": 15,
      "passed": null,
      "note": "N/A - new feature, not a fix"
    }
  ],
  "missing_checks": [
    {
      "name": "oss",
      "missing_points": 15,
      "recommendation": "Research Passport.js or express-jwt OSS implementations"
    }
  ],
  "action": "Request OSS examples before proceeding (need 90% confidence)",
  "next_steps": [
    "User: Provide working OSS reference for authentication middleware",
    "User: Consider using Passport.js (13k stars) or express-jwt (6k stars)",
    "Agent: Re-run confidence check after OSS research"
  ]
}
```
This skill succeeds if the final assessment satisfies the following validation invariants:
```python
def validate_confidence_check(result):
    assert 0.0 <= result['confidence_score'] <= 1.0
    assert result['confidence_score'] == sum(c['points'] for c in result['checks']) / 100.0

    # Verify decision thresholds
    if result['confidence_score'] >= 0.90:
        assert result['decision'] == 'PROCEED'
    elif result['confidence_score'] >= 0.70:
        assert result['decision'] == 'CLARIFY'
    else:
        assert result['decision'] == 'STOP'

    # Verify all checks present
    assert len(result['checks']) == 5
    check_names = [c['name'] for c in result['checks']]
    assert set(check_names) == {'duplicate', 'architecture', 'docs', 'oss', 'root_cause'}

    # Verify missing checks identified if <90%
    if result['confidence_score'] < 0.90:
        assert len(result['missing_checks']) > 0
```
Wrong:
User: "I'm 80% sure this is right"
Agent: "Let's proceed with 80% confidence"
Right:
User: "I'm 80% sure this is right"
Agent: "Running 5-check validation..."
Result:
- duplicate: 25/25
- architecture: 0/25 (violates MVC pattern)
- docs: 20/20
- oss: 15/15
- root_cause: N/A
Total: 60/100 (60% confidence)
Decision: STOP (below 70% threshold)
Why: User's stated confidence is subjective. Algorithm score is objective and often reveals hidden gaps.
Wrong:
User: "Just add a login button"
Agent: "Simple task, proceeding..."
[Creates LoginButton.tsx, duplicating existing LoginButton.jsx]
Right:
User: "Just add a login button"
Agent: "Running confidence check..."
Check 1 (Duplicate): Searching for existing login buttons...
Found: src/components/LoginButton.jsx (active)
Result: FAIL (0/25) - Duplicate exists
Decision: STOP (25% confidence)
Recommendation: "Use existing LoginButton.jsx instead of creating new component"
Why: "Simple" tasks often duplicate existing code. 30-second check prevents 2-hour rework.
Wrong:
User: "Use Redis caching"
Agent: "I'll use client.connect(callback)..."
[Uses Redis 3.x syntax in Redis 4.x project → Error]
Right:
User: "Use Redis caching"
Agent: "Checking official Redis documentation..."
Version in package.json: "redis": "^4.6.0"
Redis 4.x docs: client.connect() is async (no callback)
Correct syntax: await client.connect()
Result: docs check PASS (20/20)
Why: APIs change (Redis 3.x → 4.x breaking changes). Always verify current docs.
Wrong:
User: "Build real-time collaborative editing"
Agent: "I'll design a custom sync protocol..."
[Spends 2 weeks, hits race conditions, conflict resolution issues]
Right:
User: "Build real-time collaborative editing"
Agent: "Researching production OSS implementations..."
Found: Yjs (13.2k stars, used by Google/Microsoft)
Learnings: CRDT for conflict-free sync, state vector compression, offline support
Decision: Use Yjs patterns, adapt to our use case
Result: oss check PASS (15/15)
[Saves 2 weeks by learning from production-proven code]
Why: OSS has solved hard problems (race conditions, edge cases). Learn from battle-tested code.
Wrong:
User: "API is slow, add caching"
Agent: "Adding Redis caching..."
[Implements caching, API still slow - root cause was missing database index]
Right:
User: "API is slow, add caching"
Agent: "Requesting diagnostic evidence for root cause check..."
User: [provides profiler data]
Analysis: 95% time in database query (missing index on users.email)
Root Cause: Missing database index
Proposed Solution: "Add caching"
Alignment: MISMATCH
Correct Solution: "Add database index on users.email"
Result: root_cause check adjusted (proposes correct solution)
Why: Diagnosis before prescription. Caching treats symptom; index fixes cause.
Wrong:
Confidence: 85%
Agent: "85% is close to 90%, proceeding..."
Right:
Confidence: 85%
Decision: CLARIFY (70-89% band)
Action: "Missing OSS reference (0/15). Please provide working example to reach 90%."
[Wait for clarification before proceeding]
Why: Thresholds are exact, not approximate. 85% = CLARIFY, not PROCEED.
Input:
User: "Add error logging to API endpoints using Winston library"
Process:
1. Duplicate Check (25/25): `grep -r "winston" src/` finds no existing Winston logging
2. Architecture Check (25/25)
3. Docs Check (20/20)
4. OSS Check (15/15)
5. Root Cause Check (15/15): N/A, new feature
Output:
```json
{
  "feature": "winston_error_logging",
  "confidence_score": 1.00,
  "decision": "PROCEED",
  "checks": [
    {"name": "duplicate", "points": 25, "passed": true},
    {"name": "architecture", "points": 25, "passed": true},
    {"name": "docs", "points": 20, "passed": true},
    {"name": "oss", "points": 15, "passed": true},
    {"name": "root_cause", "points": 15, "passed": null}
  ],
  "action": "Proceed to implementation with 100% confidence",
  "next_steps": [
    "Create src/middleware/logger.js using Winston",
    "Register middleware in src/app.js",
    "Test error logging on sample endpoint"
  ]
}
```
Decision: PROCEED ✅ (100% confidence)
Input:
User: "Implement WebSocket real-time notifications"
Process:
1. Duplicate Check (25/25): `grep -r "websocket\|socket\.io" src/` finds no existing WebSocket code
2. Architecture Check (25/25)
3. Docs Check (20/20)
4. OSS Check (0/15): no OSS researched
5. Root Cause Check (15/15): N/A, new feature
Output:
```json
{
  "feature": "websocket_notifications",
  "confidence_score": 0.85,
  "decision": "CLARIFY",
  "checks": [
    {"name": "duplicate", "points": 25, "passed": true},
    {"name": "architecture", "points": 25, "passed": true},
    {"name": "docs", "points": 20, "passed": true},
    {"name": "oss", "points": 0, "passed": false, "reason": "No OSS researched"},
    {"name": "root_cause", "points": 15, "passed": null}
  ],
  "missing_checks": [
    {
      "name": "oss",
      "missing_points": 15,
      "recommendation": "Research Socket.io notification patterns from production apps"
    }
  ],
  "action": "Request OSS examples before proceeding",
  "next_steps": [
    "User: Provide working Socket.io notification example from GitHub",
    "Suggested repos: socket.io-chat, slack-clone, discord-clone",
    "Agent: Review OSS patterns (room management, broadcast strategies, reconnection logic)",
    "Agent: Re-run confidence check after OSS research"
  ]
}
```
Decision: CLARIFY ⚠️ (85% confidence - need OSS research to reach 90%)
Input:
User: "Fix the memory leak in the dashboard"
Process:
1. Duplicate Check (25/25)
2. Architecture Check (0/25): component not specified
3. Docs Check (0/20): cannot verify without root cause
4. OSS Check (0/15): cannot research without specifics
5. Root Cause Check (0/15): no diagnostic evidence
Output:
```json
{
  "feature": "memory_leak_fix",
  "confidence_score": 0.25,
  "decision": "STOP",
  "checks": [
    {"name": "duplicate", "points": 25, "passed": true},
    {"name": "architecture", "points": 0, "passed": false, "reason": "Component not specified"},
    {"name": "docs", "points": 0, "passed": false, "reason": "Cannot verify without root cause"},
    {"name": "oss", "points": 0, "passed": false, "reason": "Cannot research without specifics"},
    {"name": "root_cause", "points": 0, "passed": false, "reason": "No diagnostic evidence"}
  ],
  "missing_checks": [
    {"name": "architecture", "missing_points": 25},
    {"name": "docs", "missing_points": 20},
    {"name": "oss", "missing_points": 15},
    {"name": "root_cause", "missing_points": 15}
  ],
  "action": "STOP - Too many unknowns (25% confidence)",
  "next_steps": [
    "User: Provide diagnostic evidence:",
    "  1. Which component has memory leak? (Chrome DevTools Memory profiler)",
    "  2. Heap snapshot showing leak growth over time",
    "  3. Reproduction steps (actions that trigger leak)",
    "  4. Browser console warnings/errors",
    "Alternative: Run /shannon:spec for deeper analysis phase",
    "Agent: Re-run confidence check after diagnostic evidence provided"
  ]
}
```
Decision: STOP 🛑 (25% confidence - critical gaps, requires investigation)
How to verify confidence-check executed correctly:
1. All 5 checks executed (duplicate, architecture, docs, oss, root_cause)
2. Score calculation correct: confidence_score equals total points / 100
3. Decision matches the threshold band (≥90% PROCEED, 70-89% CLARIFY, <70% STOP)
4. Missing checks identified whenever the score is below 90%
5. Evidence documented for each check
6. Run the validation script (if available):
```
python3 shannon-plugin/tests/test_confidence_check.py
# Expected: ✅ All validation checks passed
```
In SKILL.md (this file): ~1100 lines
In references/ (for advanced usage):
- references/CONFIDENCE_ALGORITHM.md: Mathematical formulas, edge cases
- references/OSS_RESEARCH_GUIDE.md: How to evaluate OSS quality, extract learnings
- references/ROOT_CAUSE_PATTERNS.md: Common root cause patterns by domain

Claude loads references/ on demand when advanced usage requires them.
Version: 4.0.0
Last Updated: 2025-11-04
Author: Shannon Framework Team (Adapted from SuperClaude)
License: MIT
Status: Core (QUANTITATIVE skill, mandatory before implementation)
Proven ROI: 25-250x token savings in SuperClaude production use
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.