Root cause narrative synthesis for CAPPY investigations — verifies remediation steps against Cortex docs and resolved TAC cases, constructs the investigation story from verified claims, prepares the Phase 5 research direction, and produces the root cause statement.
```bash
npx claudepluginhub thelightarchitect/cappy-toolkit --plugin cappy-toolkit
```

This skill uses the workspace's default tool permissions.
<!-- Copyright (C) 2025-2026 Kevin Francis Tan (github.com/theLightArchitect) | SPDX-License-Identifier: AGPL-3.0-or-later -->
Version: 1.0.0
Purpose: Create investigation narrative and prepare Phase 5 research direction
Agent: CAPPY (singleton agent)
Created: 2026-02-05
At the very start of execution, output this exact block:
[ Phase 6 — Solution ]
Tools: direct reasoning — narrative compilation
Then before each tool call, output a one-liner:
→ {tool-name} {key parameter}
When to Trigger: After agent synthesizes investigation narrative (informational, not blocking)
Hook Specification:
```rust
NarrativeCoherenceHook::trigger(
    narrative: String,               // Synthesized 3-phase narrative
    coherence_score: f32,            // Logical flow quality (0.0-1.0)
    phase_2_to_3_connection: String, // How triage connects to evidence
    phase_3_to_4_connection: String, // How evidence connects to hypothesis
    gaps_identified: Vec<String>,    // Evidence gaps found during synthesis
    phase_5_targets: Vec<String>,    // Specific focus areas for validation
    details: HashMap {
        "narrative_length": usize,
        "assumptions_count": usize,
        "cited_evidence_count": usize,
        "evidence_gaps_count": usize
    }
)
```
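For illustration, the hook payload can be mirrored as a plain Python dict. This is a minimal sketch: `build_coherence_payload` is a hypothetical helper, with field names taken from the specification above and the derived `details` counts computed from the inputs.

```python
def build_coherence_payload(narrative, phase_2_to_3, phase_3_to_4,
                            gaps, targets, assumptions_count,
                            cited_evidence_count, coherence_score):
    """Assemble a dict mirroring the NarrativeCoherenceHook::trigger fields."""
    return {
        "narrative": narrative,
        "coherence_score": coherence_score,  # 0.0-1.0 logical flow quality
        "phase_2_to_3_connection": phase_2_to_3,
        "phase_3_to_4_connection": phase_3_to_4,
        "gaps_identified": gaps,
        "phase_5_targets": targets,
        "details": {
            # Derived counts, computed from the inputs rather than passed in
            "narrative_length": len(narrative),
            "assumptions_count": assumptions_count,
            "cited_evidence_count": cited_evidence_count,
            "evidence_gaps_count": len(gaps),
        },
    }
```

Keeping the derived counts computed (rather than caller-supplied) avoids the payload drifting out of sync with the narrative and gap lists.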
Agent Implementation:
Hook Response Handling:
Hook returns: { coherence_feedback: String, gap_severity: Vec<(String, String)> }
Response is INFORMATIONAL ONLY (doesn't block Phase 5):
→ Agent presents feedback to Claude for awareness
→ Claude may use feedback to refine Phase 5 strategy
→ Investigation proceeds regardless
Why This Hook: Synthesis is validation-stage work. Hook provides feedback on narrative quality but doesn't block progress (unlike gates). Helps Claude understand which gaps are most critical for Phase 5.
The /synthesis skill provides narrative structure templates and Phase 5 preparation guidance. Agent CAPPY uses it to create the investigation narrative and prepare Claude for Phase 5 validation.
Phase 2 → Phase 3 → Phase 4 Narrative Arc:
[BEGINNING] What we found during triage
"During triage, we identified [Pattern Name] (P-###) with [Confidence]% confidence."
Fact: Concrete pattern match from Phase 2
[MIDDLE] What evidence revealed
"Phase 3 evidence analysis confirms this: [Key Evidence Facts]."
Facts: Direct quotes from Phase 3 evidence
Citations: file:line or HAR:entry references
[END] What hypothesis explains
"Our analysis indicates the root cause is: [Root Cause]."
Logic: How evidence points to root cause
Assumptions: What we haven't verified yet
[PHASE 5 FOCUS] What we need to verify
"For Phase 5 validation, we'll focus on: [Key Verification Targets]"
Action: Specific things to research/verify
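The arc above can be sketched as a small rendering helper. This is a minimal sketch, assuming the four segments arrive as plain strings; `render_arc` is a hypothetical name, distinct from the skill's own `create_narrative` function.

```python
def render_arc(pattern, confidence, evidence, root_cause, focus):
    """Join the four narrative segments in arc order, using the stems above."""
    return " ".join([
        f"During triage, we identified {pattern} with {confidence}% confidence.",
        f"Phase 3 evidence analysis confirms this: {evidence}.",
        f"Our analysis indicates the root cause is: {root_cause}.",
        f"For Phase 5 validation, we'll focus on: {focus}.",
    ])
```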
**Configuration template:**
During triage, we identified pattern P-### (Configuration) with X% confidence.
Phase 3 evidence analysis reveals:
- Configuration setting missing: [Setting Name]
- Or: Configuration has wrong value: [Expected] vs [Actual]
- Or: Feature disabled: [Feature]
- Evidence: HAR shows [error], logs show [message]
Our hypothesis: The root cause is [Setting] is incorrectly configured.
- Why this explains the symptoms: [Logic]
- What would fix it: [Solution]
For Phase 5 validation, we'll verify:
1. Confirm current [Setting] value in customer config
2. Compare against expected value from docs
3. Validate that changing [Setting] to [Expected] resolves issue
**Rate limiting template:**
During triage, we identified pattern P-### (Rate Limiting) with X% confidence.
Phase 3 evidence analysis reveals:
- HTTP 429 (Too Many Requests) errors at [timestamp]
- Request rate was [Rate] requests/minute
- Started failing at [specific time]
- Evidence: HAR shows 429 responses, logs show rate messages
Our hypothesis: The root cause is rate limiting on [API/Integration].
- API/Integration has limit: [Limit] requests/minute
- Customer's request rate exceeded this: [Rate] > [Limit]
- Retry behavior created cascade: [Description]
For Phase 5 validation, we'll verify:
1. Confirm API rate limit in [Product] [Version] documentation
2. Verify customer's request rate during test window
3. Calculate if [Rate] exceeds [Limit] = rate limit exceeded
4. Review API docs for workarounds (pagination, backoff, etc.)
**Timeout template:**
During triage, we identified pattern P-### (Timeout) with X% confidence.
Phase 3 evidence analysis reveals:
- Timeout occurred at [timestamp]
- Timeout after [X] seconds
- Integration type: [webhook/polling/API]
- Evidence: HAR shows timeout, logs show connection lost
Our hypothesis: The root cause is [Timeout Type] on [Integration].
- [Integration] has timeout: [X] seconds
- Customer's [operation] takes longer than [X] seconds
- Or: Network/connectivity issue causing delay
For Phase 5 validation, we'll verify:
1. Confirm [Integration] timeout setting in customer config
2. Check if customer's [operation] normally takes [Y] seconds
3. Verify network connectivity to [endpoint]
4. Review firewall rules or TLS certificate issues
**Version compatibility template:**
During triage, we identified pattern P-### (Version Compatibility) with X% confidence.
Phase 3 evidence analysis reveals:
- Customer is using [Product] [Version] [Build]
- Feature [Feature] behaves differently in this version
- Or: Feature [Feature] deprecated in [Version]
- Evidence: Docs show [behavior], customer sees [different behavior]
Our hypothesis: The root cause is [Feature] incompatibility with [Version].
- [Feature] changed in [Version]: [How it changed]
- Customer's workflow relied on old behavior
- Or: [Feature] removed/deprecated in [Version]
For Phase 5 validation, we'll verify:
1. Confirm feature behavior in [Product] [Version] documentation
2. Review changelog for [Version] regarding [Feature]
3. Identify migration path or workaround
4. Test recommended alternative: [Alternative]
Key assumption extraction process:
1. Extract all assumptions from hypothesis
2. For each assumption, ask: "Is this verified in Phase 3 evidence?"
3. If NO → Mark as KEY_ASSUMPTION
4. If YES → Don't include (already verified)
Example:
Hypothesis: "API has 100 req/min rate limit, customer exceeded it"
Assumption 1: "API has 100 req/min rate limit"
Verified in Phase 3? NO (inferred from 429 errors)
Mark as: KEY_ASSUMPTION
Verification Strategy: Search customer config, API docs
Assumption 2: "Customer's request rate exceeded 100 req/min"
Verified in Phase 3? YES (HAR shows 150 req/min at failure time)
Mark as: VERIFIED, don't include as assumption
Assumption 3: "Retry behavior created cascade"
Verified in Phase 3? PARTIALLY (see retries, not full logic)
Mark as: WEAK_ASSUMPTION
Verification Strategy: Search integration config, docs
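The classification rule in the worked example can be sketched as a simple lookup. This is a minimal sketch, assuming the Phase 3 verification status is one of the three literal answers used above.

```python
def classify_assumption(verified):
    """Map a Phase 3 verification status to an assumption label.

    verified: "YES", "NO", or "PARTIALLY".
    """
    return {
        "YES": "VERIFIED",               # already proven; not carried forward
        "NO": "KEY_ASSUMPTION",          # must be verified in Phase 5
        "PARTIALLY": "WEAK_ASSUMPTION",  # partially supported; re-check in Phase 5
    }[verified]
```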
For each key assumption, generate Phase 5 targets:
Assumption: "API rate limit is 100 req/min"
Target 1 (Primary):
description: "Confirm API endpoint rate limit value"
sources: ["Customer config file", "API documentation"]
search_terms: ["rate_limit", "max_requests", "throttle", "429"]
expected: "Find configuration or documentation confirming limit value"
Target 2 (Secondary):
description: "Verify customer's actual request rate"
sources: ["HAR file", "Integration logs", "Monitoring data"]
search_terms: ["request rate", "requests per minute", "polling interval"]
expected: "Confirm customer was sending > limit requests"
Target 3 (Secondary):
description: "Check for API workarounds or best practices"
sources: ["API docs", "TAC playbooks", "Cortex documentation"]
search_terms: ["backoff", "retry logic", "pagination", "batch"]
expected: "Identify recommended approach for high-volume requests"
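The target structure above can be modeled as a small dataclass. This is a minimal sketch; `Phase5Target` and `targets_for_assumption` are hypothetical names, with fields taken from the example (description, sources, search_terms, expected) plus a priority flag distinguishing primary from secondary targets.

```python
from dataclasses import dataclass

@dataclass
class Phase5Target:
    description: str
    sources: list
    search_terms: list
    expected: str
    priority: str = "SECONDARY"  # first target per assumption is promoted to PRIMARY

def targets_for_assumption(assumption, targets):
    """Group Phase 5 targets under the assumption they verify; the first is PRIMARY."""
    if targets:
        targets[0].priority = "PRIMARY"
    return {"assumption": assumption, "targets": targets}
```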
Evidence Gap Types:
1. Missing Configuration
gap: "Customer's integration polling interval unknown"
impact: "Can't confirm if polling rate caused issue"
verification: "Search customer config files in Phase 5"
2. Unverified Behavior
gap: "Webhook retry logic not confirmed"
impact: "Can't confirm cascade effect"
verification: "Search integration docs, customer logs"
3. Timing Uncertainty
gap: "Exact timeout value unknown (assumed 30s)"
impact: "Can't confirm if timeout caused issue"
verification: "Check customer config or product docs"
4. Architecture Assumptions
gap: "Webhook integration architecture not confirmed"
impact: "May be wrong integration type"
verification: "Verify with customer or JIRA history"
Phase 5 Focus:
- Prioritize gaps that affect root cause confirmation
- Mark gaps that would change hypothesis if filled
- Document workaround for gaps that can't be filled
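The prioritization rules above can be sketched as a sort. This is a minimal sketch; the `changes_hypothesis` and `affects_root_cause` flags are hypothetical field names, assumed to be attached to each gap record during synthesis.

```python
def prioritize_gaps(gaps):
    """Order evidence gaps: hypothesis-changing first, then root-cause-affecting.

    Each gap dict may carry boolean 'changes_hypothesis' and
    'affects_root_cause' flags; missing flags are treated as False.
    """
    return sorted(gaps, key=lambda g: (not g.get("changes_hypothesis", False),
                                       not g.get("affects_root_cause", False)))
```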
Compute narrative_coherence_score (0–100) before returning:
| Criterion | Points |
|---|---|
| All 3 phases (triage/evidence/hypothesis) explicitly connected in narrative | 40 |
| Causal logic chain is explicit and every claim traces to a cited artifact | 40 |
| All key assumptions stated and bounded (what would change the conclusion) | 20 |
Score ≥ 80 → gate PASSES and handoff proceeds to Phase 7 (SP-5).
Score < 80 → gate FAILS; include recovery_options with what narrative gaps remain.
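The rubric and gate can be sketched directly from the table. This is a minimal sketch, with each criterion reduced to a boolean judged during synthesis.

```python
def narrative_coherence_score(phases_connected, causal_chain_cited, assumptions_bounded):
    """Score the narrative against the rubric: 40 + 40 + 20 points."""
    score = 0
    if phases_connected:       # all 3 phases explicitly connected
        score += 40
    if causal_chain_cited:     # every claim traces to a cited artifact
        score += 40
    if assumptions_bounded:    # key assumptions stated and bounded
        score += 20
    return score

def gate_passes(score):
    """Gate to Phase 7 (SP-5) requires a score of at least 80."""
    return score >= 80
```

Note that the gate cannot pass on assumptions alone: at least both 40-point criteria, or one 40-point criterion plus more than the rubric allows, must hold, so a passing narrative always has its phases connected and its claims cited.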
```json
{
  "status": "NARRATIVE_READY",
  "investigation_summary": {
    "phase_2": "[Pattern found at X% confidence]",
    "phase_3": "[Key evidence facts from logs/HAR]",
    "phase_4": "[Root cause hypothesis]"
  },
  "narrative": "[Complete 3-phase narrative connecting findings]",
  "narrative_structure": "CONFIGURATION|RATE_LIMITING|TIMEOUT|VERSION_COMPATIBILITY",
  "narrative_coherence_score": 0,
  "coherence_check": {
    "phases_connected": true,
    "logic_flow": "STRONG|WEAK",
    "assumptions_explicit": true
  },
  "validation_focus": [
    "Primary: [Most critical thing to verify]",
    "Secondary: [Supporting verification]",
    "Secondary: [Workaround verification]"
  ],
  "key_assumptions": [
    {
      "assumption": "[Unverified claim]",
      "verified_confidence": 0.3,
      "importance": "CRITICAL|HIGH|MEDIUM",
      "test_strategy": "[How to verify in Phase 5]",
      "research_sources": ["[Source 1]", "[Source 2]"]
    }
  ],
  "evidence_gaps": [
    {
      "gap": "[What's missing]",
      "impact": "[How it affects hypothesis]",
      "verification": "[How to fill in Phase 5]"
    }
  ],
  "phase_5_preparation": {
    "primary_target": "[Main focus for Phase 5]",
    "secondary_targets": ["[Support 1]", "[Support 2]"],
    "sources_to_search": [
      "Customer config files",
      "JIRA tickets",
      "Cortex documentation",
      "Product changelog"
    ],
    "search_strategy": "[Recommended approach]",
    "success_criteria": "[What constitutes verification]"
  }
}
```
```python
from typing import Dict, List

def get_narrative_template(root_cause_type: str) -> str:
    """Returns narrative template for root cause type"""
    # CONFIGURATION, RATE_LIMITING, TIMEOUT, VERSION_COMPATIBILITY

def create_narrative(phase_summaries: Dict) -> str:
    """Creates 3-phase narrative from phase summaries"""
    # Takes: {phase_2: "...", phase_3: "...", phase_4: "..."}
    # Returns: Coherent narrative connecting all phases

def extract_validation_targets(hypothesis: str, inv_context: Dict) -> List[str]:
    """Extracts what needs verification in Phase 5"""
    # Returns: [primary_target, secondary_1, secondary_2, ...]

def extract_key_assumptions(inv_context: Dict) -> List[Dict]:
    """Extracts unverified assumptions from Phase 4"""
    # Returns: [{assumption, verified, importance, test_strategy}]

def extract_evidence_gaps(inv_context: Dict) -> List[Dict]:
    """Identifies evidence gaps from Phase 3"""
    # Returns: [{gap, impact, verification}]

def prepare_phase_5(hypothesis: str, assumptions: List, gaps: List) -> Dict:
    """Generates Phase 5 research direction"""
    # Returns: {primary_target, secondary_targets, sources, search_strategy}

def validate_narrative_coherence(narrative: str, inv_context: Dict) -> bool:
    """Validates that narrative connects all phases logically"""
    # Returns: True if coherent, False otherwise
```
Before building the root cause narrative and remediation steps, verify them against current documentation, similar resolved cases, and KCS playbooks. This ensures the solution is grounded in authoritative sources, not just investigation reasoning.
Cortex documentation — confirm remediation steps match current product docs:
```bash
~/.cappy/tools/web-fallback/websearch.sh "site:docs-cortex.paloaltonetworks.com {root_cause_keywords} {product} remediation" 5
```
Similar resolved cases — find precedent for this resolution:
**SF CLI (closed TAC cases, primary)**:

```bash
SANDBOX_OK=$(docker ps --filter name=cappy-client --format "{{.Status}}" 2>/dev/null | grep -c "Up")
if [ "$SANDBOX_OK" -gt 0 ]; then
  docker exec cappy-client sf data query \
    --query "SELECT CaseNumber, Subject, Product__c, Status, Cause__c, Resolution_Steps__c FROM Case WHERE (Subject LIKE '%{root_cause_keywords}%' OR Description LIKE '%{root_cause_keywords}%') AND Status = 'Closed' LIMIT 5" \
    --target-org panw --json 2>/dev/null
fi
```
Fallback (sandbox unavailable or auth expired):
```
mcp__mcp-gateway__mcp_jira__jira_search({
  jql: "project IN (XSUP, XSOAR) AND text ~ \"{root_cause_keywords}\" AND status = Closed ORDER BY updated DESC",
  fields: "summary,status,resolution",
  limit: 5
})
```
TAC playbooks / KCS articles:
```
mcp__mcp-gateway__confluence__confluence_search({
  query: "{root_cause_keywords} {product}",
  limit: 10
})
```
Incorporate any doc-sourced remediation steps into the narrative. Cite resolved cases as precedent. If a KCS article exists for this root cause, reference it explicitly in the solution.
The agent drives the skill roughly as follows (note that `extract_validation_targets` takes `inv_context` as well, per its signature):

```python
synthesis_skill = load_skill("/synthesis")
template = synthesis_skill.get_narrative_template(root_cause_type)
narrative = synthesis_skill.create_narrative({phase_2, phase_3, phase_4})
synthesis_skill.validate_narrative_coherence(narrative, inv_context)
targets = synthesis_skill.extract_validation_targets(hypothesis, inv_context)
assumptions = synthesis_skill.extract_key_assumptions(inv_context)
gaps = synthesis_skill.extract_evidence_gaps(inv_context)
phase_5_prep = synthesis_skill.prepare_phase_5(hypothesis, assumptions, gaps)
```

Output: `{narrative, validation_focus, key_assumptions, phase_5_preparation}`

Narrative synthesis is working when:
Skill Version: 1.0.0
Last Updated: 2026-02-05
Status: Ready for CAPPY (singleton agent) integration