Extract cross-campaign meta-patterns.
Extracts cross-campaign meta-patterns by analyzing completed workspaces for pattern clusters, evolution chains, and domain bridges. Use it after campaigns to identify reusable insights, failure modes, and success conditions that emerge across multiple projects.
Install:

```
/plugin marketplace add enzokro/crinzo-plugins
/plugin install ftl@crinzo-plugins
```

Model: `opus`

Patterns compound. Find connections across campaigns.
```bash
ls .ftl/campaigns/complete/*.json 2>/dev/null
```

Read all completed campaigns:

```bash
source ~/.config/ftl/paths.sh 2>/dev/null; python3 "$FTL_LIB/campaign.py" patterns
```
Build the pattern frequency map:

```bash
cat .ftl/synthesis.json 2>/dev/null || echo "{}"
```
Look for:

**Pattern Clusters** (things that work together):
- `#pattern/session-token-flow` often appears with `#pattern/refresh-token`
  → Meta-pattern: `token-lifecycle`

**Evolution Chains** (what replaced what):
- `#pattern/jwt-storage` → `#pattern/httponly-cookies`
  → Evolution: security improvement

**Decision-Based Evolution (Phase D)** (mine patterns across Options Considered sections):
- Decision 015 rejected localStorage (XSS)
- Decision 023 rejected localStorage (XSS)
  → `#antipattern/jwt-localstorage` evolves to `#pattern/httponly-cookies`

**Domain Bridges** (patterns that transfer):
- `#pattern/retry-with-backoff` used in auth, now applies to API
  → Bridge: resilience pattern
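The cluster detection above can be sketched as a pattern co-occurrence count over completed campaign files. This is a minimal sketch: the top-level `"patterns"` list in each campaign JSON is an assumed schema, not the actual FTL layout.

```python
import itertools
import json
from collections import Counter
from pathlib import Path

def cooccurrence(campaign_dir: str) -> Counter:
    """Count how often pairs of pattern tags appear in the same campaign.

    Assumes each completed campaign JSON carries a top-level "patterns"
    list (hypothetical schema -- adjust to the real campaign format).
    """
    pairs = Counter()
    for path in Path(campaign_dir).glob("*.json"):
        patterns = sorted(set(json.loads(path.read_text()).get("patterns", [])))
        for a, b in itertools.combinations(patterns, 2):
            pairs[(a, b)] += 1
    return pairs

def candidates(pairs: Counter, min_count: int = 2):
    """Pairs seen in several campaigns are meta-pattern candidates."""
    return [(pair, n) for pair, n in pairs.most_common() if n >= min_count]
```

A pair like (`#pattern/session-token-flow`, `#pattern/refresh-token`) crossing the threshold would be proposed as a meta-pattern such as `token-lifecycle`.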
For each significant pattern, mine Thinking Traces for semantic context. Look for rationale markers such as "Chose X because…", "This failed when…", and "Worked because…".
Example extraction:

> Thinking Traces: "Chose refresh tokens because re-authentication on every request creates poor UX. This failed when tokens were stored in localStorage (XSS vulnerable). Worked because we used httpOnly cookies with proper rotation."

Extracted:
- Rationale: Refresh tokens avoid re-authentication on every request
- Failure modes: localStorage storage (XSS vulnerable)
- Success conditions: httpOnly cookies, rotation policy
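The extraction step can be sketched with simple regex markers. The marker phrases below are inferred from the example trace above; real traces will vary, so treat this as a starting heuristic, not the mining implementation.

```python
import re

# Marker phrases assumed from the example trace; real traces may phrase
# these differently and need additional patterns.
MARKERS = {
    "rationale": r"[Cc]hose .*? because ([^.]+)\.",
    "failure_modes": r"[Tt]his failed when ([^.]+)\.",
    "success_conditions": r"[Ww]orked because (?:we used )?([^.]+)\.",
}

def mine_trace(trace: str) -> dict:
    """Pull rationale, failure modes, and success conditions from a trace."""
    return {key: re.findall(pattern, trace) for key, pattern in MARKERS.items()}
```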
Store in `synthesis.json`:

```json
{
  "pattern_semantics": {
    "#pattern/session-token-flow": {
      "rationale": "Refresh tokens avoid re-authentication",
      "failure_modes": ["localStorage XSS", "missing rotation"],
      "success_conditions": ["httpOnly cookies", "rotation policy"],
      "extracted_from": ["015", "023"]
    }
  }
}
```
Write to `.ftl/synthesis.json`:

```json
{
  "meta_patterns": [
    {
      "name": "token-lifecycle",
      "components": ["#pattern/session-token-flow", "#pattern/refresh-token"],
      "signals": {"positive": 5, "negative": 1},
      "domains": ["auth", "api"]
    }
  ],
  "evolution": [
    {
      "from": "#antipattern/jwt-localstorage",
      "to": "#pattern/httponly-cookies",
      "trigger": "security audit",
      "decisions": ["008", "015"],
      "rejected_count": 3,
      "rejection_reasons": ["XSS vulnerability", "security audit"],
      "confidence": 0.9
    }
  ],
  "conditions": {
    "#pattern/session-rotation": {
      "works_when": ["cookies", "long-lived sessions"],
      "fails_when": ["clock-skew", "high-concurrency"],
      "learned_from": ["015", "023"]
    }
  },
  "bridges": [
    {
      "pattern": "#pattern/retry-with-backoff",
      "from_domain": "auth",
      "to_domains": ["api", "external-services"]
    }
  ],
  "updated": "2025-01-02T12:00:00Z"
}
```
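Updating the synthesis file should preserve what earlier runs wrote. A minimal sketch of a merge-and-stamp write, assuming list-valued sections append and dict-valued sections update (the real merge policy may differ):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def update_synthesis(path: str, new_entries: dict) -> dict:
    """Shallow-merge new synthesis entries into synthesis.json and
    stamp the update time. Lists are appended, dicts are updated."""
    p = Path(path)
    data = json.loads(p.read_text()) if p.exists() else {}
    for key, value in new_entries.items():
        if isinstance(value, list):
            data.setdefault(key, []).extend(value)
        elif isinstance(value, dict):
            data.setdefault(key, {}).update(value)
        else:
            data[key] = value
    data["updated"] = datetime.now(timezone.utc).isoformat()
    p.write_text(json.dumps(data, indent=2))
    return data
```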
For each completed workspace in this campaign (`.ftl/workspace/*_complete.md`), extract its `## Key Findings` section:

```markdown
## Key Findings
#pattern/name - brief description
  Conditions: when this works
  Failure modes: when this breaks
#constraint/name - constraint discovered
  Conditions: when constraint applies
  Failure modes: what happens if violated
```
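A sketch of a parser for that findings format. It assumes the exact line shapes shown above (`#pattern/name - description` followed by `Conditions:` / `Failure modes:` lines); real workspace files may deviate.

```python
import re

def parse_findings(text: str) -> list[dict]:
    """Parse the "## Key Findings" section of a workspace file into
    tagged findings with their conditions and failure modes."""
    findings = []
    parts = text.split("## Key Findings", 1)
    if len(parts) < 2:
        return findings
    for line in parts[1].splitlines():
        line = line.strip()
        m = re.match(r"(#(?:pattern|constraint)/[\w-]+)\s*-\s*(.+)", line)
        if m:
            findings.append({"tag": m.group(1), "description": m.group(2)})
        elif findings and line.startswith("Conditions:"):
            findings[-1]["conditions"] = line.removeprefix("Conditions:").strip()
        elif findings and line.startswith("Failure modes:"):
            findings[-1]["failure_modes"] = line.removeprefix("Failure modes:").strip()
    return findings
```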
This replaces the per-task Learner. One pass at campaign end is richer than incremental shallow extraction.
Run once at campaign end (not per-task); this mines all completed workspaces into memory.json:

```bash
source ~/.config/ftl/paths.sh && python3 "$FTL_LIB/context_graph.py" mine
```
Report:

```markdown
## Synthesis Complete

### Meta-Patterns
- **token-lifecycle**: session-token-flow + refresh-token (net +4)

### Evolution
- jwt-storage → httponly-cookies (security)

### Bridges
- retry-with-backoff: auth → api, external-services

### Statistics
- Patterns analyzed: 12
- Meta-patterns: 3
- Evolutions tracked: 1
- Cross-domain bridges: 2
```
For the just-completed campaign, analyze what worked and store the retrospective in `synthesis.json`:
```json
{
  "retrospectives": [{
    "campaign": "oauth-integration",
    "completed": "2025-01-02",
    "tasks_total": 5,
    "tasks_revised": 1,
    "verification_pass_rate": 0.8,
    "useful_precedent": ["#pattern/session-token-flow"],
    "lessons": [
      "OAuth schema should come before provider implementations",
      "Token refresh tests caught edge cases early"
    ]
  }]
}
```
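The retrospective counters can be computed from per-task records. In this sketch the `"revised"` and `"passed_first"` flags are hypothetical field names, not the real workspace schema.

```python
def retrospective(campaign: str, tasks: list[dict]) -> dict:
    """Summarize a campaign from per-task records. Assumes each task
    dict carries hypothetical "revised" and "passed_first" flags."""
    total = len(tasks)
    return {
        "campaign": campaign,
        "tasks_total": total,
        "tasks_revised": sum(t.get("revised", False) for t in tasks),
        "verification_pass_rate": (
            sum(t.get("passed_first", False) for t in tasks) / total if total else 0.0
        ),
    }
```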
Report:

```markdown
### Retrospective
Campaign: oauth-integration
- Tasks: 5 total, 1 revised
- Verification: 80% passed first attempt
- Key lesson: Schema-first decomposition worked well
```
Quality over quantity. If nothing is extractable, report:

```
Synthesis: No new meta-patterns. Insufficient data.
```