---
/plugin marketplace add DNYoussef/context-cascade
/plugin install dnyoussef-context-cascade@DNYoussef/context-cascade

This skill inherits all available tools. When active, it can use any tool Claude has access to.

Files: PROCESS.md, README.md, SKILL-meta.yaml, process-diagram.gv

Before writing ANY code, you MUST check:

- .claude/library/catalog.json
- .claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md
- D:\Projects\*

| Match | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern exists | FOLLOW pattern |
| In project | EXTRACT |
| No match | BUILD (add to library after) |
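The decision table above is mechanical enough to sketch as a function. This is an illustrative helper, not part of any real catalog API; the score scale (0..1 similarity) and option names are assumptions:

```typescript
type Action = 'REUSE' | 'ADAPT' | 'FOLLOW_PATTERN' | 'EXTRACT' | 'BUILD';

// Hypothetical encoding of the reuse-decision table above.
function decideAction(opts: {
  libraryMatch?: number;   // similarity against the library catalog, 0..1
  patternExists?: boolean; // a documented pattern covers this case
  inProject?: boolean;     // equivalent code already lives in the project
}): Action {
  const { libraryMatch = 0, patternExists = false, inProject = false } = opts;
  if (libraryMatch > 0.9) return 'REUSE';          // >90%: reuse directly
  if (libraryMatch >= 0.7) return 'ADAPT';         // 70-90%: adapt minimally
  if (patternExists) return 'FOLLOW_PATTERN';
  if (inProject) return 'EXTRACT';
  return 'BUILD'; // and add the result to the library afterwards
}
```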
Implement ReasoningBank adaptive learning with AgentDB's 150x faster vector database for trajectory tracking, verdict judgment, memory distillation, and pattern recognition. Build self-learning agents that improve decision-making through experience.
```typescript
import { AgentDB, ReasoningBank } from 'reasoningbank-agentdb';

// Initialize
const db = new AgentDB({
  name: 'reasoning-db',
  dimensions: 768,
  features: { reasoningBank: true }
});

const reasoningBank = new ReasoningBank({
  database: db,
  trajectoryWindow: 1000,
  verdictThreshold: 0.7
});

// Track trajectory
await reasoningBank.trackTrajectory({
  agent: 'agent-1',
  decision: 'action-A',
  reasoning: 'Because X and Y',
  context: { state: currentState },
  timestamp: Date.now()
});

// Judge verdict
const verdict = await reasoningBank.judgeVerdict({
  trajectory: trajectoryId,
  outcome: { success: true, reward: 10 },
  criteria: ['efficiency', 'correctness']
});

// Learn patterns
const patterns = await reasoningBank.distillPatterns({
  minSupport: 0.1,
  confidence: 0.8
});

// Apply learning
const decision = await reasoningBank.makeDecision({
  context: currentContext,
  useLearned: true
});
```
```typescript
// Store a complete trajectory (states, actions, reasoning, outcome)
const trajectory = {
  agent: 'agent-1',
  steps: [
    { state: s0, action: a0, reasoning: r0 },
    { state: s1, action: a1, reasoning: r1 }
  ],
  outcome: { success: true, reward: 10 }
};
await reasoningBank.storeTrajectory(trajectory);

// Judge the trajectory against weighted criteria
const verdict = await reasoningBank.judge({
  trajectory: trajectory,
  criteria: {
    efficiency: 0.8,
    correctness: 0.9,
    novelty: 0.6
  }
});

// Distill memory: mine patterns across recent trajectories
const distilled = await reasoningBank.distill({
  trajectories: recentTrajectories,
  method: 'pattern-mining',
  compression: 0.1 // Keep top 10%
});

// Enhance a new decision with learned patterns
const enhanced = await reasoningBank.enhance({
  query: newProblem,
  patterns: learnedPatterns,
  strategy: 'case-based'
});
```
This skill operates using AgentDB's npm package and API only. No additional MCP servers required.
All AgentDB/ReasoningBank operations are performed through:
- `npx agentdb@latest` (CLI)
- `import { AgentDB, ReasoningBank } from 'reasoningbank-agentdb'` (npm package)

ReasoningBank Adaptive Learning operates on 3 fundamental principles for building self-improving AI agents:
1. **Trajectory-based learning.** Agents learn from complete decision trajectories (state, action, reasoning, outcome) rather than isolated actions, enabling understanding of reasoning patterns. In practice: store full trajectories with reasoning text and context state, not just actions and results.
2. **Structured verdict judgment.** Evaluate decision quality across multiple criteria (efficiency, correctness, novelty) using structured judgment rather than binary success/failure. In practice: apply the verdict threshold (0.7 by default) so the agent learns only from high-quality trajectories.
3. **Memory distillation.** Extract and consolidate successful reasoning patterns through pattern mining, pruning ineffective approaches to maintain lean memory. In practice: run distillation periodically, keeping only the top fraction of patterns by quality.
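One simple way to collapse multi-criteria judgment into a single filterable score is a weighted average over the criteria map. This is a hypothetical aggregation for illustration, not the library's documented internals:

```typescript
// Hypothetical aggregation: per-criterion scores in [0,1], weighted by the
// same criteria map passed to judge() (e.g. { efficiency: 0.8, ... }).
function aggregateVerdict(
  scores: Record<string, number>,
  weights: Record<string, number>,
): number {
  let total = 0;
  let weightSum = 0;
  for (const [criterion, weight] of Object.entries(weights)) {
    total += (scores[criterion] ?? 0) * weight; // missing criterion scores 0
    weightSum += weight;
  }
  return weightSum ? total / weightSum : 0; // normalized to [0,1]
}
```

The normalized score can then be compared directly against the `verdictThreshold` used at initialization.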
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Learning From All Trajectories | Treating all decisions equally regardless of outcome quality creates noise in learned patterns, degrading decision-making over time | Implement verdict judgment (Phase 3) with threshold filtering (0.7 default) to learn only from high-quality trajectories, pruning ineffective approaches |
| Storing Raw Trajectories Indefinitely | Accumulating all historical trajectories without compression causes memory bloat, slow retrieval, and dilutes signal with obsolete patterns | Run memory distillation (Phase 4) periodically to extract patterns, keep top 10% by quality, and prune low-value historical data |
| Ignoring Reasoning Context | Recording only actions and outcomes without capturing reasoning and context makes patterns non-transferable to new situations | Store full trajectories with reasoning text and context state (Phase 2) to enable case-based reasoning and debugging decision-making |
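The fix for the first anti-pattern (learn only above the verdict threshold) reduces to a one-line filter. The trajectory shape below is an assumed simplification for the sketch:

```typescript
interface ScoredTrajectory {
  id: string;
  verdictScore: number; // aggregate quality in [0, 1]
}

// Keep only trajectories that clear the verdict threshold (0.7 default),
// so low-quality decisions never enter the learned-pattern pool.
function selectForLearning(
  trajectories: ScoredTrajectory[],
  threshold = 0.7,
): ScoredTrajectory[] {
  return trajectories.filter((t) => t.verdictScore >= threshold);
}
```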
ReasoningBank Adaptive Learning with AgentDB provides a framework for building self-improving AI agents that learn from experience through trajectory tracking, verdict judgment, memory distillation, and pattern recognition. By capturing complete decision contexts, evaluating quality across multiple dimensions, and extracting proven patterns, it enables agents to continuously improve decision-making.
This skill excels at building meta-learning systems where agents need to improve over time, reinforcement learning applications requiring trajectory analysis, and decision support systems that learn from historical outcomes. Use this when agents face recurring decision scenarios where learning from past successes and failures can improve future performance.
The 5-phase framework (initialize ReasoningBank, track trajectories, judge verdicts, distill memory, apply learning) provides systematic progression from data collection to active learning. The integration with AgentDB's 150x faster vector search makes it suitable for production environments with real-time decision requirements and large trajectory datasets.
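The distillation phase's "keep top 10%" compression can be sketched as a standalone routine; the field name is an assumption carried over from the earlier sketches, not the package's API:

```typescript
// Memory distillation sketch: rank trajectories by verdict quality and
// retain only the top fraction (10% by default), pruning the rest.
function compressMemory<T extends { verdictScore: number }>(
  trajectories: T[],
  keepFraction = 0.1,
): T[] {
  const sorted = [...trajectories].sort(
    (a, b) => b.verdictScore - a.verdictScore, // best first
  );
  const keep = Math.max(1, Math.ceil(sorted.length * keepFraction));
  return sorted.slice(0, keep);
}
```

Running this periodically keeps retrieval fast and prevents obsolete patterns from diluting the signal, per the memory-bloat anti-pattern above.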