From skillkit-frameworks
Implements critical thinking framework for AI agents: reasoning router (CoT/ToT/GoT), metacognitive monitoring, self-verification, bias detection. For complex tasks needing self-correction and reliability.
npx claudepluginhub rfxlamia/skillkit --plugin skillkit-frameworks

This skill uses the workspace's default tool permissions.
Critical Thinking Framework (FCT) provides architectural components for building AI agents with metacognition and self-correction capabilities. This framework integrates state-of-the-art techniques such as Chain-of-Thought, Tree-of-Thoughts, and self-verification to produce more reliable and transparent reasoning.
Reference files:

- references/bias_detector.md
- references/fallback_handler.md
- references/memory_curator.md
- references/metacognitive_monitor.md
- references/producer_critic_orchestrator.md
- references/reasoning_router.md
- references/reasoning_validator.md
- references/reflection_trigger.md
- references/self_verification.md
- references/uncertainty_quantifier.md
When to use this skill: when the user requests help building or improving AI agents that need self-correction, output verification, or reliable multi-step reasoning.
File: references/reasoning_router.md
Detects problem complexity and routes to the optimal reasoning method (CoT/ToT/GoT/Self-Consistency).
Use cases:
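As an illustration, routing can be sketched as a few heuristics over task traits. The `Problem` fields and `select_method` name below are assumptions for this sketch, not the actual API in reasoning_router.md:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    description: str
    branching: bool = False       # multiple solution paths worth exploring?
    interconnected: bool = False  # do sub-problems share dependencies?
    critical: bool = False        # is a wrong answer costly?

def select_method(problem: Problem) -> str:
    """Pick CoT / ToT / GoT / Self-Consistency from task traits."""
    if problem.critical:
        return "self-consistency"  # sample several chains, take majority
    if problem.interconnected:
        return "got"               # thoughts form a graph, not a tree
    if problem.branching:
        return "tot"               # explore and prune alternative branches
    return "cot"                   # single linear chain is enough
```

In a real router these flags would come from a classifier or an LLM call rather than being set by hand.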
File: references/metacognitive_monitor.md
Self-assessment and error detection in the reasoning process. Implements the Producer-Critic pattern for continuous quality control.
Key features:
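A minimal sketch of the monitoring side: a critic scores each reasoning step and flags low-confidence steps for reflection. The hedge-word heuristic is a deliberately crude stand-in for a real critic model:

```python
HEDGES = ("maybe", "probably", "not sure", "guess", "unclear")

def assess_confidence(step: str) -> float:
    """Crude confidence score in [0, 1]: fewer hedge words -> higher."""
    hits = sum(1 for h in HEDGES if h in step.lower())
    return max(0.0, 1.0 - 0.3 * hits)

def monitor(steps, threshold=0.7):
    """Return (step, confidence, needs_reflection) for each step."""
    return [
        (s, c, c < threshold)
        for s in steps
        for c in [assess_confidence(s)]
    ]
```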
File: references/self_verification.md
Implementation of Chain-of-Verification (CoVe) and other self-verification techniques to validate outputs before delivery.
Methods covered:
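The CoVe loop can be sketched in three calls: plan verification questions, answer them independently of the draft, then revise. `ask_model` is a hypothetical LLM callable introduced for this sketch:

```python
def chain_of_verification(draft: str, ask_model) -> str:
    # 1. Plan verification questions about the draft's claims.
    questions = ask_model(f"List questions that check: {draft}")
    # 2. Answer each question without showing the draft (reduces anchoring).
    answers = [ask_model(q) for q in questions]
    # 3. Revise the draft in light of the independent answers.
    return ask_model(f"Revise '{draft}' given evidence: {answers}")
```

Answering the verification questions in isolation is the key step: it keeps the verifier from simply agreeing with the draft it is checking.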
File: references/bias_detector.md
Detection of cognitive bias in the reasoning process and mitigation strategies.
Bias types covered:
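For example, a confirmation-bias check might flag reasoning that never weighs counter-evidence. Keyword matching below is a toy stand-in for a real classifier, and `mitigate` is a naive illustrative strategy:

```python
COUNTER_MARKERS = ("however", "on the other hand", "alternatively",
                   "counterargument", "but ")

def detect_confirmation_bias(reasoning: str) -> bool:
    """True if no counter-evidence markers appear in the reasoning."""
    text = reasoning.lower()
    return not any(m in text for m in COUNTER_MARKERS)

def mitigate(reasoning: str) -> str:
    """Naive mitigation: force consideration of disconfirming evidence."""
    return reasoning + "\nConsider: what evidence would disprove this?"
```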
File: references/producer_critic_orchestrator.md
Pattern for orchestrating Generate-Critique-Refine cycles in agent workflows.
Architecture:
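The orchestration itself is small: the orchestrator wires a producer, a critic, and a refiner together under a retry budget. The three callables are assumed hooks (e.g. wrapping LLM calls), not names from the reference file:

```python
def generate_critique_refine(produce, critique, refine, task, max_rounds=3):
    """Run a bounded Generate-Critique-Refine loop."""
    draft = produce(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:  # empty feedback means the critic is satisfied
            break
        draft = refine(draft, feedback)
    return draft
```

The `max_rounds` bound matters in practice: without it, a critic that is never satisfied loops forever.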
File: references/memory_curator.md
Management of episodic memory with quality weighting to prevent memory pollution from bad episodes.
Features:
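Quality weighting can be sketched as two rules: reject episodes below a quality floor at write time, and rank retrieval by quality so weak episodes never crowd out strong ones. The class and its schema are illustrative assumptions:

```python
class EpisodicMemory:
    def __init__(self, min_quality=0.5, capacity=100):
        self.min_quality = min_quality
        self.capacity = capacity
        self.episodes = []  # list of (quality, episode) pairs

    def store(self, episode, quality: float) -> bool:
        if quality < self.min_quality:
            return False  # reject low-quality episodes outright
        self.episodes.append((quality, episode))
        # Keep best-first; evict lowest-quality episodes over capacity.
        self.episodes.sort(key=lambda p: p[0], reverse=True)
        del self.episodes[self.capacity:]
        return True

    def recall(self, k=5):
        return [e for _, e in self.episodes[:k]]
```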
File: references/reasoning_validator.md
Logical consistency checker and structural validation for reasoning chains.
Validation types:
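One structural check, as a sketch: every step in a chain may only depend on premises or earlier steps, so no conclusion dangles from something not yet established. The step schema (`id`/`uses` dicts) is an assumption for illustration:

```python
def validate_chain(steps):
    """steps: list of {'id': str, 'uses': [ids]} in order.
    Returns ids of steps whose dependencies are not earlier steps."""
    seen, invalid = set(), []
    for step in steps:
        if any(dep not in seen for dep in step["uses"]):
            invalid.append(step["id"])  # forward or missing reference
        seen.add(step["id"])
    return invalid
```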
File: references/reflection_trigger.md
Rule-based triggers to activate self-correction loops based on specific conditions.
Trigger conditions:
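Rule-based triggers reduce to predicates over agent state: reflection fires when any rule holds. The three rules shown (low confidence, contradiction flag, retry limit) are illustrative examples, not the canonical set:

```python
DEFAULT_RULES = [
    lambda state: state.get("confidence", 1.0) < 0.6,       # low confidence
    lambda state: state.get("contradiction_detected", False),
    lambda state: state.get("retries", 0) >= 3,             # stuck in retries
]

def should_reflect(state: dict, rules=DEFAULT_RULES) -> bool:
    """Fire the self-correction loop when any trigger condition holds."""
    return any(rule(state) for rule in rules)
```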
```
User Request: Build/improve AI agent with critical thinking
├─ Step 1: Analyze Task Complexity
│   ├─ Simple, single-path → Use CoT (Chain-of-Thought)
│   ├─ Complex, multi-path → Use ToT (Tree-of-Thoughts)
│   ├─ Interconnected → Use GoT (Graph-of-Thoughts)
│   └─ Critical, needs verification → Use Self-Consistency
│
├─ Step 2: Implement Metacognitive Layer
│   ├─ Add confidence scoring
│   ├─ Set up reflection triggers
│   └─ Configure human handoff thresholds
│
├─ Step 3: Add Self-Verification
│   ├─ Implement CoVe for factual claims
│   ├─ Add backward verification for math/logic
│   └─ Set up cross-verification if external sources are available
│
├─ Step 4: Integrate Bias Detection
│   ├─ Check for confirmation bias
│   ├─ Validate assumption diversity
│   └─ Apply mitigation strategies
│
└─ Step 5: Set Up Memory & Learning
    ├─ Configure episodic memory
    ├─ Set up quality weighting
    └─ Implement experience replay
```
| Task Characteristic | Recommended Method | Cost | Accuracy |
|---|---|---|---|
| Simple, linear | CoT | Low | Good |
| Complex planning | ToT-BFS | High | Very Good |
| Deep reasoning | ToT-DFS | High | Very Good |
| Interconnected | GoT | Very High | Excellent |
| Critical decisions | Self-Consistency | Very High | Excellent |
| Factual claims | CoVe | Medium | Good |
```python
# Pseudo-code for an agent built with ACT-F components
THRESHOLD = 0.7  # minimum per-step confidence before reflection kicks in

class CriticalThinkingAgent:
    def __init__(self):
        self.reasoning_router = ReasoningRouter()
        self.metacognitive_monitor = MetacognitiveMonitor()
        self.self_verifier = SelfVerification()
        self.bias_detector = BiasDetector()

    async def solve(self, problem):
        # Step 1: Route to the appropriate reasoning method
        method = self.reasoning_router.select(problem)

        # Step 2: Generate reasoning steps with metacognitive monitoring
        thoughts = []
        for step in method.generate(problem):
            confidence = self.metacognitive_monitor.assess(step)
            if confidence < THRESHOLD:
                step = self.reflect_and_improve(step)
            thoughts.append(step)

        # Step 3: Self-verify the full reasoning chain
        verified = self.self_verifier.verify(thoughts)

        # Step 4: Check for bias and mitigate if detected
        if self.bias_detector.detect(verified):
            verified = self.bias_detector.mitigate(verified)

        return verified
```
Complete documentation for each ACT-F component:

- reasoning_router.md - Reasoning method selection (P0)
- metacognitive_monitor.md - Self-assessment and monitoring (P0)
- self_verification.md - Output verification techniques (P0)
- bias_detector.md - Bias detection and mitigation (P0)
- producer_critic_orchestrator.md - Generate-critique-refine pattern (P1)
- memory_curator.md - Memory management (P1)
- reasoning_validator.md - Logical validation (P1)
- reflection_trigger.md - Trigger conditions (P1)
- uncertainty_quantifier.md - Confidence calibration (P2)
- fallback_handler.md - Graceful degradation (P2)

Utilities and templates for implementation (optional).
Sources: