Runs two parallel agents to gather positive and negative code evidence for a statement, synthesizes the findings objectively, and verifies the conclusion against actual file contents and line numbers. Useful for architecture reviews, bug claims, and performance analysis.
From thinking-tools. Install with:

```
npx claudepluginhub umputun/cc-thingz --plugin thinking-tools
```

This skill is limited to using a restricted set of tools.
Objective analysis of a statement by running two agents with opposing goals in parallel, then synthesizing findings.
CRITICAL: Both Task tool calls MUST be in a single message for true parallel execution. Do NOT use run_in_background. Do NOT launch sequentially. Foreground agents in the same message run in parallel and block until both complete.
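The parallelism requirement can be pictured with an ordinary thread pool: both agents are submitted in the same step, and the caller blocks until both results are available. A minimal Python sketch, where `run_agent` is a hypothetical stand-in for a Task tool call (not part of the skill itself):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(goal: str, statement: str) -> str:
    # Hypothetical stand-in for a Task tool call; a real agent
    # would search the codebase and return cited evidence.
    return f"[{goal}] evidence for: {statement}"

statement = "this implementation is thread-safe"

# Both agents are submitted together (analogous to two Task calls
# in a single message) and we block until both complete.
with ThreadPoolExecutor(max_workers=2) as pool:
    thesis_future = pool.submit(run_agent, "thesis", statement)
    antithesis_future = pool.submit(run_agent, "antithesis", statement)
    thesis = thesis_future.result()
    antithesis = antithesis_future.result()
```

Submitting both before calling `.result()` is what makes the runs concurrent; calling `.result()` immediately after each `submit` would serialize them, which is exactly the sequential launch the skill forbids.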
Agent 1 (Thesis) — find all POSITIVE evidence:
Agent 2 (Antithesis) — find all NEGATIVE evidence:
Both agents must provide specific file paths and line numbers when analyzing code.
After both agents complete, synthesize findings into an objective conclusion:
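One way to picture the synthesis step: weigh each side's cited evidence and commit to a verdict only when one side clearly outweighs the other. A sketch under illustrative assumptions, where the `Evidence` shape and the counting rule are hypothetical, not the skill's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    file: str   # file path cited by the agent
    line: int   # line number cited by the agent

def synthesize(thesis: list[Evidence], antithesis: list[Evidence]) -> str:
    # Illustrative rule: a conclusion is only as strong as the
    # balance of specifically-cited evidence on each side.
    if not thesis and not antithesis:
        return "inconclusive: neither agent produced cited evidence"
    if len(thesis) > len(antithesis):
        return f"supported ({len(thesis)} for, {len(antithesis)} against)"
    if len(antithesis) > len(thesis):
        return f"refuted ({len(thesis)} for, {len(antithesis)} against)"
    return "contested: evidence is balanced; verify citations directly"
```

The point of the structure is that every piece of evidence carries a file and line, so the conclusion stays falsifiable rather than rhetorical.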
CRITICAL — after presenting the synthesis, verify it against actual implementation:
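The verification step amounts to re-reading each cited location and confirming the quoted code is really there. A minimal sketch, assuming 1-indexed line numbers (the example path and snippet below are hypothetical):

```python
from pathlib import Path

def verify_citation(path: str, line_no: int, expected_snippet: str) -> bool:
    # Re-read the cited file and confirm the claimed snippet
    # actually appears on the claimed line (1-indexed).
    try:
        lines = Path(path).read_text().splitlines()
    except OSError:
        return False
    return 0 < line_no <= len(lines) and expected_snippet in lines[line_no - 1]
```

For example, `verify_citation("server.go", 42, "sync.Mutex")` would confirm or refute a hypothetical thread-safety citation; any citation that fails this check should be dropped from the synthesis.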
Architecture decisions:
/thinking-tools:dialectic this microservice split improves maintainability
Bug analysis:
/thinking-tools:dialectic the connection pool fixes the timeout issue
Performance claims:
/thinking-tools:dialectic caching reduced database load by 80%
Refactoring safety:
/thinking-tools:dialectic extracting this interface simplifies testing
Code review:
/thinking-tools:dialectic this implementation is thread-safe
Review changes:
/thinking-tools:dialectic review the changes in server.go