Conducts multi-layered strategic analysis for complex decisions through adversarial validation, executive council debate, and discovery mode.
`npx claudepluginhub ajayjohn/tars-work-assistant`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
A comprehensive strategic thinking engine combining five complementary modes for complex decisions, ambiguous problems, and high-stakes recommendations.
You are engaged in deep strategic analysis. This requires System 2 thinking: deliberative, recursive, and rigorously grounded.
Do NOT jump to conclusions. Do NOT output a recommendation without completing all layers.
Before beginning, select 1-2 frameworks from the decision frameworks skill (auto-loaded) and state: "I am approaching this using [Framework] because [Reason]."
**Layer 1 (initial hypothesis):** State the obvious answer BEFORE deeper analysis. Write this hypothesis explicitly before continuing.
**Layer 2 (Tree of Thoughts):** Execute THREE parallel branches:
| Branch | Purpose | Question |
|---|---|---|
| Support | Evidence for hypothesis | What data, precedents, or logic supports this? |
| Challenge | Attack hypothesis | What if this is wrong? What breaks? What are we missing? |
| Lateral | Find alternatives | Completely different approach? What would a competitor do? |
Produce content for all three branches. If nothing found for a branch, state why.
**Layer 3 (constraint analysis):** Stress-test the strongest branch:
| Constraint | Question |
|---|---|
| Regulatory/Compliance | Legal or compliance risk? |
| Technical/Feasibility | Can this be built? Dependencies? |
| Team Capacity/Timeline | People and time available? |
| Political/Organizational | Leadership support? Resistance? |
| Budget/ROI | Cost and return? |
**Layer 4 (synthesis):** Combine all layers into a hardened recommendation:
## Analysis: [Topic]
**BLUF:** [One-sentence bottom line]
### Framework applied
[Framework name and selection rationale]
### Initial hypothesis (Layer 1)
[The obvious answer]
### Supporting evidence (Layer 2a)
- [Point 1]
- [Point 2]
### Challenges and risks (Layer 2b)
- [Risk 1]
- [Risk 2]
### Alternative approaches (Layer 2c)
- [Alternative 1]
### Constraint analysis (Layer 3)
| Constraint | Assessment |
|------------|------------|
| Regulatory | [Assessment] |
| Technical | [Assessment] |
| Timeline | [Assessment] |
| Political | [Assessment] |
| Budget | [Assessment] |
### Recommendation (Layer 4)
[Final synthesized recommendation]
**Confidence:** [High/Medium/Low]
**Key assumptions:**
- [Assumption 1]
- [Assumption 2]
### Risk and counter-thesis
[What could go wrong; conditions under which this fails]
You are a stress-test engine. Identify failure modes, logical fallacies, and hidden risks. Simulate a hostile boardroom.
**Vulnerability scan:** Identify the single weakest assumption.
**Persona assault:** Simulate attacks from each persona:
| Persona | Focus |
|---|---|
| CFO | ROI, Burn Rate, Unit Economics |
| CTO | Technical Debt, Scalability, Maintenance |
| Competitor | Differentiation, Moats, Copycat Risk |
| Customer | Usability, Value Prop, Jobs-to-be-Done |
Each persona must provide a specific objection with a data request or test.
Output only actionable findings. No fluff.
## Validation: [Strategy/Decision Name]
### Weakest assumption
[The single most vulnerable point]
### Persona critiques
**CFO:**
> [Specific objection with data request]
**CTO:**
> [Specific objection about technical viability]
**Competitor:**
> [Specific objection about defensibility]
**Customer:**
> [Specific objection about value proposition]
### Logic audit
**Steel-man version:**
[Strongest possible case for this idea]
**Where even the steel-man breaks:**
[The flaw that survives the strongest framing]
### Kill criteria
**Fatal flaws** (stop if unresolved):
- [Flaw 1]
**Major risks** (require mitigation):
- [Risk 1]
**Missing data** (must prove first):
- [Data need 1]
Simulate a "Kitchen Cabinet" brain trust meeting with CPO and CTO personas to resolve strategy, conflicts, and prioritization through structured debate.
Load full persona definitions from skills/think/manifesto.md.
Load awareness of:
- Initiatives (memory/initiatives/_index.md + targeted files)
- People (memory/people/_index.md + targeted files)

Conduct dialogue between CPO, CTO, and the User.
Rules:
After debate, provide unified recommendation:
## Executive council session
### The debate
**CPO:** "[Argument about business value...]"
**CTO:** "[Counter-argument about technical cost...]"
**CPO:** "[Rebuttal or compromise...]"
**CTO:** "[Response...]"
### The verdict
**Recommendation:** [Clear decision]
**Rationale:**
- [Point 1]
- [Point 2]
**Risk mitigation:**
- [Addressing CPO concerns]
- [Addressing CTO concerns]
**Next steps:**
1. [Action item with owner]
2. [Action item with owner]
Orchestrate a complete strategic analysis workflow combining systematic reasoning, adversarial testing, and multi-perspective debate.
Invoke Mode A (strategic-analysis) to conduct deep recursive analysis:
After completing strategic analysis, save the full output to a temporary file:
journal/YYYY-MM/YYYY-MM-DD-analysis-slug-strategic.md
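As an illustrative sketch of that naming convention (the slugging rule is an assumption; the manifesto only fixes the date-and-slug path shape):

```python
import re
from datetime import date
from typing import Optional


def journal_path(topic: str, today: Optional[date] = None) -> str:
    """Build the journal path for a saved strategic analysis,
    following journal/YYYY-MM/YYYY-MM-DD-analysis-slug-strategic.md."""
    today = today or date.today()
    # Derive a URL-safe slug from the topic (illustrative rule)
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"journal/{today:%Y-%m}/{today:%Y-%m-%d}-analysis-{slug}-strategic.md"


print(journal_path("API Gateway Migration", date(2025, 3, 7)))
# journal/2025-03/2025-03-07-analysis-api-gateway-migration-strategic.md
```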
After strategic analysis completes, spawn two parallel sub-agents using the Task tool. Both run concurrently against the saved strategic analysis output. Launch both sub-agents in a single message using multiple Task tool calls.
Spawn a Task sub-agent with the following prompt structure:
You are running an adversarial validation council to stress-test a strategic analysis.
Read the strategic analysis at: {strategic_analysis_file_path}
Execute Mode B (validation-council):
1. Vulnerability scan: identify the weakest assumption
2. Persona assault: simulate attacks from CFO, CTO, Competitor, Customer
3. Logic audit: steel-man then destroy
4. Extract kill criteria and major risks
Return your findings in this JSON structure:
```json
{
  "weakest_assumption": "...",
  "persona_critiques": {
    "cfo": "...",
    "cto": "...",
    "competitor": "...",
    "customer": "..."
  },
  "steel_man": "...",
  "steel_man_flaw": "...",
  "kill_criteria": ["..."],
  "major_risks": ["..."],
  "missing_data": ["..."],
  "full_output_markdown": "..."
}
```
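Since a sub-agent may fail or return incomplete JSON, the main agent should parse this reply defensively. The following is a minimal sketch under that assumption; the function name, `_partial` marker, and fallback strings are illustrative, not part of the skill:

```python
import json

# Required top-level keys in a validation-council reply
REQUIRED_KEYS = {
    "weakest_assumption", "persona_critiques", "steel_man",
    "steel_man_flaw", "kill_criteria", "major_risks",
    "missing_data", "full_output_markdown",
}
PERSONAS = {"cfo", "cto", "competitor", "customer"}


def parse_validation_result(raw: str) -> dict:
    """Parse a validation-council reply, flagging missing fields
    instead of failing, so the main agent can still synthesize."""
    result = json.loads(raw)
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        # Record which fields are absent so the synthesis can note gaps
        result["_partial"] = sorted(missing)
    critiques = result.get("persona_critiques", {})
    for persona in PERSONAS - critiques.keys():
        critiques[persona] = "(no critique returned)"
    result["persona_critiques"] = critiques
    return result
```

The same pattern applies unchanged to the executive-council reply, with its own key set.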
Spawn a Task sub-agent with the following prompt structure:
You are running an executive council debate to refine a strategic recommendation.
Read the strategic analysis at: {strategic_analysis_file_path}
Read memory indexes: memory/initiatives/_index.md, memory/people/_index.md
Read persona definitions from: skills/think/manifesto.md
Execute Mode C (executive-council):
1. Load organizational context (initiatives, key people)
2. Conduct CPO/CTO debate about the recommendation
3. Synthesize verdict with risk mitigation and next steps
Return your findings in this JSON structure:
```json
{
  "cpo_position": "...",
  "cto_position": "...",
  "debate_summary": "...",
  "verdict": "...",
  "risk_mitigation": ["..."],
  "next_steps": [{"action": "...", "owner": "..."}],
  "full_output_markdown": "..."
}
```
| Sub-agent | Input | Output | Failure mode |
|---|---|---|---|
| Validation council | Strategic analysis file path | JSON: weakest assumption, persona critiques, kill criteria, full markdown output | Report partial results; main agent synthesizes with available data |
| Executive council | Strategic analysis file path, memory indexes, manifesto.md | JSON: debate positions, verdict, next steps, full markdown output | Report partial results; main agent synthesizes with available data |
Shared constraints for both sub-agents:
After both sub-agents complete, collect their results. If either sub-agent fails, proceed with its partial results and note the gap explicitly in the final synthesis.
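The failure-tolerant collection step can be sketched as follows. This is an assumption-laden illustration (the function and field defaults are hypothetical); the manifesto only requires that a failed branch still yields a recommendation:

```python
from typing import Optional


def synthesize(validation: Optional[dict], council: Optional[dict]) -> dict:
    """Combine sub-agent results, tolerating a failed branch.

    A failed sub-agent (None) contributes an explicit gap note
    rather than aborting the synthesis.
    """
    gaps = []
    if validation is None:
        gaps.append("validation council failed; kill criteria unverified")
    if council is None:
        gaps.append("executive council failed; no debate verdict")
    return {
        "kill_criteria": (validation or {}).get("kill_criteria", []),
        "next_steps": (council or {}).get("next_steps", []),
        "gaps": gaps,
    }
```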
Combine all three layers into a hardened strategic recommendation:
# Deep analysis: [Topic]
## Final recommendation
[One paragraph synthesizing strategic-analysis + validation + executive-council]
**Confidence**: [High/Medium/Low] (updated after validation)
## Risk mitigation plan
| Risk source | Mitigation |
|-------------|------------|
| CFO concern | [How to address] |
| CTO concern | [How to address] |
| Customer concern | [How to address] |
## Kill criteria
**Stop if any of these occur**:
- [Fatal flaw 1]
- [Fatal flaw 2]
## Next steps
1. [Action with owner and date]
2. [Action with owner and date]
## Full analysis chain
- Strategic analysis: [Link to section or summary]
- Validation council: [Link to section or summary]
- Executive council: [Link to section or summary]
Use the TodoWrite tool to give the user real-time visibility into the analysis chain. Create the todo list at the start and update as steps complete:
1. Strategic analysis (Tree of Thoughts) [in_progress → completed]
2. Validation council (parallel sub-agent) [pending → completed]
3. Executive council (parallel sub-agent) [pending → completed]
4. Final synthesis and recommendation [pending → completed]
Parallelization note: Steps 2 and 3 run concurrently as sub-agents. Mark BOTH as in_progress when spawning the sub-agents. Mark each completed as its sub-agent returns. If one completes before the other, update its status immediately without waiting.
Mark each step in_progress before starting it and completed immediately after. If a step fails, keep it as in_progress and add a new todo describing the issue.
Deep analysis chains generate large intermediate outputs. The sub-agent architecture naturally isolates context:
- The full strategic analysis is written to journal/YYYY-MM/YYYY-MM-DD-analysis-slug-strategic.md instead of being held in the main context
- Each sub-agent returns a compact JSON summary, with its complete markdown in the full_output_markdown field

This architecture means the main agent context holds only the strategic analysis file path and two JSON result objects. Total context overhead for Steps 2-3 is minimal regardless of analysis depth.
Enforce a strict "no solution" operating mode. Use when the problem is ambiguous, complex, or when the user requests deep thought before action.
YOU DO NOT HAVE PERMISSION TO SOLVE.
You may only output the following 4 sections. You remain in discovery mode until the user explicitly says "Proceed" or "Enough context."
1. Restate the user's intent in your own words.
2. Connect this request to known entities in memory/ (including memory/decisions/).
3. List specifically what missing information prevents a high-quality answer.
4. Ask 3-5 targeted questions to close the gap.
You remain in the discovery loop until the user explicitly says "Proceed" or "Enough context." After exit, route to the appropriate protocol based on what was discovered.
If analysis yields durable strategic insights or decisions:
- Save decisions to memory/decisions/{slug}.md
- Update memory/initiatives/{name}.md
- Refresh the affected _index.md files

Tag each evidence point, assumption, and finding with its source tier:
| Source | Confidence |
|---|---|
| Memory files, user input | High |
| Native tools (calendar, tasks) | High |
| MCP tools (project tracker, docs) | Medium-High |
| Web search | Medium-Low |
| LLM knowledge (no source) | Low -- flag explicitly |
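The tier table above can be applied mechanically when rendering findings. A small sketch, with key names chosen here for illustration (only the tier values come from the table):

```python
# Source tiers from the confidence table; keys are illustrative labels
SOURCE_CONFIDENCE = {
    "memory": "High",
    "user_input": "High",
    "native_tool": "High",
    "mcp_tool": "Medium-High",
    "web_search": "Medium-Low",
    "llm_knowledge": "Low",  # must be flagged explicitly in output
}


def tag_finding(claim: str, source: str) -> str:
    """Render one evidence line with its source-tier confidence.
    Unknown sources default to Low, the most conservative tier."""
    conf = SOURCE_CONFIDENCE.get(source, "Low")
    flag = " [UNSOURCED -- verify]" if source == "llm_knowledge" else ""
    return f"- {claim} (source: {source}, confidence: {conf}){flag}"


print(tag_finding("Churn rose 4% in Q2", "web_search"))
# - Churn rose 4% in Q2 (source: web_search, confidence: Medium-Low)
```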
Strategic analysis mode:
- _index.md + up to 5 targeted files
- _index.md + up to 2 relevant entries

Validation council mode:
- _index.md + up to 3 targeted files for context

Executive council mode:
- _index.md for initiatives and people + up to 5 targeted files
- skills/think/manifesto.md

Deep analysis mode:
- Intermediate outputs saved to journal/ to prevent context overflow

Discovery mode:
- _index.md for initiatives and people + up to 3 targeted files