Simulates a realistic VC Investment Committee discussion with three partner archetypes debating a startup's merits, concerns, and deal terms, scored across 28 dimensions. Use when user asks to 'simulate an IC', 'how would VCs discuss this', 'IC meeting simulation', 'investment committee practice', 'prepare for IC', 'VC partner discussion', 'what will investors debate', 'how would a fund evaluate this', 'IC prep', or provides startup materials for investment committee simulation. Do NOT use for pitch deck feedback (use deck-review), market sizing, or financial model analysis.
```
npx claudepluginhub lool-ventures/founder-skills --plugin founder-skills
```

This skill uses the workspace's default tool permissions.
Help startup founders prepare for the conversation that happens behind closed doors — the one where VC partners debate whether to invest. Produce a realistic IC simulation with three distinct partner perspectives, scored across 28 dimensions, with specific coaching on what to prepare. The tone is founder-first: a coaching tool for preparation, not a judgment.
Accept any combination: pitch deck, financial model, data room contents, text descriptions, prior market-sizing or deck-review artifacts, or just a verbal description of the business.
All scripts are at ${CLAUDE_PLUGIN_ROOT}/skills/ic-sim/scripts/:
- fund_profile.py — Validates fund profile structure (archetypes, check size, thesis, portfolio)
- detect_conflicts.py — Validates conflict assessments and computes summary stats
- score_dimensions.py — Scores 28 dimensions across 7 categories with conviction-based scoring
- compose_report.py — Assembles report with cross-artifact validation; --strict exits 1 on high/medium warnings
- visualize.py — Generates self-contained HTML with SVG charts (not JSON)

Run with: python3 ${CLAUDE_PLUGIN_ROOT}/skills/ic-sim/scripts/<script>.py --pretty [args]
Read each when first needed — do NOT load all upfront. At ${CLAUDE_PLUGIN_ROOT}/skills/ic-sim/references/:
- partner-archetypes.md — Read before Step 3. Three canonical archetypes with focus areas, debate styles, red flags
- evaluation-criteria.md — Read before Step 5. 28 dimensions across 7 categories with stage-calibrated thresholds
- ic-dynamics.md — Read before Step 5d. How real VC ICs work: formats, decisions, what kills deals
- artifact-schemas.md — Consult as needed when depositing agent-written artifacts

Every simulation deposits structured JSON artifacts into a working directory. The final step assembles all artifacts into a report and validates consistency. This is not optional.
| Step | Artifact | Producer |
|---|---|---|
| 1 | startup_profile.json | Sub-agent (Task) or agent (heredoc) |
| 2 | prior_artifacts.json | Sub-agent (Task) or agent (heredoc) |
| 3 | fund_profile.json | Agent (heredoc) then fund_profile.py validates |
| 4 | conflict_check.json | Agent (heredoc) then detect_conflicts.py validates |
| 5a-c | partner_assessment_{role}.json | Sub-agents (Task, parallel) |
| 5d | discussion.json | Main agent (composes from 5a-c + debate) |
| 6 | score_dimensions.json | score_dimensions.py |
| 7 | Report | compose_report.py reads all |
Rules:
- Consult references/artifact-schemas.md for the JSON schema of each artifact.
- A skipped step still deposits a placeholder artifact: {"skipped": true, "reason": "..."}.
- Do not use isolation: "worktree" for sub-agents — files written in a worktree won't appear in the main $SIM_DIR.
- Keep the founder informed with brief, plain-language updates at each step. Never mention file names, scripts, or JSON. After each analytical step (4–6), share a one-sentence finding before moving on.
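A minimal sketch of the skipped-step convention above (the fallback to a temp dir is only so the snippet runs standalone):

```shell
# A skipped step still deposits a placeholder artifact in $SIM_DIR.
SIM_DIR="${SIM_DIR:-$(mktemp -d)}"   # fall back to a temp dir when run outside a simulation
cat > "$SIM_DIR/prior_artifacts.json" <<'EOF'
{"skipped": true, "reason": "No prior market-sizing or deck-review artifacts were provided."}
EOF
```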
```
SCRIPTS="$CLAUDE_PLUGIN_ROOT/skills/ic-sim/scripts"
REFS="$CLAUDE_PLUGIN_ROOT/skills/ic-sim/references"
if ls "$(pwd)"/mnt/*/ >/dev/null 2>&1; then
  ARTIFACTS_ROOT="$(ls -d "$(pwd)"/mnt/*/ | head -1)artifacts"
elif ls "$(pwd)"/sessions/*/mnt/*/ >/dev/null 2>&1; then
  ARTIFACTS_ROOT="$(ls -d "$(pwd)"/sessions/*/mnt/*/ | head -1)artifacts"
else
  ARTIFACTS_ROOT="$(pwd)/artifacts"
fi
```
If CLAUDE_PLUGIN_ROOT is empty, fall back: Glob for **/founder-skills/skills/ic-sim/scripts/score_dimensions.py, strip to get SCRIPTS, derive REFS.
If ARTIFACTS_ROOT resolves to $(pwd)/artifacts but no artifacts/ directory exists at $(pwd): The workspace may not be mounted yet. Use Glob with pattern **/artifacts/founder_context.json to locate existing artifacts, and derive ARTIFACTS_ROOT from the result. If nothing is found, mkdir -p "$ARTIFACTS_ROOT" and proceed.
```
SIM_DIR="$ARTIFACTS_ROOT/ic-sim-{company-slug}"
mkdir -p "$SIM_DIR"
```
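One way to derive {company-slug} — the exact rule is not specified here, so this lowercase-and-hyphenate recipe is an assumption:

```shell
# Hypothetical slug derivation: lowercase, non-alphanumerics collapsed to single hyphens.
COMPANY="Acme Robotics"
SLUG=$(printf '%s' "$COMPANY" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
SLUG="${SLUG%-}"; SLUG="${SLUG#-}"   # trim any leading/trailing hyphen
echo "$SLUG"                         # acme-robotics
```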
Ask the user (or infer from context):
When files are provided, spawn a general-purpose Task sub-agent to read materials, extract startup profile, and import prior market-sizing/deck-review artifacts. The sub-agent deposits both startup_profile.json and prior_artifacts.json to $SIM_DIR.
If missing fields are flagged, ask the user and patch the artifact. When only text is provided, extract directly without a sub-agent.
Graceful degradation: If Task tool is unavailable, extract directly.
Step 3: Fund Profile -> fund_profile.json

REQUIRED — read $REFS/partner-archetypes.md now.
Generic mode: Build a standard early-stage fund profile with the three canonical archetypes.
Fund-specific mode: Use WebSearch to research fund thesis, portfolio, partner backgrounds, check size range, and stage preference. Map real partners to archetype roles.
Validation constraints: check_size_range must be a dict (not a string), stage_focus must be a non-empty array, each source must have url or title.
```
cat <<'FUND_EOF' | python3 "$SCRIPTS/fund_profile.py" --pretty -o "$SIM_DIR/fund_profile.json"
{...fund profile JSON...}
FUND_EOF
```
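A filled-in generic-mode profile might look like the sketch below. Only the check_size_range, stage_focus, and sources constraints come from this skill; every other field name is an assumption — consult references/artifact-schemas.md for the real schema. Structure is checked here with Python's stdlib JSON tool, not the validator itself:

```shell
# Illustrative fund profile — check_size_range as a dict, stage_focus as a
# non-empty array, each source with a url or title. Other keys are assumptions.
FUND_JSON=$(cat <<'EOF'
{
  "fund_name": "Generic Early-Stage Fund",
  "check_size_range": {"min_usd": 500000, "max_usd": 2000000},
  "stage_focus": ["pre-seed", "seed"],
  "sources": [{"title": "generic mode - no web research"}]
}
EOF
)
printf '%s' "$FUND_JSON" | python3 -m json.tool > /dev/null && echo "well-formed JSON"
```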
Accepted warnings: Add accepted_warnings array with code, match (case-insensitive), and reason. Compose downgrades matching warnings to "acknowledged".
Step 4: Conflict Check -> conflict_check.json

Review the fund's portfolio against the startup. Assess each company for: direct conflict, adjacent conflict, or customer overlap. Use consistent names between portfolio and conflicts. Duplicates are auto-deduplicated by (company, type) pair.
```
cat <<'CONFLICT_EOF' | python3 "$SCRIPTS/detect_conflicts.py" --pretty -o "$SIM_DIR/conflict_check.json"
{"portfolio_size": 15, "conflicts": [...]}
CONFLICT_EOF
```
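The conflicts array might be populated like this — company names and field names are illustrative assumptions; check references/artifact-schemas.md for the real shape before depositing. Structure is verified with Python's stdlib JSON tool:

```shell
# Illustrative conflict assessment covering two of the three conflict types.
CONFLICT_JSON=$(cat <<'EOF'
{
  "portfolio_size": 15,
  "conflicts": [
    {"company": "Acme Robotics", "type": "direct", "notes": "same core market"},
    {"company": "WidgetCo", "type": "customer_overlap", "notes": "shared enterprise buyers"}
  ]
}
EOF
)
printf '%s' "$CONFLICT_JSON" | python3 -m json.tool > /dev/null && echo "well-formed JSON"
```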
Step 5a-5c: Parallel Sub-Agent Assessments
REQUIRED — read $REFS/evaluation-criteria.md now.
Spawn 3 general-purpose Task sub-agents in a single message (parallel, no isolation: "worktree"). Each receives: archetype persona, startup_profile, fund_profile, conflict_check, prior_artifacts, and relevant evaluation criteria. Each independently produces partner_assessment_{role}.json. Instruct each sub-agent to return ONLY: (1) the file path written, (2) the verdict, and (3) a one-sentence rationale — do not echo the full assessment back.
Graceful degradation: If Task tool unavailable, generate sequentially with strict persona separation. Set assessment_mode: "sequential" and "assessment_mode_intentional": true in discussion.json.
Step 5d: Orchestrate Discussion -> discussion.json
REQUIRED — read $REFS/ic-dynamics.md now.
Read all 3 assessments. Generate debate: each partner presents, partners respond to each other, build toward consensus. In interactive mode, pause between positions.
Verdict reconciliation: Ensure each partner's verdict reflects their final position after debate, not their opening position. The compose report flags UNANIMOUS_VERDICT_MISMATCH when all partners contradict the consensus.
Discussion-to-Score reconciliation: Before scoring, re-read discussion conclusions. If a dimension was debated as a dealbreaker, ensure the score reflects that severity. Compose flags CONSENSUS_SCORE_MISMATCH when discussion verdict and score diverge.
Step 6: Score Dimensions -> score_dimensions.json

```
cat <<'SCORE_EOF' | python3 "$SCRIPTS/score_dimensions.py" --pretty -o "$SIM_DIR/score_dimensions.json"
{
  "items": [
    {"id": "team_founder_market_fit", "category": "Team", "status": "strong_conviction", "evidence": "...", "notes": "..."},
    ...all 28 items...
  ]
}
SCORE_EOF
```
Step 7: Compose Report

```
python3 "$SCRIPTS/compose_report.py" --dir "$SIM_DIR" --pretty -o "$SIM_DIR/report.json"
```
Fix high-severity warnings and re-run. Use --strict to enforce a clean report.
Primary deliverable: Read report_markdown from the output JSON, write it to $SIM_DIR/report.md, and display it to the user in full. Present the file path so the user can access it directly. Then add coaching commentary.
```
python3 "$SCRIPTS/visualize.py" --dir "$SIM_DIR" -o "$SIM_DIR/report.html"
```
Present the HTML file path to the user so they can open the visual report.
Copy final deliverables to workspace root: {Company}_IC_Simulation.md, .html (if generated), .json (optional).
Scoring:
- Statuses: strong_conviction / moderate_conviction / concern / dealbreaker / not_applicable
- Score formula: (strong*1.0 + moderate*0.5) / applicable * 100
- Verdict thresholds: invest (>=75%), more_diligence (>=50%), pass (<50%), hard_pass (any dealbreaker)
- Any dealbreaker forces hard_pass regardless of score.

This skill imports artifacts from prior market-sizing and deck-review analyses. Imported artifacts are recorded with dates. Imports older than 7 days are flagged as STALE_IMPORT.
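A worked example of the scoring formula, assuming 2 of the 28 dimensions are not_applicable (so 26 count toward the denominator):

```shell
# 12 strong_conviction, 8 moderate_conviction, 26 applicable dimensions.
STRONG=12; MODERATE=8; APPLICABLE=26
SCORE=$(python3 -c "print(round(($STRONG * 1.0 + $MODERATE * 0.5) / $APPLICABLE * 100))")
echo "$SCORE"   # 62 -> more_diligence (>=50% but below the 75% invest threshold)
```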