From mainbranch
Executes research, decision-making, and codification workflow to explore questions before committing, document decisions, and update reference files with testimonials or proof. Supports full flow, research-only, decide-only, or codify modes.
npx claudepluginhub noontide-co/mainbranch

This skill uses the workspace's default tool permissions.
Research, decide, and codify knowledge into reference files.
This skill is for: "I don't know what happens next. I just need to start."
Something came your way — a video, a voice memo, a vague feeling, a problem to solve. You don't need a plan. Just start. The skill finds the overlap between your interest and your offer.
Re-invoke often. Saying /mb-think again is normal; it reloads context.
Before diving in, know which mode you're in:
| Mode | You're doing | Examples |
|---|---|---|
| Enriching the core | Pulling insights → reference files | Mining videos, making decisions, updating offer.md, building content-strategy.md |
| Creating for the world | Reference files → output | Ads, scripts, courses, code, posts |
/mb-think is for enriching the core. When you're ready to create, use /mb-ads, /mb-organic, /mb-vsl, or just ask.
Both are work. Enriching the core levels up everything downstream.
See references/pull-engine-updates.md for the canonical engine resolution + pull bash block. Run it at the start of every /mb-think invocation.
Tool status persists in .vip/config.yaml under tools:. Read config first, only probe unknowns, always write results back.
Quick gist: On first /mb-think invocation each session, read tools from config, re-probe unknowns or stale-false entries, write results back immediately, report once at experience-appropriate verbosity. Surface a tool option to the user only when their intent needs it and it's missing — once per session per tool.
See references/tool-detection.md for the full status-value table, staleness rules, per-tool detection methods (Apify, Gemini, Grok, whisper, Nano Banana, Pipeboard, document tools), required config-update format, and the intent-based tool surfacing matrix.
For self-healing semantics (stale false handling, status-change messaging, true-tool degradation), see tool-status-self-healing.md.
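As a hypothetical illustration only (tool names and status values here are assumptions; references/tool-detection.md defines the canonical status-value table), the tools: block might look like:

```yaml
# .vip/config.yaml (sketch; exact keys come from tool-detection.md)
tools:
  apify: true        # probed, available this session
  grok: false        # probed, missing; re-probe when stale
  whisper: unknown   # never probed; check on first relevant intent
```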
Before loading reference files, resolve the active offer:
1. Read .vip/local.yaml for current_offer
2. If set, load reference/offers/[current_offer]/offer.md as the active offer
3. If not set but reference/offers/ exists: ask which offer
4. If there is no offers/ folder: use reference/core/offer.md (single-offer, backward compatible)

Always-core files (never per-offer): soul.md, voice.md, content-strategy.md
Offer-aware files (check offers/ first, fall back to core/): offer.md, audience.md
Accumulate files (load both): testimonials.md (offer-specific + brand-level)
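The resolution order above can be sketched in shell. This is a minimal sketch under assumptions: the grep-based YAML read and the file paths mirror the layout described, not a confirmed implementation.

```shell
#!/bin/sh
# Sketch of the offer-resolution order (assumes the repo layout described above).
current_offer=$(grep -E '^current_offer:' .vip/local.yaml 2>/dev/null | awk '{print $2}')

if [ -n "$current_offer" ] && [ -f "reference/offers/$current_offer/offer.md" ]; then
  offer_file="reference/offers/$current_offer/offer.md"   # multi-offer, offer set
elif [ -d reference/offers ]; then
  echo "Multiple offers found: which offer is this session about?"
  offer_file=""                                           # ask the user first
else
  offer_file="reference/core/offer.md"                    # single-offer fallback
fi
echo "active offer file: $offer_file"
```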
All files save to YOUR business repo, not the Main Branch engine.
```
your-business-repo/        <- Files saved here
├── research/              <- Research output
├── decisions/             <- Decision output
└── reference/             <- Codify updates this
    ├── core/              <- Brand-level
    ├── offers/            <- Per-offer (if multi-offer)
    └── domain/

mainbranch/ (engine)       <- Never modified
```
Detect mode from user's natural language:
| User Says | Mode | Reference |
|---|---|---|
| "figure out", "explore", "I'm trying to..." | Full Flow | - |
| "research", "investigate", "what do we know about" | Research | research-phase.md |
| "what are people saying", "sentiment", "X/Twitter", "trending" | Research (Grok) | grok-social.md |
| "decide", "we chose", "document decision" | Decide | decide-phase.md |
| "codify", "apply", "update reference files" | Codify | codify-phase.md |
| "add context", "enrich", "I have new info" | Codify | codify-phase.md |
| "content strategy", "pillars", "what platforms", "content plan", "cadence" | Full Flow (codify to content-strategy.md) | codify-phase.md |
| "where was I", "continue", "pick up" | Recovery | recovery.md |
| "here's a PDF", "ingest this", "convert this document", file path (.pdf/.docx/.pptx) | Document Ingestion | document-ingestion.md |
If unclear, ask: "Are you exploring a question, documenting a decision, or updating reference files?"
When routing to research mode, detect research TYPE from user intent:
| User Intent | Trigger Phrases | Primary Tool | Fallback |
|---|---|---|---|
| YouTube research | YouTube URL, "transcribe video", "what does [creator] say" | Apify | Ask for manual transcript |
| X/Twitter sentiment | "what are people saying", "sentiment on X", "Twitter discourse" | Grok | WebSearch site:x.com |
| Deep web research | "deep research", "comprehensive analysis", "research everything" | Gemini | Multi-source WebSearch |
| Local transcription | Local file path (.mp4, .m4a), "transcribe my recording" | whisper | CLI mlx_whisper or whisper-cli |
| Instagram mining | Instagram handle, "mine [handle]", "competitor posts" | Apify | Manual screenshots |
| Ad account research | "ad performance", "what's working", "check CPA", "audit my ads" | Pipeboard | Manual Ads Manager check |
| General research | Default, "research [topic]", "what do we know" | WebSearch + codebase | Always available |
Multiple types needed at once? Spawn them as parallel Task agents in a single message. Example: user says "research what [creator] says on YouTube and what people think on X" — spawn one agent for YouTube transcript mining and another for X/Twitter sentiment simultaneously. Each saves its own research file; main conversation synthesizes when both return.
1. Parse user message for intent triggers
2. Check if preferred tool is available (from session cache)
3. If multiple sources needed → spawn parallel Task agents (one per source)
4. If single source → use preferred tool directly
5. If not available → offer setup ONCE, then use fallback
6. Never block on missing optional tools
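Step 5's "offer setup ONCE" can be sketched with a per-session flag file. The flag path and function name are assumptions for illustration, not the skill's actual mechanism.

```shell
# Sketch: surface a tool's setup option at most once per session.
offer_setup_once() {
  tool="$1"
  flag=".vip/.offered-$tool"            # hypothetical session flag file
  if [ ! -f "$flag" ]; then
    mkdir -p .vip && : > "$flag"
    echo "offer-setup:$tool"            # first miss: surface the option
  fi                                    # later misses: stay silent, use fallback
}
offer_setup_once apify                  # prints the offer once
offer_setup_once apify                  # prints nothing the second time
```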
YouTube without Apify:
"YouTube transcript mining needs Apify MCP. Options:
- Set up Apify now (5 min)
- Paste transcript manually
- Skip this video"
X/Twitter without Grok:
"X sentiment research is best with Grok, but I can use web search instead. Results will be less real-time. Proceed with web search?"
Deep research without Gemini:
"Running comprehensive research using Claude Code web search. This may take longer than Gemini deep research."
Local file without whisper:
"Local transcription needs a whisper variant. Check:
`which mlx_whisper` or `which whisper-cli`. Or upload to a transcription service and paste the result."
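That check can be scripted. A minimal sketch (the binary names come from the text above; the `have` helper is an assumption):

```shell
# Sketch: pick whichever whisper variant is on PATH, else fall back.
have() { command -v "$1" >/dev/null 2>&1; }

if have mlx_whisper; then
  whisper_bin=mlx_whisper
elif have whisper-cli; then
  whisper_bin=whisper-cli
else
  whisper_bin=""    # no local variant: ask for a pasted transcript instead
fi
echo "whisper binary: ${whisper_bin:-none}"
```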
Ad account data without connection:
"Ad account research works best with a direct Meta Ads connection (OAuth, no developer account needed, uses Pipeboard). Options:
- Set up now (5 min, free tier: 30 calls/week)
- Check Ads Manager manually and paste what you find
- Skip account data, research from reference files only"
Never block on missing tools. WebSearch + codebase search are ALWAYS available. External tools enhance but don't gate research.
Don't just provide templates. Actively move people through the cycle.
On every /mb-think invocation, detect state and guide the next step:
```bash
# Check for work in progress
ls -lt research/*.md 2>/dev/null | head -3
grep -l "status: proposed\|status: accepted" decisions/*.md 2>/dev/null

# Also check content strategy state
ls reference/domain/content-strategy.md 2>/dev/null
```
| If you find... | Then... |
|---|---|
| Recent research, no decision | "You have research on [topic]. Ready to make a decision?" |
| Proposed decision | "Decision [topic] is proposed. Ready to accept it?" |
| Accepted decision (not yet codified) | "Decision [topic] is accepted. Ready to codify the changes into reference files?" |
| content-strategy.md exists but empty/thin | "Your content strategy file is a skeleton. Want to fill it in? We can derive pillars from your soul.md + offer.md + audience.md." |
| content-strategy.md missing (community biz) | "You don't have a content strategy yet. Want to build one? It'll define your pillars, platforms, and cadence." |
| skool-surfaces.md missing (community biz with live Skool) | "Your Skool about page and pricing card copy aren't in reference yet. Want to add them? Skills check this for congruence." |
| reference/offers/ exists | Multi-offer repo. Check .vip/local.yaml for current_offer. If not set, ask which offer this research/decision is about. |
| Nothing in progress | "What are you trying to figure out?" |
The goal is reference files. Research and decisions are waypoints. Keep asking: "What needs to happen to get this into reference?"
Research -> Checkpoint -> Decide -> Checkpoint -> Codify
"What specifically are you trying to figure out?"
Gather from codebase, web, user input, local recordings.
When research involves 2+ sources (e.g., YouTube + web, X/Twitter + codebase, competitor mining + deep research), spawn parallel Task agents in a single message — one agent per source. Use subagent_type: "general-purpose" (has Write, Edit, Bash, MCP access). Each agent:
saves its own research file (e.g., research/YYYY-MM-DD-topic-yt-mining.md) and returns a distilled summary.

After all agents return: check that the files landed on disk. If any agent reported a write failure or its file doesn't exist, write it from the returned content. Then synthesize across summaries. This keeps heavy content out of your main context window while recovering gracefully from the known Claude Code subagent write persistence bug.
Do NOT run research agents in background (run_in_background: true) — background agents cannot access MCP tools (Apify, etc.) and cannot prompt for permissions.
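The post-return check can be sketched as follows. The file name and summary text are placeholders; in practice the paths come from each agent's report.

```shell
# Sketch: if an agent's research file didn't land on disk, write it from
# the summary the agent returned (recovers from the subagent write bug).
ensure_research_file() {
  path="$1"; returned_content="$2"
  if [ ! -s "$path" ]; then
    mkdir -p "$(dirname "$path")"
    printf '%s\n' "$returned_content" > "$path"
  fi
}
ensure_research_file research/2099-01-01-demo-yt-mining.md "# Demo summary"
```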
Mining sources:
| Source | How | Output suffix |
|---|---|---|
| YouTube videos | Apify transcript MCP | -yt-mining.md |
| X/Twitter sentiment | Grok X Insights MCP (grok-social.md) | -x-social.md |
| Local video/audio | whisper-cpp (local-transcription.md) | -local-mining.md |
| Voice memos | whisper-cpp | -voice-mining.md |
| Instagram mining | Apify or manual | -ig-mining.md |
| Ad account data | Pipeboard MCP (Meta Ads) | -ad-account.md |
| Competitor sites | Browser MCP or web fetch | -competitor-mining.md |
| Your own emails/DMs | Paste into conversation | -internal-mining.md |
| Deep research | Build prompt → Gemini/GPT | -gemini.md or -gpt.md |
| Codebase exploration | Grep, read, subagents | -claude-code.md |
| Documents (PDF, DOCX, PPTX) | markitdown / pandoc / marker (document-ingestion.md) | -doc-extraction.md |
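The output-suffix convention in the table can be sketched as a small helper. The helper name is an assumption; the date format follows the research/YYYY-MM-DD-topic pattern used throughout.

```shell
# Sketch: build a dated research filename from topic + source suffix.
research_file() {
  topic="$1"; suffix="$2"
  printf 'research/%s-%s%s.md\n' "$(date +%F)" "$topic" "$suffix"
}
research_file competitor-pricing -yt-mining
# e.g. research/2025-06-01-competitor-pricing-yt-mining.md
```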
Every research output needs:
Synthesis works best when the main conversation is clean — which is exactly what subagents provide. Heavy raw content (transcripts, mined posts) lives in the subagent context windows, and only distilled summaries return to main.
For content mining specifically: AI shows WHAT worked. You must judge WHY.
Extract three dimensions from mined content:
Framework extraction is human judgment work. AI surfaces the data; you interpret the frameworks. This methodology comes from Koston Williams (6M view video) — the skill isn't copying, it's framework transfer. A competitor's content worked for THEM. Your job is to:
Don't skip to content generation. Mining → Human Synthesis → Reference Update → THEN Create.
"Ready to make a decision, or need more research?"
Present options with pros/cons. Document choice and rationale.
Describe what reference files are affected in the decision file:
## What Changes
offer.md gets a guarantee section after pricing. A new angle file (risk-reversal.md) captures the guarantee messaging.
See decide-phase.md for format details.
"Ready to update reference files now, or save for later?"
Apply changes described in ## What Changes to reference files. Mark decision as codified.
Codify targets include:
- reference/core/*.md
- reference/core/voice.md (named enemies section: each content pillar fights a named concept enemy)
- reference/offers/[active]/offer.md and reference/offers/[active]/audience.md (when multi-offer)
- reference/proof/angles/*.md (evolving library: new angles add, never replace)
- reference/proof/testimonials.md
- reference/domain/content-strategy.md (pillars, hooks library, framework library, metrics; saves are the #1 purchase-intent signal)
- reference/domain/funnel/skool-surfaces.md (live Skool copy; update when the about page or pricing changes)
- reference/domain/product-ladder.md (when multi-offer, for cross-offer decisions)
/mb-think research "topic"
Output: research/YYYY-MM-DD-topic-claude-code.md
See references/research-phase.md for full workflow.
/mb-think decide "topic"
Output: decisions/YYYY-MM-DD-topic.md
See references/decide-phase.md for full workflow.
/mb-think codify decisions/YYYY-MM-DD-topic.md
Or: "/mb-think add new testimonials to my files"
See references/codify-phase.md for full workflow.
| Layer | Scope | Use For |
|---|---|---|
| Claude tasks | Session | Execution tracking, spinners |
| Decision files | Forever | Rationale, anchor for work |
| GitHub issues | Forever | Cross-session, team visibility |
Decision files are the anchor — create early with status: proposed, update to accepted, then codified.
See decide-phase.md for task creation patterns. See recovery.md for resuming sessions.
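The proposed -> accepted -> codified lifecycle can be sketched as a frontmatter edit. The file name and frontmatter shape here are assumptions; decide-phase.md defines the real format.

```shell
# Sketch: bump a decision from proposed to accepted in its frontmatter.
mkdir -p decisions
printf -- '---\nstatus: proposed\n---\n# Pricing decision\n' \
  > decisions/2099-01-01-pricing.md

# Portable in-place edit (BSD and GNU sed both accept -i.bak).
sed -i.bak 's/^status: proposed$/status: accepted/' decisions/2099-01-01-pricing.md
grep 'status:' decisions/2099-01-01-pricing.md
```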
If conversation compacted, check multiple sources:
1. Claude tasks (current session): TaskList
2. Recent files:
```bash
ls -lt research/*.md 2>/dev/null | head -5
grep -l "status: proposed\|status: accepted" decisions/*.md 2>/dev/null
```
3. GitHub issues (if using):
```bash
gh issue list --assignee @me --state open
```
Then confirm: "I see you were working on [topic]. Continue from here?"
See references/recovery.md for details.
For content creation, use the creation skills (/mb-ads, /mb-vsl, /mb-organic). Use /mb-think when the answer requires investigation and the choice needs documentation.
The repo is a precision instrument. The think cycle exists to filter — not to cram everything in. Research gets synthesized, decisions get distilled, and only the sharpest insights survive into reference. Curation over collection.
Reference files aren't just documentation. They're how you stay connected to why you do this.
Angles evolve. Each research session may surface a new emotional entry point, a new enemy to name, a new lifestyle aspiration. The angle library in reference/proof/angles/ is additive — it grows as understanding deepens. Treating angles as locked is an anti-pattern.
When AI makes information infinite, curation is the moat. The think cycle is curation in action — filtering signal from noise, codifying what matters, discarding what doesn't. Every reference file update is a curation decision.
The act of researching, deciding, and codifying forces you to articulate:
Research and decisions go stale. Reference files compound. Every update makes all downstream content better.
If the overlaps between your interests and your offer don't make sense, maybe you have the wrong offer. The think cycle should feel like pull, not push.