# Research Companion
Strategic research thinking agents for Claude Code — idea evaluation, project triage, and structured brainstorming to help you do research that matters.
This repository now also includes native Codex support.
Most AI writing tools help you write papers. This plugin helps you decide which papers to write.
Inspired by Nicholas Carlini's essay "How to Win a Best Paper Award" — which argues that great research starts with taste, strategic problem selection, honest self-evaluation, and knowing when to kill your darlings.
📢 Update (April 2026): Research Companion is now also a component of researcher-pack, a more advanced, all-in-one Claude Code plugin for researchers that bundles strategic brainstorming, paper reading, literature tracking, academic writing, and project management into a single integrated workflow. This standalone repo remains actively maintained, and updates to Research Companion are synced between both repos, so you can use it here on its own or get the full toolkit over at researcher-pack. Feedback, issues, and PRs are welcome in either place! 🙏
## The Problem
Researchers don't lack the ability to write papers. They lack a trusted colleague who will:
- Tell them an idea isn't worth 6 months of their life — before they invest those months
- Ask "who else is working on this and what's your unfair advantage?"
- Challenge them to state the key insight in one sentence (and refuse to move on until they can)
- Help them find the unexpected cross-field connection that makes a contribution truly novel
- Evaluate whether a struggling project should be continued, pivoted, or killed
This plugin provides that colleague.
## What's Inside

### Agents
| Agent | What it does |
|---|---|
| Idea Critic | Stress-tests research ideas along 7 dimensions: novelty, impact, timing, feasibility, competitive landscape, the nugget, and narrative potential. Returns a Pursue / Refine / Kill verdict. |
| Research Strategist | Project-level strategic thinking — triage (continue/pivot/kill), comparative advantage mapping, impact forecasting, opportunity cost analysis, and scooping risk assessment. |
| Brainstormer | Enhanced creative brainstormer with explicit focus on cross-field connections, "strategic ignorance" (challenging flawed assumptions the field follows uncritically), and the skeptical-reader test. |
### Skill
| Skill | What it does |
|---|---|
| `/research-companion` | A structured multi-phase ideation session that orchestrates all three agents through Seed → Diverge → Evaluate → Deepen → Frame → Decide. Includes Carlini's "conclusion-first test." |
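
A session can be started with the slash command. The argument form below is an assumption for illustration; the skill may also prompt you for a seed interactively if invoked bare:

```
/research-companion I keep noticing that LLM evaluation benchmarks saturate within a year of release. Is there a paper in that?
```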
### Principles
8 research strategy principles organized into three categories (Problem Selection, Execution Strategy, Strategic Positioning) that guide the agents' evaluations.
## Codex Support
This repository also includes a native Codex plugin manifest and interface metadata:
- `.codex-plugin/plugin.json`
- `agents/openai.yaml`
- `.agents/plugins/marketplace.json`
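
For orientation, a minimal sketch of what the plugin manifest carries is shown below. The field names are assumptions for illustration, not the actual Codex schema; see `.codex-plugin/plugin.json` in the repo for the real contents.

```jsonc
// Illustrative sketch only; field names are assumed, not the actual Codex manifest schema.
{
  "name": "research-companion",
  "description": "Strategic research thinking agents for idea evaluation, project triage, and brainstorming",
  "agents": ["idea-critic", "research-strategist", "brainstormer"]
}
```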
The same core prompts, agents, and principles are shared across Claude Code and Codex, with the orchestration instructions written to degrade gracefully if delegated subagents are unavailable.
Codex supports plugin marketplaces and installation through the Plugin Directory. This repo now includes a repo-scoped marketplace file so the plugin can be installed after cloning, without manually creating marketplace JSON. Detailed Codex setup lives in docs/codex-installation.md.
## Installation

### Claude Code
```bash
claude plugin marketplace add https://github.com/andrehuang/research-companion
claude plugin install research-companion@andrehuang-research-companion
```
### Codex
See docs/codex-installation.md for the full Codex installation guide, including:
- repo marketplace installation
- personal local marketplace installation
- links to the official Codex plugin docs
## Usage

### Evaluate a research idea
Just describe your idea and ask for evaluation:
```
I'm thinking about studying how LLM-generated code introduces subtle security
vulnerabilities that pass standard code review. Can you evaluate this idea?
```
The Idea Critic will evaluate across 7 dimensions and give you a verdict with the single most important question to resolve next.
### Decide whether to continue a project
```
I've been working on adversarial attacks against multimodal models for 3 months.
I have some results but they're incremental. Two other groups just posted preprints
in the same area. Should I continue?
```