Finance Idea Screening
Turn rough topics into credible paper ideas or kill weak ideas early.
Stage 0: Desk-editor screen
Before running the full literature search and scoring workflow, answer four questions in 1-2 sentences each:
- Prior-changing: What broad finance prior would change if this paper is right?
- Zero-result value: Why would the paper still matter if the main estimate is zero or the sign flips?
- Question over method: Is the question more important than the shock, instrument, or dataset?
- Desk read: Could this clear a top generalist journal's desk read today?
If two or more answers are weak, return Revise or Kill before investing in the full search chain.
Framing rule: If the one-sentence summary starts with "Using a novel dataset..." or "Exploiting a policy shock...", force a rewrite until it starts with the finance question. The question must come first; the data or design is the tool, not the contribution.
What counts as a strong idea
A strong idea scores well on six dimensions:
- Sharp question — Can you state the research question in one sentence without hedging? Is there a clear dependent variable and a testable prediction?
- First-order importance — Would the answer change priors in a broad finance literature or on a materially important market, policy, or contracting problem? Would a generalist finance editor care even if the estimated effect is zero?
- Verified contribution — Relative to the 3-5 closest papers, is the novelty real and nontrivial? Novelty must be in mechanism, question, identification, measurement, or implication, not just in setting, time period, or data access. "X but in country Y" or "X but with newer data" is not a contribution unless the setting variation generates a genuinely different economic prediction.
- Economic mechanism — Is there a friction, incentive, or equilibrium force that generates a testable prediction? Can you name the channel and distinguish it from obvious alternatives?
- Convincing theory-to-evidence bridge — Is there a persuasive inferential bridge from idea to evidence? This can be a quasi-experiment, field experiment, theory model, structural estimation, validated measurement exercise, sufficient-statistic approach, or out-of-sample asset-pricing test. Judge whether the bridge matches the question, not whether it looks like a standard causal design.
- Data, measurement, and replication feasibility — Three sub-checks:
- Access: Can the data realistically be obtained in time?
- Construct validity: Does the empirical measure actually map to the mechanism? Would a skeptic accept it as a reasonable proxy?
- Replication feasibility: Could the core result be documented and replicated under current top-journal data and code sharing policies?
Scoring rubric
Rate each of the six dimensions 1-5:
- 5: Publication-grade strength — among the best you could imagine for this dimension
- 4: Strong — needs only minor refinement, no fundamental concern
- 3: Promising — needs significant work but a clear path exists
- 2: Weak — major gap that may or may not be fixable
- 1: Fatal — unlikely to be resolved
Calibration guidance:
- Default to 2 or 3 unless there is a strong, specific case for higher.
- Use 5 sparingly. A 5 means this dimension would survive the toughest referee in the field.
- Never rate an idea as top-3 Go if the only clear strength is data access or a clever shock.
- A paper that is 3 across the board is a plausible working paper or field-journal candidate, not a top generalist journal paper.
Decision rules:
Top Generalist Go
- Importance >= 4
- Contribution >= 4
- Bridge >= 4
- No dimension < 3
- At most 3 essential hurdles to publication
Strong Field-Journal Go
- All dimensions >= 3
- Total >= 18
- One or more of importance, contribution, or bridge is 3 rather than 4, but the overall package is solid
Revise
- The question may matter, but at least one of contribution, bridge, or construct validity is underdeveloped
- Total 15-20 with a clear path to improvement on the weak dimensions
Kill / Pivot
- Importance < 3 (the question is second-order)
- Contribution is mostly setting variation
- The bridge cannot distinguish the proposed mechanism from obvious alternatives
- More than 3 essential hurdles would be needed for publication
- Total <= 14 or any dimension = 1
Revision-feasibility rule
After scoring, list the essential hurdles a referee would raise, then apply this filter (a code sketch of the combined decision logic follows these bullets):
- If more than three essential hurdles are needed to make the paper publishable, default to Kill or Pivot unless the question is unusually important (Importance = 5) and the contribution is unusually novel (Contribution >= 4).
- Ask whether a plausible path to publication exists within one R&R round, or at most two. If not, the idea is not ready.
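Taken together, the decision rules and the revision-feasibility filter reduce to a single pass. The sketch below is one minimal reading in Python, assuming the checks are applied in priority order (feasibility filter, then Top Generalist Go, Strong Field-Journal Go, Revise, Kill/Pivot); the IdeaScore container and dimension keys are illustrative, and qualitative judgments such as "a clear path to improvement" remain with the reader.

```python
from dataclasses import dataclass


@dataclass
class IdeaScore:
    # One 1-5 rating per dimension: question, importance, contribution,
    # mechanism, bridge, feasibility.
    scores: dict
    # Number of essential referee hurdles listed after scoring.
    essential_hurdles: int


def verdict(idea: IdeaScore) -> str:
    s = idea.scores
    total = sum(s.values())
    worst = min(s.values())

    # Revision-feasibility rule: more than three essential hurdles defaults
    # to Kill/Pivot unless importance is 5 and contribution is at least 4.
    if idea.essential_hurdles > 3 and not (s["importance"] == 5 and s["contribution"] >= 4):
        return "Kill / Pivot"

    # Top Generalist Go: importance, contribution, and bridge all >= 4,
    # no dimension below 3, at most 3 essential hurdles.
    if (s["importance"] >= 4 and s["contribution"] >= 4 and s["bridge"] >= 4
            and worst >= 3 and idea.essential_hurdles <= 3):
        return "Top Generalist Go"

    # Strong Field-Journal Go: all dimensions >= 3 and total >= 18.
    if worst >= 3 and total >= 18:
        return "Strong Field-Journal Go"

    # Revise: total 15-20, the question still matters, no dimension is fatal.
    if 15 <= total <= 20 and worst >= 2 and s["importance"] >= 3:
        return "Revise"

    # Kill / Pivot: total <= 14, any dimension at 1, or a second-order question.
    return "Kill / Pivot"


# Example: a strong idea with two essential hurdles.
example = IdeaScore(
    scores={"question": 4, "importance": 4, "contribution": 4,
            "mechanism": 3, "bridge": 4, "feasibility": 3},
    essential_hurdles=2,
)
print(verdict(example))  # Top Generalist Go
```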
Imaginary abstract test
Before running the full scoring workflow, write a 5-part imaginary abstract for the idea:
- Motivation/Tension: What is the gap between what economists assume and what the world actually looks like? What prior would change?
- Empirical Setting and Data: What is the ideal experiment or setting? What data would you use?
- Results: What would you expect to find? Be specific about magnitudes if possible.
- Implications: What does this mean for theory, practice, or policy?
- Conclusion: One-sentence takeaway.
Purpose: The imaginary abstract forces coherence. If you cannot write a plausible 5-part abstract, the idea is not ready. Ideas that sound exciting as a sentence but collapse when spelled out are Type I errors waiting to happen. The goal is not to draft a flawless abstract, but to test whether the idea has substance.
If the abstract feels impossible to write, that is diagnostic: the idea lacks a clear question, a plausible test, or a meaningful result. Kill or revise.
Four quick stress tests
Apply these after scoring. Each should be answerable in one sentence.
- Null-result value test: State what the paper teaches if the main estimate is zero. If the answer is "nothing," the paper is too dependent on a specific result.
- Simultaneous-discovery test: If a similar working paper appeared on SSRN tomorrow, would this paper still deserve publication because it gives the cleanest, most credible, or most important answer? If not, speed-to-market is the only edge, which is fragile.
- Syllabus test: Name one sentence in a PhD finance lecture that would have to change if the paper is right. If you cannot, the paper may not be first-order.
- Necessary-conditions test: Name the one critical first-stage condition that must hold for the project to work (e.g., the shock must show up in the data, the treatment must change behavior, the instrument must have a first stage). Can you check this condition with a few hours of work before committing months? If the necessary condition fails, kill the project early. Treat research like drug development: lab test before clinical trial.
Kill criteria
Flag the idea as weak if one or more of these dominate:
- It is a minor setting variation with no new mechanism ("X but in country Y").
- The identification is decorative rather than essential — the result does not require the proposed design.
- The data are unavailable, too small, or too noisy for the claim.
- The contribution depends on saying "first" without verification.
- The likely result would be too local or too descriptive for the intended journal.
- The idea needs multiple heroic assumptions to work.
- The question has been answered convincingly and the proposed paper cannot improve on existing answers.
- The mechanism is a just-so story with no way to distinguish it from alternatives.
Screening workflow
- Stage 0: Run the desk-editor screen (four questions above). If weak, stop early.
- Restate the idea in one sentence (question-first framing).
- Write the imaginary abstract (5 parts: motivation/tension, setting/data, expected results, implications, takeaway). If you cannot write a coherent abstract, flag the idea as underdeveloped.
- Identify the mechanism or friction.
- Name the closest 3-5 papers (use search_papers to verify — do not guess).
- State what is new relative to the closest papers — be specific about whether the novelty is in mechanism, data, identification, setting, or implication. Score contribution honestly.
- Sketch the theory-to-evidence bridge (quasi-experiment, model, structural, measurement, or asset-pricing test).
- Assess data, measurement, and replication feasibility (use search_datasets if needed).
- Necessary-conditions check: Name the one critical first-stage condition and whether it can be tested quickly.
- Identify the likely journal track (finance generalist, finance specialist, or real-estate/housing).
- List the 2-3 biggest fatal risks (these become the essential hurdles).
- Apply the six-dimension scoring rubric.
- Run the four stress tests (null-result, simultaneous-discovery, syllabus, necessary-conditions).
- Apply the revision-feasibility rule (max 3 hurdles).
- Deliver the verdict with the expanded Idea Card.
Idea generation guidance
When asked to generate ideas (not just screen them):
- Start from frictions, not topics. Ask: what market failure, information asymmetry, behavioral bias, regulatory distortion, or institutional feature creates inefficiency?
- Look for natural experiments, policy changes, or data innovations that create new identification opportunities.
- Consider what new data sources have become available (satellite imagery, web scraping, administrative records, fintech platforms, text/NLP from filings, alternative workforce data).
- Think about which classic questions have become answerable with modern methods (ML, high-frequency data, granular geographic data, LLMs for text analysis).
- Always ask: is the question more important than the data? If the data are the star, the paper is likely second-order.
Tool integration (Corbis MCP)
Never claim an idea is novel without searching first. Use this exact sequence:
Check existing collected data, then the literature's architecture and frontier (always first)
- If output/paper_set.json exists, read it. Papers already collected for this topic reduce redundant searching.
- search_papers (query: the core question, sortBy: "citedByCount", matchCount: 15) — immediately see which papers define this space. High-citation close papers are the biggest contribution threats.
- search_papers (query: the core question, minYear: 2020, matchCount: 15) — catch the recent frontier. Low-citation recent papers are scooping threats.
- Save results to output/paper_set.json (merge if exists) and append queries to output/search_log.md.
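A minimal sketch of this opening sequence, assuming a hypothetical call_tool(name, **params) helper that wraps the Corbis MCP client and returns a list of paper records with an "id" field; the tool names and parameters come from the steps above, while the file handling and merge logic are illustrative.

```python
import json
from datetime import date
from pathlib import Path

PAPER_SET = Path("output/paper_set.json")
SEARCH_LOG = Path("output/search_log.md")


def initial_screen(core_question: str, call_tool) -> list:
    PAPER_SET.parent.mkdir(parents=True, exist_ok=True)
    # Reuse papers already collected for this topic, if any.
    papers = json.loads(PAPER_SET.read_text()) if PAPER_SET.exists() else []
    seen = {p.get("id") for p in papers}

    queries = [
        # Defining papers: high-citation close papers are contribution threats.
        {"query": core_question, "sortBy": "citedByCount", "matchCount": 15},
        # Recent frontier: low-citation recent papers are scooping threats.
        {"query": core_question, "minYear": 2020, "matchCount": 15},
    ]
    for params in queries:
        for paper in call_tool("search_papers", **params):
            if paper.get("id") not in seen:
                papers.append(paper)
                seen.add(paper.get("id"))
        # Append every query to the running search log.
        with SEARCH_LOG.open("a") as log:
            log.write(f"- {date.today()}: search_papers {params}\n")

    # Merge the results back into the shared paper set.
    PAPER_SET.write_text(json.dumps(papers, indent=2))
    return papers
```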
Novelty verification chain
- search_papers (query: the specific idea phrased as a research question, matchCount: 15) — find closest existing work by relevance. Phrase the query like a research question, not keywords.
- get_paper_details_batch (paper IDs from top 3-5 results) — read abstracts to confirm whether they truly overlap or just share vocabulary.
- top_cited_articles (journalNames: relevant journals, query: specific topic) — identify seminal papers the user must know about.
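Continuing with the same hypothetical call_tool wrapper, the first two links of the chain might look like the sketch below; the "id" field and the paperIds parameter name are assumptions about the tool schema, not documented behavior.

```python
def verify_novelty(idea_as_question: str, call_tool) -> list:
    # Phrase the query like a research question, not keywords.
    closest = call_tool("search_papers", query=idea_as_question, matchCount=15)
    # Read full abstracts for the top 3-5 hits to confirm real overlap
    # rather than shared vocabulary.
    top_ids = [p["id"] for p in closest[:5]]
    return call_tool("get_paper_details_batch", paperIds=top_ids)
```

The top_cited_articles step is left as a separate call, since the relevant journal names depend on the target outlet.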
Citation-aware threat assessment
- High-citation close paper (500+): Contribution threat. The question may already be answered. You must differentiate on mechanism, identification, or setting with different economics.
- Medium-citation close paper (100-499): Active conversation. Your paper must advance beyond this work.
- Low-citation recent paper (<100, post-2020): Scooping threat. A recent paper doing nearly the same thing is more dangerous to your submission than a classic paper, because the referee may see them side by side.
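As rough triage logic, these thresholds can be expressed as a small function. A minimal sketch; the cutoffs mirror the bullets above, while the function name and the fallback label for older, lightly cited papers are illustrative additions.

```python
def classify_threat(citations: int, year: int) -> str:
    # Recent, lightly cited close papers are the most dangerous at submission.
    if year >= 2020 and citations < 100:
        return "Scooping threat: a referee may see the two papers side by side"
    if citations >= 500:
        return "Contribution threat: the question may already be answered"
    if citations >= 100:
        return "Active conversation: the paper must advance beyond this work"
    # Older, lightly cited work is not covered by the thresholds above.
    return "Lower priority: still cite and differentiate"
```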
Data feasibility check
- search_datasets (topic keywords) — discover available datasets and their coverage.
- fred_search (keywords like "commercial real estate" or "bank lending") — find relevant FRED macro series for context or controls.
For real-estate ideas specifically
- get_market_data (metro name) — current CRE fundamentals to assess whether the phenomenon is economically relevant now.
- search_markets (criteria) — find markets with the characteristics needed for the natural experiment.
Citation export
- export_citations (format: bibtex) — export BibTeX entries for the 3-5 closest papers identified during the novelty verification chain. Offer this after the Idea Card is produced.
- format_citation — format individual references from the screening results for inclusion in notes or memos.
Preferred outputs
When generating or screening ideas, produce one or more Idea Cards using:
- assets/idea-card-template.md
Also provide:
- a one-sentence contribution claim
- a one-sentence skepticism test from a referee's point of view
- a scoring rubric result (six dimensions)
- a go, revise, or kill recommendation with specific reasoning
- the four stress-test answers
- the killer exhibit description
Journal-fit logic
Top generalist ideas must change how the field thinks about something. They require a first-order question with a convincing bridge from theory to evidence. The bridge can be causal identification, a structural model, a measurement innovation, or a theoretical contribution. Strong field journals are slightly more receptive to careful empirical work on narrower or cross-disciplinary questions.
When assessing journal fit, consider mapping the candidate idea to 2-3 recent papers in the target journal that share the closest structure (question type, method, contribution style). If no recent archetype exists, the fit may be weak.
Guardrails
- Never confuse an interesting topic with a publishable question.
- Do not overpraise novelty. If the search reveals close papers, say so.
- If the user gives several ideas, rank them using the scoring rubric.
- Prefer fewer, stronger claims over many diffuse ones.
- Do not let enthusiasm for a topic substitute for identification feasibility.
- Do not let enthusiasm for a dataset or shock substitute for question importance.
- Score importance and contribution honestly. Most ideas are 2-3 on these dimensions, not 4-5.
Output format
Use the template in assets/idea-card-template.md. The template includes all required fields.
Example prompts
- "Give me three corporate-finance ideas using private-credit data."
- "Is this housing-supply idea strong enough for a top real-estate journal?"
- "Turn this empirical asset-pricing anomaly into a better paper idea."
- "I have access to fintech lending data — what could I do with it?"