From research-workspace
Verifies NotebookLM briefs against research-hub source bundles, reporting missed sources, unsupported claims, contradictions, and follow-up prompts. Triggered by verify/audit requests.
npx claudepluginhub wenyuchiou/ai-research-skills --plugin research-workspace

This skill uses the workspace's default tool permissions.
NotebookLM is great at producing readable briefs, but it can:
- miss sources that were uploaded in the bundle
- state claims without clear source attribution
- paper over contradictions between sources
- overgeneralize a single paper into "studies show..."
This skill verifies a downloaded brief against the actual source bundle that research-hub uploaded, so the user can trust (or distrust) the brief before sharing or citing.
Trigger phrases: verify/audit requests — e.g. "verify this brief", "audit the NotebookLM brief against the sources".
Not for: generating briefs (research-hub notebooklm generate), triaging literature (literature-triage-matrix), or manuscript writing (academic-writing-skills).
In priority order:
research-hub-managed mode (default). When the brief was generated
via research-hub notebooklm generate + download:
- .research_hub/artifacts/<cluster>/brief-*.txt
- .research_hub/bundles/<cluster>/manifest.json — list of which source files were uploaded
- raw/<cluster>/*.md — for spot-checking specific claims
- pdfs/<cluster>/ — last-resort spot-check only; cap at 3 per session

Manual fallback mode (new in v0.68.x). When the user generated the brief themselves on notebooklm.google.com — direct upload, web UI, copy-paste — research-hub never saw the bundle. Accept either CLI flags or a paste-into-chat:
- --brief <path-to-brief.{md,txt,pdf}> — the downloaded brief file (any path, not just .research_hub/artifacts/).
- --sources <path-to-source-list.{yml,md,json}> — a plain list of the source titles + DOIs / URLs the user uploaded to NLM.

Conversational variant: paste the brief and the source list directly into the chat. The skill should ask explicitly for the source list if missing — do NOT assume coverage without ground truth.
The verification logic (source coverage scan, claim attribution, contradiction scan, overgeneralization scan, spot-check, follow-up prompts) is identical in both modes. Only the input-loading layer differs.
If the user names a brief file directly, prefer that path over guessing.
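The two input-loading paths can be sketched as a single function. This is a minimal sketch, not the skill's actual implementation: the manifest field name `sources` and the function shape are assumptions, and the manual-fallback source list is treated as one entry per line.

```python
import json
from pathlib import Path

def load_inputs(cluster=None, brief_path=None, sources_path=None):
    """Return (brief_text, source_list) for either mode.

    Manual fallback mode: user supplies --brief (and ideally --sources).
    research-hub-managed mode: read the artifacts/bundles layout for <cluster>.
    """
    if brief_path:
        # Manual fallback: any path is allowed, not just .research_hub/artifacts/
        brief = Path(brief_path).read_text()
        sources = (
            Path(sources_path).read_text().splitlines() if sources_path else None
        )
    else:
        # research-hub-managed (default): locate the downloaded brief + manifest
        hub = Path(".research_hub")
        brief = next((hub / "artifacts" / cluster).glob("brief-*.txt")).read_text()
        manifest = json.loads(
            (hub / "bundles" / cluster / "manifest.json").read_text()
        )
        sources = manifest["sources"]  # hypothetical field name
    if sources is None:
        # Ground truth is required: ask the user rather than assume coverage
        raise ValueError("No source list provided; ask the user for it")
    return brief, sources
```

Raising instead of silently continuing mirrors the rule above: without a source list there is no ground truth to verify against.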
Build the set of bundle sources, S_bundle, from the manifest (or the user-supplied source list). For each S_bundle item, search the brief text for the citation key, DOI, or first-author name. Call any bundle item with zero hits a "missed source".

In-conversation report (no file written by default). Structure:
## NotebookLM brief verification report
**Brief**: <path>
**Bundle**: <cluster_slug> (<N> sources)
### Source coverage
- Cited in brief: <X> / <N>
- Missed sources: <list of citation keys not mentioned>
### Unsupported claims
- "<claim text>" (line <N> of brief) — no clear source attribution
- ...
### Cross-source contradictions
- "<claim A>" (cites Smith 2024) vs "<claim B>" (cites Jones 2023) —
appear to contradict; brief does not flag this
- ...
### Potential overgeneralizations
- "Studies show..." — actually one paper, Smith 2024
- ...
### Spot-checked claims
- "<load-bearing claim>" — reviewed Smith 2024 §3, **supported**
- "<surprising claim>" — reviewed Jones 2023 abstract, **partially
supported** (specific to coastal basins, not generalizable)
### Recommended follow-up NotebookLM prompts
- "What does Smith 2024 say about <X> specifically? Cite directly."
- "Compare Smith 2024's claim about <Y> with Jones 2023's findings."
### Verdict
- Reliable for: <broad takeaways, comparison framing>
- Use with caution for: <specific numbers, generalizations>
- Do not cite without spot-check: <list>
If the brief is well-attributed and bundle coverage is complete, the report is short — that's a feature, not a bug.
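The source-coverage and overgeneralization scans feeding that report can be sketched as follows. The source-record fields (citation_key, doi, first_author) and the overgeneralization phrase list are assumptions for illustration, not the skill's actual schema:

```python
import re

def coverage_scan(brief_text, sources):
    """Return citation keys of bundle items with zero hits in the brief.

    Each source is a dict with citation_key / doi / first_author (assumed
    fields). A case-insensitive substring hit on any probe counts as cited.
    """
    haystack = brief_text.lower()
    missed = []
    for src in sources:
        probes = [src.get("citation_key"), src.get("doi"), src.get("first_author")]
        if not any(p and p.lower() in haystack for p in probes):
            missed.append(src.get("citation_key") or src.get("doi"))
    return missed

def overgeneralization_scan(brief_text):
    """Flag lines with plural-study phrasing that may rest on a single paper."""
    pattern = re.compile(
        r"\b(studies show|researchers agree|it is well established)\b", re.I
    )
    return [line for line in brief_text.splitlines() if pattern.search(line)]
```

Substring matching is deliberately loose: a false "cited" is cheaper than a false "missed", and flagged lines go to the spot-check step anyway.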
Write the report to .research_hub/artifacts/<cluster>/brief-verify-<ts>.md only if the user says "save this report". Never create .research/ or .paper/ — this is verification, not workspace setup.
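The report header and the save-on-request rule can be sketched together. The timestamp format and function names are assumptions; only the output path pattern comes from the text above:

```python
from datetime import datetime
from pathlib import Path

def render_report_header(brief_path, cluster, n_sources, n_cited, missed):
    """Render the top of the in-conversation report (coverage section only)."""
    return "\n".join([
        "## NotebookLM brief verification report",
        f"**Brief**: {brief_path}",
        f"**Bundle**: {cluster} ({n_sources} sources)",
        "### Source coverage",
        f"- Cited in brief: {n_cited} / {n_sources}",
        f"- Missed sources: {', '.join(missed) if missed else 'none'}",
    ])

def maybe_save(report, cluster, user_asked_to_save):
    """Write the report file only on an explicit 'save this report' request."""
    if not user_asked_to_save:
        return None  # default: in-conversation only, no file written
    ts = datetime.now().strftime("%Y%m%d-%H%M%S")  # assumed <ts> format
    out = Path(".research_hub/artifacts") / cluster / f"brief-verify-{ts}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(report)
    return out
```

Returning None on the default path keeps the no-file-written contract explicit at the call site.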