Manages NotebookLM notebooks via the notebooklm-mcp server — create notebooks, add sources, query content, generate podcasts/audio/slides/infographics, and search across notebooks. Trigger on: 'notebook', 'notebooklm', 'research notebook', 'create podcast', 'add source', 'query notebook', 'audio overview', 'study materials', 'create from notebook'. Even if the user just says 'make a podcast from that' or 'add this to my notebook', use this skill. Do NOT use for generic web searches without NotebookLM, local file summarization without notebooks, or document management unrelated to NotebookLM notebooks. If a vendor nlm skill is installed, this skill supersedes it — remove the vendor skill if conflicts occur.
From tandem: `npx claudepluginhub binatrixai/tandem-marketplace --plugin tandem`. This skill uses the workspace's default tool permissions.
Bundled files: evals/evals.json, references/setup.md, references/tools.md, references/workflows.md.
Orchestrates NotebookLM workflows over the notebooklm-mcp MCP server. Two modes: high-level workflow recipes (## Research, ## Artifacts, etc.) and direct MCP tool calls (## Tool Index in Phase 34+). Requires notebooklm-mcp server running and authenticated — see references/setup.md for installation.
When any MCP tool returns an error containing "authentication", "cookies", "expired", or "401":
1. Call refresh_auth (MCP tool — no parameters).
2. If refresh_auth succeeds: retry the original tool call from the beginning of the failed step.
3. If refresh_auth fails or is unavailable: tell the user "Run: nlm login in your terminal, then tell me when done", then call refresh_auth again and retry.

To proactively check auth before a long workflow: nlm auth status via Bash tool (non-destructive).
Auth sessions last approximately 20 minutes from last activity.
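The recovery flow above can be sketched as a small wrapper. This is a sketch, not the server's actual error contract: `tool` and `refresh_auth` are stand-in callables for the real MCP invocations, and the marker strings mirror the list above.

```python
AUTH_MARKERS = ("authentication", "cookies", "expired", "401")

def is_auth_error(err):
    """True if an error message looks like an expired-session failure."""
    msg = str(err).lower()
    return any(marker in msg for marker in AUTH_MARKERS)

def call_with_auth_retry(tool, refresh_auth, *args, **kwargs):
    """Call a tool; on an auth-style error, refresh once and retry.

    `tool` and `refresh_auth` are hypothetical stand-ins for the
    MCP tool call and the refresh_auth MCP tool.
    """
    try:
        return tool(*args, **kwargs)
    except RuntimeError as err:
        if not is_auth_error(err):
            raise  # not an auth problem; let it surface normally
        if not refresh_auth():
            # Refresh failed or unavailable: surface the manual fix.
            raise RuntimeError(
                "Run: nlm login in your terminal, then retry"
            ) from err
        # Retry from the beginning of the failed step.
        return tool(*args, **kwargs)
```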
All tool names in this skill use unprefixed logical names (e.g., notebook_create).
Actual prefix: mcp__notebooklm-mcp__ | Cowork prefix: mcp__notebooklm__

When calling studio_create, research_start, or any operation that returns a task/artifact ID and runs in background:
Do NOT poll more than once. The single check catches fast completions; everything else finishes in background.
To check later: user can say "check my NotebookLM operation" and you call the relevant status tool.
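The single-check rule above can be sketched as a function that never loops. `status_fn` is a hypothetical stand-in for studio_status or research_status, called with max_wait=0 so the check itself does not block.

```python
import time

def check_once(status_fn, task_id, delay_s=15):
    """One status check after a short delay; deliberately no polling loop."""
    time.sleep(delay_s)  # catch fast completions with a single check
    status = status_fn(task_id, max_wait=0)
    if status == "complete":
        return "proceed"
    if status == "failed":
        return "report_error"
    # in_progress: save a reminder and move on; the user can ask
    # "check my NotebookLM operation" later.
    return "save_reminder"
```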
Sequential execution — CRITICAL:
Applies to notebook_query, cross_notebook_query, source_get_content — any tool that returns large text content.

Large response handling: if a single query returns a very large response (3000+ words), summarize it to essential points before proceeding to the next query. This prevents context window bloat when chaining multiple queries.
Why: NotebookLM responses can be large. Running 2-3 queries in parallel can exceed context limits and force compaction, losing critical session state.
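A minimal sketch of the sequential rule with the summarize-if-large guard. `query_fn` and `summarize_fn` are hypothetical stand-ins for notebook_query and a summarization pass; only the control flow is the point.

```python
def run_queries_sequentially(query_fn, summarize_fn, queries, word_limit=3000):
    """Run queries one at a time; compress oversized answers before the next."""
    results = []
    for q in queries:  # strictly sequential: never fan out in parallel
        answer = query_fn(q)
        if len(answer.split()) >= word_limit:
            # Keep only the essential points to avoid context bloat.
            answer = summarize_fn(answer)
        results.append(answer)
    return results
```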
"Research [topic]" or "quick research [topic]":
1. notebook_create(title="[topic] Research") — save returned notebook_id
2. source_add(notebook_id, source_type="url", url=..., wait=True) for each user-supplied URL
3. research_start(query="[topic]", notebook_id=..., mode="deep") — returns task_id
   - mode="fast" if user says "quick research" (~30s, ~10 sources)
   - mode="deep" by default (~5 min, ~40-80 sources)
4. research_status(notebook_id, task_id=..., max_wait=0) — check once after ~15s (see ## Long-Running Operations). If still in_progress, save reminder and move on.
5. research_import(notebook_id, task_id=...) — MANDATORY: research discovers sources but does NOT add them until imported
6. If needed: research_start(..., force=True)
7. notebook_query(notebook_id, "Summarize the key findings and main themes") — return to user

"Ask my [notebook name] about [question]":
1. notebook_list() — resolve the notebook name: fuzzy-match the user's name against titles
2. notebook_query(notebook_id=..., query="[user's question]") — return answer with citations
3. Pass conversation_id only if the user wants query history to persist in the NotebookLM web UI

"Add [source] to my [notebook]":
1. notebook_list() (see Workflow 2 step 1)
2. source_add(notebook_id, ..., wait=True) with one of:
   - source_type="url", url="https://..."
   - source_type="text", text="...", title="[descriptive title]"
   - source_type="drive", document_id="[extracted from URL]", doc_type="doc"|"slides"|"sheets"|"pdf"
   - source_type="file", file_path="[absolute path]"
3. For multiple sources: source_add for each, all with wait=True
4. notebook_describe(notebook_id) to verify ingestion count
ASYNC: studio_create returns immediately — generation runs in background. Check status ONCE per ## Long-Running Operations. Do NOT poll repeatedly.
1. studio_create(notebook_id, artifact_type="[type]", confirm=True, [type params]) — returns artifact_id
2. studio_status(notebook_id) — check once after ~15s (see ## Long-Running Operations): "complete" → proceed | "failed" → report error | "in_progress" → save reminder, move on

| Type | Use for | Key params |
|---|---|---|
| audio | Podcast / deep dive | audio_format (deep_dive\|brief\|critique\|debate), audio_length (short\|default\|long) |
| video | Explainer video | video_format (explainer\|brief), visual_style (auto_select, classic, whiteboard, kawaii, anime, ...) |
| report | Written summary | report_format (Briefing Doc\|Study Guide\|Blog Post\|Create Your Own), custom_prompt |
| slide_deck | Presentation | slide_format (detailed_deck\|presenter_slides), slide_length (short\|default) |
| infographic | Visual summary | orientation, detail_level, infographic_style (11 styles) |
| quiz | Study test | question_count, difficulty (easy\|medium\|hard) |
| flashcards | Flash review | difficulty (easy\|medium\|hard) |
| mind_map | Topic map | title (optional) |
| data_table | Structured data | description (REQUIRED — no default; call fails without it) |
Common params on all types: source_ids (limit to specific sources), language, focus_prompt
After status="complete":
- download_artifact(notebook_id, artifact_type, artifact_id, output_path) — if user omits path: suggest ~/Downloads/<notebook-title>.<ext>
- For reports/data tables: export_artifact(notebook_id, artifact_id, export_type="docs"|"sheets") to export to Google Docs/Sheets
studio_revise(artifact_id, slide_instructions=["Slide 1: ...", "Slide 3: ..."], confirm=True) — creates a NEW artifact (original unchanged); check studio_status once for the new artifact_id
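The create → single check → download flow can be sketched end to end. `client` is a hypothetical wrapper exposing the studio tools by their logical names (studio_create, studio_status, download_artifact); the real calls are MCP tool invocations, and the ~15s wait before the status check is elided here.

```python
def generate_artifact(client, notebook_id, artifact_type, output_path, **params):
    """Create an artifact, check status ONCE, download if ready."""
    artifact_id = client.studio_create(
        notebook_id, artifact_type=artifact_type, confirm=True, **params
    )
    status = client.studio_status(notebook_id)  # checked once, ~15s after create
    if status == "complete":
        client.download_artifact(notebook_id, artifact_type, artifact_id, output_path)
        return ("downloaded", artifact_id)
    if status == "failed":
        return ("error", artifact_id)
    return ("pending", artifact_id)  # save a reminder; do not poll again
```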
Listing and inspection:
- source_list_drive(notebook_id) — shows Drive sources with freshness status
- source_describe(source_id) — AI summary + keywords
- source_get_content(source_id)
- To add sources: source_add per the Research workflow patterns (always wait=True)

Sync stale Drive sources (two-step):
1. source_stale(notebook_id) — identify Drive sources that need re-syncing
2. source_sync_drive(notebook_id, source_ids="[ids from step 1]", confirm=True)

Rename: source_rename(notebook_id, source_id, new_title)
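The two-step stale-sync pattern can be sketched as below; the two callables are hypothetical stand-ins for the source_stale and source_sync_drive MCP tools, and the comma-joined id string mirrors the source_ids parameter shape shown above.

```python
def sync_stale_drive_sources(source_stale, source_sync_drive, notebook_id):
    """Find stale Drive sources, then re-sync only those (skip if none)."""
    stale_ids = source_stale(notebook_id)
    if not stale_ids:
        return []  # nothing stale; skip the confirm-gated sync call
    source_sync_drive(notebook_id, source_ids=",".join(stale_ids), confirm=True)
    return stale_ids
```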
Delete (irreversible):
source_delete(source_id, confirm=True)

Notebook management:
- notebook_list() — returns all notebooks with IDs and titles
- notebook_create(title="[name]") — returns notebook_id
- notebook_describe(notebook_id) — AI summary + suggested query topics
- notebook_rename(notebook_id, new_title="[new name]")

Delete (irreversible — deletes notebook and ALL its sources and artifacts):
1. notebook_describe(notebook_id) to show user what will be permanently deleted
2. notebook_delete(notebook_id, confirm=True)

"Search all my notebooks for [topic]" or "which of my notebooks covers [topic]":
- cross_notebook_query(query="[topic]", all=True) — aggregated answer with per-notebook citations
- cross_notebook_query(query="[topic]", notebook_names="Notebook A, Notebook B") — limit to named notebooks
- cross_notebook_query(query="[topic]", tags="ml, research") — limit by tag
- tag(action="select", query="machine learning") to find relevant notebooks first

For common multi-step flows, use the pipeline tool instead of manual orchestration:
- pipeline(action="list") — see available pipelines
- pipeline(action="run", notebook_id=..., pipeline_name="ingest-and-podcast", input_url="https://...")
- pipeline(action="run", notebook_id=..., pipeline_name="research-and-report")
- pipeline(action="run", notebook_id=..., pipeline_name="multi-format") — creates audio + report + slides

For non-standard sequences, use the manual workflow steps from ## Research and ## Artifacts. See references/workflows.md for batch operations and extended pipeline patterns.
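The pipeline-vs-manual decision can be sketched as a small planner. This is illustrative only: KNOWN_PIPELINES hard-codes the three examples above, whereas the real list should come from pipeline(action="list"), and plan_pipeline itself is a hypothetical helper, not an MCP tool.

```python
KNOWN_PIPELINES = {"ingest-and-podcast", "research-and-report", "multi-format"}

def plan_pipeline(flow, notebook_id, input_url=None):
    """Return pipeline(action="run", ...) arguments for a known flow,
    or None to signal manual orchestration via ## Research / ## Artifacts."""
    if flow not in KNOWN_PIPELINES:
        return None
    args = {"action": "run", "notebook_id": notebook_id, "pipeline_name": flow}
    if flow == "ingest-and-podcast":
        if input_url is None:
            raise ValueError("ingest-and-podcast needs input_url")
        args["input_url"] = input_url
    return args
```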
- notebook_share_status(notebook_id) — see collaborators + public status
- notebook_share_invite(notebook_id, email="user@example.com", role="viewer"|"editor")
- notebook_share_public(notebook_id, enabled=True) to enable, enabled=False to disable
- After changing sharing: notebook_share_status(notebook_id) again and report the new access state to the user
- note(action="create"|"list"|"update"|"delete", notebook_id=..., ...) — manage notes in a notebook
- tag(action="add"|"remove"|"list", notebook_id=..., tags="...") — organize notebooks for discovery
- chat_configure(notebook_id, goal="learning_guide"|"custom", custom_prompt="...") — tune chat behavior
- server_info() — check version and whether an update is available
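The change-then-verify sharing step can be sketched as a tiny helper; the two callables are hypothetical stand-ins for the notebook_share_public and notebook_share_status MCP tools.

```python
def set_public_and_verify(share_public, share_status, notebook_id, enabled):
    """Toggle public access, then re-check status so the user sees
    the actual resulting access state rather than an assumed one."""
    share_public(notebook_id, enabled=enabled)
    return share_status(notebook_id)  # report this new state to the user
```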