From learn-toolkit
Deep Learning Workflow: Tavily + Exa research into NotebookLM learning package with CandleKeep library integration (podcast, infographic, flashcards). Use for deep dives into new technologies, frameworks, or concepts. Do NOT use for quick diagrams (use /learn-toolkit:visualize), interactive exploration (use /learn-toolkit:playground), or when neither Tavily nor Exa are configured.
npx claudepluginhub yodem/learn-toolkit --plugin learn-toolkit

This skill is limited to using the following tools:
CRITICAL: Follow these steps in exact order. Each phase has a verification gate — do NOT proceed to the next phase until verification passes.
CLI automation for Google NotebookLM: create notebooks, add sources (URLs, YouTube, PDFs, audio/video/images), chat with content, generate artifacts (podcasts, videos, reports, quizzes, mind maps, flashcards, infographics), download results.
Share bugs, ideas, or general feedback.
Artifacts default to Hebrew (he). The user can override with --language <code>.

This phase is mandatory. Do NOT skip it.
Before any research, discover which search backends and NotebookLM tools are actually available in this session.
First, check if the Tavily agent skills are installed (these provide tvly CLI access via Skill()):
tvly --version 2>/dev/null && echo "TAVILY_CLI=true" || echo "TAVILY_CLI=false"
If tvly is found, check auth status (no API call needed):
tvly --status 2>/dev/null && echo "TAVILY_CLI_AUTH=true" || echo "TAVILY_CLI_AUTH=false"
Set HAS_TAVILY_SKILLS = true only if both checks pass. When true, tvly commands are available directly (the /learn-toolkit:learn skill's allowed-tools: Bash(tvly *) grants permission). Available commands:
- tvly search — web search with LLM-optimized results
- tvly extract — extract content from specific URLs
- tvly crawl — crawl websites to local markdown
- tvly map — discover URLs on a domain
- tvly research — deep AI-synthesized research with citations

Use ToolSearch to probe for each MCP backend:
- ToolSearch(query="+tavily search") — look for mcp__tavily__tavily_search and mcp__tavily__tavily_extract
- ToolSearch(query="+exa search") — look for mcp__exa__web_search_exa and mcp__exa__crawling_exa
- ToolSearch(query="+notebooklm") — look for mcp__notebooklm-mcp__notebook_create, mcp__notebooklm-mcp__source_add, mcp__notebooklm-mcp__studio_create, mcp__notebooklm-mcp__studio_status

Run all 3 searches in parallel.
Set flags based on results:
- HAS_TAVILY_MCP = true if mcp__tavily__tavily_search was found
- HAS_TAVILY_SKILLS = true if the tvly CLI is installed and authenticated (from Step 0a)
- HAS_TAVILY = true if HAS_TAVILY_MCP OR HAS_TAVILY_SKILLS is true
- HAS_EXA = true if mcp__exa__web_search_exa was found
- HAS_NOTEBOOKLM = true if NotebookLM tools were found

Then check for the CandleKeep CLI:

ck --version 2>/dev/null && echo "CK_CLI=true" || echo "CK_CLI=false"
Set HAS_CANDLEKEEP = true/false. CandleKeep missing is not an error — just note it and continue.
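The CLI probes from Steps 0a and above can be combined into one sketch. The flag names come from this phase; treating a non-zero exit from `tvly --status` as "not authenticated" is an assumption about the CLI's behavior:

```shell
# Probe optional CLI backends; a missing binary just sets the flag to false.
if command -v tvly >/dev/null 2>&1 && tvly --status >/dev/null 2>&1; then
  HAS_TAVILY_SKILLS=true
else
  HAS_TAVILY_SKILLS=false
fi
if command -v ck >/dev/null 2>&1; then
  HAS_CANDLEKEEP=true
else
  HAS_CANDLEKEEP=false
fi
echo "HAS_TAVILY_SKILLS=$HAS_TAVILY_SKILLS HAS_CANDLEKEEP=$HAS_CANDLEKEEP"
```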
Tell the user which backends are available:
Research backends: [Tavily MCP ✓/✗] [Tavily Skills/CLI ✓/✗] [Exa ✓/✗] [CandleKeep ✓/✗]
NotebookLM: [✓/✗]
Tavily priority: If both HAS_TAVILY_MCP and HAS_TAVILY_SKILLS are true, prefer MCP for search (richer metadata) but use Tavily skills for extract/crawl/research (better at bulk operations). If only skills/CLI are available, use them for all Tavily operations.
If ANY required backend is missing, STOP the workflow immediately. Do NOT fall back to WebSearch. Do NOT proceed to Phase 1. Instead, show the user exactly what's missing and how to fix it:
If Tavily is missing (neither MCP nor CLI):
Tavily is not connected. You can set it up via either method:
Option A: Tavily CLI (recommended)
- Install: curl -fsSL https://cli.tavily.com/install.sh | bash
- Authenticate: tvly login (opens browser) or tvly login --api-key tvly-YOUR_KEY
- Optionally install agent skills: npx skills add tavily-ai/skills --yes

Option B: Tavily MCP server
- Get a free API key at https://tavily.com
- Add export TAVILY_API_KEY="your-key-here" to your ~/.zshrc (or ~/.bashrc)
- Run source ~/.zshrc and restart Claude Code

Do not paste your API key in this chat.
If Exa is missing, first check if the key is already set in the environment:
[ -n "$EXA_API_KEY" ] && echo "EXA_KEY_SET=true" || echo "EXA_KEY_SET=false"
If EXA_KEY_SET=true (key is set but MCP not loaded):
Exa API key is set, but the Exa MCP server is not loaded in this session.
This happens when the key was added to ~/.zshrc after Claude Code was started. Simply restart Claude Code and run the command again — no other changes needed.
If EXA_KEY_SET=false (key not set):
Exa is not connected. Your EXA_API_KEY is not set in your shell environment. To fix:
- Get an API key at https://exa.ai
- Open ~/.zshrc (or ~/.bashrc) in your editor and add: export EXA_API_KEY="your-key-here"
- Run source ~/.zshrc and restart Claude Code

Do not paste your API key in this chat.
If NotebookLM is missing:
NotebookLM is not connected. Install it from https://github.com/nicholasgriffintn/notebooklm-mcp and run nlm login to authenticate.
After showing the missing tools, end with:
Run /learn-toolkit:learn $ARGUMENTS again after fixing the above.
Verification gate: At least ONE search backend must be available: Tavily ✓ (MCP or CLI) OR Exa ✓. If both search backends are missing, the workflow STOPS here with setup instructions. Do NOT continue.
NotebookLM is recommended but optional — if missing, warn the user ("NotebookLM not found — skipping notebook creation and artifact generation. Research and local files will still be saved.") and continue in degraded mode (Phases 3–5 and 6b skip). Phase 6a (ASCII diagram) and local file saving always run.
Condition: HAS_CANDLEKEEP = true AND no --no-ck-read flag. Skip this phase entirely otherwise.
Consult ${CLAUDE_SKILL_DIR}/references/candlekeep-integration.md for detailed library scan logic and ck command patterns.
- ck items list --json — scan titles/descriptions for topic overlap with $ARGUMENTS
- For each match: ck items toc <id>, then ck items read "id:<relevant-pages>"
- Collect results in ck_sources[] with {id, title, content_snippet}

CandleKeep content is available as context for Phase 1 — use it to refine search queries (skip basics already covered in the library).
Verification gate: Library scanned (even if 0 matches). Proceed regardless.
Research $ARGUMENTS across all available backends simultaneously. Only use backends where the corresponding flag from Phase 0 is true.
If HAS_TAVILY_MCP (preferred for search):
- mcp__tavily__tavily_search(query="$ARGUMENTS", search_depth="advanced", include_raw_content=true)
- mcp__tavily__tavily_search(query="$ARGUMENTS tutorial guide 2025 2026", search_depth="advanced", include_raw_content=true)
- Extract top URLs with mcp__tavily__tavily_extract if that tool is available

If HAS_TAVILY_SKILLS (fallback, or complement to MCP):
Use tvly CLI commands directly (permitted by this skill's allowed-tools: Bash(tvly *)):
- tvly search "$ARGUMENTS" --depth advanced --max-results 10 --include-raw-content --json
- tvly search "$ARGUMENTS tutorial guide 2025 2026" --depth advanced --max-results 10 --json
- tvly extract "URL1" "URL2" "URL3" --json
- tvly research "$ARGUMENTS" --json (multi-source synthesis with citations — can replace manual search+extract when only the CLI is available)

If both MCP and skills are available, use MCP for search and the tvly CLI for extract/crawl/research (the CLI is better at bulk operations).
If HAS_EXA:
- mcp__exa__web_search_exa(query="$ARGUMENTS documentation")
- mcp__exa__web_search_exa(query="$ARGUMENTS architecture patterns examples")
- Crawl top results with mcp__exa__crawling_exa

Verification gate: At least 5 unique URLs collected across all backends. If fewer, run additional queries with broader terms before proceeding.
Write the workflow state file (replace $TOPIC_SLUG with the actual slug — topic lowercased, spaces to hyphens, special chars removed):

echo "{\"topic\":\"$ARGUMENTS\",\"notebooks\":[],\"total_sources\":0,\"candlekeep\":{\"read_ids\":[],\"write_id\":null},\"local_path\":\"$HOME/dev/learn-research/learn-$TOPIC_SLUG/\"}" > /tmp/learn-workflow-state.json
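The slug rule can be sketched in shell; the example topic is chosen to match the Next.js walkthrough later in this document:

```shell
topic="Next.js App Router"
# lowercase, spaces to hyphens, then keep only [a-z0-9-]
TOPIC_SLUG=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | tr ' ' '-' \
  | tr -cd 'a-z0-9-')
echo "$TOPIC_SLUG"   # prints: nextjs-app-router
```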
Verification gate: State file written successfully. Research summary covers at least 3 distinct subtopics. If not, return to Phase 1 with refined queries.
Always runs (not CandleKeep-specific). Save all research to ~/dev/learn-research/learn-<topic-slug>/. Create the directory if it doesn't exist:
mkdir -p "$HOME/dev/learn-research/learn-<topic-slug>"
~/dev/learn-research/learn-<topic-slug>/
README.md — index with TOC, metadata, date
research-summary.md — 500-word synthesis
sources/
01-official-docs.md
02-library.md — CandleKeep sources (if any)
03-tutorials.md
04-articles.md
Each source file contains: URL/source identifier, title, backend that provided it, and content snippet. The topic-slug is the topic lowercased, spaces replaced with hyphens, special chars removed.
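The layout above can be scaffolded up front; the slug value below is only an example, and the files start empty until Phase 2.5 fills them:

```shell
TOPIC_SLUG="nextjs-app-router"   # example slug
base="$HOME/dev/learn-research/learn-$TOPIC_SLUG"
mkdir -p "$base/sources"
# Empty placeholders matching the documented layout
touch "$base/README.md" "$base/research-summary.md"
touch "$base/sources/01-official-docs.md" "$base/sources/02-library.md" \
      "$base/sources/03-tutorials.md" "$base/sources/04-articles.md"
```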
Consult ${CLAUDE_SKILL_DIR}/references/candlekeep-integration.md for file structure details.
Report path to user: "Research saved to ~/dev/learn-research/learn-<topic-slug>/"
Note: If --ck-write is set, book.md will be added to this directory later in Phase 5.5.
Verification gate: Directory created, README.md and research-summary.md exist.
IMPORTANT: Max 50 sources per notebook. Track the count. Overflow creates a new notebook.
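The overflow rule reduces to a counting loop. A sketch with 65 mock sources (mirroring the Kubernetes example later, which yields 2 notebooks; all names are illustrative):

```shell
# Assign 65 mock sources to notebooks of at most 50 each.
MAX=50
count=0
batch=1
last=""
for i in $(seq 1 65); do
  if [ "$count" -ge "$MAX" ]; then
    batch=$((batch + 1))   # overflow: open a new notebook
    count=0
  fi
  last="notebook-$batch source-$i"
  count=$((count + 1))
done
echo "$last"   # prints: notebook-2 source-65
```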
Consult ${CLAUDE_SKILL_DIR}/references/notebooklm-loading.md for notebook creation strategy, source addition patterns, and overflow handling.
- Create the notebook: mcp__notebooklm-mcp__notebook_create(title="[Topic] - Core Learning")
- If there are library sources (ck_sources[] from Phase 0.5), add them as text sources with wait=false (before URLs). Use title "Library: [Item Title]". Max 3 items = negligible impact on the 50-source limit
- Add URL sources with wait=false (non-blocking); add the final source with wait=true (blocking)
- Update candlekeep.read_ids after each addition

Verification gate: Run mcp__notebooklm-mcp__studio_status and confirm all sources show status: "ready" or "completed". If sources are still processing, wait 10 seconds and check again (max 3 retries).
For each notebook, create all five artifacts in parallel (confirm=true on each).
Consult ${CLAUDE_SKILL_DIR}/references/artifact-generation.md for exact tool call signatures.
| Artifact | Type | Key params |
|---|---|---|
| Podcast | audio | deep_dive, language="he" |
| Infographic | infographic | bento_grid, portrait, language="he" |
| Mind Map | mind_map | language="he" |
| Flashcards | flashcards | medium, language="he" |
| Study Guide | report | report_type="Study Guide", language="he", implementation-focused focus_prompt |
The Study Guide is an implementation-focused artifact that includes: key concepts summary, step-by-step implementation guide with code examples, action items checklist, common pitfalls, and recommended next steps.
Verification gate: All 5 studio_create calls returned successfully with artifact IDs. If any failed, retry once before reporting failure.
Poll mcp__notebooklm-mcp__studio_status every 30 seconds until all artifacts are complete (max 10 polls / 5 minutes).
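The polling loop can be sketched generically; check_status is a hypothetical stub standing in for the real studio_status call, not part of any actual CLI:

```shell
# Poll up to 10 times, 30 s apart, until the artifact reports completion.
check_status() { echo "completed"; }   # stub: always done on the first poll
status=""
for attempt in $(seq 1 10); do
  status=$(check_status)
  if [ "$status" = "completed" ]; then
    break
  fi
  sleep 30
done
echo "final status: $status"
```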
Present final summary to user:
## Learning Package: [Topic]
### Notebooks
| # | Name | Sources | Link |
|---|------|---------|------|
### Artifacts
| Notebook | Type | Status | Title |
|----------|------|--------|-------|
### Research Summary
- X official docs, X tutorials, X articles, X repos
- Total unique sources: X
Verification gate: Summary table includes at least 1 notebook and 5 artifacts. All artifact statuses are reported (completed or failed, not in_progress).
Condition: HAS_CANDLEKEEP = true AND --ck-write flag present. Skip this phase entirely otherwise.
Consult ${CLAUDE_SKILL_DIR}/references/candlekeep-integration.md for book compilation template and ck command patterns.
- Compile ~/dev/learn-research/learn-<topic-slug>/book.md — includes executive summary, 3-5 chapters adapted to the topic, and a source index appendix
- ck items create "[Topic] - Research Compendium" --description "Auto-generated research compendium on [Topic]. Created by learn-toolkit on [date]." --no-session
- ck items put <id> --file ~/dev/learn-research/learn-<topic-slug>/book.md --no-session
- Record the new item ID in candlekeep.write_id and report "Research book uploaded to CandleKeep (item #ID)"

If ck items create or ck items put fails, warn the user and continue — the compiled book.md is still available in ~/dev/learn-research/.
Verification gate: Book file exists at ~/dev/learn-research/learn-<topic-slug>/book.md. If --ck-write was set, CandleKeep item was created (or failure was reported).
After the NotebookLM artifacts are complete, generate the other two skill outputs using the research already gathered. This gives the user the full 3-step learning experience in one workflow.
Phase 6a: ASCII diagram (same output as /learn-toolkit:visualize). Using the research summary from Phase 2, generate an ASCII diagram directly in the terminal. Pick the most appropriate diagram type for the topic.
Output the diagram inline (same as /learn-toolkit:visualize would produce). Use Unicode box-drawing characters, keep width under 100 chars.
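For instance, a layered-architecture topic might render as (illustrative only):

```
┌──────────────────────────────┐
│          Client (UI)         │
└──────────────┬───────────────┘
               │ HTTP
┌──────────────▼───────────────┐
│          API Server          │
└──────────────┬───────────────┘
               │ SQL
┌──────────────▼───────────────┐
│           Database           │
└──────────────────────────────┘
```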
Phase 6b: Interactive playground (same output as /learn-toolkit:playground). Do NOT generate the HTML yourself. Delegate to the playground:playground skill, passing the research summary as context.
Invoke: Skill(skill="playground:playground", args="<topic> — based on this research summary: <paste the 500-word research summary from Phase 2>")
The playground skill will generate the interactive HTML file and open it in the browser. Let it handle all HTML creation, styling, and file output.
Present the complete learning package:
## Complete Learning Package: [Topic]
### Step 1: Quick Visual (Terminal)
[ASCII diagram rendered above]
### Step 2: Interactive Explorer (Browser)
File: ~/dev/learn-research/playground-<topic-slug>.html (opened in browser)
### Step 3: Deep Learning (NotebookLM)
| # | Notebook | Sources | Link |
|---|----------|---------|------|
| Notebook | Type | Status | Title |
|----------|------|--------|-------|
### CandleKeep
| Direction | Items | Details |
|-----------|-------|---------|
| Read | X | "Doc A", "Doc B", "Doc C" |
| Write | X | Item #ID - "[Topic] - Research Compendium" |
(Omit this section entirely if HAS_CANDLEKEEP = false)
### Local Files
Research saved to: ~/dev/learn-research/learn-<topic-slug>/
### Research Summary
- X official docs, X library sources, X tutorials, X articles, X repos
- Total unique sources: X
Verification gate: All three steps produced output: ASCII diagram rendered, playground skill invoked and HTML opened, NotebookLM artifacts complete.
After presenting the final summary, ask the user:
Would you like to keep the research summary? I can save it with a proper name (e.g.,
[Topic]-research-summary.md) and add it as a source to your NotebookLM notebook.
If the user says yes:
- Copy ~/dev/learn-research/learn-<topic-slug>/research-summary.md to the current working directory with a descriptive name (e.g., [Topic]-Research-Summary-[YYYY-MM-DD].md)
- If HAS_NOTEBOOKLM = true, add the summary as a text source to the notebook: mcp__notebooklm-mcp__source_add(notebook_id=<id>, source_type="text", text=<summary content>)

If the user declines, skip this phase.
User says: /learn-toolkit:learn Next.js App Router
Actions:
- Playground HTML saved to /tmp/playground-nextjs-app-router.html

Result: Complete learning package — ASCII diagram + interactive playground + 1 notebook, 19 sources, 5 artifacts
User says: /learn-toolkit:learn Kafka event streaming (Tavily MCP not configured, but tvly CLI installed)
Actions:
- tvly --version found ✓, auth check passes ✓. ToolSearch finds Tavily MCP ✗, Exa ✓, NotebookLM ✓
- tvly search "Kafka event streaming" --depth advanced --max-results 10 --json
- tvly search "Kafka event streaming tutorial guide 2025 2026" --depth advanced --max-results 10 --json
- tvly extract on top URLs for full content
User says: /learn-toolkit:learn Kafka event streaming (no Tavily/Exa configured)
Actions:
- tvly not found. ToolSearch finds Tavily MCP ✗, Exa ✗, NotebookLM ✓
- Workflow STOPS with setup instructions for Tavily and Exa
- User installs tvly or sets env vars, restarts Claude Code, runs /learn-toolkit:learn Kafka event streaming again
User says: /learn-toolkit:learn Kubernetes
Actions:
Result: Complete learning package — ASCII diagram + playground + 2 notebooks, 65 sources, 10 artifacts
User says: /learn-toolkit:learn GraphQL federation --language en
Actions: Same workflow, but all NotebookLM artifacts use language="en" instead of "he"
Result: English-language learning package
| Error | Cause | Action |
|---|---|---|
| Tavily not found (neither MCP nor CLI) | Server not configured, CLI not installed, or API key missing | STOP workflow. Show Tavily setup instructions (CLI or MCP). Do NOT fall back to WebSearch |
| Tavily CLI auth error (exit code 3) | tvly installed but not authenticated | Run tvly login or set TAVILY_API_KEY env var |
| Exa MCP not found in ToolSearch | Key not set, or key set after Claude Code started (MCP not loaded) | STOP workflow. Check $EXA_API_KEY env var first: if set → tell user to restart Claude Code; if not set → show Exa setup instructions. Do NOT fall back to WebSearch |
| NotebookLM not found in ToolSearch | MCP not configured | STOP workflow. Show NotebookLM setup instructions |
| NotebookLM auth expired | Token expired | Run nlm login via Bash (timeout 120s), then retry |
| Source add fails for a URL | URL blocked or invalid | Log the URL, skip it, continue with remaining sources |
| Source limit (50) hit | Too many sources | Create new notebook with next-tier name, continue adding |
| Studio generation fails | NotebookLM internal error | Retry once. If still fails, report in summary table as "Failed" |
| State file write fails | /tmp permission issue | Continue without state tracking, use in-memory counting |
| ck not found | CLI not installed | Set HAS_CANDLEKEEP = false, skip silently |
| ck items list fails | Auth issue | Warn user, set HAS_CANDLEKEEP = false, continue |
| ck items read fails | Bad item | Skip that item, continue with remaining |
| ck items create/put fails | Permission issue | Warn, skip write; book.md is still in ~/dev/learn-research/ |