Searches across Notion pages/databases and Google Drive to find relevant knowledge base documents. Activates when the user wants to search the knowledge base, find a document, look something up, or asks 'what do we have on [topic]?' Scores results by relevance and extracts content previews for downstream answer synthesis.
Search Notion and Google Drive to find relevant documents, score them for relevance, and extract content for answer synthesis or document listing. This skill powers both /founder-os:kb:ask (full answer with citations) and /founder-os:kb:find (ranked document listing with previews).
Execute searches in a fixed order. Notion is the required primary source; Google Drive is optional and additive.
Pipeline stages:
1. Generate query variants from the user's question.
2. Search Notion (required primary source).
3. Search Google Drive (optional, additive).
4. Score results for relevance.
5. Extract content and format /founder-os:kb:ask or /founder-os:kb:find output.

Always complete Notion search before starting Drive search. If the gws CLI is unavailable or not authenticated, skip stage 3 and continue the pipeline -- never error on a missing optional source.
Generate 2-3 query variants from the user's original question to maximize recall. Different phrasings surface different results because Notion and Drive use simple keyword matching, not semantic search.
Variant generation rules:
Constraints:
Consult ${CLAUDE_PLUGIN_ROOT}/skills/kb/knowledge-retrieval/references/search-strategies.md for detailed query variant examples and source-specific search tips.
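The variant-generation step can be sketched as a small heuristic. This is an illustrative sketch only -- the stopword list and the keyword heuristics are assumptions, not part of the skill spec:

```python
import re

# Minimal stopword list; illustrative, not exhaustive.
STOPWORDS = {"what", "do", "we", "have", "on", "the", "a", "an", "is", "our", "about", "how"}

def generate_variants(question: str, max_variants: int = 3) -> list[str]:
    """Return the original query plus up to two keyword-focused variants.

    Different phrasings surface different results because Notion and
    Drive use keyword matching, not semantic search.
    """
    words = re.findall(r"[a-z0-9]+", question.lower())
    keywords = [w for w in words if w not in STOPWORDS]
    variants = [question]
    # Variant 2: bare keywords only (drops filler words).
    if keywords and " ".join(keywords) != question.lower():
        variants.append(" ".join(keywords))
    # Variant 3: the two most specific (longest) keywords.
    top = sorted(keywords, key=len, reverse=True)[:2]
    if len(top) == 2:
        variants.append(" ".join(top))
    # De-duplicate while preserving order.
    seen, out = set(), []
    for v in variants:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out[:max_variants]
```

Keeping the user's original phrasing as variant 1 guarantees recall never drops below a single-query baseline.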
Use the Notion MCP notion-search tool to find pages and databases matching each query variant.
Search execution:
- Call the notion-search tool with each query variant.

Notion-specific patterns:
Result capture format per item:
source: "notion"
title: [page or database entry title]
url: [Notion URL]
content_snippet: [first 500 chars of page body]
last_edited: [ISO date from last_edited_time]
content_type: "page" | "database_entry"
Use the gws CLI via Bash to search and read Google Drive documents. Drive is optional -- if the gws CLI is unavailable or not authenticated, set drive_status: "unavailable" and proceed with Notion-only results. Check availability with which gws.
Search execution:
- Run gws drive files list --params '{"q":"fullText contains '\''[query]'\''","pageSize":20,"fields":"files(id,name,mimeType,modifiedTime,webViewLink)"}' --format json with each query variant.
- Export file content with gws drive files export --params '{"fileId":"FILE_ID","mimeType":"text/plain"}' --output /tmp/kb-drive-[fileId].txt, then read the output file.

Drive-specific patterns:
- Mark truncated: true in the result when file content is cut off.

Result capture format per item:
source: "drive"
title: [filename]
url: [Drive URL]
content_snippet: [first 500 chars of file content]
last_modified: [ISO date]
content_type: "doc" | "sheet" | "pdf" | "text"
Score each result on three factors. Total score range: 0-100.
Keyword density (0-40 points). Count how many of the user's key terms (nouns, verbs, adjectives from the original question) appear in the result's title and content snippet combined.
| Match ratio | Score |
|---|---|
| All key terms present | 35-40 |
| Most key terms (>70%) | 25-34 |
| Some key terms (40-70%) | 15-24 |
| Few key terms (<40%) | 5-14 |
| No key terms | 0-4 |
Exact phrase matches (consecutive terms in order) earn a 5-point bonus within this factor, capped at 40.
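The keyword-density factor, including the capped phrase bonus, can be sketched as follows. The specific in-band values (38, 30, 20, 10, 2) are illustrative picks from each band in the table above, not mandated by the spec:

```python
def keyword_density_score(key_terms: list[str], title: str, snippet: str) -> int:
    """Score factor 1 (0-40): key-term coverage plus exact-phrase bonus."""
    text = f"{title} {snippet}".lower()
    terms = [t.lower() for t in key_terms]
    hits = sum(1 for t in terms if t in text)
    ratio = hits / len(terms) if terms else 0.0
    if ratio == 1.0:
        base = 38          # all key terms present: 35-40 band
    elif ratio > 0.7:
        base = 30          # most key terms: 25-34 band
    elif ratio >= 0.4:
        base = 20          # some key terms: 15-24 band
    elif ratio > 0:
        base = 10          # few key terms: 5-14 band
    else:
        base = 2           # no key terms: 0-4 band
    # Exact phrase (all terms consecutive, in order) earns +5, capped at 40.
    phrase = " ".join(terms)
    if phrase and phrase in text:
        base = min(base + 5, 40)
    return base
```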
Title match (0-30 points). Measure how closely the result title matches the user's query intent.
| Match level | Score |
|---|---|
| Title contains the exact query phrase | 28-30 |
| Title contains most query keywords | 20-27 |
| Title contains some query keywords | 10-19 |
| Title is tangentially related | 3-9 |
| Title has no relationship to query | 0-2 |
Title matches are weighted heavily because document titles in a knowledge base are intentionally descriptive.
Recency (0-30 points). Score based on how recently the document was last edited.
| Age | Score |
|---|---|
| Edited within last 7 days | 28-30 |
| Edited within last 30 days | 20-27 |
| Edited within last 90 days | 12-19 |
| Edited within last 365 days | 5-11 |
| Older than 365 days | 0-4 |
Recency matters because knowledge bases contain living documents. A recently edited document is more likely to be current and maintained.
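The age buckets above map directly to a small scoring function. The representative in-band values returned here are illustrative; the spec only fixes the band boundaries:

```python
from datetime import date

def recency_score(last_edited: date, today: date) -> int:
    """Score factor 3 (0-30) from document age in days."""
    age = (today - last_edited).days
    if age <= 7:
        return 29   # edited within last 7 days: 28-30 band
    if age <= 30:
        return 24   # within last 30 days: 20-27 band
    if age <= 90:
        return 15   # within last 90 days: 12-19 band
    if age <= 365:
        return 8    # within last 365 days: 5-11 band
    return 2        # older than 365 days: 0-4 band
```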
Composite score: Sum all three factors. Rank results by composite score descending.
Consult ${CLAUDE_PLUGIN_ROOT}/skills/kb/knowledge-retrieval/references/search-strategies.md for scoring formula details, worked examples, and edge case adjustments.
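Composite scoring and ranking reduce to a sum and a sort. This sketch assumes each result already carries a score_breakdown matching the output schema this skill emits (keyword_density 0-40, title_match 0-30, recency 0-30):

```python
def rank_results(results: list[dict]) -> list[dict]:
    """Sum the three factor scores into a 0-100 composite and rank descending."""
    for r in results:
        b = r["score_breakdown"]
        r["score"] = b["keyword_density"] + b["title_match"] + b["recency"]
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    for rank, r in enumerate(ranked, start=1):
        r["rank"] = rank   # 1-based rank, as in the output format
    return ranked
```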
For top-scoring results (top 5 for /founder-os:kb:ask, top 10 for /founder-os:kb:find), extract the most relevant content section.
Extraction rules:
- Attribute each extract as [Source: {title} | {source} | Last edited: {date}].

For /founder-os:kb:find results, generate a 150-character preview of each document.
Preview rules:
- Strip markdown formatting (**bold**, # headings, or [links](url)).

Handle missing or failing sources without erroring:
- If Drive is skipped, record sources_searched: ["notion"] in the output. Do not mention Drive's absence to the user unless no results were found at all -- then suggest installing the gws CLI and running gws auth login for broader coverage.

Return search results in this format for consumption by commands and the answer-synthesis skill:
search_results:
query_original: [user's question]
query_variants: [list of 2-3 variants used]
sources_searched: ["notion", "drive"] or ["notion"]
total_results: [count]
results:
- rank: 1
score: [0-100]
source: "notion" | "drive"
title: [document title]
url: [link]
content_type: [page | database_entry | doc | sheet | pdf | text]
last_edited: [ISO date]
preview: [150-char preview for /founder-os:kb:find]
extracted_content: [up to 3000 chars for /founder-os:kb:ask]
score_breakdown:
keyword_density: [0-40]
title_match: [0-30]
recency: [0-30]
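The 150-char preview field in the schema above can be produced with a small helper. The markdown-stripping regexes here are a minimal sketch covering only the markers named in the preview rules, not a full markdown parser:

```python
import re

def make_preview(content: str, limit: int = 150) -> str:
    """Generate a plain-text preview for /founder-os:kb:find results."""
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", content)  # [links](url) -> link text
    text = re.sub(r"(\*\*|__|\*|_|`)", "", text)             # bold/italic/code markers
    text = re.sub(r"^#+\s*", "", text, flags=re.MULTILINE)   # # headings
    text = re.sub(r"\s+", " ", text).strip()                 # collapse whitespace
    if len(text) <= limit:
        return text
    return text[: limit - 3].rstrip() + "..."
```

Truncating to limit - 3 before appending the ellipsis keeps every preview at or under 150 characters.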