Answers questions from Google Drive documents and generates summaries with inline citations. Activates when the user wants to ask about a document, summarize a Drive file, extract information, or asks 'what does this doc say about [topic]?' Handles multi-format extraction, answer synthesis, confidence assessment, and graceful no-answer responses.
This skill uses the workspace's default tool permissions.
Extract content from Google Drive documents and synthesize answers with inline citations, or generate structured summaries. This skill serves two modes: Q&A mode (for /founder-os:drive:ask) produces cited answers to user questions from one or more Drive files, and summarization mode (for /founder-os:drive:summarize) produces structured summaries at quick or detailed depth.
This skill handles content extraction, answer synthesis, citation formatting, confidence assessment, summary generation, and conflict reconciliation. File discovery, search, and relevance scoring are the responsibility of the drive-navigation skill. The command layer (/founder-os:drive:ask, /founder-os:drive:summarize) orchestrates the handoff between search and document QA.
Extract text content from Google Drive files before performing any analysis. Handle each file format according to its extraction rules.
Google Docs: Extract as markdown text. Preserve heading hierarchy (H1-H4), bullet lists, numbered lists, bold/italic emphasis, and table structure. Strip comments, suggestions, and revision history.
Google Sheets: Extract as structured tabular data. Detect header rows by checking the first row for non-numeric, label-like values. Preserve sheet/tab names as section headings. For multi-tab spreadsheets, extract each tab as a separate section labeled with the tab name. Represent tables in markdown pipe format.
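The Sheets rules above can be sketched roughly as follows (a minimal illustration, not part of the skill contract; the function names and the "majority of cells are non-numeric" threshold are assumptions):

```python
def looks_like_header(row):
    """Heuristic: a header row is mostly non-numeric, label-like values."""
    def is_label(cell):
        cell = str(cell).strip()
        return bool(cell) and not cell.replace(".", "", 1).lstrip("-").isdigit()
    labels = sum(1 for cell in row if is_label(cell))
    return labels >= max(1, len(row) // 2 + 1)

def tab_to_markdown(tab_name, rows):
    """Render one spreadsheet tab as a markdown pipe table under its tab name."""
    lines = [f"## {tab_name}"]
    if rows and looks_like_header(rows[0]):
        header, body = rows[0], rows[1:]
    else:
        # No clear header row: fall back to column letters A, B, C, ...
        header = [chr(ord("A") + i) for i in range(len(rows[0]))] if rows else []
        body = rows
    lines.append("| " + " | ".join(str(c) for c in header) + " |")
    lines.append("|" + "---|" * len(header))
    for row in body:
        lines.append("| " + " | ".join(str(c) for c in row) + " |")
    return "\n".join(lines)
```

A real implementation would read rows from the Drive/Sheets export; here rows are plain lists.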
PDFs: Extract available text content. Note that scanned PDFs without embedded text will yield limited or no content -- report this clearly rather than guessing. Do not attempt OCR beyond what the Drive API provides natively.
Other formats (plain text, CSV, markdown files stored in Drive): Extract raw text content directly.
Limit extraction to 3000 characters per source document. When a document exceeds this cap, identify the most relevant section by scanning headings and subheadings for keywords matching the user's query or the summarization target. Extract the best-matching section plus surrounding context up to the 3000-character limit. When no heading match exists, extract from the beginning of the document.
For long documents, scan the heading structure first before extracting body text. Build a heading map (heading text + approximate position) and select the section most relevant to the query. This heading map also supports section-level citation (see Citation System below).
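A minimal sketch of the heading-map approach (hypothetical helper names; the keyword-overlap scoring is one possible relevance heuristic, not a mandated one):

```python
import re

CAP = 3000  # per-document extraction limit, in characters

def heading_map(markdown_text):
    """Map each markdown heading (H1-H4) to its character offset."""
    return [(m.group(2).strip(), m.start())
            for m in re.finditer(r"^(#{1,4})\s+(.+)$", markdown_text, re.MULTILINE)]

def best_section(markdown_text, query_keywords):
    """Pick the section whose heading matches the most query keywords,
    then extract it plus following context up to the cap."""
    headings = heading_map(markdown_text)
    keywords = [k.lower() for k in query_keywords]
    best = None
    for i, (title, offset) in enumerate(headings):
        score = sum(1 for k in keywords if k in title.lower())
        if score and (best is None or score > best[0]):
            best = (score, i, offset)
    if best is None:
        # No heading match: extract from the beginning of the document.
        return markdown_text[:CAP]
    _, i, offset = best
    end = headings[i + 1][1] if i + 1 < len(headings) else len(markdown_text)
    section = markdown_text[offset:end]
    # Pad with surrounding context if the section leaves room under the cap.
    return markdown_text[offset:offset + CAP] if len(section) < CAP else section[:CAP]
```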
Process every question through three sequential phases when operating in Q&A mode (/founder-os:drive:ask). Do not skip phases or reorder them.
1. Assess: evaluate the extracted content before attempting to construct an answer.
2. Synthesize: construct the answer from the assessed sources.
3. Format: structure the final output for delivery.
Use numbered inline references and a citation block. Every Q&A answer must include at least one citation. See ${CLAUDE_PLUGIN_ROOT}/skills/drive/document-qa/references/citation-formats.md for full format specification, examples, and edge cases.
Insert bracketed numbers after the sentence or clause they support:
The Q3 revenue target is $2.4M. [1] Marketing owns the lead generation
milestone, while Sales owns the conversion target. [2]
Rules: place each citation after the punctuation of the sentence or clause it supports, and stack citations when multiple sources support a single claim: [1][3].

Append a citation block at the end of every answer:
---
Sources:
[1] "Document Title" - https://docs.google.com/document/d/...
[2] "Spreadsheet Title > Tab Name" - https://docs.google.com/spreadsheets/d/...
[3] "Document Title > Section Heading" - https://drive.google.com/file/d/...
For Google Sheets, include the tab name after the spreadsheet title. For long documents where a specific section was cited, include the section heading after the document title.
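Assembling the citation block might look like this (an illustrative sketch; the dict keys are assumptions, and "section" stands in for either a tab name or a heading):

```python
def citation_block(sources):
    """sources: list of dicts with 'title', 'url', and optional 'section'
    (tab name for Sheets, section heading for long documents)."""
    lines = ["---", "Sources:"]
    for n, src in enumerate(sources, start=1):
        title = src["title"]
        if src.get("section"):
            title = f'{title} > {src["section"]}'
        lines.append(f'[{n}] "{title}" - {src["url"]}')
    return "\n".join(lines)
```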
Assign one of three confidence levels to every Q&A answer. Display the level prominently at the top of the answer output.
High: assign when ALL of the following are true: the answer is stated directly in the extracted content, multiple sources corroborate it (or the single source is an official shared document updated within the last 30 days), and no unresolved conflicts remain.
Medium: assign when ANY of the following are true: only a single source supports the answer, the answer is inferred rather than stated directly, or the freshness of the sources is uncertain.
Low: assign when ANY of the following are true: the sources are limited or potentially outdated, a conflict between sources remains unresolved, or the sources cover the question only partially.
When confidence is Low, prepend a warning: "Low confidence -- this answer is based on limited or potentially outdated sources. Verify before acting on it."
For detailed scoring criteria with worked examples, see ${CLAUDE_PLUGIN_ROOT}/skills/drive/document-qa/references/citation-formats.md.
Generate structured summaries when operating in summarization mode (/founder-os:drive:summarize). Support two depth levels.
Quick depth: produce a concise summary consisting of a short executive summary paragraph and a list of key points.
For Sheets: replace "Key points" with Data highlights -- surface the key metrics, totals, outliers, or trends visible in the data. Include specific numbers.
Detailed depth: produce a full section-by-section analysis, summarizing each major section of the document under its own heading.
For Sheets with multiple tabs: summarize each tab as a separate section under its tab name.
When the --output flag is provided, write the summary to a local file using the template at ${CLAUDE_PLUGIN_ROOT}/templates/summary-template.md. When no --output flag is provided, deliver the summary directly in chat.
All summaries include a metadata header: document title, Drive URL, last modified date, and file type.
Activate when the assessment phase determines that extracted content is insufficient to answer the question. Never fabricate an answer from general knowledge when Drive documents lack coverage.
When no answer can be found, say so plainly and suggest next steps, for example: "I couldn't find an answer to that in the documents searched. Try /founder-os:drive:search with different keywords, or specify a folder with --in to narrow the search."
When two or more sources provide different answers to the same question, reconcile rather than ignore the conflict.
Never silently choose one source over another when a conflict exists. Always disclose the conflict, even when a precedence rule resolves it.
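One possible shape for reconciliation, assuming recency as the precedence rule (the skill does not mandate this particular rule; disclosing the conflict is the required part):

```python
def reconcile(answers):
    """answers: list of (value, source_title, modified_iso_date) tuples.
    If all values agree, return the value with no note; otherwise build a
    disclosure note and prefer the most recently modified source (one
    plausible precedence rule -- an assumption, not the skill's mandate)."""
    values = {a[0] for a in answers}
    if len(values) == 1:
        return answers[0][0], None
    winner = max(answers, key=lambda a: a[2])  # ISO dates sort lexicographically
    note = "Conflict: " + "; ".join(
        f'"{t}" ({d}) says {v!r}' for v, t, d in answers
    ) + f'. Using the most recent source, "{winner[1]}".'
    return winner[0], note
```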
**Confidence:** [High | Medium | Low]
[Answer text with inline citations]
---
Sources:
[1] "Source Title" - Drive URL
[2] "Source Title > Section" - Drive URL
[Optional: staleness warning]
[Optional: partial coverage note]
**Summary of "[Document Title]"**
**Source:** [Drive URL]
**Last Modified:** [date]
**Type:** [Google Doc | Google Sheet | PDF | ...]
[Executive summary]
**Key Points:**
- [point 1]
- [point 2]
- [point 3]
[Optional: Section breakdown for detailed depth]
When logging activity to the [FOS] Activity Log Notion database, populate the Company relation when the source file sits in a known client folder or its filename contains a client name from [FOS] Companies. Use the detection rules defined in the drive-navigation skill.
When only one source is relevant, cite it and assign Medium confidence at best (never High for single-source answers unless the source is an official shared document updated within the last 30 days).
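The single-source cap can be expressed as a small guard (a sketch; the "official" flag and how it is detected are assumptions left to the caller):

```python
from datetime import datetime, timedelta, timezone

def cap_single_source_confidence(level, sources):
    """Cap single-source answers at Medium unless the lone source is an
    official shared document updated within the last 30 days."""
    if len(sources) != 1 or level != "High":
        return level
    src = sources[0]
    fresh = datetime.now(timezone.utc) - src["modified"] <= timedelta(days=30)
    return "High" if (src.get("official") and fresh) else "Medium"
```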
When a Google Sheet lacks a clear header row, treat the first row as data rather than headers. Note in the answer: "Spreadsheet lacks headers; column references use column letters (A, B, C)."
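Column letters beyond Z follow the usual bijective base-26 scheme; a small helper for reference (hypothetical, not part of the spec):

```python
def column_letter(index):
    """0-based column index -> spreadsheet column letter (0 -> A, 26 -> AA)."""
    letters = ""
    index += 1
    while index:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters
```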
When a spreadsheet exceeds the 3000-character extraction cap, prioritize the tab or range most relevant to the query. Cite the specific tab name in the citation block.
When the question is too vague to match document content effectively, ask for clarification: "Could you be more specific? For example: [2-3 concrete question suggestions based on the document titles in scope]."
When a file referenced in search results is no longer accessible, note it in the citation: [1] "Document Title" - [file deleted or inaccessible]. Do not count inaccessible files toward confidence scoring.