From ai-brain-starter
Ingests recent Notion database entries or page subtrees into the vault as daily markdown files for querying and knowledge graph use. Triggered by `/ingest-notion <id> [--depth N]` or ingest/sync requests.
npx claudepluginhub adelaidasofia/ai-brain-starter

/ingest-notion <database-id-or-page-id> [--depth N]

This skill uses the workspace's default tool permissions.
Ingests recent Notion database entries or a page subtree into the vault as markdown the graphify pipeline can read and the rest of the AI Brain Starter substrate (decision log, session-close cascade, hooks) can act on.
This is the third connector in the ingest-* pattern. Adding the next external source means writing a new normalizer, not a new architecture.
Use for: `/ingest-notion <database-id-or-page-id>` (with or without `--depth N`).

Do NOT use for:

- Takes a single `<database-id-or-page-id>` argument. Notion IDs are 32 hex chars, with or without dashes.
- Databases are read with `query_database`; pages get walked with `get_page` plus `get_block_children` recursively to `--depth`.
- Output lands at `External Inputs/Notion/<database-or-page-slug>/<YYYY-MM-DD>.md`.

This skill calls a Notion MCP for read access (`query_database`, `get_page`, `get_block_children`). If no Notion MCP is connected to your Claude Code install, the skill prints a clear error naming the missing MCP along with instructions for connecting one (the canonical reference is the official `@modelcontextprotocol/server-notion` package or the Notion MCP shipped by makenotion). The skill does not silently fall back to the public Notion API; it surfaces the gap so you can wire the MCP once and run the skill cleanly.
If the MCP is connected but the integration token lacks access to the requested database or page, the call returns 404 or 403 and the skill reports the access issue.
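The database-vs-page probe that drives this behavior can be sketched as below. Everything here is hypothetical: `mcp.call` stands in for the MCP tool calls, and `NotFound` stands in for the 404 the API returns when an id is not a queryable database.

```python
class NotFound(Exception):
    """Stand-in for the MCP surfacing a 404 on a non-database id."""

def resolve_root(mcp, root_id: str) -> str:
    """Probe the root id: try it as a database first, fall back to a page.
    `mcp.call` is a hypothetical wrapper around the Notion MCP tools."""
    try:
        mcp.call("query_database", database_id=root_id)
        return "database"
    except NotFound:
        mcp.call("get_page", page_id=root_id)
        return "page"
```

If the fallback `get_page` also fails, the error propagates and the skill reports it as an access issue rather than guessing.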
The skill is a thin orchestrator. The actual normalization runs in Python at ~/.claude/skills/ingest-notion/ingest.py (or the public-repo path). The skill assembles the Notion MCP tool calls, hands the raw payloads to ingest.py as JSON on stdin, and the script writes the file.
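The stdin handoff can be sketched end to end. The inline child script below stands in for the real `ingest.py`, and the payload shape is illustrative, not the actual contract.

```python
import json, subprocess, sys, tempfile, textwrap

# Both sides of the stdin handoff, sketched. The inline child script stands
# in for ~/.claude/skills/ingest-notion/ingest.py; the payload shape is
# illustrative, not the real contract.
child_src = textwrap.dedent("""\
    import json, sys
    data = json.load(sys.stdin)   # everything arrives as one JSON document
    print(f"Wrote {len(data['pages'])} page(s) to {data['out']}")
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(child_src)
    script = f.name

payload = {"pages": [{"id": "abc123"}],
           "out": "External Inputs/Notion/demo/2025-01-01.md"}
proc = subprocess.run([sys.executable, script],
                      input=json.dumps(payload),
                      capture_output=True, text=True)
print(proc.stdout.strip())
```

Passing the payload on stdin keeps the orchestrator thin: no temp files for the data, no argument-length limits, one JSON document in, one summary line out.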
When invoked:

1. Parse arguments: `<id>` (required), `--depth N` (optional, default 1).
2. If `query_database` succeeds, treat the root as a database. Else fall back to `get_page`.
3. Database mode: page through `query_database`, collecting all entries.
4. Page mode: `get_page` for the root, then recursively `get_block_children` to `--depth`, capping at 5.
5. Hand the raw payloads to `ingest.py` as JSON on stdin.
6. `ingest.py` writes the vault file and prints a summary.

The vault file at `External Inputs/Notion/<slug>/<YYYY-MM-DD>.md` has frontmatter:
```yaml
---
type: external-input
source: notion
database_id: <uuid-with-dashes>    # set when root is a database
page_id: <uuid-with-dashes>        # set when root is a single page
root_kind: database                # or "page"
page_count: <int>
ingested_at: <ISO 8601 timestamp>
entity_ids:
  notion: [<uuid>, ...]            # every Notion page id captured
---
```
Body is grouped by item, each item with title, URL, last_edited, and a body excerpt. For database mode the items are sorted by last_edited_time descending. For page mode the body is rendered as a nested block tree to the requested depth.
The entity_ids.notion array conforms to the cross-type frontmatter contract so downstream consumers (graph builders, fact aggregators, agents) can join Notion pages to their source records without re-parsing the body.
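A downstream consumer only needs the frontmatter to do that join. A minimal sketch, assuming plain regex parsing of the flow-style `notion: [...]` list (a real consumer might use a YAML parser instead):

```python
import re

# Hypothetical downstream consumer: pull entity_ids.notion out of a vault
# file's frontmatter to join Notion pages to source records, without
# re-parsing the markdown body.
def notion_ids(markdown: str) -> list[str]:
    match = re.match(r"---\n(.*?)\n---", markdown, re.DOTALL)
    if not match:
        return []
    front = match.group(1)
    ids = re.search(r"notion:\s*\[([^\]]*)\]", front)
    return [i.strip() for i in ids.group(1).split(",")] if ids else []
```

Because the contract is shared across external-input types, the same extraction works for any connector that emits `entity_ids`.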
Re-running /ingest-notion <id> --depth N on the same calendar day overwrites the same vault file. The file path is keyed by date and root slug, so the same source produces the same path across re-runs. Re-runs do not duplicate; they refresh.
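The path keying above can be sketched in a few lines; the slug derivation is illustrative (the real skill derives it from the Notion root's title):

```python
from datetime import date
from pathlib import Path
from typing import Optional

def vault_path(slug: str, day: Optional[date] = None) -> Path:
    """Same source + same day -> same path, so re-runs overwrite rather
    than duplicate. Illustrative sketch of the skill's date-keyed layout."""
    day = day or date.today()
    return Path("External Inputs/Notion") / slug / f"{day:%Y-%m-%d}.md"
```

Keying on `(slug, date)` rather than a timestamp is what makes the ingest idempotent within a day while still leaving one file per day as history.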
A successful run produces:

- the vault file at `External Inputs/Notion/<slug>/<date>.md`
- a summary line: `Wrote N page(s) to <path>`

If the database resolves but contains no entries, the skill writes the file anyway with `page_count: 0` so re-runs stay idempotent and the absence is recorded.