Ask a complex question about the memex vault — search, cross-reference, and synthesize from memos and observations. Use for "why did we...", "what was the reasoning behind...", cross-project pattern questions, or when simple keyword search isn't enough.

<example>
Context: User asks about a past architectural decision
User: "Why did we reject the Honcho data model?"
Assistant: Runs `memex ask` with thorough depth, synthesizes from observations and memos.
<commentary>
Complex "why" question spanning multiple sessions — ask retrieves observations (atomic facts) plus document context, enabling cross-referencing.
</commentary>
</example>

<example>
Context: User asks about cross-project patterns
User: "What patterns do we use for config management across projects?"
Assistant: Runs `memex ask`, finds observations from the memex, my-app, and research-project projects.
<commentary>
Cross-project pattern question — ask searches observations across all projects and merges results via RRF scoring.
</commentary>
</example>
From memex: `npx claudepluginhub linxule/memex-plugin --plugin memex`
Run deep retrieval via the CLI:

```shell
memex ask "<question>"
```

For thorough (semantic + keyword) retrieval:

```shell
memex ask "<question>" --depth=thorough
```
After receiving results:

- Read the `content` field of each result directly.
- Use `observations` for atomic facts that answer the question.
- Check `query_info.gaps` for what retrieval did not cover.
- Run `/memex:load <path>` when a source memo deserves deeper reading.
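The reading order above can be sketched as a small post-processing step. This is a hypothetical sketch: the skill names the fields `content`, `observations`, and `query_info.gaps`, but the surrounding payload shape (a top-level `results` list, a `path` per result) is an assumption, not documented output.

```python
# Hypothetical sketch — the field names come from the skill text, but the
# overall payload shape ("results" list, per-result "path") is assumed.

def digest_answer(payload: dict) -> dict:
    """Collect the pieces worth reading after a `memex ask` call."""
    facts, sources = [], []
    for result in payload.get("results", []):
        # `observations` carry atomic facts that answer the question.
        facts.extend(result.get("observations", []))
        # A result's source path is a candidate for `/memex:load <path>`.
        if result.get("path"):
            sources.append(result["path"])
    return {
        "facts": facts,
        "sources": sources,
        # `query_info.gaps` flags what retrieval did not cover.
        "gaps": payload.get("query_info", {}).get("gaps", []),
    }

# Illustrative sample payload, not real memex output.
sample = {
    "results": [
        {
            "content": "...",
            "observations": ["atomic fact one"],
            "path": "memos/example.md",
        },
    ],
    "query_info": {"gaps": ["no memo covers topic Y"]},
}
print(digest_answer(sample))
```

If `observations` answer the question outright, stop there; otherwise use the collected `sources` to decide which memos merit a full `/memex:load`.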