Write knowledge to markdown AND sync to PKB. MUST invoke - do not write markdown directly.
Taxonomy note: This skill provides domain expertise (HOW) for knowledge capture and persistence. See [[TAXONOMY.md]] for the skill/workflow distinction.
Persist knowledge to markdown + PKB. Both writes required for semantic search.
$ACA_DATA contains ONLY semantic memory - timeless truths, always up to date:
- Episodic records go to tasks (.data/tasks/, managed via tasks MCP), NOT to $ACA_DATA.
- $ACA_DATA reflects only what's current.

PKB is the universal index. Write to your primary storage AND to PKB for semantic search retrieval.
| What | Primary Storage | Also Sync To |
|---|---|---|
| Epics/projects | PKB (type="epic" or type="project") | PKB index |
| Tasks/issues | GitHub Issues (gh issue create) | PKB index |
| Durable knowledge | $ACA_DATA/ markdown files | PKB index |
| Session findings | Task body updates | PKB index |
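The dual-write rule in the table above can be sketched as follows. This is a minimal illustration, not the skill's implementation: `pkb_create_memory` is a hypothetical stand-in for the real `mcp__pkb__create_memory` tool, injected as a callable so the sketch stays self-contained.

```python
from pathlib import Path

# Hedged sketch of the dual-write rule: every durable note lands in its
# primary storage AND the PKB index. `pkb_create_memory` is a stand-in
# for the real mcp__pkb__create_memory tool.
def persist(path: Path, text: str, pkb_create_memory) -> str:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)  # first write: primary storage (markdown file)
    # second write: index the same content in PKB for semantic search
    return pkb_create_memory(title=path.stem, body=text)
```

Skipping either write breaks retrieval: the file alone is invisible to semantic search, and the PKB entry alone is not the source of truth.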
See [[base-memory-capture]] workflow for when and how to invoke this skill.
Is this a time-stamped observation? (what agent did, found, tried)
→ YES: Use tasks MCP (create_task or update_task) - NOT this skill
→ NO: Continue...
Is this about the framework (axioms, heuristics)?
→ YES: HALT and invoke /framework skill to add properly to $AOPS
→ NO: Continue...
Is this about the user? (projects, goals, context, tasks)
→ YES: Use appropriate location below
→ NO: Use `knowledge/<topic>/` for general facts
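The decision flow above can be sketched as a single routing function. The boolean flags stand in for judgment calls the agent makes while reading the content; the returned strings are destinations, not real API calls.

```python
# Minimal sketch of the capture routing decision. Flags are assumptions
# standing in for an agent's reading of the content.
def route(*, timestamped: bool, about_framework: bool, about_user: bool) -> str:
    if timestamped:        # what the agent did, found, tried -> episodic
        return "tasks MCP (create_task / update_task)"
    if about_framework:    # axioms, heuristics
        return "HALT: /framework skill ($AOPS)"
    if about_user:         # projects, goals, context, tasks
        return "$ACA_DATA location table"
    return "knowledge/<topic>/"
```

Note the order matters: the episodic check comes first, so a time-stamped observation about the user still goes to the tasks MCP, not to $ACA_DATA.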
| Content | Location | Notes |
|---|---|---|
| Project metadata | projects/<name>.md | Hub file |
| Project details | projects/<name>/ | Subdirectory |
| Goals | goals/ | Strategic objectives |
| Context (about user) | context/ | Preferences, history |
| Sessions/daily | sessions/ | Daily notes only |
| Tasks | Delegate to [[tasks]] | Use scripts |
| General knowledge | knowledge/<topic>/ | Facts NOT about user |
NEVER create files for time-stamped observations - use the tasks MCP instead:

```
mcp__pkb__create_task(task_title="...", type="task", project="<project>", parent="<parent-id>")
mcp__pkb__create_task(task_title="...", type="task", project="<project>", parent="<parent-id>", tags=["bug"])
mcp__pkb__create_task(task_title="Learning: Z", project="<project>", parent="<parent-id>")
mcp__pkb__update_task(id="...", body="...")
```

Rule: If it has a timestamp or describes agent activity, it's episodic → tasks MCP.
Check for duplicates first: `mcp__pkb__search(query="topic")` + Glob under `$ACA_DATA/`.

```markdown
---
title: Descriptive Title
type: note|project|knowledge
tags: [relevant, tags]
created: YYYY-MM-DD
---

Content with [[wikilinks]] to related concepts.
```
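A note's frontmatter can be sanity-checked before writing. The sketch below is a hypothetical helper, not part of the skill's API; it assumes `title`, `type`, and `created` are the required fields, per the template above.

```python
# Hypothetical frontmatter check (not part of the skill's API).
# Assumes title/type/created are required, per the template above.
def validate_frontmatter(text: str) -> list[str]:
    errors = []
    if not text.startswith("---\n"):
        return ["missing frontmatter fence"]
    # header is everything between the opening and closing --- fences
    header, _, _ = text[4:].partition("\n---\n")
    fields = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    for required in ("title", "type", "created"):
        if required not in fields:
            errors.append(f"missing field: {required}")
    if fields.get("type") not in {"note", "project", "knowledge"}:
        errors.append("type must be note|project|knowledge")
    return errors
```

An empty list means the note is safe to write; anything else should be fixed before the markdown and PKB writes.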
```
mcp__pkb__create_memory(
    title="[descriptive title]",
    body="[content]",
    tags=["relevant", "tags"]
)
```
When a memory references an external issue, bug, or resource, always link it explicitly:
- Link as [org/repo#NNN](https://github.com/org/repo/issues/NNN) — don't just mention "#NNN" in prose.
- When filing via `gh issue create`, record the returned link or [#NNN](url).
- Add a `## Relationships` section with typed edges:
## Relationships
- [related] [[task-id]] — brief description
- [upstream-bug] [org/repo#NNN](url)
- [parent] [[parent-id]]
Why: Unlinked references are dead ends. The PKB graph and future agents can't traverse prose mentions — they need explicit edges.
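The typed-edge format above is machine-traversable. As an illustration of why explicit edges beat prose mentions, here is a hypothetical parser sketch (not part of the skill) that recovers `(edge_type, target)` pairs from a `## Relationships` section:

```python
import re

# Hypothetical sketch: extract typed edges like
#   - [related] [[task-id]] - note
#   - [upstream-bug] [org/repo#NNN](url)
# from the "## Relationships" section of a memory body.
EDGE = re.compile(r"^- \[(?P<type>[\w-]+)\] (?P<target>\[\[[^\]]+\]\]|\[[^\]]+\]\([^)]+\))")

def parse_edges(body: str) -> list[tuple[str, str]]:
    edges, in_section = [], False
    for line in body.splitlines():
        if line.startswith("## "):
            # only collect while inside the Relationships heading
            in_section = line.strip() == "## Relationships"
            continue
        if in_section:
            m = EDGE.match(line)
            if m:
                edges.append((m.group("type"), m.group("target")))
    return edges
```

A bare "#NNN" in prose matches nothing here, which is exactly the dead end the rule warns about.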
Use PKB semantic search for $ACA_DATA/ content. Never grep for markdown in the knowledge base. Give agents enough context to make decisions - never use algorithmic matching (fuzzy, keyword, regex).
When capturing learnings from debugging/development sessions, prefer generalizable patterns over implementation specifics.
| ❌ Too Specific | ✅ Generalizable |
|---|---|
| "AOPS_SESSION_STATE_DIR env var set at SessionStart in router.py:350" | "Configuration should be set once at initialization, no fallbacks" |
| "Fixed bug in session_paths.py on 2026-01-28" | "Single source of truth prevents cascading ambiguity" |
| "Gemini uses ~/.gemini/tmp/<hash>/ for state" | "Derive paths from authoritative input, don't hardcode locations" |
Why this matters: Specific implementation details are only useful for one code path. Generalizable patterns apply across all future framework work. We're dogfooding - capture what helps NEXT session, not what happened THIS session.
Test: Would this memory help an agent working on a DIFFERENT component? If not, it's too specific.
For factual observations NOT about the user. Location: knowledge/<topic>/
Constraints:
Topics (use broadly):
- cyberlaw/ - copyright, defamation, privacy, AI ethics, platform law
- tech/ - protocols, standards, technical facts
- research/ - methodology, statistics, findings

Format:
```markdown
---
title: Fact/Case Name
type: knowledge
topic: cyberlaw
source: Where learned
date: YYYY-MM-DD
---

[[Entity]] did X. Key point: Y. [[Person]] observes: "quote".
```
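Assembling a knowledge note per the format above can be sketched like this. The slug rule and filename are illustrative assumptions; the frontmatter fields mirror the template.

```python
from datetime import date
from pathlib import Path

# Hedged sketch of writing a knowledge note under knowledge/<topic>/.
# The lowercase-hyphen slug is an assumption for illustration.
def write_knowledge_note(root: Path, topic: str, title: str, body: str, source: str) -> Path:
    slug = title.lower().replace(" ", "-")
    path = root / "knowledge" / topic / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    frontmatter = (
        "---\n"
        f"title: {title}\n"
        "type: knowledge\n"
        f"topic: {topic}\n"
        f"source: {source}\n"
        f"date: {date.today().isoformat()}\n"
        "---\n\n"
    )
    path.write_text(frontmatter + body + "\n")
    return path
```

Remember that this file write is only half the job: the same content must also be synced to PKB.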
For non-blocking capture, spawn background agent:
```
Task(
    subagent_type="general-purpose", model="haiku",
    run_in_background=true,
    description="Remember: [summary]",
    prompt="Invoke Skill(skill='remember') to persist: [content]"
)
```
Report both operations:
- Markdown write: [path]
- PKB sync: [hash]