From onebrain
Teach the AI something and save it to the agent folder for future recall: context about your world, or behavioral patterns.

Install with:

```
npx claudepluginhub kengio/onebrain --plugin onebrain
```

This skill uses the workspace's default tool permissions.
Explicitly teach the agent something to remember across all future sessions.
Usage: `/learn [content]`, or `/learn` followed by a description of what to save.
If content was provided after the command, use it directly. If not, ask:
What do you want me to learn? Share a fact, preference, domain detail, or anything you want me to remember.
If `vault.yml` exists, read it and extract `folders.agent`. If the file does not exist or the key is absent, default to `05-agent`. Use this value as `agent_folder` in all paths below.
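As a rough sketch of this resolution step (assuming `vault.yml` is a flat two-level YAML file; a naive line parser is used here to stay dependency-free, where a real implementation would use a YAML library):

```python
from pathlib import Path

def resolve_agent_folder(vault_path: str = "vault.yml") -> str:
    """Return folders.agent from vault.yml, or '05-agent' as fallback.

    Naive line-based parse; a real implementation would use a YAML parser.
    """
    path = Path(vault_path)
    if not path.exists():
        return "05-agent"
    in_folders = False
    for raw in path.read_text().splitlines():
        if not raw.startswith(" "):
            # A new top-level key starts; track whether it is "folders:".
            in_folders = raw.strip() == "folders:"
            continue
        if in_folders:
            key, _, value = raw.strip().partition(":")
            if key == "agent" and value.strip():
                return value.strip()
    return "05-agent"
```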
Apply this rule to determine the destination:

- `[agent_folder]/context/` if the content describes the user's world
- `[agent_folder]/memory/` if the content is a behavioral preference or pattern
Classification examples:
| Input | Destination |
|---|---|
| "My stack is Go + Postgres + Redis" | context/ |
| "When I say 'the app' I mean my SaaS at app.example.com" | context/ |
| "My target user is small business owners, not enterprises" | context/ |
| "I prefer async communication and document decisions in notes" | memory/ |
| "I get frustrated when responses are too long" | memory/ |
Tiebreaker for domain-preference hybrids: if the input is a preference about how to work within a domain (e.g., "always use Go idioms, not Java-style abstractions" or "keep SQL queries readable, not optimized"), classify it as memory/. It governs AI behavior, even though it references a technical domain. When in doubt: if this input would change how you respond, it's memory/.
If classification is unclear, ask: "Is this about your world (context) or how you want me to behave (preference)?"
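A toy heuristic version of the rule above (an illustration only: the actual skill relies on the agent's judgment, not keyword matching, and the marker list is an assumption):

```python
def classify_destination(text: str) -> str:
    """Return 'memory/' for behavioral preferences, 'context/' otherwise.

    Toy keyword heuristic mirroring the classification rule; the real
    skill decides by judgment, not string matching.
    """
    t = text.lower()
    preference_markers = (
        "i prefer", "i get frustrated", "always use", "never use",
        "keep responses", "i want you to",
    )
    if any(marker in t for marker in preference_markers):
        return "memory/"  # governs how the AI should behave
    return "context/"     # describes the user's world
```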
For context/ notes:

- Filename: `Topic Name.md` (Title Case, by topic)
- Check whether a note on the same topic already exists in `[agent_folder]/context/`. If so, offer to append to it rather than create a new file.

For memory/ notes:

- Filename: `YYYY-MM-DD-slug.md`, where slug is a short kebab-case description (first note of the day)
- If notes for the day already exist (`[agent_folder]/memory/YYYY-MM-DD-*.md`), use a counter: `YYYY-MM-DD-02-slug.md`, `YYYY-MM-DD-03-slug.md`, etc.
- Examples: `2026-03-23-prefers-async-comms.md`, `2026-03-23-02-no-long-responses.md`

For context/ notes:
```markdown
---
tags: [agent-context]
created: YYYY-MM-DD
source: /learn
---
# [Topic]

[Content as provided, lightly structured if helpful]
```
When appending to an existing context note: Add a new ## section with a descriptive heading for the additional information. Do not rewrite or restructure existing content.
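The create-or-append flow for context notes could be sketched as follows (the `## Update <date>` heading for appended sections is a hypothetical choice; the skill only asks for a descriptive heading):

```python
from datetime import date
from pathlib import Path

def save_context_note(context_dir: Path, topic: str, content: str) -> Path:
    """Create '<Topic>.md' with frontmatter, or append a new ## section
    to an existing note without rewriting its current content."""
    path = context_dir / f"{topic}.md"
    if path.exists():
        with path.open("a", encoding="utf-8") as f:
            # Hypothetical heading text; the skill asks for a descriptive one.
            f.write(f"\n## Update {date.today().isoformat()}\n\n{content}\n")
    else:
        frontmatter = (
            "---\n"
            "tags: [agent-context]\n"
            f"created: {date.today().isoformat()}\n"
            "source: /learn\n"
            "---\n"
        )
        path.write_text(f"{frontmatter}# {topic}\n\n{content}\n", encoding="utf-8")
    return path
```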
For memory/ notes:
```markdown
---
tags: [agent-memory]
created: YYYY-MM-DD
source: /learn
---
# [Short description of pattern]

[Pattern or behavioral observation, 1-3 sentences]
```
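The dated filename scheme for memory notes could be sketched like this (directory layout assumed as described earlier):

```python
from datetime import date
from pathlib import Path
from typing import Optional

def memory_filename(slug: str, memory_dir: Path,
                    today: Optional[date] = None) -> str:
    """Build 'YYYY-MM-DD-slug.md', inserting a -02, -03... counter
    when notes for the same day already exist."""
    day = (today or date.today()).isoformat()
    existing = list(memory_dir.glob(f"{day}-*.md"))
    if not existing:
        return f"{day}-{slug}.md"  # first note of the day
    return f"{day}-{len(existing) + 1:02d}-{slug}.md"
```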
Report what was saved:

```
Learned: "[short summary]"
Saved to: [agent_folder]/[context or memory]/[filename]
I'll use this in future sessions when relevant.
```
If a context note was appended rather than created:

```
Updated: [agent_folder]/context/[filename] with new information.
```