Self-learning

Guides capturing high-quality, generalizable learnings from ClosedLoop runs, using a decision tree, rejection criteria, and a workflow for classifying each learning into CLAUDE.md or org-patterns.toon.
`npx claudepluginhub closedloop-ai/claude-plugins --plugin self-learning`

This skill uses the workspace's default tool permissions.
This skill defines when and how to capture learnings during ClosedLoop runs.
Before writing a learning, run through this decision tree in order (a code sketch follows the note below):

1. Did I make a mistake and correct it, or discover something non-obvious?
   - NO → Don't capture (no learnings event)
   - YES → Continue
2. Is it a config value? (specific URL, file path, project command, type name)
   - YES → Write to CLAUDE.md (project scope), not org-patterns
   - NO → Continue
3. Is it tied to a single feature/bug with no generalizable principle?
   - YES → SKIP
   - NO → Continue
4. Will it still be true in 6 months?
   - NO → SKIP (or generalize the principle)
   - YES → Continue
5. Does it already exist in org-patterns.toon or CLAUDE.md?
   - YES → SKIP (or note "Supersedes: [old pattern]" if correcting)
   - NO → CAPTURE IT
Note: Even "basic" knowledge is worth capturing if you actually made that mistake. These learnings exist because LLM agents struggle with certain patterns that humans might consider obvious. The goal is to help future agent runs avoid the same mistakes.
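The tree translates directly into a short predicate. A minimal sketch in TypeScript (all type, field, and label names here are illustrative, not part of the skill):

```typescript
// Sketch of the capture decision tree above. Names are illustrative.
type Verdict = "no_capture" | "claude_md" | "skip" | "capture";

interface Candidate {
  correctedMistakeOrNonObvious: boolean; // step 1
  isConfigValue: boolean;                // step 2: URL, path, command, type name
  singleFeatureOnly: boolean;            // step 3
  trueInSixMonths: boolean;              // step 4
  alreadyCaptured: boolean;              // step 5: in org-patterns.toon or CLAUDE.md
}

function decide(c: Candidate): Verdict {
  if (!c.correctedMistakeOrNonObvious) return "no_capture"; // no learnings event
  if (c.isConfigValue) return "claude_md";                  // project scope, not org-patterns
  if (c.singleFeatureOnly) return "skip";
  if (!c.trueInSixMonths) return "skip";                    // or generalize the principle first
  if (c.alreadyCaptured) return "skip";                     // or note "Supersedes: [old pattern]"
  return "capture";
}
```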
SKIP if ANY of these apply:
| Criterion | Example | Why |
|---|---|---|
| Specific URL/path/config | "Use https://github.com/org/repo" | Config, not principle → CLAUDE.md |
| Project-specific names | "Use MyProjectType not OtherType" | Belongs in CLAUDE.md |
| One-off bug fix | "Field X was null in row 123" | Not reusable |
| Already captured | (check pending/, CLAUDE.md, org-patterns.toon) | Avoid duplicates |
When you have a learning worth capturing:
| Scope | Destination | Heuristic |
|---|---|---|
| Project | CLAUDE.md | Mentions specific file paths, package names, or project-unique features |
| Global | org-patterns.toon | Applies to any project using the same language/framework/tool |
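As a rough illustration of this heuristic (a sketch, not the skill's actual logic; `projectMarkers` is a hypothetical input):

```typescript
// Illustrative heuristic: project-specific markers push a learning to CLAUDE.md.
// projectMarkers would hold file paths, package names, and project-unique feature names.
function scopeOf(
  learning: string,
  projectMarkers: string[],
): "CLAUDE.md" | "org-patterns.toon" {
  const mentionsProjectDetail = projectMarkers.some((m) => learning.includes(m));
  return mentionsProjectDetail ? "CLAUDE.md" : "org-patterns.toon";
}
```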
Extract the underlying principle, not the specific instance.
| Too specific | Generalized / verdict |
|---|---|
| "GitHubActionStatus uses SUCCESS not COMPLETED" | When using Prisma enums, verify valid values in schema.prisma - don't assume names |
| "adm-zip is already installed for webhooks" | Check existing dependencies before adding new packages for common functionality |
| "The workstream query checks parentId before title" | SKIP: implementation detail with no generalizable principle |
| "ARTIFACT_SECTIONS has ISSUE in two places" | SKIP: specific bug, not a pattern |

Test: Would this help someone working on a different feature?
Formula: [When/Where] + [specific action] + [context]
Before writing, check:
- $CLOSEDLOOP_WORKDIR/.learnings/pending/ for learnings already captured in this run
- ~/.closedloop-ai/learnings/org-patterns.toon for existing global patterns

If a contradiction exists (existing says "do X", new says "don't do X"), capture the new learning and note "Supersedes: [old pattern]".
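A minimal pre-write check under these assumptions (Node runtime; `alreadyKnown` and plain substring matching are illustrative; real duplicate detection would be fuzzier):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Sketch: scan this run's pending learnings and the global org patterns
// for an existing mention of the candidate pattern.
function alreadyKnown(pattern: string): boolean {
  const pendingDir = path.join(
    process.env.CLOSEDLOOP_WORKDIR ?? ".",
    ".learnings",
    "pending",
  );
  const orgPatterns = path.join(
    os.homedir(),
    ".closedloop-ai",
    "learnings",
    "org-patterns.toon",
  );

  const pending = fs.existsSync(pendingDir)
    ? fs.readdirSync(pendingDir).map((f) =>
        fs.readFileSync(path.join(pendingDir, f), "utf8"),
      )
    : [];
  const org = fs.existsSync(orgPatterns)
    ? [fs.readFileSync(orgPatterns, "utf8")]
    : [];

  return [...pending, ...org].some((text) => text.includes(pattern));
}
```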
Output location:
$CLOSEDLOOP_WORKDIR/.learnings/pending/{agent-name}-$CLOSEDLOOP_AGENT_ID.json
Format:
{
"what_happened": "Brief description of what occurred",
"why": "Root cause or reason this matters",
"fix_applied": "What you did to resolve it (if applicable)",
"pattern_to_remember": "The actionable takeaway (minimum 20 chars)",
"applies_to": ["agent-name"],
"context": {
"file": "relative/path/to/file.ext",
"line": 42,
"function": "function_name"
}
}
Use ["*"] for applies_to if the pattern applies to all agents.
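Writing the file is mechanical once the payload is assembled. A sketch assuming a Node runtime (the agent name, payload values, and context fields are example values, not real ones):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch: write one learning to the pending directory named under "Output location".
function writeLearning(agentName: string, learning: object): string {
  const dir = path.join(process.env.CLOSEDLOOP_WORKDIR!, ".learnings", "pending");
  fs.mkdirSync(dir, { recursive: true });
  const file = path.join(dir, `${agentName}-${process.env.CLOSEDLOOP_AGENT_ID}.json`);
  fs.writeFileSync(file, JSON.stringify(learning, null, 2));
  return file;
}

// Usage with the format above (example values):
writeLearning("plan-writer", {
  what_happened: "Assumed Prisma enum value COMPLETED; the schema defines SUCCESS",
  why: "Enum names in schema.prisma don't always match intuition",
  fix_applied: "Checked schema.prisma and used SUCCESS",
  pattern_to_remember: "When using Prisma enums, verify valid values in schema.prisma",
  applies_to: ["*"],
  context: { file: "src/sync/status.ts", line: 42, function: "mapStatus" },
});
```

The same shape works for the no-learnings case described below.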
If you completed work without learnings to capture:
{
"no_learnings": true,
"reason": "Task was straightforward with no new patterns discovered"
}
This is valid—not every task produces learnings.
Quick reference: never capture these categories.

| Category | Examples |
|---|---|
| Common knowledge | TS strict mode, git basics, debugging 101 |
| Config values | URLs, file paths, project commands |
| Implementation details | Query order, field names, styling choices |
| Temporary | Bug workarounds, feature-specific decisions |
Scope check:
Mentions specific paths/packages/features? → Project (CLAUDE.md)
Applies to any project with same tech? → Global (org-patterns.toon)
Your agent definition may reference a domain-specific learning prompt (e.g., prompts/plan-writer-learning.md). If so, read it before capturing learnings.