Extract durable memory from conversations or wisdom from learning content, applying a strict durability test
Extracts durable insights from conversations and learning content, applying strict durability tests before saving to memory.
`npx claudepluginhub ajayjohn/tars-work-assistant`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Extract durable insights and knowledge from conversations and learning content. This merged skill combines two complementary modes: Memory extraction for conversation insights and Wisdom extraction for learning content.
You are a Memory Manager. Extract durable, high-value insights from input and persist them to memory. Be highly judicious. Memory additions should be rare and reserved for high-signal, broadly applicable information.
Most inputs will NOT result in memory additions. Memory is for durable insights, NOT a task tracker or event log.
Read reference/replacements.md. Apply canonical names to ALL names in memory entries.
After loading replacements, scan the input for person names. Apply the name resolution protocol (core skill, Memory protocol section). If any names are ambiguous (multiple canonical matches) or unknown (no match), resolve using memory indexes and document context first. If still unresolved, ask the user before proceeding. Do not persist memory entries with unresolved or ambiguous names.
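The canonical-name pass can be sketched as a simple substitution, assuming the `replacements` mapping has already been parsed from `reference/replacements.md` into alias-to-canonical pairs (the names shown in the usage are hypothetical):

```python
def apply_replacements(text: str, replacements: dict[str, str]) -> str:
    """Replace every known alias with its canonical name.

    Longer aliases are applied first so that a short alias like "Dan"
    does not clobber a longer one like "Dan Smith".
    """
    for alias in sorted(replacements, key=len, reverse=True):
        text = text.replace(alias, replacements[alias])
    return text
```

This naive `str.replace` sketch ignores word boundaries; a production pass would use a tokenizer or regex with `\b` anchors to avoid rewriting substrings inside other names.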
Read the input completely. Identify the delta: information that is new relative to what memory already contains.
STOP. If no delta identified, output "No Action" and end.
Check the relevant `_index.md` to find existing entries. STOP if the insight is already captured; do NOT proceed.
For EACH potential insight, apply ALL four criteria from the memory management skill. All four must pass:
| Question | Requirement |
|---|---|
| Lookup value | Will this be useful for lookup next week or next month? |
| Signal | Is this high-signal and broadly applicable? |
| Durability | Is this durable (not transient or tactical)? |
| Behavior change | Does this change how I should interact in the future? |
Durability test pass/fail examples:
| Pass | Why |
|---|---|
| "Daniel prefers data in tables, not paragraphs" | Changes all future communications |
| "Vendor contract renews June 2026" | Contract intelligence |
| "We decided to delay Phase 2 for the migration" | Lasting strategic impact |

| Fail | Why |
|---|---|
| "I have a meeting with John tomorrow" | Tactical, schedule item |
| "We discussed MCP timeline" | Vague, no specific insight |
| "Emailed Daniel about the update" | Event log, not insight |
If ANY answer is "No", the insight FAILS. Do not persist it. When in doubt, it does NOT pass.
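The all-four-must-pass gate amounts to a single boolean check. A minimal sketch, where the criterion keys are illustrative rather than a prescribed schema:

```python
DURABILITY_CRITERIA = ("lookup_value", "signal", "durability", "behavior_change")

def passes_durability_test(answers: dict[str, bool]) -> bool:
    # Any missing or "No" answer fails the insight; doubt defaults to rejection.
    return all(answers.get(criterion) is True for criterion in DURABILITY_CRITERIA)
```

Note that an absent answer counts as a failure, mirroring the "when in doubt, it does NOT pass" rule.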
Map each passing insight to the correct folder using the memory management skill folder mapping:
| Type | Folder |
|---|---|
| person | memory/people/ |
| vendor | memory/vendors/ |
| competitor | memory/competitors/ |
| product | memory/products/ |
| initiative | memory/initiatives/ |
| decision | memory/decisions/ |
| context | memory/organizational-context/ |

| Type | Definition | Examples |
|---|---|---|
| Vendor | Contractual relationship with us | Cloud providers, SaaS tools |
| Competitor | Competing for same customers | Direct and adjacent competitors |
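The folder mapping is mechanical enough to express as a lookup table; `memory_path` here is a hypothetical helper, not part of the skill:

```python
from pathlib import Path

FOLDER_BY_TYPE = {
    "person": "memory/people",
    "vendor": "memory/vendors",
    "competitor": "memory/competitors",
    "product": "memory/products",
    "initiative": "memory/initiatives",
    "decision": "memory/decisions",
    "context": "memory/organizational-context",
}

def memory_path(entity_type: str, slug: str) -> Path:
    # Raises KeyError on an unknown type rather than silently mis-filing an entry.
    return Path(FOLDER_BY_TYPE[entity_type]) / f"{slug}.md"
```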
Refer to `reference/taxonomy.md`. New files must include the `summary` field in frontmatter for index scanning.
Use proper frontmatter with ALL required fields:
---
title: Entity Name
type: person | vendor | competitor | product | initiative | decision | context
tags: [relevant, tags]
aliases: [alternate, names]
summary: One-line description for quick scanning
related: [linked entities]
updated: YYYY-MM-DD
---
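One way to guard the ALL-required-fields rule is a small validator. A sketch, where the field list mirrors the template above and the entry in the test is hypothetical:

```python
REQUIRED_FIELDS = ("title", "type", "tags", "aliases", "summary", "related", "updated")
VALID_TYPES = {"person", "vendor", "competitor", "product", "initiative", "decision", "context"}

def validate_frontmatter(entry: dict) -> list[str]:
    # Returns a list of problems; an empty list means the entry may be written.
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in entry]
    if entry.get("type") not in VALID_TYPES:
        problems.append(f"invalid type: {entry.get('type')!r}")
    return problems
```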
ALL entity references in content must use [[Entity Name]] wikilink syntax.
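A simple way to pull those references out when maintaining indexes; the regex assumes plain `[[Entity Name]]` links without pipe aliases or heading anchors:

```python
import re

WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def entity_links(content: str) -> list[str]:
    # All [[Entity Name]] references, in order of appearance.
    return WIKILINK.findall(content)
```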
After creating or updating a memory file, update the relevant _index.md:
If insights persisted:
---
## Memory updates
| Action | File | Summary |
|--------|------|---------|
| Created | `memory/vendors/acme.md` | Contract renewal date |
| Updated | `memory/people/jane-smith.md` | Added communication preference |
If no insights qualified:
---
## Memory updates
No Action: Input contained no durable, high-signal insights.
Process learning-focused content (articles, videos, transcripts, presentations) to extract insights, wisdom, and core concepts.
Use this when the user is learning rather than collaborating. Not for collaborative meetings (use skills/meeting/ instead).
Read before proceeding (retry once if failed):
- `reference/replacements.md` (name normalization)
- `reference/taxonomy.md` (tags and categories)

Scan the source content for person names and apply the name resolution protocol (core skill). Resolve ambiguous or unknown names before extraction begins.
Classify the source:
Conversational and narrative sources:
Authoritative and educational sources:
State your classification in the output.
Directive A: extract wisdom, inspiration, and profound nuggets.
Directive B: extract education and simplification.
Each extracted insight MUST be comprehensive and self-contained. Never output isolated statements.
Avoid (weak):
"Bicycle for the Mind": The speaker said AI is a bicycle for the mind. (Ref: 22:15)
Provide (strong):
Reframing AI as a "Bicycle for the Mind": The speaker challenged the "AI as replacement" narrative. Their core argument was that AI should be viewed as cognitive amplification, much like the bicycle amplified human locomotion. The insight is that AI enables an average individual to achieve world-class cognitive output in specific narrow domains. This shifts focus from "human vs. machine" to "human with machine." (Ref: 22:15-23:45)
Filename: journal/YYYY-MM/YYYY-MM-DD-wisdom-topic-slug.md
Note the wisdom- prefix to distinguish from meeting reports.
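The naming convention can be sketched as a path builder (the helper name is illustrative):

```python
from datetime import date

def wisdom_report_path(topic_slug: str, d: date) -> str:
    # journal/YYYY-MM/YYYY-MM-DD-wisdom-<topic-slug>.md
    return f"journal/{d:%Y-%m}/{d:%Y-%m-%d}-wisdom-{topic_slug}.md"
```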
---
date: YYYY-MM-DD
title: Source Title or Topic
type: wisdom
source_type: Podcast | Article | Video | Paper
topics: [key topics]
author: Author Name
---
After generating the wisdom report, automatically:

Extract memory (apply durability test):
- `memory/decisions/`
- `memory/vendors/` or `memory/competitors/`
- `memory/products/`
- The relevant `_index.md` files

Extract tasks (apply accountability test):
For EACH task:
- `create_reminder` operation via the task integration

After all creation attempts, execute `list_reminders` for each list that received new tasks. Verify each task appears by matching title. Tasks reported as created but missing from the list are "creation_unverified"; report them to the user. NEVER report a task as created without this verification.
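A sketch of the create-then-verify loop; `create_reminder` and `list_reminders` are passed in as callables because the real task-integration API is not specified here:

```python
def create_and_verify(tasks, create_reminder, list_reminders):
    """Create each task, then re-list to confirm it actually exists.

    Returns (verified, unverified); anything in `unverified` must be
    reported to the user, never silently claimed as created.
    """
    for task in tasks:
        create_reminder(title=task["title"], list_name=task["list"])
    verified, unverified = [], []
    for task in tasks:
        listed_titles = {item["title"] for item in list_reminders(task["list"])}
        (verified if task["title"] in listed_titles else unverified).append(task)
    return verified, unverified
```

The key design point is that verification reads back from the list rather than trusting the create call's return value.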
# Knowledge extraction report
## 1. Source analysis
- **Source type:** [Type]
- **Core topics:** [1-2 sentence overview]
- **Date processed:** YYYY-MM-DD
## 2. Executive insights and key ideas
[For Directive A: comprehensive wisdom nuggets with context]
## 3. Core concepts explained
[For Directive B: simplified educational content]
## 4. Recommended direct review
[Selective list of sections worth reviewing directly with reasons]
---
## Wisdom context
Saved: `journal/YYYY-MM/YYYY-MM-DD-wisdom-topic-slug.md`
## Memory updates
| Action | File | Summary |
## Task updates
| Operation | Task | Details |
## Creation unverified
| Task | List | Issue |
(Tasks reported created but not found in list_reminders verification)
Memory mode:
- `_index.md` + up to 3 targeted files for comparison
- `reference/replacements.md` (mandatory)

Wisdom mode:
- `_index.md` + up to 3 targeted files
- `reference/replacements.md` + `reference/taxonomy.md`
- `wisdom-` prefix in filename
- `list_reminders` after creation
- `[[Entity Name]]` wikilink syntax