Proposes upgrades to generated skills via Ars Contexta research graph consultation, justifying methodology improvements. Never auto-implements; triggers on /upgrade or upgrade phrases.
Read these files to configure domain-specific behavior:
ops/derivation-manifest.md — vocabulary mapping, platform hints
  - vocabulary.notes for the notes folder name
  - vocabulary.note / vocabulary.note_plural for note type references
  - vocabulary.reduce for the extraction verb
  - vocabulary.reflect for the connection-finding verb
  - vocabulary.reweave for the backward-pass verb
  - vocabulary.verify for the verification verb
  - vocabulary.rethink for the meta-cognitive verb
  - vocabulary.topic_map for MOC references
ops/config.yaml — processing depth, domain context
ops/derivation.md — derivation state and engine version
If these files don't exist, use universal defaults.
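The fallback can be sketched as universal defaults plus a guarded override. The sample manifest, the `vocabulary.notes:` key format, and the default values below are assumptions for illustration, not defined by this skill:

```bash
# Create a sample manifest to exercise the override path (illustrative key).
mkdir -p ops
printf 'vocabulary.notes: cards\n' > ops/derivation-manifest.md

# Universal defaults, used when no config files exist.
notes_dir="notes"     # assumed default for vocabulary.notes
depth="standard"      # assumed default for processing depth

# Override from the derivation manifest only if it is present.
if [[ -f ops/derivation-manifest.md ]]; then
  mapped=$(grep -m1 '^vocabulary.notes:' ops/derivation-manifest.md | awk '{print $2}')
  [[ -n "$mapped" ]] && notes_dir="$mapped"
fi

echo "notes_dir=$notes_dir depth=$depth"   # prints: notes_dir=cards depth=standard
```

The same pattern extends to every vocabulary key: read if present, otherwise keep the universal default.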
Target: $ARGUMENTS
Parse immediately:
START NOW. Reference below defines the upgrade process.
Skills do not upgrade through hash comparison against a generation manifest. Hash comparison answers a narrow question: "Has this file changed?" Meta-skill consultation answers the right question: "Is this skill's approach still the best approach given what we know?"
A skill could be unchanged but outdated because the knowledge base has grown. Or a skill could be heavily edited by the user but already incorporate the latest thinking through a different path. Reasoning about methodology is more valuable than diffing bytes.
Generated skills and meta-skills follow fundamentally different upgrade mechanisms:
| Category | Skills | Upgrade Mechanism |
|---|---|---|
| Generated skills | /{vocabulary.reduce}, /{vocabulary.reflect}, /{vocabulary.reweave}, /{vocabulary.verify}, /ralph, /next, /remember, /{vocabulary.rethink}, /stats, /graph, /tasks, /refactor, /learn, /recommend, /ask | Runtime consultation with knowledge graph |
| Meta-skills | /setup, /architect, /health, /reseed, /add-domain, /help, /tutorial, /upgrade | Plugin release cycle — update the plugin itself |
/upgrade evaluates generated skills. It cannot evaluate itself or other meta-skills — that is the plugin maintainers' responsibility.
Gather the vault's current state:
Read ops/derivation.md for:
Read ops/generation-manifest.yaml (if exists) for:
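When present, the manifest might look like the fragment below. The field names here are an assumption for illustration; only the per-skill entry format is established by the update step later in this skill:

```yaml
# Hypothetical generation manifest shape — fields are illustrative
engine: "arscontexta-v1.6"
skills:
  reduce:
    version: "1.0"
    generated: "2025-01-10T12:00:00Z"
```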
List all installed skills:
```bash
# Find all skill directories with SKILL.md
for dir in .claude/skills/*/; do
  skill=$(basename "$dir")
  version=$(grep '^version:' "$dir/SKILL.md" 2>/dev/null | head -1 | awk -F'"' '{print $2}')
  gen_from=$(grep '^generated_from:' "$dir/SKILL.md" 2>/dev/null | head -1 | awk -F'"' '{print $2}')
  echo "$skill v$version (from $gen_from)"
done
```
Read ops/config.yaml for current dimensional positions
Check for user modifications:
```bash
# Detect skills modified after generation
for dir in .claude/skills/*/; do
  skill=$(basename "$dir")
  file="$dir/SKILL.md"
  [[ ! -f "$file" ]] && continue
  # Check git status — modified files indicate user customization
  git_status=$(git status --porcelain "$file" 2>/dev/null)
  if [[ -n "$git_status" ]]; then
    echo "MODIFIED: $skill"
  fi
done
```
Present inventory:
--=={ upgrade : inventory }==--
System: {domain description}
Engine: arscontexta-{version}
Skills: {count} installed ({modified_count} user-modified)
Skill Version Generated From Modified
/{vocabulary.reduce} 1.0 arscontexta-v1.6 no
/{vocabulary.reflect} 1.0 arscontexta-v1.6 yes
...
For each generated skill (or the specific skill if targeted), consult the plugin's bundled knowledge base to evaluate whether the skill's current approach reflects current best practices.
Read from the plugin's bundled content tiers:
| Tier | Path | What It Contains |
|---|---|---|
| Methodology graph | ${CLAUDE_PLUGIN_ROOT}/methodology/ | All content — filter by kind: field (research/guidance/example) |
| Reference docs | ${CLAUDE_PLUGIN_ROOT}/reference/ | WHAT — structured reference documents and dimension maps |
Notes in methodology/ are differentiated by their kind: frontmatter field:
- kind: research — WHY: principles and cognitive science grounding (213 claims)
- kind: guidance — HOW: operational procedures and best practices (9 docs)
- kind: example — WHAT IT LOOKS LIKE: domain compositions (12 examples)
- type: moc — Navigation: topic maps linking related notes (15 maps)

For each skill being evaluated:
Read the current vault skill — understand its complete approach, quality gates, edge case handling
Read relevant knowledge base documents:
Compare methodology, not text:
Classify each finding:
| Classification | Meaning | Example |
|---|---|---|
| Current | Skill reflects knowledge base best practices | No action needed |
| Enhancement | Knowledge base adds technique the skill lacks | New quality gate, better search pattern |
| Correction | Knowledge base contradicts skill's approach | Outdated methodology, known anti-pattern |
| Extension | Knowledge base covers scenario skill ignores | New edge case, new domain pattern |
Check user modifications: If the skill has been modified by the user, read the current (user-modified) version and evaluate whether:
For each skill with available improvements, create a structured proposal:
Skill: /{domain:skill-name}
Status: {current | enhancement | correction | extension}
User-modified: {yes | no}
Current approach:
{2-3 sentences describing what the skill currently does}
Proposed improvement:
{2-3 sentences describing what would change}
Research backing:
{Specific claims from the knowledge base that support this change}
- "{claim title}" — {how it applies}
- "{claim title}" — {how it applies}
Impact: {what changes for the user's workflow}
Risk: low | medium | high
Reversible: yes (previous version archived to ops/skills-archive/)
| Risk Level | Criteria |
|---|---|
| Low | Additive change (new quality gate, better logging). Existing behavior unchanged. |
| Medium | Modified behavior (different extraction strategy, changed search pattern). Output quality affected. |
| High | Structural change (different phase ordering, changed handoff format). Pipeline coordination affected. |
When a skill has been modified by the user AND an upgrade is available, show a side-by-side comparison:
Skill: /{domain:skill-name} (USER-MODIFIED)
Your version: Recommended:
[relevant section excerpt] [what knowledge base suggests]
Your customization:
{description of what the user changed and why it appears intentional}
Options:
(a) Keep your version unchanged
(b) Apply upgrade, preserving your customizations
(c) Apply upgrade, replacing your version (archived to ops/skills-archive/)
Option (b) requires the upgrade to be compatible with the user's changes. If they conflict, explain why and recommend (a) or (c).
--=={ upgrade }==--
Plugin: arscontexta-{current_version}
Knowledge base: {count} research claims, {count} guidance docs
Skills checked: {count}
Upgrades available: {count}
Enhancements: {n} | Corrections: {n} | Extensions: {n}
1. /{domain:skill-name}
Type: Enhancement
Change: {one-line summary}
Research: "{claim title}"
Risk: low
2. /{domain:skill-name} (USER-MODIFIED)
Type: Correction
Change: {one-line summary}
Research: "{claim title}", "{claim title}"
Risk: medium
Note: Side-by-side comparison available
...
{If no upgrades:}
All {count} skills reflect current best practices.
No upgrades needed.
Apply all? Select specific upgrades (e.g., "1, 3")?
Or "show 2" for side-by-side detail on a specific skill.
Wait for user response. Do NOT proceed without explicit approval.
For each approved upgrade:
```bash
mkdir -p ops/skills-archive
SKILL_NAME="{skill-name}"
DATE=$(date +%Y-%m-%d)
cp ".claude/skills/${SKILL_NAME}/SKILL.md" \
  "ops/skills-archive/${SKILL_NAME}-${DATE}.md"
```
Apply the change using the vocabulary and configuration from ops/derivation-manifest.md and ops/config.yaml, then update the skill's frontmatter:
```yaml
---
version: "{incremented}"
generated_from: "arscontexta-{current_plugin_version}"
---
```
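Incrementing the frontmatter in place can be sketched with sed. The minor-bump policy (1.0 to 1.1) and the target engine version are assumptions here; GNU `sed -i` syntax is assumed (BSD sed needs `-i ''`):

```bash
# Sample frontmatter to operate on (hypothetical skill file).
skill_file="SKILL.md"
cat > "$skill_file" <<'EOF'
---
version: "1.0"
generated_from: "arscontexta-v1.6"
---
EOF

# Read the current version and bump the minor component (assumed policy).
current=$(grep -m1 '^version:' "$skill_file" | awk -F'"' '{print $2}')
major=${current%%.*}
minor=${current#*.}
next="${major}.$((minor + 1))"

# Rewrite both frontmatter fields in place (GNU sed -i assumed).
sed -i "s/^version: \"${current}\"/version: \"${next}\"/" "$skill_file"
sed -i "s/^generated_from: \".*\"/generated_from: \"arscontexta-v1.7\"/" "$skill_file"
```

Whatever increment policy is used, it should be applied consistently across all upgraded skills in one run.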
If ops/generation-manifest.yaml exists, update the entry for this skill:
```yaml
skills:
  {skill-name}:
    version: "{new_version}"
    upgraded: "{ISO 8601 UTC}"
    upgrade_source: "knowledge-graph-consultation"
    changes: "{brief description of what changed}"
```
After applying all approved upgrades:
Kernel validation — run kernel checks to confirm structural invariants hold:
```bash
# Verify skill files are valid
for dir in .claude/skills/*/; do
  [[ -f "$dir/SKILL.md" ]] || echo "MISSING: $dir/SKILL.md"
done
```
Context file check — verify all skill references in the context file still resolve
Vocabulary check — confirm upgraded skills use domain vocabulary consistently:
```bash
# Spot-check that vocabulary markers were resolved
grep -l '{vocabulary\.' .claude/skills/*/SKILL.md 2>/dev/null
# Should return nothing — all markers should be resolved
```
Pipeline compatibility — if pipeline skills were upgraded (/{vocabulary.reduce}, /{vocabulary.reflect}, /{vocabulary.reweave}, /{vocabulary.verify}), verify handoff format compatibility with /ralph
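A minimal compatibility spot-check might grep each pipeline skill for its handoff declaration. The `Handoff:` marker and the skill name below are hypothetical; substitute whatever format /ralph actually consumes:

```bash
# Hypothetical pipeline skill with a handoff declaration.
mkdir -p .claude/skills/reduce
printf 'Handoff: queue/*.md\n' > .claude/skills/reduce/SKILL.md

# Flag any pipeline skill missing the marker /ralph expects.
mismatch=0
for skill in reduce; do
  f=".claude/skills/${skill}/SKILL.md"
  [[ -f "$f" ]] || continue
  if ! grep -q '^Handoff:' "$f"; then
    echo "HANDOFF MISSING: /$skill"
    mismatch=1
  fi
done
```

Any nonzero mismatch should block completion reporting until the handoff format is reconciled.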
--=={ upgrade complete }==--
Applied: {N} upgrades
Archived: {N} previous versions to ops/skills-archive/
Skipped: {N} (user-modified, kept as-is)
Changes:
- /{skill}: {what changed} (Research: "{claim}")
- /{skill}: {what changed} (Research: "{claim}")
Validation: {PASS | FAIL with details}
{If any validation failed:}
WARNING: Validation issue detected.
Previous versions available in ops/skills-archive/
for manual rollback.
Note: Run /{vocabulary.verify} on a recent {vocabulary.note}
to confirm upgraded skills work correctly in practice.
/upgrade never auto-implements. The upgrade plan is always presented to the user first. The user decides which upgrades to apply. This prevents the cognitive outsourcing failure mode where the system changes itself without human understanding.
All upgrades are advisory. The user owns the files.
No improvements available: Report "All skills reflect current best practices. No upgrades needed." with the count of skills checked.
No generation manifest: Treat all skills as version 0 (unknown generation state). Compare methodology against current knowledge base. This is fine — consultation reasons about approach, not version numbers.
Skill has been user-modified: Present the side-by-side comparison. Offer three options: keep user version, merge upgrade with customizations, or replace (with archive). Never silently overwrite.
No ops/derivation-manifest.md: Use universal vocabulary for all output.
Plugin knowledge base unavailable: Report that knowledge base consultation requires the Ars Contexta plugin. Without the plugin's bundled methodology/ and reference/ directories, /upgrade cannot evaluate skills.
User rejects upgrades consistently: This is a signal, not an error. Note the pattern — it may indicate the knowledge base recommendations don't match this user's domain. Log to ops/observations/ if it persists across multiple /upgrade runs.
Correction conflicts with user modification: When the knowledge base identifies a correction (not just enhancement) but the user has modified the skill, explain the conflict clearly. The user may have modified the skill precisely because the original approach was wrong — their fix may already address the correction. Show both and let the user decide.
Multiple skills share a change: If the same knowledge base improvement applies to several skills (e.g., a new search pattern), present it as a single conceptual change affecting multiple skills rather than listing it redundantly per skill.