```sh
npx claudepluginhub agent-sh/skillers --plugin skillers
```

This skill uses the workspace's default tool permissions.
Invoked by the `skillers-compactor` agent during `/skillers compact`. Also usable standalone for manual compaction.
Parse from `$ARGUMENTS`:

| Flag | Values | Default | Description |
|---|---|---|---|
| `--scope` | `repo`, `global`, `both` | `global` | Which knowledge scope to write to |
| `--state-dir` | path | (from platform) | Override state directory |
| `--days` | number | `7` | How many days of transcripts to analyze |
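A minimal parser for these flags might look like the following sketch; the `parseFlags` helper name and its defaults-first structure are illustrative, not part of the skill's contract:

```js
// Parse --scope, --state-dir, and --days out of a raw $ARGUMENTS string.
// Unknown tokens are ignored; defaults match the table above.
function parseFlags(argString) {
  const args = { scope: 'global', stateDir: null, days: 7 };
  const tokens = argString.trim().split(/\s+/).filter(Boolean);
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i] === '--scope' && tokens[i + 1]) args.scope = tokens[++i];
    else if (tokens[i] === '--state-dir' && tokens[i + 1]) args.stateDir = tokens[++i];
    else if (tokens[i] === '--days' && tokens[i + 1]) args.days = parseInt(tokens[++i], 10);
  }
  return args;
}
```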
Skillers reads conversation transcripts from multiple AI tools. Detect which tools are installed and read from all available sources.
Claude Code location: `~/.claude/projects/{project-hash}/{session-id}.jsonl`
The project hash is derived from the CWD with path separators replaced by dashes. Each transcript is a JSONL file with entries of type: user, assistant, system, progress, file-history-snapshot.
```json
{
  "type": "user",
  "message": { "role": "user", "content": "the user message" },
  "timestamp": "2026-03-09T14:25:05.135Z",
  "sessionId": "uuid",
  "cwd": "/path/to/project"
}
```
Codex CLI location: `~/.codex/sessions/{YYYY}/{MM}/{DD}/rollout-{timestamp}-{uuid}.jsonl`
Sessions are organized by date. Each JSONL file starts with a session_meta entry, followed by response_item and event_msg entries. User messages have type: "event_msg" with payload.type: "user_message".
```jsonl
{"timestamp":"...","type":"session_meta","payload":{"id":"uuid","cwd":"...","cli_version":"0.77.0"}}
{"timestamp":"...","type":"event_msg","payload":{"type":"user_message","message":"the user message"}}
{"timestamp":"...","type":"response_item","payload":{"type":"message","role":"user","content":[{"type":"input_text","text":"..."}]}}
```
Also available: `~/.codex/history.jsonl`, a compact log with only user inputs per session:

```jsonl
{"session_id":"uuid","ts":1767723836,"text":"the user message"}
```
OpenCode location: `~/.local/share/opencode/opencode.db` (SQLite)

Tables: `project`, `session`, `message`. Query sessions and messages with SQL:
```sql
SELECT s.id, s.slug, m.time_created, m.content
FROM session s JOIN message m ON m.session_id = s.id
WHERE m.time_created > {cutoff_timestamp}
ORDER BY m.time_created;
```
Also available: `~/.local/state/opencode/prompt-history.jsonl`, containing user input history only:

```jsonl
{"input":"the user message","parts":[],"mode":"normal"}
```
On Windows the DB may also be at `%APPDATA%/opencode/opencode.db`.
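Computing the cutoff and building the query can be sketched as follows. This assumes `time_created` is a millisecond epoch; confirm the unit against a real database before relying on it:

```js
// Build the message query for the last `days` of activity.
// Assumes time_created is epoch milliseconds (verify against your DB).
function buildOpencodeQuery(days, now = Date.now()) {
  const cutoff = now - days * 86400000;
  return `SELECT s.id, s.slug, m.time_created, m.content
FROM session s JOIN message m ON m.session_id = s.id
WHERE m.time_created > ${cutoff}
ORDER BY m.time_created;`;
}

// One way to run it from Bash is the sqlite3 CLI's JSON output mode:
//   sqlite3 -json ~/.local/share/opencode/opencode.db "<query>"
```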
Cursor and Kiro do not store conversation history in an accessible local format. Skip during compaction.
```js
const os = require('os');
const path = require('path');
const fs = require('fs');

// State directory name, overridable via AI_STATE_DIR
const STATE_DIR = process.env.AI_STATE_DIR || '.claude';

// `args` holds the flags parsed from $ARGUMENTS
const scope = args.scope || 'global';
const days = args.days || 7;

// Knowledge output directories
const globalDir = path.join(os.homedir(), STATE_DIR, 'skillers', 'knowledge');
const repoDir = path.join(process.cwd(), STATE_DIR, 'skillers', 'knowledge');
const knowledgeDirs = scope === 'both' ? [globalDir, repoDir]
  : scope === 'global' ? [globalDir] : [repoDir];

// Detect which transcript sources are installed
const sources = [];

const claudeDir = path.join(os.homedir(), '.claude', 'projects');
if (fs.existsSync(claudeDir)) sources.push({ tool: 'claude-code', type: 'jsonl', path: claudeDir });

const codexDir = path.join(os.homedir(), '.codex', 'sessions');
if (fs.existsSync(codexDir)) sources.push({ tool: 'codex', type: 'jsonl', path: codexDir });

const opencodePaths = [
  path.join(os.homedir(), '.local', 'share', 'opencode', 'opencode.db'),
  path.join(process.env.APPDATA || '', 'opencode', 'opencode.db')
];
for (const p of opencodePaths) {
  if (fs.existsSync(p)) { sources.push({ tool: 'opencode', type: 'sqlite', path: p }); break; }
}
```
For each detected source, collect transcripts using the appropriate method.

Claude Code (JSONL):
- Scan `~/.claude/projects/` for `.jsonl` transcript files
- Limit to files from the last `--days` days
- Extract entries of `type: "user"` and `type: "assistant"`

Codex CLI (JSONL):
- Walk the `~/.codex/sessions/{YYYY}/{MM}/{DD}/` date directories
- Read `.jsonl` files within the date range
- Use `session_meta` for session context (cwd, cli_version)
- Extract `event_msg` entries with `payload.type: "user_message"` for user messages
- Extract `response_item` entries with `payload.role: "assistant"` for tool usage patterns

OpenCode (SQLite):
- Open `opencode.db` with a SQLite reader (Bash `sqlite3` or node `better-sqlite3`)
- Query the `message` table joined with `session`

Common for all sources:
- Use `lastCompactedAt` to skip already-processed transcripts
- Tag each observation with `source: "claude-code" | "codex" | "opencode"` for traceability

Sanitize every transcript line with `lib/sanitize.js::redact()` from the skillers plugin root. Users commonly paste API keys, GitHub tokens, AWS keys, and Bearer tokens into chat; those must not enter agent context or persisted knowledge files.

```js
const fs = require('fs');
const path = require('path');
// PLUGIN_ROOT is the skillers plugin root directory
const { redact } = require(path.join(PLUGIN_ROOT, 'lib', 'sanitize'));

// Read a JSONL file, redacting secrets before parsing each line.
function readJsonlSafe(filePath) {
  const raw = fs.readFileSync(filePath, 'utf8');
  return raw.split('\n').filter(Boolean).map(line => {
    const safeLine = redact(line);
    try { return JSON.parse(safeLine); } catch { return null; }
  }).filter(Boolean);
}
```
For OpenCode SQLite, apply redact() to each message.content string after
fetching rows and before extracting observations.
For each transcript, analyze the conversation to identify:
| Type | Signal | Example |
|---|---|---|
| `pain` | User expresses frustration, mentions something failing, retries | "this broke again", "why does X keep happening" |
| `repeat` | User asks for the same type of task across sessions | "run tests", "check CI", "create PR" |
| `task` | User works on a recurring task type | "refactor auth", "update docs", "fix flaky test" |
| `wish` | User expresses desire for automation or tooling | "I wish this was automatic", "there should be a command for this" |
| `workflow` | User follows a consistent multi-step pattern | "first X, then Y, then Z" every time |
For each identified pattern, create an observation:
```json
{"ts": "ISO timestamp", "t": "pain|repeat|task|wish|workflow", "v": "5 word description", "ctx": "file or area", "session": "session-id", "source": "claude-code|codex|opencode"}
```
Extract observations based on actual conversation content. Focus on:
Do NOT create observations for:
Group observations by semantic similarity, comparing tokens drawn from the `v` and `ctx` fields (lowercase, split on spaces and path separators).

For each theme cluster, calculate a composite weight:
```js
function calculateWeight(observations) {
  const now = Date.now();

  // Frequency: more observations = higher weight (capped at 20)
  const frequency = Math.min(observations.length / 20, 1.0);

  // Recency: recent observations weigh more (exponential decay, 30-day half-life)
  const recencyScores = observations.map(obs => {
    const ageMs = now - new Date(obs.ts).getTime();
    const ageDays = ageMs / 86400000;
    return Math.exp(-0.693 * ageDays / 30); // 0.693 = ln(2)
  });
  const recency = recencyScores.reduce((a, b) => a + b, 0) / observations.length;

  // Cross-session: patterns across multiple sessions weigh
  // disproportionately more (capped at 5 sessions)
  const sessions = new Set(observations.map(obs => obs.session));
  const crossSession = Math.min(sessions.size / 5, 1.0);

  // Pain intensity: "pain" and "wish" types boost the weight by up to 50%
  const painCount = observations.filter(obs => obs.t === 'pain' || obs.t === 'wish').length;
  const painBoost = painCount > 0 ? 1.0 + (painCount / observations.length) * 0.5 : 1.0;

  // Composite weight, clamped to [0, 1] and rounded to two decimals
  const raw = (frequency * 0.3 + recency * 0.3 + crossSession * 0.4) * painBoost;
  return Math.round(Math.min(raw, 1.0) * 100) / 100;
}
```
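The semantic-similarity grouping that feeds this weight calculation can be sketched as Jaccard overlap on the `v` and `ctx` tokens. The greedy seed-based clustering and the 0.3 threshold are illustrative choices, not specified by the skill:

```js
// Tokenize an observation's v and ctx fields: lowercase, split on
// spaces and path separators.
function tokens(obs) {
  return new Set(`${obs.v} ${obs.ctx}`.toLowerCase().split(/[\s\/\\]+/).filter(Boolean));
}

// Jaccard similarity between two token sets.
function jaccard(a, b) {
  const inter = [...a].filter(t => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

// Greedy clustering: add each observation to the first cluster whose
// seed observation is similar enough, else start a new cluster.
function cluster(observations, threshold = 0.3) {
  const clusters = [];
  for (const obs of observations) {
    const hit = clusters.find(c => jaccard(tokens(c[0]), tokens(obs)) >= threshold);
    if (hit) hit.push(obs); else clusters.push([obs]);
  }
  return clusters;
}
```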
For each theme:
- Check whether `knowledge/{theme-name}.json` already exists
- If it does, merge in the new observations and update `lastSeen` and the `sessions` count

Knowledge file format:
```json
{
  "theme": "ci-pr-workflow",
  "weight": 0.82,
  "observations": [
    {"ts": "...", "t": "repeat", "v": "create PR check CI", "ctx": "github", "session": "abc", "source": "claude-code"},
    {"ts": "...", "t": "workflow", "v": "merge after CI pass", "ctx": "github", "session": "def", "source": "codex"}
  ],
  "sessions": 8,
  "firstSeen": "2026-02-15T...",
  "lastSeen": "2026-03-09T...",
  "totalOccurrences": 23,
  "typeCounts": {"pain": 2, "repeat": 12, "task": 4, "wish": 1, "workflow": 4}
}
```
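Merging a new batch of observations into an existing theme record might look like this sketch. The field names follow the format above, but the merge policy itself (concatenate, recount, re-derive timestamps) is an assumption:

```js
// Merge newly extracted observations into an existing theme record,
// recomputing counters and timestamps. Pure function; caller handles I/O.
function mergeTheme(existing, newObs, weight) {
  const all = existing.observations.concat(newObs);
  const times = all.map(o => o.ts).sort(); // ISO timestamps sort lexicographically
  const sessions = new Set(all.map(o => o.session));
  const typeCounts = {};
  for (const o of all) typeCounts[o.t] = (typeCounts[o.t] || 0) + 1;
  return {
    ...existing,
    weight,
    observations: all,
    sessions: sessions.size,
    firstSeen: times[0],
    lastSeen: times[times.length - 1],
    totalOccurrences: all.length,
    typeCounts
  };
}
```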
Remove entries that are:
After successful compaction, update the skillers config:
```json
{
  "lastCompactedAt": "2026-03-09T22:30:00.000Z",
  "lastTranscriptsProcessed": ["session-id-1", "session-id-2"]
}
```
This prevents re-processing the same transcripts on next compact.
Return JSON summary to the calling agent:
```json
{
  "sources": {
    "claude-code": {"transcripts": 12, "observations": 35},
    "codex": {"transcripts": 8, "observations": 12},
    "opencode": {"sessions": 3, "observations": 5}
  },
  "totalTranscripts": 23,
  "totalObservations": 52,
  "themesUpdated": 2,
  "themesCreated": 1,
  "themesPruned": 0,
  "themes": [
    {"name": "ci-pr-workflow", "weight": 0.82, "observations": 23},
    {"name": "testing-patterns", "weight": 0.65, "observations": 15}
  ]
}
```