Analyze Claude Code session transcripts with DuckDB SQL and process mining to detect anti-patterns, inefficiencies, tool misuse, and user frustration signals across sessions. Generate structured reports, then transform findings into hooks, agent prompt refinements, skill patches, CLAUDE.md updates, and automation scripts for continuous improvement.
```shell
npx claudepluginhub jamie-bitflight/claude_skills --plugin agentskill-kaizen
```

Admin access level: the server config contains admin-level keywords.
Run the autonomous transcript analysis pipeline across sessions
Interactive transcript exploration — presents findings, user steers investigation
Produce hook scripts from discovered anti-patterns — drafts by default, --install writes to settings
Generate summary report from existing analysis in .planning/kaizen/
Transform analysis findings into actionable improvements — generates hook scripts, skill patches, agent prompt refinements, and CLAUDE.md updates based on discovered anti-patterns and inefficiencies
Deep-dive into Claude Code session transcripts using DuckDB SQL and process mining tools — spawned by analyze and explore commands to query JSONL data, detect anti-patterns, extract frustration signals, and mine workflow patterns across sessions
Agentskill kaizen plugin documentation index. Load when you need details on cross-platform notes, improvement plans, or DuckDB integration.
Transform transcript analysis findings into actionable improvements. Triggers on "generate hooks from findings", "improve agent", "fix anti-pattern", "kaizen improvement", "generate hook proposals", or "create improvement plan". Provides templates for hook generation, agent prompt refinement, skill patches, CLAUDE.md updates, and script automation from analysis data.
Use when extracting specific data points from large agent output transcripts, kaizen analysis reports, or JSONL session files — tool timings, query counts, error summaries, or any structured facts — without loading raw data into orchestrator context. Activates when the orchestrator needs targeted facts from large files and context pollution must be avoided.
This skill should be used when analyzing Claude Code session transcripts, reviewing agent performance, finding anti-patterns or tool misuse, detecting user frustration signals, mining workflow patterns, running kaizen analysis, debugging agent behavior, or performing session forensics. Provides JSONL schema (kaizen-analysis get_transcript_jsonl_schema or MCP resource kaizen://session-log/schema or references/jsonl-schema.md), arbitrary DuckDB SQL over JSONL via kaizen-duckdb execute_query, cookbook query patterns, 10 analysis dimensions, and PM4Py process mining methodology.
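To illustrate the kind of per-tool error tally the analysis dimensions cover, here is a minimal stdlib sketch that scans a session JSONL file and counts tool results flagged as errors. It deliberately avoids the kaizen-duckdb tooling so it runs anywhere; the field names (`message`, `content`, `tool_use`, `tool_result`, `is_error`) are assumptions about the transcript layout, so consult the plugin's jsonl-schema reference for the real schema.

```python
import json
from collections import Counter
from pathlib import Path

def tool_error_counts(jsonl_path):
    """Count erroring tool results per tool name in a session JSONL.

    Field names below are illustrative assumptions, not the plugin's
    documented schema: each JSONL entry is expected to carry a
    `message.content` list of blocks, where `tool_use` blocks name the
    tool and `tool_result` blocks reference them by `tool_use_id`.
    """
    errors = Counter()
    tool_names = {}  # tool_use id -> tool name
    for line in Path(jsonl_path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        for block in entry.get("message", {}).get("content", []) or []:
            if not isinstance(block, dict):
                continue
            if block.get("type") == "tool_use":
                tool_names[block.get("id")] = block.get("name", "?")
            elif block.get("type") == "tool_result" and block.get("is_error"):
                errors[tool_names.get(block.get("tool_use_id"), "?")] += 1
    return errors
```

The same aggregation is a one-line `GROUP BY` once the JSONL is loaded into DuckDB, which is why the skill prefers SQL for cross-session queries.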
Enforces orchestrator context window discipline via PreToolUse hooks and rules. Prevents the orchestrator from reading source files it will not edit, running diagnostic commands that should be delegated, and bypassing delegation with 'small change' rationalizations. Install to structurally prevent investigation escalation anti-patterns.
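A PreToolUse hook of this kind is just a command that receives the pending tool call as JSON on stdin and exits with code 2 to block it (stderr is fed back to the model). The sketch below shows the shape of such a guard; the `src/` and `lib/` path rule is an illustrative policy, not the plugin's actual rule set, and the payload keys assume the standard Claude Code hook contract (`tool_name`, `tool_input`).

```python
#!/usr/bin/env python3
"""Sketch of a PreToolUse hook that blocks the orchestrator from
reading source files it should delegate to a subagent instead."""
import json
import sys

# Illustrative delegation boundary; the real plugin ships its own rules.
BLOCKED_PREFIXES = ("src/", "lib/")

def decide(payload):
    """Return a block message for disallowed Read calls, else None."""
    if payload.get("tool_name") != "Read":
        return None
    path = payload.get("tool_input", {}).get("file_path", "")
    # Normalize a leading "./" before checking the prefix rule.
    if path.lstrip("./").startswith(BLOCKED_PREFIXES):
        return f"Blocked: delegate reading {path} to a subagent instead."
    return None

def main() -> int:
    message = decide(json.load(sys.stdin))
    if message:
        print(message, file=sys.stderr)
        return 2  # exit code 2 blocks the tool call
    return 0

# The installed hook entry point would run: sys.exit(main())
```

Because the block happens at the tool layer, a "just this one small change" rationalization in the prompt cannot bypass it.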
Self-evolving Claude Code system that learns from corrections, manages context, and improves every session
Audit and optimize Claude Code configurations with dynamic best-practice research
Core skills: ecosystem guide, skill creator, research patterns, session reflection, and plugin development. Includes UserPromptSubmit hook for forced skill evaluation.
Prompt engineering techniques for accurate, grounded Claude responses — anti-hallucination workflow with citation-backed analysis
OpenAI Codex MCP integration for Claude Code — audit, implement, verify, review, and debug via Codex
Uses power tools
Uses Bash, Write, or Edit tools
Professional workflow plugins for Claude Code — make Claude apply your project's actual linting rules, commit conventions, and testing standards, not generic defaults. Covers Python, shell, Perl, CI/CD, and AI tooling.
| Without plugins | With plugins |
|---|---|
| Claude gives generic Python advice | Claude applies Python 3.11+, Typer, Rich, httpx conventions specific to your stack |
| Claude says "done" before linters pass | holistic-linting enforces root-cause fixes before any task completes |
| Claude speculates and hallucinates | hallucination-detector blocks completion on ungrounded claims |
| Claude jumps to solutions without investigating | verification-gate forces evidence gathering before action |
| Session transcripts disappear with no learning | agentskill-kaizen mines transcripts for anti-patterns and generates skill patches |
| Commit messages are inconsistent | conventional-commits enforces feat/fix/chore format for semantic versioning |
| Claude reads source files when it should delegate | orchestrator-discipline hooks block investigation escalation at the tool level |
```shell
# Add the marketplace (one-time setup, ~10 seconds, no restart required)
/plugin marketplace add Jamie-BitFlight/claude_skills

# Install a plugin
/plugin install plugin-name@jamie-bitflight-skills
```
Start a new session and ask Claude to perform a task the plugin handles (for example, build a CLI with Typer after installing python3-development). Claude will apply the plugin's conventions rather than generic defaults.
Comprehensive frameworks with multiple skills, commands, and specialized agents.