By pskoett
Orchestrate parallel AI agent teams to implement multi-file features securely: run plan-interview alignment, frame intent with drift monitoring, and iteratively simplify, harden, and audit code until it compiles cleanly, all tests pass, and zero issues remain, logging errors and learnings to .learnings/ files for self-improvement.
npx claudepluginhub pskoett/pskoett-ai-skills

Monitors context window health by re-reading wave anchor artifacts and detecting drift signals. Spawnable by the context-surfing skill or standalone for periodic context health checks during long-running sessions. Read-only: inspects state but does not modify files.
Read-only security auditor that finds security and resilience gaps in modified files. Checks for input validation, error handling, injection vectors, auth/authz, secrets, data exposure, dependency risk, and race conditions. Reports findings with file, line, category, severity, attack vector, and specific fix. Use when auditing code changes for security hardening.
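As a sketch of the kind of structured finding such an auditor might report, here is a minimal record carrying the fields listed above. The field names and example values are illustrative assumptions, not the skill's actual schema:

```python
from dataclasses import dataclass

# Hypothetical finding record mirroring the fields the auditor reports.
@dataclass
class SecurityFinding:
    file: str
    line: int
    category: str       # e.g. "input-validation", "secrets", "injection"
    severity: str       # e.g. "low", "medium", "high", "critical"
    attack_vector: str  # how the gap could be exploited
    fix: str            # the specific remediation to apply

finding = SecurityFinding(
    file="api/upload.py",
    line=42,
    category="input-validation",
    severity="high",
    attack_vector="unsanitized filename allows path traversal",
    fix="validate the filename against an allowlist before writing to disk",
)
print(f"{finding.file}:{finding.line} [{finding.severity}] {finding.category}")
```

Keeping findings structured like this makes it easy for a downstream fix pass to sort by severity and address the highest-risk items first.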
Captures learnings, errors, and corrections to .learnings/ files. Spawnable by other skills or at session end to log quality/security findings, user corrections, command failures, or knowledge gaps. Can write to .learnings/LEARNINGS.md, ERRORS.md, and FEATURE_REQUESTS.md.
Read-only auditor that finds unnecessary complexity in modified files. Checks for dead code, naming issues, control flow, API surface, over-abstraction, and consolidation opportunities. Reports findings with file, line, category, severity, and specific fix. Use when auditing code changes for simplification opportunities.
Read-only spec auditor that finds gaps between implementation and spec/plan. Checks for missing features, incorrect behavior, incomplete implementation, contract violations, test coverage, and acceptance criteria gaps. Reports findings with file, line, category, spec reference, and severity. Use when verifying implementation completeness against a plan or spec.
Implementation + audit loop using parallel agent teams with structured simplify, harden, and document passes. Spawns implementation agents to do the work, then audit agents to find complexity, security gaps, and spec deviations, then loops until code compiles cleanly, all tests pass, and auditors find zero issues or the loop cap is reached. Use when: implementing features from a spec or plan, hardening existing code, fixing a batch of issues, or any multi-file task that benefits from a build-verify-fix cycle.
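The build-verify-fix cycle described above can be sketched roughly as follows. The function names and the loop cap are assumptions for illustration, not the skill's internals:

```python
# Minimal sketch of an implement -> audit -> fix loop with a hard iteration cap.
def run_loop(implement, audits, max_iterations=3):
    """implement(issues) builds or applies fixes; each audit() returns a list of issues."""
    issues = []
    for iteration in range(1, max_iterations + 1):
        implement(issues)  # first pass builds; later passes fix prior findings
        issues = [i for audit in audits for i in audit()]
        if not issues:     # all auditors report zero issues: done
            return iteration, []
    return max_iterations, issues  # loop cap reached with issues remaining

# Toy run: one seeded issue that the second pass resolves.
state = {"fixed": False}

def implement(issues):
    if issues:
        state["fixed"] = True

def audit():
    return [] if state["fixed"] else ["unvalidated input in handler"]

iterations, remaining = run_loop(implement, [audit])
print(iterations, remaining)  # converges on the second iteration with no issues left
```

The cap matters: without it, an auditor and an implementer that disagree could cycle indefinitely, so the loop terminates with its open findings reported rather than silently spinning.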
Monitors context window health throughout a session and rides peak context quality for maximum output fidelity. Activates automatically after plan-interview and intent-framed-agent. Stays active through execution and hands off cleanly to simplify-and-harden and self-improvement when the wave completes naturally or exits via handoff. Use this skill whenever a multi-step agent task is underway and session continuity or context drift is a concern. Especially important for long-running tasks, complex refactors, or any work where degraded context would silently corrupt the output. Trigger even if the user doesn't say "context surfing" — if an agent task is running across multiple steps with intent and a plan already established, this skill is live.
Frames coding-agent work sessions with explicit intent capture and drift monitoring. Use when a session transitions from planning/Q&A to implementation for coding tasks, refactors, feature builds, bug fixes, or other multi-step execution where scope drift is a risk.
Ensures alignment between user and Claude during feature/spec planning through a structured interview process. Use this skill when the user invokes /plan-interview before implementing a new feature, refactoring, or any non-trivial implementation task. The skill runs an upfront interview to gather requirements across technical constraints, scope boundaries, risk tolerance, and success criteria before any codebase exploration. Do NOT use this skill for: pure research/exploration tasks, simple bug fixes, or when the user just wants standard planning without the interview process.
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks. For CI-only/headless learning capture, use self-improvement-ci.
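A minimal sketch of how such an entry might be appended to .learnings/LEARNINGS.md. The entry format shown here is an assumption for illustration, not the skill's defined schema:

```python
from datetime import date
from pathlib import Path

def log_learning(learnings_dir: Path, category: str, summary: str, correction: str) -> str:
    """Append a dated learning entry; the markdown layout is illustrative only."""
    learnings_dir.mkdir(parents=True, exist_ok=True)
    entry = (
        f"## {date.today().isoformat()} [{category}]\n"
        f"- What happened: {summary}\n"
        f"- Correction: {correction}\n\n"
    )
    with (learnings_dir / "LEARNINGS.md").open("a", encoding="utf-8") as f:
        f.write(entry)  # append-only, so earlier entries are preserved
    return entry

entry = log_learning(
    Path(".learnings"),
    "user-correction",
    "Assumed the repo used pytest; it uses unittest.",
    "Check the test runner config before writing tests.",
)
print(entry.splitlines()[1])
```

Appending rather than rewriting keeps the file as a chronological log, which is what makes reviewing learnings before major tasks cheap.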
Post-completion self-review for coding agents that runs simplify, harden, and micro-documentation passes on non-trivial code changes. Use when: a coding task is complete in a general agent session and you want a bounded quality and security sweep before signaling done. For CI pipeline execution, use simplify-and-harden-ci.
Production-grade engineering skills for AI coding agents — covering the full software development lifecycle from spec to ship.
Executes bash commands
Hook triggers when Bash tool is used
Modifies files
Hook triggers on file write and edit operations
Uses power tools