By bordenet
Detects AI slop in docs and code via 300+ patterns with 0-100 scoring, then rewrites it using GVR loops and 11 strategies. Also orchestrates wiki overhauls, enforces git commit quality gates with reviews and audits, secures repos against secrets and vulnerabilities, and automates dev workflows across 89 skills.
npx claudepluginhub bordenet/superpowers-plus --plugin superpowers-plus

Blast radius analysis - search for ALL usages before modifying any existing code. Prevents breaking unrelated consumers by scoping impact before scoping the fix.
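A minimal sketch of the usage search this implies, using plain grep and a hypothetical symbol name (calculateTotal):

```sh
# Enumerate every consumer of the symbol before changing it;
# the fix is scoped only after all hits are accounted for.
grep -rn "calculateTotal" --include="*.ts" --include="*.tsx" . \
  | grep -v node_modules
```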
You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation.
Pull gate — MANDATORY before any work on an existing shared branch. git fetch + status check before touching code, running tests, or making changes. Fires whenever resuming, continuing, or reviewing work on a branch that exists on a remote.
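A minimal sketch of that gate in plain git, assuming the remote is named origin:

```sh
# Refresh remote refs, then check whether the local branch is behind
# before touching code or running tests.
git fetch origin
git status -sb   # a "[behind N]" marker means pull before working
```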
Use when reviewing code changes to dispatch parallel specialized reviewers instead of a single monolithic review — provides deeper, more precise findings across 5 focused lenses. Invoke as: /sp-cr-battery [min-score] (optional 1.0–10.0 quality threshold, default 7.0)
Use when Biome reports noExcessiveCognitiveComplexity, when functions have deeply nested conditionals (3+ levels), or when refactoring large functions.
Use when selecting a design approach for a feature or significant change. Enforces generation of 3+ distinct options, structured comparison, harsh review, and edge-case brainstorming before committing to a design. Self-assessment trigger: invoke before committing to any architecture (see When to Use in skill body). NOT for brainstorming (idea exploration) or writing plans (execution).
PREVIEW - Conductor-led bounded investigation for complex distributed system incidents. Serial or parallel branches. Produces incident packets. Persistence tooling not yet implemented.
Synthesizes evidence from all investigator branches into a root cause verdict. Builds reasoning trees, detects contradictions, weighs evidence strength over agent count, and produces a ranked diagnosis. Dispatched by debug-conductor.
DEFAULT WORKFLOW for ANY code change. Orchestrates the full rigorous development lifecycle: brainstorming → think-twice → debate → progressive-harsh-review → plan-and-execute → progressive-harsh-review → commit. This fires AUTOMATICALLY for code changes unless the user explicitly opts out. Sequences existing skills so no phase is skipped.
Use when renaming fields, changing API contracts, or refactoring data models across multiple services. Prevents incomplete dependency analysis.
Use when implementation is complete, all tests pass, and you need to decide how to integrate the work. Mandates autonomous code review (Step 0) before presenting options. Guides completion of development work by presenting structured options for merge, PR, or cleanup.
Use when running git checkout -b, git switch -c, git worktree add -b, or any command that creates a new work branch — enforces semantic prefix naming.
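For illustration, assuming a conventional feat/fix/chore prefix scheme (the skill's exact prefix list isn't reproduced here):

```sh
# Each new branch carries a semantic prefix naming its purpose.
git checkout -b feat/login-rate-limit
git switch -c fix/null-deref-on-empty-cart
git worktree add -b chore/bump-ci-runners ../ci-work
```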
Use when implementing large issues across multiple sessions. Creates and maintains a living progress document that tracks completed work, decisions, refinements, and findings.
Specialized investigator for diagnosing infrastructure, configuration, and deployment failures: config changes, resource exhaustion, deployment regressions, cloud provider issues, and environment mismatches. Dispatched by debug-conductor.
Persists debugging investigation context (hypotheses, evidence, eliminated approaches) across sessions. Companion to systematic-debugging. Use when starting, resuming, or handing off a multi-turn debugging session.
Specialized investigator for diagnosing LLM/prompt behavior issues: tool selection failures, prompt regressions, context window problems, and parsing failures. Dispatched by debug-conductor as part of forked debugging.
Per-batch adversarial review for ANY code change. 3 critic personas score on 5 dimensions each. Score <8 average = REJECT + rework. Faster than full review, more rigorous than lint.
Hard gate. Use BEFORE describing, summarizing, or approving any generated output — files, PDFs, API responses, script results. Fires on the ACTION PATTERN of generating output then describing it. Prevents confabulation disguised as verification. If there is no tool call (view, read, grep, open) between generate and describe, the description is fiction. Fires BEFORE verification-before-completion.
Pre-commit quality gate - run lint, typecheck, test LOCALLY before committing. Prevents wasted CI time and embarrassing build failures.
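As a sketch, assuming an npm project whose package.json defines lint, typecheck, and test scripts:

```sh
# Run the full local gate; && aborts on the first failing step.
npm run lint && npm run typecheck && npm test
```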
Use when: committing or pushing code changes. Mandatory progressive review loop via sub-agent-code-reviewer. Skip only when the user explicitly says to skip review.
Multi-persona adversarial review for non-code deliverables (plans, skills, documents, designs after debate). Simulates 3 critic personas scoring on correctness, simplicity, testability, edge cases, and security. Score <7 = REJECT. Self-assessment trigger: invoke before presenting any non-code deliverable (see When to Use in skill body). For code PRs, use code-review-battery instead.
Code review gate - apply engineering rigor when reviewing PRs. Trace data flow, check blast radius, verify integration points.
Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
Specialized investigator for testing hypotheses through reproduction attempts. Designs experiments, executes controlled tests, and reports whether a hypothesis can be confirmed or rejected. Dispatched by debug-conductor.
Use when presenting code changes to a human for review — dispatches code-review-battery. Self-fires on intent to present (see Cardinal Rule in skill body). Skips battery dispatch if valid sentinel exists for clean HEAD. Always runs battery before presenting, never after.
Use when validating feature requirements before design or implementation. Tests each requirement for falsifiability, measurability, and independence. Detects contradictions and guides resolution without resolving silently.
Proactive adversarial bug hunt — dispatches a parallel explore sub-agent to read the codebase with an adversarial mindset, then independently verifies each candidate to catch false positives and missed findings. Returns N worst bugs ranked by severity with exact file, line, mechanism, and failure mode. Use when you want to proactively find the highest-impact bugs in a codebase, not when debugging a known failure (use sp-debug for that).
Specialized investigator for diagnosing state consistency failures: replication lag, cache staleness, event ordering issues, cross-service data divergence, and eventual consistency bugs. Dispatched by debug-conductor.
Use when executing implementation plans with independent tasks in the current session
Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
Use when implementing any feature or bugfix, before writing implementation code
Specialized investigator for reconstructing incident timelines from distributed traces, logs, deployments, and metrics. Produces structured TimelineEvidence for the debug conductor. NOT a standalone skill — dispatched by debug-conductor as part of forked debugging.
Unified quality gate for commit and push: lint/build/test, style, adversarial code review, language audit, and IP scan. For push: adds sentinel check and proof-of-output requirement. Replaces all 5 individual gate skills plus pre-push-quality-gate.
Use before claiming any work is complete, fixed, or passing — and before writing any response that presents results to a human. Requires evidence before assertions. If code was changed, check battery sentinel (or dispatch battery) before the completion claim. See AUTO-FIRE section in skill body for self-assessment trigger conditions.
⚠️ EXPERIMENTAL - Write comprehensive context-free prompts before analyzing code. Validated in 20-round experiment but NOT production-ready. Always verify outputs manually.
Use when creating issues in your project tracker. Enforces formatting standards, required fields, label validation, duplicate checking.
Use BEFORE posting any comment or update to issue tickets. Prevents fabricated investigation summaries, status updates, and unverified claims. Evidence before assertion — no claims without citations.
Use when updating issues in your project tracker. Enforces fetch-before-edit workflow to prevent stale updates, validates field changes, detects concurrent modifications.
Use when adding URLs to issue descriptions or comments. Verifies all links before posting to prevent broken references.
Use when referencing issues in documentation, commits, or PRs. Verifies issue identifiers exist, validates cross-references.
Detect incomplete work in repositories from AI assistant crashes, context exhaustion, or mid-implementation distractions. Use before claiming work complete or when auditing accumulated debt.
Self-improvement cycle: scans session logs, failure autopsies, and decision logs for recurring patterns. Auto-generates skill updates or new skills. Tracks improvement metrics over time.
Use BEFORE claiming any audit, refactoring, or bulk-edit task is complete. Enforces exhaustive scope enumeration, item-by-item tracking, automated validation, and coverage metrics. Prevents incomplete work from being marked as done.
Post-mortem analyzer for incorrect assumptions and failed approaches. Produces root cause analysis (5-Why), pattern detection, and preventive actions. INVOKE after any approach that turned out wrong.
Verify ALL aspects of repository health before claiming work is complete. Checks CI workflows, GitHub Pages deployment, and any other workflows that affect repo status.
HARD GATE — Forces cross-validation, completeness verification, and confidence qualification before reporting ANY metric or percentage.
Structural lint for skill files: validates YAML frontmatter has required fields, checks line count limits, and enforces coordination metadata (group, order, internal) as errors. Reports missing Failure Modes sections as warnings. Does NOT check runtime behavior (use superpowers-doctor for that).
Industrial-grade integrity check for the local skill ecosystem. Iterates across EVERY installed skill with 27 harsh diagnostic checks spanning 4 severity tiers. Finds broken YAML, name mismatches, dead references, trigger collisions, orphaned installs, oversized skills, content corruption, reference file drift, CRLF line endings, UTF-8 BOM, structural defects, stale/dirty managed checkouts, TODO archive regressions, reviewer-dispatch rendering issues, and agent content drift. Modeled after brew doctor.
Use when investigating bugs, inconsistencies, conducting any search/grep task, OR when the user requests rigorous/thorough/comprehensive analysis. Routed to by thinking-orchestrator for confirmation-bias, negative-finding, and depth-challenge triggers. Prevents confirmation bias by forcing search for the WRONG thing, not just confirming the RIGHT thing exists.
Meta-orchestrator that auto-detects required skill chain, executes with quality gates between steps, and auto-retries on failures. User says "build X" — this handles brainstorming through verification.
Use when acting as the reviewer agent for a ~/.codex/superpowers-review/ request.md → response.md file protocol handoff
Use when designing a new superpowers skill family from scratch — orchestrates the full research → brainstorm → harsh review → prioritize → document cycle. Produces a prioritized skill roster, architecture decision, infrastructure map, and blocker list. Does NOT build skills.
Enforce coding standards before any commit. Checks shebang, error handling, help flags, verbose flags, line limits, ShellCheck compliance, and syntax validation.
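A header that would satisfy those checks might look like this sketch (not the skill's exact template):

```sh
#!/usr/bin/env bash
# Fail fast on errors, unset variables, and pipeline failures.
set -euo pipefail

usage() { echo "Usage: $0 [--verbose] <input>"; }

VERBOSE=0
case "${1:-}" in
  -h|--help) usage; exit 0 ;;
  --verbose) VERBOSE=1; shift ;;
esac
```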
Use when a primary implementation plan has identified risks that could invalidate the approach. Generates machine-agnostic fallback TODOs for the top 2-3 risks, each with enough context for a different agent to execute cold.
Use when initializing a new git repo with AI guidance, upgrading existing repos with inadequate AI guidance, or when user says "set up AI guidance" or "add AGENTS.md" - detects repo state and offers appropriate workflow.
Use when: user wants a single high-conviction innovation answer — the smartest, most radically innovative, accretive, useful, and compelling addition to this project right now. Skip when: incremental ideas (brainstorming), bug fixes (systematic-debugging), implementation planning (plan-and-execute), or ops/repair tasks.
Use when sending work to a separate reviewer agent or executing reviewer findings via the ~/.codex/superpowers-review/ request.md → response.md file protocol
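A sketch of the handoff, assuming the protocol directory already exists; the polling step is an assumption, not documented behavior:

```sh
# Write the review request, then wait for the reviewer's response file.
cat > ~/.codex/superpowers-review/request.md <<'EOF'
<review request body>
EOF
until [ -f ~/.codex/superpowers-review/response.md ]; do sleep 5; done
```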
General-purpose orchestrator for challenge → plan → stress-test → phased execution. Produces a plan, stress-tests it, then enrolls each phase as an autonomous TODO with deliverables, success criteria, and built-in quality gates. Between phases, runs structured retrospectives that drive improvements into all upcoming TODOs.
HARD GATE — Forces quantitative evaluation with a decision matrix before ANY question to the user. If the agent can score options numerically, it MUST choose highest-scoring and PROCEED. Only escalate when top 2 score within 10% AND decision is irreversible.
The genesis capability — create new skills from natural language descriptions, observed patterns, or codebase analysis. Makes superpowers-plus self-extending.
Dynamically enumerates ALL installed skills at runtime, distinguishing superpowers (auto-triggered) from explicit skills. Never stale — always reflects current installation.
Helps the AI coding assistant break out of spirals and stuck loops. Routed to by thinking-orchestrator for stuck-loop and circular-reasoning triggers. When triggered (by user or self-detection), pauses to consult a fresh sub-agent with zero shared context.
Hub skill for thinking and metacognition. Routes to the correct thinking skill based on context — adversarial-search, think-twice, verification-before-completion, exhaustive-audit-validation, or completeness-check. Load this skill when ANY thinking trigger fires; it will dispatch to the right child.
Low-level archive engine for completed tasks in TODO.md. Companion to todo-management; routine housekeeping should usually go through todo-maintenance.sh.
Use when: detecting deferral language in agent output. Captures loose ends immediately via todo-crud.sh and blocks completion claims if unresolved items exist. ENFORCEMENT, not CRUD.
Use when capturing tasks, tracking work, triaging priorities, querying task history, or executing multi-step plans.
Updates superpowers-plus to the latest version, reruns the install cascade (superpowers-core fork → superpowers-plus → configured overlays), and verifies with sp-doctor. Supports --branch to update a specific superpowers-plus branch.
Use when extracting domain knowledge from a user through structured interviewing to produce a written artifact (wiki page, reference doc, problem space overview). NOT for feature design — use brainstorming for that.
Use when user asks to incorporate, merge, or add external research (from Perplexity, web searches, ChatGPT, etc.) into existing documents - prevents misinterpreting "incorporate" as "review", strips artifacts, preserves document voice, and confirms scope before editing.
Invoke when stuck (2+ failed attempts, uncertainty, or guessing) OR manually to research technical/domain questions via Perplexity MCP. ALWAYS announce invocation and track stats.
Audit public repositories for proprietary IP before commit/push. Prevents leakage of internal references, URLs, ticket IDs, and confidential content to public repositories regardless of hosting platform (GitHub, GitLab, Bitbucket, Codeberg, SourceHut, self-hosted, etc.).
Use when asked to audit a git repository for security issues, check for secrets or credentials in code, scan for dependency vulnerabilities, or review a repo's security posture. Use instead of writing ad-hoc scanning scripts. Covers Python, Node.js, Go, Rust, and shell projects.
Use when you need to scan project dependencies for CVEs, upgrade vulnerable packages, validate that everything still compiles and passes tests, then commit and push the fixes - works with npm, Go, Python, Rust, and Flutter projects.
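For the npm case, a hedged sketch of the scan-upgrade-validate loop (Go, Python, Rust, and Flutter have analogous commands):

```sh
# Scan for known CVEs, apply upgrades, then re-validate before pushing.
npm audit --audit-level=high
npm audit fix
npm test && git commit -am "chore: upgrade vulnerable dependencies" && git push
```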
Use when adding repository links, code references, internal wiki links, or external URLs to documentation. Invoke BEFORE writing any link to prevent hallucination. Also invoked by wiki-orchestrator as HARD GATE (Stage 3, after content generation, before publish).
Use when wiki pages have been edited multiple times and may contain duplicated sections, obsolete content, or structural defects. Runs as Stage 2.5 in wiki-orchestrator pipeline (between Content Generation and Link Verification). Also available standalone. Gate type is ADVISORY with escalation for HIGH severity.
Use when wiki content contains factual claims about decisions, timelines, who-said-what, or technical facts that could be fabricated. Verifies against git history, issue tickets, meeting transcripts, and PRs. Invoked by wiki-orchestrator as ADVISORY gate.
Deterministic structural markdown gate for wiki publishing. Catches malformed tables, escaped wiki-link artifacts, unbalanced code or callout fences, heading hierarchy defects, and missing TOC on manual-TOC platforms with 4+ headings before publish.
Orchestrates BULK and MULTI-PAGE documentation projects — reorganizing multiple pages, cross-referencing across sections, publishing coordinated updates. Runs quality pipeline (de-dup, link-verification, secret-scan, slop-detection, fact-check). NOT for single-page edits (use platform-specific editing skills from _adapters/).
Conductor skill for full wiki refactoring. Orchestrates a 7-phase pipeline — discovery, deduplication, information architecture, writing plan, progressive rewrite + review, quality metrics, and safe delivery. Enforces PRD protection (hard gate), human checkpoint after planning, scope caps, and content snapshot/drift detection.
Use when scanning wiki pages for exposed secrets, after security incidents, or during periodic security reviews. Detects credentials, API keys, and tokens that may have been published before secret detection.
Use when wiki pages reference codebase details (versions, repos, configs) that may drift. Verifies claims against authoritative sources and auto-applies fixes by default.
Use when analyzing text to calculate a slop score (0-100) that measures AI slop density. Read-only analysis — does NOT rewrite text (use eliminating-ai-slop for rewrites). Invoke for CVs, cover letters, marketing copy, drafts, tooltip definitions, documentation prose, or any text where you need to quantify machine-generated patterns before deciding whether to edit.
Use when writing or editing ANY prose a human will read. Covers messaging (Teams, Slack, Discord), email, social/professional (LinkedIn, Twitter), documentation (wiki, README, commits, PRs), and business writing (meeting notes, status updates, tickets). Operates in interactive mode (confirms before rewriting) or automatic mode (GVR loop). Does NOT fire for AI-to-AI content (prompts, system instructions, agent config).
Enforces best practices for Markdown table construction. Invoke when deciding table vs list format, or when formatting multi-column data. Prevents visual noise, redundancy, and accessibility issues.
Use when writing plans, roadmaps, or phased work to enforce quality gates — prevents fabricated timelines, ensures dependency ordering, and requires exit criteria.
HARD GATE — Scans content for profanity and unprofessional language before publishing to wiki or committing user-facing documentation.
Use when creating or updating README.md files - enforces best practices, applies AI slop detection, and uses a quickstart-first structure.
Use when reviewing skill files for prose quality, markdown formatting, and style conventions. NOT for creating new skills — see Creation Checklist within this skill.
Documentation and authoring workflow router: audit docs vs code drift, sync docs after changes, optimize prompts and SKILL.md files, validate GLFM and Markdown formatting, summarize files/URLs/images with fidelity enforcement. Use when: docs are out of date, CLAUDE.md needs improving, SKILL.md needs optimizing, checking if documentation matches code, summarizing files or URLs.
Agentic development framework for Claude Code — disciplined workflow routing, TDD enforcement, safety hooks, systematic debugging, and code review
No description provided.
Verification-first engineering toolkit for Claude Code. 15 skills across a 5-phase spine (Investigate → Design → Implement → Verify → Ship), 8 specialist agents, and an interactive setup wizard. Every skill has rationalizations + evidence requirements. Built for senior ICs and tech leads.
Metacognitive advisor that monitors Claude Code and provides feedback
Self-improving AI workflow system. Crystallize requirements before execution with Socratic interview, ambiguity scoring, and 3-stage evaluation.