Safe refactoring methodology. Behavior-preserving transformations only. Tests green before AND after. Triggers: refactor, clean, simplify, restructure, extract, inline.
This skill is limited to using the following tools:
reference/refactor-research.md
<core_principles>
Preserve behavior: a refactor changes structure, never observable behavior.
Tests green before AND after every transformation.
Small, atomic steps: one transformation per commit, each independently revertible.
</core_principles>
<common_refactors>
extract: pull repeated or oversized logic into a named function or module.
inline: collapse a single-use indirection back into its call site.
rename: replace vague identifiers with intention-revealing names.
simplify: flatten nested conditionals, delete dead code.
</common_refactors>
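A minimal before/after sketch of a behavior-preserving extract refactor (hypothetical names and data). The refactor is safe only if the same inputs produce the same outputs before and after:

```python
def subtotal_before(items):
    # BEFORE: price math written inline wherever a total is needed
    return sum(i["price"] * i["qty"] for i in items)

def line_total(item):
    # AFTER: the repeated expression extracted into a named unit
    return item["price"] * item["qty"]

def subtotal_after(items):
    return sum(line_total(i) for i in items)

# Behavior preservation check: both versions must agree on the same input.
items = [{"price": 3, "qty": 2}, {"price": 5, "qty": 1}]
assert subtotal_before(items) == subtotal_after(items) == 11
```

The assertion is the point: an extract is not done until the old and new paths are proven equivalent by the existing tests.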
<ai_code_cleanup> AI-generated code exhibits specific patterns that need cleanup (GitClear 2026).
The verification bottleneck: AI generates code faster than humans can review it. Refactoring must therefore stay reviewable: small, atomic, tested. </ai_code_cleanup>
<anti_patterns>
<block id="big_bang">Large refactors in one commit. Impossible to review or revert.</block>
<block id="no_tests">Refactoring without test coverage. You can't verify behavior preservation.</block>
<block id="feature_mixing">Adding features during a refactor. Separate concerns, separate commits.</block>
<block id="premature_abstraction">Abstracting before you have 3 concrete examples. Wait for patterns to emerge.</block>
<block id="refactor_while_there">"While I'm here, I'll also..." No. Separate contract.</block>
</anti_patterns>
<vibe_coding_crisis> From research (2026): developers increasingly accept AI output without understanding it. Refactored code is down 60%: velocity is winning over code health. Copy-paste has overtaken abstraction for the first time.
Refactoring is the antidote: systematically improve what AI generates. But it requires understanding the code. No understanding = no safe refactor. </vibe_coding_crisis>
<!-- Updated 2026-03-30: AI code review best practices, Claude Code best practices -->
<agentic_refactor_safety> When refactoring AI-generated code, additional risks apply:
Phantom abstraction: AI frequently creates abstractions that look useful but are used in only one place. Rule: if an abstraction has exactly one call site, inline it. Wait for a second use case before abstracting.
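A sketch of the phantom-abstraction rule with hypothetical names: a one-line wrapper with exactly one caller gets inlined, and behavior is checked before and after:

```python
# BEFORE: AI-generated indirection with exactly one call site.
def _normalize(name: str) -> str:
    return name.strip().lower()

def greet_before(name: str) -> str:
    return f"hello, {_normalize(name)}"  # the only caller of _normalize

# AFTER: the one-line body inlined at its single call site.
def greet_after(name: str) -> str:
    return f"hello, {name.strip().lower()}"

# Inlining is behavior-preserving: both versions must agree.
assert greet_before("  Ada ") == greet_after("  Ada ") == "hello, ada"
```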
Comment drift: AI comments often describe what the code WAS doing before an edit, not what it does now. Audit every comment for accuracy during the refactor — stale comments are worse than no comments.
Parallel agent conflicts: If multiple agents touched the same file, check git log for overlapping changes. The "final" file may be a merge artifact, not intentional code.
Scope creep detection: Before starting, write down EXACTLY what you're changing. After finishing, diff your changes against that list. Any extras are scope creep — revert them and open a separate task. </agentic_refactor_safety>
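The scope-creep check above can be sketched as a set difference: the planned file list versus the files actually changed (e.g. the output of `git diff --name-only`). File names here are hypothetical:

```python
def scope_creep(planned: set[str], changed: set[str]) -> set[str]:
    """Files changed but never planned: candidates to revert into a separate task."""
    return changed - planned

planned = {"src/parser.py", "tests/test_parser.py"}
changed = {"src/parser.py", "tests/test_parser.py", "src/utils.py"}

extras = scope_creep(planned, changed)
assert extras == {"src/utils.py"}  # revert this file; open a separate task for it
```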
<on_complete> agentdb write-end '{"skill":"refactor","type":"<extract|inline|rename|simplify>","files_touched":<N>,"tests_status":"green","behavior_changed":false}'
Record what was refactored and verify tests remained green throughout. </on_complete>
</skill>