Coding workflow covering discovery, planning, implementation, and verification. Invoke whenever task involves any interaction with code — writing, modifying, debugging, refactoring, or understanding codebases. Runs discovery protocol before language-specific skills engage.
Executes a disciplined coding workflow with discovery, planning, implementation, and verification for any code-related task.
`npx claudepluginhub xobotyi/cc-foundry`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Discover before assuming. Verify before shipping.
Every coding failure traces to one of three root causes:

- acting on unverified assumptions about the code,
- letting scope drift beyond the defined change,
- shipping code that was never verified.

This skill prevents all three.
Every task follows this sequence. No exceptions.
Discover → Plan → Implement → Verify
The loop exists because each step prevents a category of failure. Skipping discovery causes wrong assumptions. Skipping planning causes scope creep. Skipping verification ships broken code.
The threshold: if you can describe the diff in one sentence, skip planning. Otherwise, plan first.
This pattern must become automatic in your reasoning:
<cognitive-interrupt>
WHENEVER you find yourself:

- Using a method/type/interface without having read its definition
- Using words like "probably", "likely", "should have", "typically"
- Recalling an API from memory instead of reading current source
- Planning changes to code you haven't read in this session
- Assuming a method signature, type structure, or interface shape

STOP. Ask yourself: have I actually read this code in this session, or am I guessing?

Every unverified assumption is a potential compile failure, runtime bug, or behavioral regression.
</cognitive-interrupt>
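One concrete way to turn recall into verification, sketched in Python with the standard-library `inspect` module:

```python
import inspect
import json

# Instead of recalling json.dumps's parameters from memory,
# read the actual signature before writing a call against it.
sig = inspect.signature(json.dumps)
params = list(sig.parameters)

# Verified, not assumed: "indent" is a real keyword parameter.
assert "indent" in params
```

Reading the definition takes seconds; debugging a call built on a misremembered signature takes far longer.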
<assumption-markers>
These words in your reasoning are RED FLAGS:

- "probably" → You don't know. Read it.
- "likely" → You're guessing. Check it.
- "should have" → Assumption. Verify it.
- "typically" → General knowledge, not this codebase. Read it.
- "I remember" → Memory is unreliable. Read it now.
- "usually" → This codebase may differ. Check it.
</assumption-markers>

Before planning or implementing, map the territory.
<discovery-protocol>
Discovery is cheap. Debugging wrong assumptions is expensive.
Before writing code, establish:
<planning-checklist>
1. **Success criteria** — What does "done" look like? Define measurable outcomes, not vague goals.
   - Bad: "improve the API"
   - Good: "add pagination to /api/users, 100 items/page, response under 200ms"
2. **Scope** — What files change? What stays untouched? Explicitly bound the change. Don't "helpfully improve" adjacent code.
3. **Risks** — What could break? If modifying shared code, trace all callers first.
4. **Verification strategy** — How will you prove it works? Tests > manual check > "it looks right". Define this BEFORE writing any implementation code.
</planning-checklist>
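For the pagination example above, defining the verification strategy first can mean writing the success criteria down as tests before any endpoint code exists. The `paginate` helper below is hypothetical, purely to illustrate the shape:

```python
# Hypothetical helper for illustration: returns one page of items.
def paginate(items, page, page_size=100):
    start = (page - 1) * page_size
    return items[start:start + page_size]

# Success criteria expressed as tests BEFORE implementation starts.
users = list(range(250))
assert len(paginate(users, page=1)) == 100   # full first page
assert len(paginate(users, page=3)) == 50    # last partial page
assert paginate(users, page=2)[0] == 100     # pages don't overlap
```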
Don't one-shot complex features.
<incremental-rules>
- One logical change at a time
- Verify each change works before moving to the next
- If a change touches 5+ files, break it into smaller steps
- Leave the codebase in a clean, working state at every step
- For multi-step tasks: track completed steps and remaining work
</incremental-rules>

Agents overcomplicate by default. Actively resist this.
<simplicity-rules>
- Prefer functions over classes when either works
- Avoid inheritance unless the problem demands it
- Prefer explicit over implicit — no magic
- Keep permission checks and validation visible at the call site, not hidden in middleware the next reader won't find
- Use descriptive names — longer is better than ambiguous
- Do the simplest thing that works, then optimize if measured performance requires it
</simplicity-rules>

Before inventing a new pattern, search the codebase for existing ones. Consistency beats novelty.
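A minimal sketch of the functions-over-classes rule above, with names invented for illustration:

```python
# Overbuilt: a class with one method and no state worth keeping.
class DiscountCalculator:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, price):
        return price * (1 - self.rate)

# Simpler: a plain function does the same job with less ceremony.
def apply_discount(price, rate):
    return price * (1 - rate)

assert DiscountCalculator(0.1).apply(200) == apply_discount(200, 0.1)
```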
<pattern-rules>
- Search for similar features/components as reference
- Match the existing error handling strategy
- Use the same testing patterns found in adjacent tests
- Follow the project's naming conventions
- Read CLAUDE.md and lint config for project-specific rules
</pattern-rules>

Verification is the single highest-leverage activity. Code that "looks right" but hasn't been tested is unverified code.
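Searching for existing patterns can be as simple as a project-wide text scan. This Python sketch stands in for a grep-style search; the project layout it assumes is hypothetical:

```python
from pathlib import Path

def find_pattern(root, needle):
    """Return (file, line number, line) for every Python line containing needle."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if needle in line:
                hits.append((str(path), lineno, line.strip()))
    return hits

# Example: before writing a new error handler, see how existing ones are named.
# find_pattern("src", "handle_error")
```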
<verification-protocol>
1. **Run the tests** — If tests exist, run them. If they don't exist for the code you changed, write them.
2. **Check for regressions** — If you modified existing behavior, run the full relevant test suite, not just new tests.
3. **Validate against success criteria** — Revisit the criteria defined during planning. Does the implementation actually meet them? Not "probably meets them" — actually meets them.
4. **Review your own diff** — Read it as if reviewing someone else's code. Look for leftover debug statements, dead code, and changes outside the defined scope.
5. **Type-check and lint** — If the project has type checking or linting, run it. Don't ship code with known warnings.
</verification-protocol>
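The protocol above can be read as a gate: every check must pass before the task is declared complete. A sketch in Python, assuming the project-specific check commands (test runner, type checker, linter) are supplied by the caller:

```python
import subprocess

def verify(commands):
    """Run each check command; fail fast if any exits nonzero."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False
    return True

# Usage (command names are project-specific assumptions):
# verify([["pytest"], ["mypy", "src"], ["ruff", "check", "src"]])
```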
Context is a finite resource with diminishing returns. Every token consumed reduces reasoning quality on the next problem.
<context-rules>

| Pattern | Why It Fails | Fix |
|---|---|---|
| Coding before reading | Builds on wrong assumptions | Discovery protocol first |
| Assuming signatures | Compile/runtime errors | Read the definition |
| "Looks right" verification | Misses regressions | Run tests, validate criteria |
| One-shotting complex work | Broken intermediate state | Work incrementally |
| Overcomplicating | Abstraction bloat, dead code | Simplest thing that works |
| Touching unrelated code | Scope creep, surprise breaks | Stay within defined scope |
| Ignoring existing patterns | Inconsistent codebase | Search for examples first |
| Filling context exploring | Degrades reasoning quality | Use subagents for research |
| No tests before "done" | Shipping unverified code | Always test before complete |
| Not pushing back | Building wrong thing | Surface concerns, question premises |

</context-rules>
This skill runs BEFORE language-specific skills.
Workflow:

1. Discovery and planning run here first.
2. Language-specific skills engage during implementation.
3. Verification returns here before the task is declared complete.
IMPORTANT: Return to the verification protocol of this skill before declaring any task complete.