From the-coder
Coding workflow covering discovery, planning, implementation, and verification. Invoke whenever task involves any interaction with code — writing, modifying, debugging, refactoring, or understanding codebases. Runs discovery protocol before language-specific skills engage.
`npx claudepluginhub xobotyi/cc-foundry --plugin the-coder`

This skill uses the workspace's default tool permissions.
**Discover before assuming. Verify before shipping.**
Every coding failure traces to one of three root causes:

- Wrong assumptions made without reading the code
- Uncontrolled scope from skipping planning
- Unverified changes shipped without testing

This skill prevents all three.
Every task follows this sequence. No exceptions.
Discover → Plan → Implement → Verify
The loop exists because each step prevents a category of failure. Skipping discovery causes wrong assumptions. Skipping planning causes scope creep. Skipping verification ships broken code.
The threshold: if you can describe the diff in one sentence, skip planning. Otherwise, plan first.
This pattern must become automatic in your reasoning:
WHENEVER you find yourself:

- Using a method/type/interface without having read its definition
- Using words like "probably", "likely", "should have", "typically"
- Recalling an API from memory instead of reading current source
- Planning changes to code you haven't read in this session
- Assuming a method signature, type structure, or interface shape

STOP. Ask yourself: have you actually read this code in this session? If not, read it before proceeding.
Every unverified assumption is a potential compile failure, runtime bug, or behavioral regression.
These words in your reasoning are RED FLAGS:

- "probably" → You don't know. Read it.
- "likely" → You're guessing. Check it.
- "should have" → Assumption. Verify it.
- "typically" → General knowledge, not this codebase. Read it.
- "I remember" → Memory is unreliable. Read it now.
- "usually" → This codebase may differ. Check it.

Before planning or implementing, map the territory.
Discovery is cheap. Debugging wrong assumptions is expensive.
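When the temptation is to recall an API from memory, the reading can be a one-liner. A minimal sketch using Python's stdlib `inspect` (the `json.dumps` parameter here is just an illustration, not a claim about any particular codebase):

```python
import inspect
import json

# Don't assume json.dumps takes a sort_keys flag: read the signature.
params = inspect.signature(json.dumps).parameters
assert "sort_keys" in params  # verified against the source, not remembered
```

One line of discovery replaces a guess that could otherwise surface as a runtime error.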
Before writing code, establish:
Success criteria — What does "done" look like? Define measurable outcomes, not vague goals.
Bad: "improve the API"
Good: "add pagination to /api/users, 100 items/page, response under 200ms"
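Measurable criteria can often be encoded as executable checks before any real implementation exists. A self-contained sketch (the `paginate` helper and the 100-item page size are illustrative, not from a real codebase):

```python
PAGE_SIZE = 100  # from the success criterion: 100 items per page

def paginate(items, page):
    """Return one 1-indexed page of at most PAGE_SIZE items."""
    start = (page - 1) * PAGE_SIZE
    return items[start:start + PAGE_SIZE]

users = list(range(250))
assert len(paginate(users, 1)) == 100  # full page
assert len(paginate(users, 3)) == 50   # trailing partial page
assert paginate(users, 4) == []        # past the end
```

If a criterion can't be written as a check like this, it probably isn't measurable yet.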
Scope — What files change? What stays untouched? Explicitly bound the change. Don't "helpfully improve" adjacent code.
Risks — What could break? If modifying shared code, trace all callers first.
Verification strategy — How will you prove it works? Tests > manual check > "it looks right". Define this BEFORE writing any implementation code.
Don't one-shot complex features.
- One logical change at a time
- Verify each change works before moving to the next
- If a change touches 5+ files, break it into smaller steps
- Leave the codebase in a clean, working state at every step
- For multi-step tasks: track completed steps and remaining work

Agents overcomplicate by default. Actively resist this.
- Prefer functions over classes when either works
- Avoid inheritance unless the problem demands it
- Prefer explicit over implicit — no magic
- Keep permission checks and validation visible at the call site, not hidden in middleware the next reader won't find
- Use descriptive names — longer is better than ambiguous
- Do the simplest thing that works, then optimize if measured performance requires it

Before inventing a new pattern, search the codebase for existing ones. Consistency beats novelty.
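As a concrete illustration of the first preference (all names here are hypothetical):

```python
# A class that carries no meaningful state is ceremony:
class PriceFormatter:
    def __init__(self, currency):
        self.currency = currency

    def format(self, amount):
        return f"{self.currency}{amount:.2f}"

# A plain function says the same thing with less indirection:
def format_price(amount, currency="$"):
    return f"{currency}{amount:.2f}"

assert PriceFormatter("$").format(3.5) == format_price(3.5) == "$3.50"
```

Reach for the class only once there is real state or a second implementation to vary.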
- Search for similar features/components as reference
- Match the existing error handling strategy
- Use the same testing patterns found in adjacent tests
- Follow the project's naming conventions
- Read CLAUDE.md and lint config for project-specific rules

Verification is the single highest-leverage activity. Code that "looks right" but hasn't been tested is unverified code.
Run the tests — If tests exist, run them. If they don't exist for the code you changed, write them.
Check for regressions — If you modified existing behavior, run the full relevant test suite, not just new tests.
Validate against success criteria — Revisit the criteria defined during planning. Does the implementation actually meet them? Not "probably meets them" — actually meets them.
Review your own diff — Read it as if reviewing someone else's code. Look for leftover debug output, changes outside the stated scope, and inconsistencies with the surrounding style.
Type-check and lint — If the project has type checking or linting, run it. Don't ship code with known warnings.
Context is a finite resource with diminishing returns. Every token consumed reduces reasoning quality on the next problem.
This skill runs BEFORE language-specific skills.
Workflow: discovery and planning (this skill) → implementation (language-specific skill) → verification (this skill).
IMPORTANT: Return to the verification protocol of this skill before declaring any task complete.