Use when asking about the ADE build process, how the generator works, iteration strategy, "why is the generator doing X", "how does building work", "commit conventions", "pivot vs refine", or understanding the implementation phase of the ADE workflow. This skill covers the Generator's methodology for implementing features from approved plans, including the 4-phase build cycle and strategic iteration.
Install: npx claudepluginhub alexsds/ade-workflow --plugin ade
This skill uses the workspace's default tool permissions.
The Generator is the builder in the ADE workflow. It implements features from an approved product plan one at a time, committing each to git, and hands off to the Evaluator for scoring.
Anthropic's research produced a key finding about generators: self-evaluation before handoff is important but insufficient. The Generator checks that things work, but does not judge quality; that separation is the core of the harness design.
Read the active plan from .ade/docs/plans/ and identify the current feature to implement.
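A minimal sketch of this step, assuming plans are markdown files with a YAML-style frontmatter block in .ade/docs/plans/ (the directory is from the text; the file extension, frontmatter delimiters, and "most recently modified plan is active" rule are assumptions):

```python
from pathlib import Path

PLANS_DIR = Path(".ade/docs/plans")

def read_frontmatter(text: str) -> dict:
    """Parse a simple frontmatter block delimited by '---' lines."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return meta

def load_active_plan(plans_dir: Path = PLANS_DIR):
    """Return (frontmatter, full text) of the most recently modified plan."""
    candidates = sorted(plans_dir.glob("*.md"), key=lambda p: p.stat().st_mtime)
    if not candidates:
        raise FileNotFoundError(f"no plan files in {plans_dir}")
    text = candidates[-1].read_text()
    return read_frontmatter(text), text
```

The frontmatter is also where the jira ticket ID lives when that commit style is configured (see the commit conventions below).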
Build the feature. Figure out the technical approach yourself — the plan gives deliverables, not implementation details. This is by design: Anthropic found that micro-specified technical plans create more problems than they solve because incorrect assumptions cascade through every level.
Quick sanity check before handoff:
This is NOT a quality evaluation — that's the Evaluator's job. Self-verify catches obvious breakage so the Evaluator's time isn't wasted on features that don't even load.
Message the Evaluator via SendMessage:
Feature: [feature name]
Status: Ready for review
Files changed: [list of key files]
How to test: [specific steps — URL, navigation, test data to use]
What I built: [summary of implementation approach]
Be specific in "How to test" — the Evaluator needs to know exactly how to interact with the feature (URL, click path, form data, expected behavior).
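The handoff body can be assembled mechanically from the template above. This sketch only formats the message; the delivery mechanism (SendMessage) is part of the ADE harness and is not reproduced here, and the function name is hypothetical:

```python
def handoff_message(feature, files_changed, how_to_test, summary):
    """Format the Generator -> Evaluator handoff message body.

    Field names mirror the handoff template: feature, status, files
    changed, how to test, and a summary of what was built.
    """
    return "\n".join([
        f"Feature: {feature}",
        "Status: Ready for review",
        f"Files changed: {', '.join(files_changed)}",
        f"How to test: {how_to_test}",
        f"What I built: {summary}",
    ])
```

Filled in, the "How to test" field should read like a script the Evaluator can follow, e.g. "open /login, submit the demo credentials, expect a redirect to /dashboard".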
After receiving Evaluator feedback, make a strategic decision:
Refine: choose this when scores are trending upward, improving each iteration.
Pivot: choose this when scores are stagnant or the same issues keep recurring after 2-3 iterations.
The pivot decision is critical. The natural instinct is to keep refining, but Anthropic's research shows that pivoting to a new approach often produces breakthrough improvements that incremental refinement cannot. The Evaluator's feedback contains signals about whether refinement or pivot is needed — stagnant scores across iterations are the clearest signal to pivot.
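The refine-or-pivot decision above can be sketched as a function of recent Evaluator scores. The window size and minimum-gain threshold are illustrative assumptions, not part of the ADE spec:

```python
def next_strategy(scores, window: int = 3, min_gain: float = 0.5):
    """Suggest 'refine' or 'pivot' from a history of Evaluator scores."""
    if len(scores) < window:
        return "refine"  # not enough signal yet; keep iterating
    recent = scores[-window:]
    # Stagnant or declining scores across the window are the clearest
    # signal to abandon incremental refinement and try a new approach.
    if recent[-1] - recent[0] < min_gain:
        return "pivot"
    return "refine"
```

For example, a history like [5.0, 6.0, 7.5] keeps refining, while [6.0, 6.0, 6.1] suggests a pivot.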
Read commit style from .claude/ade.local.md:
conventional (default):
feat(auth): add login flow
fix(dashboard): correct chart rendering
refactor(api): simplify error handling
jira:
DEV-123 Add login flow
DEV-123 Fix chart rendering on dashboard
When using jira style, read the ticket ID from the plan frontmatter.
Commit after each feature with a descriptive message. Each commit should represent one complete, working feature.
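The two commit styles can be sketched as a small formatter. The function and parameter names are hypothetical; the output shapes mirror the conventional and jira examples above, with the ticket ID supplied from the plan frontmatter when the jira style is active:

```python
def commit_message(style, summary, scope=None, kind="feat", ticket=None):
    """Build a commit message in the configured style."""
    if style == "jira":
        if not ticket:
            raise ValueError("jira style needs a ticket ID from the plan frontmatter")
        # jira style capitalizes the summary, e.g. "DEV-123 Add login flow"
        return f"{ticket} {summary[0].upper()}{summary[1:]}"
    # conventional (default), e.g. "feat(auth): add login flow"
    prefix = f"{kind}({scope})" if scope else kind
    return f"{prefix}: {summary}"
```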