Development workflow automation plugin: specify → open → execute pipeline with parallel research agents, hook-based guards, and PR state management.

```
npx claudepluginhub team-attention/hoyeon
```
All you need is requirements. A Claude Code plugin that derives requirements from your intent, verifies every derivation, and delivers traced code — without you writing a plan.
Quick Start · Philosophy · The Chain · Commands · Agents
AI can build anything. The hard part is knowing what to build — precisely.
Most AI coding fails at the input, not the output. The bottleneck isn't AI capability. It's human clarity. You say "add dark mode" and there are a hundred decisions hiding behind those three words.
Most tools either force you to enumerate them upfront, or ignore them entirely. Hoyeon does neither — it derives them. Layer by layer. Gate by gate. From intent to verified code.
You don't know what you want until you're asked the right questions.
Requirements aren't artifacts you produce before coding. They're discoveries — surfaced through structured interrogation of your intent. Every "add a feature" conceals unstated assumptions. Every "fix the bug" hides a root cause you haven't named yet.
Hoyeon's job is to find what you haven't said.
```
You say:      "add dark mode toggle"
                   │
Hoyeon asks:  "System preference or manual?"    ← assumption exposed
              "Which components need variants?" ← scope clarified
              "Persist where? How?"             ← decision forced
                   │
Result:       3 requirements, 8 sub-requirements, 4 tasks — all linked
```
This is not just process. It's built on three beliefs about how AI coding should work.
Get the requirements right, and the code writes itself. Get them wrong, and no amount of code fixes it.
Most AI tools jump straight to tasks — "create file X, edit function Y." But tasks are derivatives. They change when requirements change. If you start from tasks, you're building on sand.
Hoyeon starts from goals and derives downward through a layer chain:
Goal → Decisions → Requirements → Sub-requirements → Tasks
Requirements are refined from multiple angles before a single line of code is written. Interviewers probe assumptions. Gap analyzers find what's missing. UX reviewers check user impact. Tradeoff analyzers weigh alternatives. Each perspective sharpens the requirements until they're precise enough to generate verifiable sub-requirements.
The chain is directional: requirements produce tasks, never the reverse. If requirements change, sub-requirements and tasks are re-derived. This is why Hoyeon can recover from mid-execution blockers — the requirements are still valid, only the tasks need adjustment.
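The one-way derivation described above can be sketched as plain data: tasks carry references up the chain, and when requirements change, tasks are re-derived rather than patched. A minimal illustration — class and field names here are hypothetical, not Hoyeon's actual internals:

```python
from dataclasses import dataclass, field

# Hypothetical model of the layer chain: each layer references the one above.
@dataclass
class Requirement:
    id: str
    text: str

@dataclass
class Task:
    id: str
    fulfills: list[str]  # requirement ids this task traces to

@dataclass
class Spec:
    goal: str
    requirements: dict[str, Requirement] = field(default_factory=dict)
    tasks: list[Task] = field(default_factory=list)

    def rederive_tasks(self) -> None:
        # Requirements changed: drop tasks whose trace is broken, keep the rest.
        # Derivation is one-way -- tasks never feed back into requirements.
        self.tasks = [t for t in self.tasks
                      if all(r in self.requirements for r in t.fulfills)]

spec = Spec(goal="add dark mode toggle")
spec.requirements["R1"] = Requirement("R1", "Toggle switches theme")
spec.tasks = [Task("T1", fulfills=["R1"]), Task("T2", fulfills=["R9"])]
spec.rederive_tasks()              # T2 traced to a removed requirement
print([t.id for t in spec.tasks])  # → ['T1']
```

The point of the sketch: a mid-execution blocker invalidates tasks, not requirements, so recovery is a re-derivation from the still-valid layer above.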
LLMs are non-deterministic. The system around them doesn't have to be.
An LLM given the same prompt twice may produce different code. This is the fundamental challenge of AI-assisted development. Hoyeon's answer: constrain the LLM with programmatic control so that non-determinism doesn't propagate.
Three mechanisms enforce this:
spec.json as single source of truth — Every agent reads from and writes to the same structured spec. No agent invents its own context. No information lives only in a conversation. The spec is the shared memory that survives context windows, compaction, and agent handoffs.
CLI-enforced structure — hoyeon-cli validates every merge to spec.json. Field names, types, required relationships — all checked programmatically before the LLM ever sees the data. The CLI doesn't suggest structure; it rejects invalid structure.
Derivation chain as contract — Goal → Decisions → Requirements → Sub-requirements → Tasks are linked. Each layer references the one above it. A sub-requirement traces to a requirement. A task traces to requirements via fulfills. If the chain breaks, the gate blocks. This means: if you have valid requirements, the system will produce a result — deterministically routed, even if the LLM's individual outputs vary.
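A gate like this can be checked mechanically. The sketch below illustrates the idea of the chain-integrity check — it is not hoyeon-cli's actual code, and apart from `id`, `behavior`, and `fulfills` (which appear in the source), the spec field names are assumptions:

```python
# Hypothetical gate: every sub-requirement must trace to an existing
# requirement, and every task's `fulfills` list must resolve.
def chain_is_intact(spec: dict) -> bool:
    req_ids = {r["id"] for r in spec.get("requirements", [])}
    for sub in spec.get("sub_requirements", []):
        # "R1.1" traces to parent requirement "R1"
        if sub["id"].rsplit(".", 1)[0] not in req_ids:
            return False
    for task in spec.get("tasks", []):
        if not set(task.get("fulfills", [])) <= req_ids:
            return False
    return True

spec = {
    "requirements": [{"id": "R1"}],
    "sub_requirements": [{"id": "R1.1", "behavior": "..."}],
    "tasks": [{"id": "T1", "fulfills": ["R1"]}],
}
print(chain_is_intact(spec))  # → True
```

Because the check is a pure function of the spec, it can run on every merge: a broken trace blocks the gate regardless of what the LLM produced.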
The LLM does the creative work. The system ensures it stays on rails.
If a human has to check it, the system failed to automate it.
Every sub-requirement in spec.json is a testable behavioral statement:
```json
{
  "id": "R1.1",
  "behavior": "Clicking dark mode toggle switches theme to dark"
}
```
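"Rejects invalid structure" can be made concrete with a structural check on a sub-requirement. This is an illustrative sketch in the spirit of that validation, not hoyeon-cli's actual rules — the id pattern and error messages are assumptions; `id` and `behavior` come from the example above:

```python
import re

# Hypothetical structural check: a sub-requirement needs a dotted id
# ("R1.1") and a non-empty behavior statement.
SUB_ID = re.compile(r"^R\d+\.\d+$")

def validate_sub_requirement(sub: dict) -> list[str]:
    errors = []
    if not SUB_ID.match(sub.get("id", "")):
        errors.append("id must match R<n>.<m>, e.g. 'R1.1'")
    if not sub.get("behavior", "").strip():
        errors.append("behavior must be a non-empty statement")
    return errors

ok = {"id": "R1.1", "behavior": "Clicking dark mode toggle switches theme to dark"}
bad = {"id": "R1", "behavior": ""}
print(validate_sub_requirement(ok))   # → []
print(validate_sub_requirement(bad))  # two error messages
```

Returning a list of errors rather than raising on the first one lets a CLI report everything wrong with a merge in a single pass.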