From hcf
Creates structured TDD implementation plans with task breakdowns, dependencies, and test requirements for new features, multi-file changes, or multi-step tasks.
npx claudepluginhub markshust/hcf --plugin hcf

This skill uses the workspace's default tool permissions.
Transform feature requests into structured plans with task breakdowns, dependencies, and TDD requirements for parallel autonomous execution. Plans are generated from specs before coding, broken into bite-sized 2-5 minute steps that cover exact file paths, failing tests, code, command runs, and git commits. Integrates with conductor tracks.
Approach this phase like a solution architect meeting with a client to flesh out scope and requirements. Your job is not to ask everything — it's to think hard about the shape of the solution before asking grounded questions. Find the integration points, then enumerate the design axes the user probably hasn't thought about.
Quick scope check first. If the ask already specifies data model, scope boundaries, and integration points (e.g., "Add an email_verified_at column to users and a SendVerificationEmail job"), do a brief existence-check (do the named files/models exist?) and skip ahead to Phase 2 — the brainstorm below is for under-specified asks.
1. Codebase discovery
Read files specifically related to the ask. Architecture context is already loaded via the <architecture> block at the top of this skill — do not re-read .claude/architecture.md. Instead:
- search for files and symbols related to the ask (e.g., book*, *Book*, library/reading-list files)

2. Permutation & assumption brainstorm
For each noun and verb in the ask, enumerate plausible interpretations. Then enumerate the hidden axes — design decisions the user almost certainly hasn't specified but the implementer needs to know. Surface the non-obvious ones a senior engineer would catch and a junior would miss.
Focus on permutations that meaningfully change scope, data model, or architecture. Avoid minutiae (button colors, naming bikesheds, trivial config defaults).
Example for "track books I'm reading":
3. Diff against codebase
Produce a short "what I found vs. what you asked" comparison. Examples:
- "User model with auth — books would extend it for per-user lists"
- "ResourceController pattern; book routes would conform"

Issue Detection: If the user references a GitHub issue (e.g., "#18", "issue 18", a GitHub issue URL), capture it for the ## Related Issues field in _plan.md. Use Closes #N for issues that will be fully resolved by this plan, or Relates to #N for partial/tangential references. If no issue is mentioned, set the field to "none".
Surface findings to the user, then ask categorized questions. The goal is to flesh out enough to write a confident plan — like a solution architect leaving a client meeting with the spec they need.
Format your response so users can skim past findings if they want — the Questions section must be self-sufficient.
Present in this order:
What I Found
- {codebase findings: existing models, patterns, integration points, greenfield vs. extension}
Key Permutations to Resolve
{1-3 sentences naming the design axes that meaningfully shape scope/data model/architecture, derived from the brainstorm}
Questions
Must answer (these shape scope, data model, or architecture):
- {scope-shaping question}
- {data-model question}
- {integration question}
Will default if you don't specify (defaults stated):
- {dimension}: {proposed default}
- {dimension}: {proposed default}
Question quality guidance:
Once you understand the requirements, create the plan overview:
Create feature branch:
git checkout -b feature/{plan-name}
If the branch already exists (resuming a plan), check it out instead:
git checkout feature/{plan-name}
Create plan directory:
mkdir -p .claude/plans/{plan-name}
Use a kebab-case name derived from the feature (e.g., user-authentication, payment-processing).
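The kebab-case derivation can be sketched as a small helper. This is illustrative only — the skill does this informally, and the `slugify` name is hypothetical:

```python
import re

def slugify(feature: str) -> str:
    """Derive a kebab-case plan name from a free-form feature title."""
    # Lowercase, then collapse runs of non-alphanumerics into single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower())
    return slug.strip("-")

print(slugify("User Authentication"))   # user-authentication
print(slugify("Payment  Processing!"))  # payment-processing
```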
Create _plan.md:
# Plan: {Feature Name}
## Created
{date}
## Status
planning | ready | in_progress | completed | blocked
## Objective
{1-2 sentence description of what this plan achieves}
## Related Issues
{list of GitHub issue references, e.g., "Closes #18", "Relates to #42", or "none"}
## Discovery Notes
{Brief summary of Phase 1 findings: existing models/patterns to extend, integration points, greenfield vs. existing code, key assumptions resolved during clarification. Captures context for future readers and resumed sessions.}
## Scope
### In Scope
- {bullet points of included functionality}
### Out of Scope
- {bullet points of explicitly excluded functionality}
## Success Criteria
- [ ] {measurable outcome 1}
- [ ] {measurable outcome 2}
- [ ] All tests passing
- [ ] Code follows project standards
## Task Overview
| Task | Description | Depends On | Status |
|------|-------------|------------|--------|
| 001 | {title} | - | pending |
| 002 | {title} | 001 | pending |
| ... | ... | ... | ... |
## Architecture Notes
{Any architectural decisions or patterns to follow}
## Risks & Mitigations
- {potential risk}: {mitigation strategy}
Create numbered task files. Follow these principles:
Task Sizing:
Dependency Rules:
Required Tasks:
- README.md following the project's Package README Standards (see code-standards.md). This task depends on all other tasks so the README accurately reflects what was built.

Task File Format ({NNN}-{task-name}.md):
# Task {NNN}: {Title}
**Status**: pending
**Depends on**: [{comma-separated task numbers, or "none"}]
**Retry count**: 0
## Description
{2-4 sentences describing what this task accomplishes and why}
## Context
{Any relevant context the implementer needs to know}
- Related files: {list key files to modify or reference}
- Patterns to follow: {reference to existing patterns in codebase}
## Requirements (Test Descriptions)
Write requirements as exact test names. These become the test method names.
- [ ] `it creates a new user with valid email and password`
- [ ] `it rejects duplicate email addresses with validation error`
- [ ] `it hashes passwords before storing in database`
- [ ] `it returns user object with id after successful creation`
## Acceptance Criteria
- All requirements have passing tests
- Code follows code standards
- No decrease in test coverage
## Implementation Notes
(Left blank - filled in by programmer during implementation)
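Because requirements are exact test names, the mapping from a requirement line to a test method is mechanical. A sketch of one such convention — the snake_case form here is an assumption; frameworks like Pest keep the sentence as-is:

```python
import re

def requirement_to_method(line: str) -> str:
    """Extract the backticked test description from a requirement
    checklist line and snake_case it into a method name."""
    match = re.search(r"`([^`]+)`", line)
    if match is None:
        raise ValueError("no backticked test description found")
    return re.sub(r"[^a-z0-9]+", "_", match.group(1).lower()).strip("_")

line = "- [ ] `it creates a new user with valid email and password`"
print(requirement_to_method(line))
# it_creates_a_new_user_with_valid_email_and_password
```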
After creating all tasks, verify:
Dependency Visualization: Show the user a simple dependency tree:
001 ─┬─► 002 ─┬─► 005
     │        │
     ├─► 003 ─┘
     │
     └─► 004 ────► 006
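The parallel batches implied by such a tree can be computed with a simple topological layering. A sketch, with illustrative task IDs and edges:

```python
def parallel_batches(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into batches: each batch holds tasks whose
    dependencies are all satisfied by earlier batches."""
    done: set[str] = set()
    batches: list[list[str]] = []
    remaining = dict(deps)
    while remaining:
        ready = sorted(t for t, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        batches.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return batches

# The example tree: 002, 003, 004 depend on 001; 005 on 002+003; 006 on 004.
deps = {
    "001": set(), "002": {"001"}, "003": {"001"},
    "004": {"001"}, "005": {"002", "003"}, "006": {"004"},
}
print(parallel_batches(deps))
# [['001'], ['002', '003', '004'], ['005', '006']]
```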
After validating dependencies, run all agents configured in the post-plan phase of pipeline.md.
1. Read the pipeline configuration:
Parse the ## post-plan section from the <pipeline> context included in CLAUDE.md. Each bullet point is an agent name to spawn.
2. For each agent in the post-plan list, spawn it sequentially:
Use the Agent tool with subagent_type="{agent-name}" and pass the plan name and project context.
The subagent prompt must include:
## Project Architecture
{paste the COMPLETE content of <architecture> verbatim}
3. After each subagent completes, read any updated plan files to prepare the recap for the user.
Default pipeline (ships with HCF):
## post-plan
- devils-advocate
Users can add, remove, or reorder agents in their project's .claude/pipeline.md. For example, adding a security-reviewer agent after devils-advocate.
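Extracting the agent list from a ## post-plan section is a matter of collecting its bullet items. A sketch under the assumption that pipeline.md uses plain `## heading` / `- item` markdown; the real skill reads this from the <pipeline> context:

```python
def post_plan_agents(pipeline_md: str) -> list[str]:
    """Collect bullet items listed under the '## post-plan' heading."""
    agents: list[str] = []
    in_section = False
    for line in pipeline_md.splitlines():
        stripped = line.strip()
        if stripped.startswith("## "):
            # Entering a new section; only '## post-plan' is collected.
            in_section = stripped == "## post-plan"
        elif in_section and stripped.startswith("- "):
            agents.append(stripped[2:].strip())
    return agents

example = """\
## post-plan
- devils-advocate
- security-reviewer

## pre-implement
- test-runner
"""
print(post_plan_agents(example))
# ['devils-advocate', 'security-reviewer']
```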
Present the refined plan along with what the devil's advocate changed:
Here's the plan I've created for {feature name}:
Tasks: {N} total
Parallel batches: ~{estimate based on dependencies}
| # | Task | Dependencies |
|---|------|--------------|
| 001 | {title} | none |
| 002 | {title} | 001 |
| ... | ... | ... |

Post-Plan Review
The plan was automatically reviewed by the post-plan pipeline agents. Here's what was refined:
{summarize changes from each agent that ran, e.g.:}
- Added missing dependency: task 005 now depends on 003 (shared interface needed)
- Split task 008 into 008 and 009 (too large for single TDD cycle)
- Added edge case requirements to task 004 (empty state handling)
- Fixed method signature in task 012 (verified against source)
{if there are deferred items from any agent, list them:}
Items for your consideration:
- {Items the user may want to weigh in on}
Does this breakdown look correct? Would you like to:
- Approve and proceed
- Add/remove tasks
- Adjust dependencies
- Modify requirements for a specific task
Make adjustments based on feedback.
Once approved:
Set the _plan.md status to ready.

Plan created: {plan-name}
Location: .claude/plans/{plan-name}/
Total tasks: {N}
Independent tasks (batch 1): {count of tasks with no dependencies}
Ready to begin autonomous implementation?
- Yes, start now - I'll trigger plan-orchestrate immediately
- No, I'll run it later - Say "run the {plan-name} plan" anytime to start
If user chooses to start now:
Invoke the plan-orchestrate skill with the plan name to begin parallel execution.

If user chooses later:
Remind them they can start anytime by saying "run the {plan-name} plan".
Requirements MUST be written as test descriptions. They should:
Be specific and testable:
- Good: `it returns 401 when authentication token is missing`
- Not: `it handles authentication errors`

Describe behavior, not implementation:
- Good: `it sends welcome email after successful registration`
- Not: `it calls EmailService.sendWelcome()`

Cover edge cases explicitly:
- Good: `it rejects passwords shorter than 8 characters`
- Not: `it validates password`

Be atomic (one assertion per requirement):
- Good: `it stores user in database` + `it returns created user`
- Not: `it stores user in database and returns it`

# Task 001: Create User Model
**Status**: pending
**Depends on**: none
**Retry count**: 0
## Description
Create the User Eloquent model with authentication fields and basic validation.
## Context
- Related files: app/Models/User.php (may exist, needs modification)
- Patterns to follow: Existing models in app/Models/
## Requirements (Test Descriptions)
- [ ] `it creates a user with email and password`
- [ ] `it hashes the password automatically when setting`
- [ ] `it validates email is required`
- [ ] `it validates email format is valid`
- [ ] `it validates email is unique`
- [ ] `it validates password minimum length is 8 characters`
## Acceptance Criteria
- All requirements have passing tests
- Migration exists for users table
- Model follows existing patterns
# Task 003: Create Registration Endpoint
**Status**: pending
**Depends on**: 001, 002
**Retry count**: 0
## Description
Create POST /api/register endpoint that creates new users and returns JWT token.
## Context
- Related files: routes/api.php, app/Http/Controllers/AuthController.php
- Patterns to follow: Existing API controllers
## Requirements (Test Descriptions)
- [ ] `it returns 201 with user data on successful registration`
- [ ] `it returns JWT token in response`
- [ ] `it returns 422 when email already exists`
- [ ] `it returns 422 when email format is invalid`
- [ ] `it returns 422 when password is too short`
- [ ] `it stores user in database on success`
## Acceptance Criteria
- All requirements have passing tests
- Route registered in api.php
- Uses form request for validation
Always end with the clear next steps showing how to execute the plan.