From long-task
Use when ATS doc exists (or auto-skipped) but feature-list.json not yet created - scaffold project artifacts and populate features from Design §10.2
npx claudepluginhub suriyel/longtaskforagent --plugin long-task

This skill uses the workspace's default tool permissions.
**LANGUAGE RULE**: You MUST respond to the user in Chinese (Simplified). All generated documents, reports, and user-facing output must be written in Chinese. Skill names, code identifiers, and JSON field names remain in English.
Run once after both SRS and design are approved. Scaffolds all persistent artifacts, populates features from Design §10.2 (FRs already right-sized at Requirements phase), and prepares the project for iterative Worker cycles.
Announce at start: "I'm using the long-task-init skill to scaffold the project."
This skill reads from three approved documents:
| Document | Location | Provides |
|---|---|---|
| SRS | docs/plans/*-srs.md | Functional requirements (FR-xxx), NFRs (NFR-xxx), constraints (CON-xxx), assumptions (ASM-xxx), interface requirements (IFR-xxx), glossary, user personas, acceptance criteria |
| Design | docs/plans/*-design.md | Tech stack, architecture, data model, API design, testing strategy |
| ATS | docs/plans/*-ats.md | Requirement→scenario mapping, required test categories per requirement (constrains downstream feature-st via srs_trace lookup) |
You MUST create a TodoWrite task for each step and complete them in order:
Read the approved SRS, design, and ATS documents from docs/plans/
- docs/plans/*-srs.md — for requirements, constraints, assumptions, NFRs, glossary, personas
- docs/plans/*-design.md — for tech stack, architecture decisions
- docs/plans/*-ats.md — for requirement→category mapping (constrains ui flag and downstream feature-st category requirements via srs_trace)

Run scripts/init_project.py to scaffold deterministic artifacts:
python scripts/init_project.py <project-name> --path . --lang <language>
- `<project-name>` — from the SRS title
- `<language>` — one of python|java|typescript|c|cpp from the design doc tech stack
- `--line-cov`, `--branch-cov`, `--mutation-score` to override thresholds (defaults: 90/80/80)

Creates feature-list.json, CLAUDE.md (appended), task-progress.md, RELEASE_NOTES.md, examples/, docs/plans/, and copies the plugin's helper scripts (validate_features.py, check_configs.py, check_devtools.py, check_jinja2.py, check_real_tests.py, validate_guide.py, get_tool_commands.py, validate_st_cases.py, validate_increment_request.py, validate_bugfix_request.py, check_st_readiness.py, check_ats_coverage.py, check_mcp_providers.py) into project scripts/
3b. MCP Provider Setup (SKIP if no enterprise MCP required):
- Create tool-bindings.json at project root using docs/templates/tool-bindings-template.json as a guide
- Run python scripts/check_jinja2.py
→ Exit 1: present installation guide to user (pip install jinja2); wait for user to install; re-run check to confirm exit 0
→ Exit 0: continue
- Run python scripts/apply_tool_bindings.py tool-bindings.json --output-dir .long-task-bindings
→ Verify: "N templates rendered to .long-task-bindings/"
- Run python scripts/check_mcp_providers.py tool-bindings.json
→ Exit 1: present installation instructions to user (the script outputs exact claude mcp add commands); wait for user to install and restart session; re-run check to confirm exit 0
→ Exit 0: continue

Verify tech_stack and quality_gates in feature-list.json:
- Confirm language, test_framework, coverage_tool, mutation_tool match the design doc
- Adjust quality_gates thresholds if needed (defaults: line 90%, branch 80%, mutation 80%)
- Run python scripts/get_tool_commands.py feature-list.json
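The tool-command lookup in this step can be sketched roughly as below. This is an illustrative reduction, not the shipped scripts/get_tool_commands.py; the command strings in the mapping are plausible defaults for common tools, not guaranteed project commands.

```python
import json

# Illustrative mapping only; real command resolution lives in scripts/get_tool_commands.py.
TOOL_COMMANDS = {
    "pytest-cov": "pytest --cov=src --cov-branch tests/",
    "jacoco": "mvn test jacoco:report",
    "c8": "c8 vitest run",
    "mutmut": "mutmut run",
    "pitest": "mvn org.pitest:pitest-maven:mutationCoverage",
    "stryker": "npx stryker run",
}

def tool_commands(feature_list_path):
    """Look up coverage/mutation commands for the project's declared tech stack."""
    with open(feature_list_path) as f:
        stack = json.load(f)["tech_stack"]
    return {
        "coverage": TOOL_COMMANDS.get(stack["coverage_tool"], "<unknown>"),
        "mutation": TOOL_COMMANDS.get(stack["mutation_tool"], "<unknown>"),
    }
```

Running it against feature-list.json yields the exact commands Workers should invoke, so the guide never needs tool-name guessing mid-session.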
Verify the real_test config in feature-list.json:
- marker_pattern matches the project's chosen real test identification method
- mock_patterns covers the project's mock framework keywords
- test_dir points to the correct test directory

Generate long-task-guide.md — Create a project-tailored Worker session guide:
Read as inputs:
- skills/long-task-work/SKILL.md — Worker workflow
- skills/long-task-quality/SKILL.md — verification enforcement
- skills/long-task-quality/coverage-recipes.md — coverage/mutation tool setup
- skills/using-long-task/references/architecture.md — TDD workflow details
- Tool commands (from python scripts/get_tool_commands.py feature-list.json)

If the project has UI features ("ui": true):
- If tool-bindings.json exists and capability_bindings.ui_tools.tool_mapping is present: use the enterprise tool names from tool-bindings.json throughout the guide (not Chrome DevTools MCP names)
- Otherwise: use the Chrome DevTools MCP tool names (navigate_page, click, etc.)

Environment Commands section with:
- Environment activation command (e.g. source .venv/bin/activate, conda activate myenv, nvm use 20)
- Direct test command (e.g. pytest --cov=src tests/)
- Mutation command (e.g. mutmut run)

Service Commands section (only if project has server processes): reference env-guide.md as the authoritative source for start/stop/restart commands; list health check URLs; include reminder about the Restart Protocol

Config Management section: describe how to add/update a config value for this project (e.g., "append KEY=value to .env" for dotenv projects, "set key=value in application.properties" for Spring Boot projects, "export KEY=value" for system-env-only projects). This section is referenced by the Worker Config Gate when prompting users for missing values.

Real Test Convention section: identification method (marker/folder/naming, adapted to project language), run command to execute only real tests, example real test for this project's tech stack

Validate: python scripts/validate_guide.py long-task-guide.md --feature-list feature-list.json
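A guide validator like scripts/validate_guide.py might, at minimum, check that the required sections exist. The sketch below is illustrative only; the section names are taken from this document, and the shipped script is the authority on what it actually validates.

```python
import re

# Hypothetical required sections; the real validate_guide.py defines its own checks.
REQUIRED_SECTIONS = [
    "Environment Commands",
    "Config Management",
    "Real Test Convention",
]

def missing_sections(guide_text):
    """Return required section headings that do not appear in the guide markdown."""
    return [s for s in REQUIRED_SECTIONS
            if not re.search(rf"^#+\s*{re.escape(s)}", guide_text, re.MULTILINE)]
```

An empty return value means the guide has all required sections; anything else names the gaps to fix before continuing.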
Generate env-guide.md — Create an explicit service lifecycle guide at the project root (user-editable):
- Scan .env.example for *_PORT= variables
- Write env-guide.md with the following sections:

Header note (top of file):
User-editable. Claude reads this file before managing services. Update when ports change or new services are added.
Services table:
| Service Name | Port | Start Command | Stop Command | Verify URL |
|---|---|---|---|---|
| (one row per service) |
Start All Services — for each service:
```
# Unix/macOS
[start command] > /tmp/svc-<slug>-start.log 2>&1 &
sleep 3
head -30 /tmp/svc-<slug>-start.log
# → Extract PID and port from output; record both in task-progress.md
```

```
# Windows alternative
cmd /c "start /b [command] > %TEMP%\svc-<slug>-start.log 2>&1"
timeout /t 3 /nobreak >nul
powershell "Get-Content $env:TEMP\svc-<slug>-start.log -TotalCount 30"
```
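Pulling the PID and port out of the captured startup log can be sketched as a small helper. This is a hypothetical utility, not part of the plugin; the log patterns are assumptions, since real servers print startup lines in many formats.

```python
import re

def extract_pid_port(log_text):
    """Pull a PID and port from typical server startup output.

    Patterns are illustrative assumptions (e.g. "PID: 4321",
    "Listening on port 8000"); adjust to your server's actual output.
    """
    pid = re.search(r"\bPID[:\s]+(\d+)", log_text, re.IGNORECASE)
    port = re.search(r"\bport\s*[:=]?\s*(\d{2,5})\b", log_text, re.IGNORECASE)
    return (int(pid.group(1)) if pid else None,
            int(port.group(1)) if port else None)
```

Whatever extraction is used, both values must end up recorded in task-progress.md so the Stop/Restart steps can kill by PID.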
Verify Services Running — for each service:
```
curl -f http://localhost:<port>/health   # or appropriate health endpoint
```
Stop All Services — kill by PID (primary) or port (fallback):
```
# By PID (preferred — use PID recorded in task-progress.md)
kill <PID>                # Unix/macOS
taskkill /F /PID <PID>    # Windows

# By port (fallback)
lsof -ti :<port> | xargs kill -9    # Unix/macOS
for /f "tokens=5" %a in ('netstat -ano ^| findstr :<port>') do taskkill /F /PID %a    # Windows
```
Verify Services Stopped — ports must show no output:
```
lsof -i :<port>                  # Unix/macOS — expect no output
netstat -ano | findstr :<port>   # Windows — expect no output
```
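A cross-platform alternative to lsof/netstat is a quick socket probe. This is a hypothetical helper sketched in Python, not one of the plugin's scripts:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 on a successful connection, an errno otherwise
        return s.connect_ex((host, port)) == 0
```

After "Stop All Services", port_in_use(port) should return False for every service port before the session proceeds.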
Restart Protocol (4 steps):
1. Stop the service (per Stop All Services)
2. Verify it stopped (per Verify Services Stopped)
3. Start with output capture (per Start All Services)
4. head -30 the start log → extract new PID/port → update task-progress.md

Optionally extract long sequences into scripts/svc-<slug>-start.sh / scripts/svc-<slug>-start.ps1 containing the full sequence; update env-guide.md "Start All Services" to call bash scripts/svc-<slug>-start.sh instead of inline commands; same for stop sequences (scripts/svc-<slug>-stop.sh). This keeps env-guide.md readable while versioning the logic in scripts/.

If the project has no server processes: write env-guide.md with a header note "No server processes — environment activation only" and only the activation command from long-task-guide.md.

Generate init.sh / init.ps1 — Create real, runnable bootstrap scripts:
- Consult references/init-script-recipes.md (in the long-task-init skill directory) for per-tool templates and best practices
- Produce init.sh for Unix/macOS, init.ps1 for Windows
- Scripts must work on a fresh git clone
- Manage services via env-guide.md commands, not hooks

Populate SRS fields in feature-list.json — from the SRS document:
- constraints[] — copy CON-xxx items from SRS "Constraints" section; each a concise string
- assumptions[] — copy ASM-xxx items from SRS "Assumptions & Dependencies" section; each a concise string
- NFRs become category: "non-functional" features with srs_trace (e.g. ["NFR-001"]) and optionally measurable verification_steps; coverage/mutation gates do not apply to NFR features

Populate features from Design §10.2 — FRs are already right-sized at the Requirements phase (G1-G6 over-size + S1-S4 under-size heuristics). The design document's Task Decomposition table (§10.2) maps right-sized FRs to prioritized features with dependency ordering. Populate feature-list.json features[]:
- srs_trace: copy the "Mapped FRs" column — the array of FR IDs this feature implements (e.g. ["FR-003", "FR-004", "FR-005"])
- title + description: derive from the §10.2 Feature name + the grouped FRs' descriptions
- priority: P0/P1 → "high", P2 → "medium", P3 → "low"
- dependencies: from §10.3 Dependency Chain diagram
- status: always "failing"
- UI-facing features: "ui": true, "ui_entry": "/path" (mandatory — specify the URL where this feature's UI is accessed); include at least one [devtools]-prefixed verification step asserting positive visual presence of the feature's primary rendered output (not just error absence). Example: "[devtools] /game | EXPECT: canvas#game-board with rendered game elements (snake segments, food item, score display), game board grid visible | REJECT: blank canvas, empty game container, 'undefined' in score"
- verification_steps is OPTIONAL — if provided, consolidate acceptance criteria from all mapped FRs into behavioral scenarios (Given/When/Then):
- Bad: "Login page displays correctly" → no action, no assertion
- Good (UI): "[devtools] Navigate /login → EXPECT: email input, password input, 'Sign In' button; fill valid creds → click Sign In → EXPECT: redirect to /dashboard, user name in header; REJECT: console errors, broken images"
- Good (API): "Given a registered user, when POST /api/orders with valid payload, then response 201 with order ID; and GET /api/orders/{id} returns the created order with correct fields"
- For "ui": true features: every [devtools] step MUST describe a multi-step interaction chain (navigate → interact → verify → interact → verify)

Cross-checks:
- Frontend features ("ui": true) MUST list their backend API dependency features in dependencies[]
- Every FR appears in at least one feature's srs_trace (no orphaned requirements)
- Every srs_trace contains at least one FR (no empty traces)

Populate required_configs — from the SRS document (IFR-xxx interface requirements) and design doc:
For each required config, record:
- type — "env" or "file"
- required_by — IDs of the features that need it
- check_hint — setup instructions
9b. Generate scripts/check_configs.py — project-specific config checker (LLM-generated, not copied from plugin). Adapt the loading logic to tech_stack.language and the design doc:
- Python .env pattern → use load_dotenv-style KEY=VALUE parsing
- Java/Spring → read src/main/resources/application.properties or application.yml
- Node/TypeScript → .env or config/ directory
- Interface: python scripts/check_configs.py feature-list.json [--feature <id>]
- Reads required_configs[] from feature-list.json
- Checks each env-type config via os.environ, each file-type config via os.path.exists
- For each missing config, prints its name and check_hint
- No --dotenv or format flag needed — the loading logic is built in for this project
- The plugin's scripts/check_configs.py is available as a reference template if useful
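A minimal sketch of the checker described in 9b, assuming a plain env/file project. The generated script should follow the project's actual config conventions; this is a starting shape, not the definitive implementation.

```python
import json
import os

def check_configs(feature_list_path, feature_id=None):
    """Return (name, check_hint) for each missing required config.

    Mirrors the described behavior: env-type configs are checked via
    os.environ, file-type configs via os.path.exists.
    """
    with open(feature_list_path) as f:
        data = json.load(f)
    missing = []
    for cfg in data.get("required_configs", []):
        # Optional per-feature filter (the --feature <id> flag)
        if feature_id is not None and feature_id not in cfg.get("required_by", []):
            continue
        if cfg["type"] == "env":
            present = os.environ.get(cfg["key"]) not in (None, "")
        else:  # "file"
            present = os.path.exists(cfg["path"])
        if not present:
            missing.append((cfg["name"], cfg.get("check_hint", "")))
    return missing
```

An empty result means the feature's configs are satisfied; otherwise the Worker Config Gate surfaces each name with its check_hint.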
For each env-type config, write a commented template line:
# <name> — <description>
# Hint: <check_hint>
# Required by features: <required_by ids>
<KEY>=
Actual config files remain in .gitignore (e.g., .env); .env.example is safe to commit
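Generating the template from required_configs can be sketched as follows (a hypothetical helper; field names match the feature-list.json schema in this document):

```python
import json

def render_env_example(feature_list_path):
    """Render .env.example content from env-type entries in required_configs."""
    with open(feature_list_path) as f:
        configs = json.load(f).get("required_configs", [])
    lines = []
    for cfg in configs:
        if cfg["type"] != "env":
            continue  # file-type configs have no env-var template line
        lines.append(f"# {cfg['name']} — {cfg.get('description', '')}")
        lines.append(f"# Hint: {cfg.get('check_hint', '')}")
        ids = ", ".join(str(i) for i in cfg.get("required_by", []))
        lines.append(f"# Required by features: {ids}")
        lines.append(f"{cfg['key']}=")
        lines.append("")
    return "\n".join(lines)
```

Each entry gets the commented header block from the template above, followed by an empty KEY= line for the user to fill in.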
python scripts/validate_features.py feature-list.json
Scaffold project skeleton (dirs, configs, dependency manifests) — based on design doc architecture
Git init + initial commit
Run init script and verify environment:
- Run init.sh (or init.ps1); verify environment setup completes without errors
- Run the test command from long-task-guide.md → confirm tests execute (may all fail at this point — that's expected)
- If the project has services, verify the start/stop commands in env-guide.md
Begin first Worker cycle — REQUIRED SUB-SKILL: Invoke long-task:long-task-work
When a Worker cycle introduces a new backend service, changes a service port, or discovers that the actual start/stop command differs from env-guide.md, update env-guide.md:
- Optionally extract the sequence into scripts/svc-<slug>-start.sh / scripts/svc-<slug>-stop.sh and update env-guide.md to reference the script
- Commit env-guide.md and scripts/svc-* changes in the same git commit as the feature

env-guide.md must always reflect commands that actually work. Any time a command is proven correct (during TDD Green or after fixing a failure), env-guide.md must be updated to match.
Root structure:
```json
{
  "project": "project-name",
  "created": "2025-01-15",
  "tech_stack": {
    "language": "python|java|typescript|c|cpp",
    "test_framework": "pytest|junit|vitest|gtest|...",
    "coverage_tool": "pytest-cov|jacoco|c8|gcov|...",
    "mutation_tool": "mutmut|pitest|stryker|mull|..."
  },
  "quality_gates": {
    "line_coverage_min": 90,
    "branch_coverage_min": 80,
    "mutation_score_min": 80
  },
  "constraints": ["Hard limit — one string per item"],
  "assumptions": ["Implicit belief — one string per item"],
  "required_configs": [
    {
      "name": "Display name",
      "type": "env|file",
      "key": "ENV_VAR (for env type)",
      "path": "path/to/file (for file type)",
      "description": "What this config is for",
      "required_by": [1, 3],
      "check_hint": "How to set it up"
    }
  ],
  "features": [...]
}
```
Each feature:
```json
{
  "id": 1,
  "category": "core",
  "title": "Feature title",
  "description": "What it does",
  "priority": "high|medium|low",
  "status": "failing|passing",
  "srs_trace": ["FR-001", "FR-002"],
  "verification_steps": ["step 1", "step 2"],
  "dependencies": [],
  "ui": false,
  "ui_entry": "/optional-path"
}
```
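The trace rules enforced by scripts/validate_features.py can be sketched as below. This is an illustrative subset (the shipped script does more); the exemption of "non-functional" features from the empty-trace rule is an assumption for this sketch.

```python
def trace_errors(features, all_fr_ids):
    """Check srs_trace coverage: no empty traces, no orphaned FRs,
    and every "ui": true feature carries a ui_entry."""
    errors = []
    traced = set()
    for feat in features:
        trace = feat.get("srs_trace", [])
        if feat.get("category") != "non-functional" and not trace:
            errors.append(f"feature {feat['id']}: empty srs_trace")
        traced.update(trace)
        if feat.get("ui") and not feat.get("ui_entry"):
            errors.append(f"feature {feat['id']}: ui=true but no ui_entry")
    # Every FR from the SRS must land in some feature's trace
    for fr in sorted(set(all_fr_ids) - traced):
        errors.append(f"{fr}: not mapped to any feature")
    return errors
```

A non-empty error list means step 10's validation should fail and the features[] array must be corrected before scaffolding continues.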
| File | Purpose |
|---|---|
| feature-list.json | Structured task inventory with status |
| CLAUDE.md | Cross-session navigation index (appended) |
| task-progress.md | Session-by-session progress log |
| RELEASE_NOTES.md | Living release notes (Keep a Changelog format) |
| examples/ | Runnable examples directory |
| init.sh / init.ps1 | Environment bootstrap (LLM-generated) |
| env-guide.md | Service lifecycle commands — start/stop/restart/verify with output capture; user-editable |
| long-task-guide.md | Worker session guide with env activation + direct test commands (LLM-generated, validated) |
| .env.example | Template for required env configs (safe to commit) |
After all artifacts are scaffolded and feature-list.json is created:
python scripts/check_retro_auth.py feature-list.json
If the feedback API is configured, use AskUserQuestion to ask the user:
"检测到 Skill 反馈 API 已配置({endpoint})。是否授权在本项目中搜集 Skill 改进建议并在项目结束后上报?搜集内容包括:用户反馈修正、技能缺陷分析。不包含项目代码或业务数据。" (English: "Skill feedback API detected as configured ({endpoint}). Authorize collecting skill improvement suggestions in this project and reporting them after the project ends? Collected: user feedback corrections and skill defect analysis. Project code and business data are not included.") Options: "授权 (Recommended)" / "不授权"
- If authorized → set "retro_authorized": true in feature-list.json root
- If declined → set "retro_authorized": false in feature-list.json root

Called by: long-task-ats (Step 12) or using-long-task (when ATS doc exists, no feature-list.json)
Reads: docs/plans/*-srs.md (requirements) + docs/plans/*-design.md (architecture) + docs/plans/*-ats.md (test strategy constraints)
Chains to: long-task-work (after initialization complete)
Produces: feature-list.json + all scaffolded artifacts listed above