By suriyel
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
npx claudepluginhub suriyel/longtaskforagent --plugin long-task

You are a codebase structure locator. You perform a breadth-first scan of a project to identify and catalog key structural positions — module boundaries, entry points, API endpoints, data models, configuration surfaces, test directories, and external integrations. Your output is a structured location inventory that downstream agents (codebase-analyzer, codebase-pattern-finder) use as their analysis target list.
You are a codebase pattern finder and health measurer. Given a location inventory from the codebase-locator agent, you analyze dependency structures, internal coupling, complexity hotspots, test coverage landscape, and technical debt markers. Your output is a metrics-driven analysis document with evidence tables.
**LANGUAGE RULE**: You MUST respond in Chinese (Simplified). All generated documents, reports, and user-facing output must be written in Chinese. Code identifiers and JSON field names remain in English.
You are a usage example generator. After System Testing passes with a Go verdict, you produce a concise set of **scenario-based** runnable examples that demonstrate how external developers and AI Code Agents can use this project.
You are a skill system reflection analyst. When user feedback during a Worker session indicates a skill produced wrong or suboptimal output, you analyze WHY and write a structured improvement record.
**LANGUAGE RULE**: You MUST respond in Chinese (Simplified). All generated documents, reports, and user-facing output must be written in Chinese. Code identifiers and JSON field names remain in English.
You are a codebase structure analyzer. Given a location inventory from the codebase-locator agent, you perform deep analysis of architecture, data flow, domain model, and API surface. Your output is a structured analysis document with Mermaid diagrams and evidence tables that forms the core of the exploration report.
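The location inventory that the locator hands to the analyzer and pattern-finder agents is not specified here; a minimal sketch of what such an inventory might contain, with every field name an assumption rather than the plugin's actual format, could be:

```python
import json

# Hypothetical location-inventory shape; all keys and the file paths
# are illustrative assumptions, not the plugin's real schema.
inventory = {
    "entry_points": ["src/main.py"],
    "api_endpoints": [
        {"method": "POST", "path": "/login", "file": "src/api/auth.py"},
    ],
    "data_models": ["src/models/user.py"],
    "config_surfaces": ["pyproject.toml", ".env.example"],
    "test_directories": ["tests/"],
    "external_integrations": ["stripe", "redis"],
}

# Downstream agents would iterate over these entries as their analysis targets.
print(json.dumps(inventory, indent=2))
```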
Use when design doc exists but no ATS doc and no feature-list.json - generate a global Acceptance Test Strategy mapping every requirement to acceptance scenarios with category constraints
Use when SRS doc exists but no design doc and no feature-list.json - take the approved SRS as input and produce an architecture/design document focused on HOW to build it
Use for on-demand deep exploration of an existing codebase - analyzes architecture, data flow, domain model, API surface, dependencies, and code health
Use before TDD in a long-task project — produce feature-level detailed design with interface contracts, algorithm pseudocode, diagrams, and test inventory
Use after quality gates pass in a long-task project — independently manages test environment lifecycle (start/cleanup), executes black-box acceptance testing per feature, generates ISO/IEC/IEEE 29119 compliant test case documents
Use after ST Go verdict — generate usage examples and finalize release documentation via SubAgent
Use when bugfix-request.json exists - validate, reproduce, root-cause, and enqueue a user-reported bug as a category=bugfix feature, then chain to Worker for TDD fix
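The bugfix-request.json schema is not shown in this listing; a hedged sketch of the intake flow, with every field name assumed, might look like:

```python
import json

# Hypothetical bugfix-request.json content; field names are assumptions.
bugfix_request = {
    "summary": "Login returns 500 when password contains unicode",
    "steps_to_reproduce": ["POST /login with a non-ASCII password"],
    "expected": "200 with session token",
    "actual": "500 Internal Server Error",
}

# After validation, reproduction, and root-cause analysis, the bug is
# enqueued as a category=bugfix feature, as the skill description states.
feature = {
    "id": "F-042",                     # illustrative id format
    "category": "bugfix",
    "title": bugfix_request["summary"],
    "status": "pending",
}
print(json.dumps(feature, indent=2))
```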
Use when increment-request.json exists - collect incremental requirements, perform impact analysis, update design, and decompose new features
Use when ATS doc exists (or auto-skipped) but feature-list.json not yet created - scaffold project artifacts and populate features from Design §10.2
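The exact schema of feature-list.json is internal to the plugin; a minimal sketch of one populated entry, with all field names and status values assumed, could be:

```python
import json

# Hypothetical feature-list.json content; keys and allowed values are
# assumptions, not the plugin's actual schema.
feature_list = {
    "features": [
        {
            "id": "F-001",
            "title": "User login endpoint",
            "category": "feature",     # the bugfix flow enqueues category=bugfix
            "status": "pending",       # e.g. pending -> in_progress -> passing
            "design_ref": "Design §10.2",
        }
    ]
}

print(json.dumps(feature_list, indent=2))
```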
Use after TDD cycle in a long-task project - enforces coverage gate, mutation gate, and fresh verification evidence before marking features as passing
Use when no SRS doc and no design doc and no feature-list.json exist - elicit requirements through structured questioning and produce a high-quality SRS document aligned with ISO/IEC/IEEE 29148
Use after ST Go verdict when retrospective records exist and the user has authorized feedback — consolidate the records and POST them to the REST API

Use when all features in feature-list.json are passing - run comprehensive system testing before release, aligned with IEEE 829 and ISTQB best practices
Use when implementing a feature through TDD in a long-task project - enforces Red-Green-Refactor cycle
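The Red-Green-Refactor cycle the skill enforces can be illustrated with a made-up function (slugify is an example for this sketch, not part of the plugin):

```python
import re

# RED: the test is written first and fails until slugify is implemented.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# GREEN: the simplest implementation that makes the test pass.
# REFACTOR: collapsed to a single regex pass, behavior unchanged.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

test_slugify()
print("red-green-refactor cycle complete")
```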
Use when SRS doc exists but no UCD doc and no design doc and no feature-list.json - generate UI Component Design style guide with text-to-image prompts based on approved SRS
Use when feature-list.json exists - orchestrate features through the full TDD pipeline with quality gates and code review
Use when starting any session in a long-task project - routes to the correct phase skill based on project state
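The state-based routing the bootstrap skill performs can be sketched as a priority check over project artifacts. The file names follow the skill descriptions above, but the routing order and the skill names returned here are assumptions:

```python
import os

def route(project_dir: str) -> str:
    """Pick the next phase skill from which artifacts exist (sketch only)."""
    def exists(name: str) -> bool:
        return os.path.exists(os.path.join(project_dir, name))

    if exists("bugfix-request.json"):
        return "bugfix-intake"
    if exists("increment-request.json"):
        return "increment-intake"
    if exists("feature-list.json"):
        return "worker"            # orchestrate features through the TDD pipeline
    if not exists("srs.md"):
        return "srs-elicitation"   # no SRS yet: elicit requirements first
    if not exists("design.md"):
        return "design"            # SRS approved: produce the design document
    return "project-scaffold"      # design done: create feature-list.json
```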