Formal predictive waterfall lifecycle framework for AI/ML products — 8 phases, 8 formal gates (A–H), 33 agents, 15 skills, full artefact management. Designed for regulated environments, requirements traceability, and formal handovers.
npx claudepluginhub nsalvacao/nsalvacao-claude-code-plugins --plugin waterfall-lifecycle

Generate a waterfall lifecycle artefact from a template, pre-populated with project context.
Validate requirements baseline completeness against all Gate B mandatory artefacts and RTM coverage before submitting for gate review.
Initiate a formal gate review by invoking the gate-reviewer agent with the evidence package for the specified gate.
Prepare an inter-phase handover package by validating Gate C artefacts, generating a handover summary, and assessing phase transition readiness.
Initialize waterfall-lifecycle framework for a new project. Creates lifecycle-state.json, directory structure, and bootstraps Phase 1.
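As a rough sketch of what the lifecycle-state.json created here might contain (the field names below are illustrative assumptions, not the plugin's confirmed schema):

```typescript
// Hypothetical shape of lifecycle-state.json -- field names are
// illustrative, not confirmed against the plugin's actual schema.
interface LifecycleState {
  project: string;
  activePhase: number;                                                // 1-8
  gates: Record<string, "pending" | "passed" | "failed" | "waived">;  // A-H
  blockers: string[];                                                 // outstanding gate blockers
}

// Example state after Gate A passes and Phase 2 begins
const state: LifecycleState = {
  project: "fraud-detection",
  activePhase: 2,
  gates: { A: "passed", B: "pending" },
  blockers: [],
};
```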
Display current waterfall lifecycle state including active phase, gate statuses, and outstanding blockers.
Start a waterfall lifecycle phase after confirming the previous phase's gate was passed or waived.
Review the requirements set for completeness, testability, and AI/NFR coverage against Phase 2 exit criteria.
Use this agent when constructing the project charter, setting up governance, and assembling the Gate A initiation pack at Phase 1 of the waterfall lifecycle. <example> Context: All Phase 1 artefacts are complete and the team needs to consolidate them into a governance-ready package for the initiation gate. user: "We have the problem statement, feasibility, and risk register — now we need a project charter and gate pack" assistant: "I'll use the delivery-framing agent to construct the project charter, define governance structure, and assemble the complete Gate A initiation pack from all Phase 1 artefacts." <commentary> Project charter and gate pack are the final mandatory Gate A artefacts — this agent synthesises all upstream Phase 1 work into the governance submission. </commentary> </example> <example> Context: Sponsor is questioning whether the AI justification in the charter is robust enough and whether the governance forum will approve it. user: "The sponsor wants to make sure the AI reasoning in the charter will stand up to scrutiny at the gate — can you review it?" assistant: "I'll use the delivery-framing agent to review the AI justification against Gate A criteria, strengthen the fallback scenario documentation, and confirm the charter meets the sign-off authority requirements." <commentary> Delivery framing validates charter completeness and governance readiness — a weak AI justification at gate leads to rejection or rework that delays project start. </commentary> </example>
Use this agent when assessing whether a proposed project is feasible across technical, data, AI/ML, organisational, financial, and legal dimensions at Phase 1 of the waterfall lifecycle. <example> Context: A stakeholder wants to use machine learning for fraud detection but data availability and regulatory compliance are uncertain. user: "We want to build a fraud detection model — is this actually feasible given our current data and GDPR constraints?" assistant: "I'll use the feasibility-assessment agent to evaluate technical, data, AI/ML, legal, organisational, and financial feasibility across all six dimensions and produce a documented verdict." <commentary> Multi-dimension feasibility assessment is required before Gate A — this agent structures the analysis and produces the mandatory feasibility artefacts. </commentary> </example> <example> Context: Project team is debating whether AI is the right approach or whether rule-based logic would suffice. user: "Do we actually need AI here, or are we over-engineering this with ML?" assistant: "I'll use the feasibility-assessment agent to run the AI justification test and document whether AI, rules-based logic, or a hybrid approach is the appropriate technical direction." <commentary> AI justification must be formally documented at Gate A — this agent ensures the decision is evidence-based and the fallback scenario is explicit. </commentary> </example>
Use this agent when starting Phase 1 of the waterfall lifecycle — defining the problem, articulating the vision, and mapping stakeholders. <example> Context: A sponsor has identified a process inefficiency and wants to formalise it before requesting budget. user: "We have a manual reconciliation process that takes 3 days per month — help me define the problem properly" assistant: "I'll use the problem-value-context agent to articulate the problem statement, quantify the impact, frame a vision statement, and map all affected stakeholders." <commentary> Formal problem definition is required at Gate A — this agent structures the evidence-backed problem statement and stakeholder map needed for the initiation gate pack. </commentary> </example> <example> Context: Business owner has a vague idea about improving customer onboarding but hasn't formalised scope or ownership. user: "We want to improve onboarding — who should be involved and what are we actually trying to achieve?" assistant: "I'll use the problem-value-context agent to clarify the problem scope, define measurable success, and identify all stakeholder groups before committing to delivery." <commentary> Stakeholder mapping and vision articulation must precede any feasibility work — this agent ensures the initiative has a clear, shared understanding of the problem before phase progression. </commentary> </example>
Use this agent when identifying initial risks, capturing assumptions, logging clarifications, and running compliance checks at Phase 1 of the waterfall lifecycle. <example> Context: Project team is preparing for Gate A and needs to formalise risks, assumptions, and open decisions before presenting to the governance forum. user: "We need to document what could go wrong and what we're assuming before the initiation gate" assistant: "I'll use the risk-compliance-screening agent to build the initial risk register, capture assumptions, log clarifications, and run a compliance check against known regulatory requirements." <commentary> A risk register with ≥3 identified risks is a mandatory Gate A exit criterion — this agent ensures the team enters the gate with documented risk awareness. </commentary> </example> <example> Context: Legal and compliance status is unclear and the team is unsure whether the project needs a Data Protection Impact Assessment. user: "Do we need a DPIA for this project? What compliance obligations should we be tracking?" assistant: "I'll use the risk-compliance-screening agent to assess DPIA applicability, identify all compliance obligations, and log them in the clarification register with owners and due dates." <commentary> Compliance screening at initiation prevents late-stage regulatory blockers — this agent surfaces obligations early and assigns resolution owners. </commentary> </example>
Use this agent when specifying AI/ML-specific requirements, acceptance thresholds, model constraints, data requirements, and fallback behaviour at Phase 2 (Requirements and Baseline) of the waterfall lifecycle. <example> Context: The business requirements set is complete and the team needs to translate AI-related business requirements into measurable AI/ML specifications with acceptance thresholds. user: "We have the business requirements set — now we need to define the AI acceptance criteria, data requirements, and what happens if the model underperforms" assistant: "I'll use the ai-requirements-engineer agent to specify measurable AI acceptance thresholds (precision, recall, F1, latency), document data requirements for training and validation, define fallback behaviour for underperformance scenarios, and produce the ai-requirements-specification.md linked to the relevant REQ-IDs." <commentary> AI requirements must be specified with concrete measurable thresholds — vague AI acceptance criteria lead to disagreements at test time about whether the system has met its targets. </commentary> </example> <example> Context: The sponsor is asking whether the AI system needs to explain its decisions and what the team will do if the model drifts post-deployment. user: "The sponsor wants to know if we need explainability and how we'll handle model drift — where does this fit in requirements?" assistant: "I'll use the ai-requirements-engineer agent to specify explainability requirements (what decisions must be explained, to whom, and in what format), define model drift monitoring thresholds, and document the retraining and fallback triggers. These will be captured in the ai-requirements-specification.md." <commentary> Explainability and drift monitoring requirements are frequently missed at requirements stage and cause post-deployment compliance issues — they must be explicit and measurable. </commentary> </example>
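A minimal sketch of how such measurable thresholds and a fallback trigger could be encoded; the type and field names are assumptions for illustration, not the plugin's actual artefact schema:

```typescript
// Hypothetical encoding of AI acceptance thresholds; names are illustrative.
interface AiAcceptanceThresholds {
  reqId: string;          // linked business requirement, e.g. "REQ-042"
  precisionMin: number;   // e.g. 0.95
  recallMin: number;
  f1Min: number;
  latencyP99Ms: number;   // inference latency budget
  fallback: "rules-engine" | "human-review";  // behaviour on underperformance
}

// A model release fails acceptance if any threshold is missed, which
// triggers the documented fallback rather than silent degradation.
function meetsAcceptance(
  m: { precision: number; recall: number; f1: number; latencyP99: number },
  t: AiAcceptanceThresholds
): boolean {
  return m.precision >= t.precisionMin && m.recall >= t.recallMin &&
         m.f1 >= t.f1Min && m.latencyP99 <= t.latencyP99Ms;
}
```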
Use this agent when building the requirements traceability matrix, authoring the glossary, freezing the requirements baseline, and assembling the Gate B pack at Phase 2 (Requirements and Baseline) of the waterfall lifecycle. <example> Context: All three upstream Phase 2 sub-phases are complete and the team needs to consolidate all requirement artefacts into a frozen baseline and assemble the Gate B submission pack. user: "We have the business requirements, AI requirements, and NFRs all complete — now we need to freeze the baseline and prepare for Gate B" assistant: "I'll use the baseline-manager agent to build the RTM linking all REQ-IDs to acceptance criteria and test references, produce the glossary, freeze the requirements baseline with a version number, and assemble the Gate B pack with all 10 mandatory artefacts and a gate readiness assessment." <commentary> Baseline freeze is the critical control point that prevents requirements from changing without formal change control — the baseline-manager enforces this boundary before the project moves into system design. </commentary> </example> <example> Context: The RTM has been built but two requirements have no corresponding test reference, and one AI requirement is not linked back to any business requirement. user: "The RTM shows coverage gaps — two requirements have no test ref and one AI req has no parent business requirement" assistant: "I'll use the baseline-manager agent to investigate the coverage gaps: identify the two requirements missing test references and flag them for test planning in Phase 3, resolve the orphaned AI requirement by either linking it to an existing REQ-ID or escalating to requirements-articulation for a new requirement to be added, and confirm 100% RTM coverage before freezing the baseline." <commentary> RTM coverage gaps discovered at gate are far cheaper to resolve than gaps discovered at test — the baseline-manager's coverage validation is the last opportunity to close them before design begins. </commentary> </example>
Use this agent when defining and validating performance, security, scalability, compliance, and availability non-functional requirements (NFRs) at Phase 2 (Requirements and Baseline) of the waterfall lifecycle. <example> Context: The business requirements set and AI requirements specification are complete and the team needs to define measurable NFRs covering performance, security, and compliance before design begins. user: "We need to define NFRs — the system has to be fast, secure, and GDPR-compliant, but we haven't documented the specific targets yet" assistant: "I'll use the nfr-architect agent to define measurable NFRs across all five categories: performance (response time, throughput), security (authentication, data protection), scalability (peak load, growth targets), compliance (GDPR, applicable frameworks), and availability (uptime SLA, RTO/RPO). Each NFR will have a numeric target, a test approach, and a priority rating." <commentary> NFRs without measurable targets are unverifiable at test time — the nfr-architect agent ensures every NFR has a specific numeric threshold and a defined test approach so that pass/fail can be determined objectively. </commentary> </example> <example> Context: The compliance team has flagged that the AI system may fall under EU AI Act high-risk classification and specific NFRs around auditability and human oversight must be documented. user: "Compliance says this might be a high-risk AI system under the EU AI Act — what NFRs do we need to add?" assistant: "I'll use the nfr-architect agent to map the EU AI Act high-risk requirements to specific NFRs: auditability (complete audit log with retention period), human oversight (override capability, escalation thresholds), data governance (training data documentation, bias monitoring), and accuracy/robustness requirements. Each will be specified with measurable targets and linked to the relevant compliance framework." <commentary> Regulatory NFRs for AI systems are mandatory Gate B artefacts in regulated environments — missing them at requirements stage causes design rework and potential compliance failures post-deployment. </commentary> </example>
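A minimal sketch of the "every NFR has a numeric target and a test approach" rule, again with illustrative field names rather than the plugin's real schema:

```typescript
// Hypothetical NFR record -- the category list mirrors the five named
// above; all field names are assumptions for illustration.
type NfrCategory =
  "performance" | "security" | "scalability" | "compliance" | "availability";

interface Nfr {
  id: string;             // e.g. "NFR-PERF-001"
  category: NfrCategory;
  statement: string;      // "95th percentile response time under 200ms at peak load"
  target: number;         // the numeric threshold that makes pass/fail objective
  unit: string;           // "ms", "%", "req/s"
  testApproach: string;   // how the target will be verified at test time
  priority: "must" | "should" | "could";
}
```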
Use this agent when eliciting, structuring, and validating business requirements at Phase 2 (Requirements and Baseline) of the waterfall lifecycle. <example> Context: Gate A has been approved and the team needs to capture and formalise the business requirements before any design work begins. user: "We have the project charter signed off — now we need to elicit and structure the business requirements from stakeholders" assistant: "I'll use the requirements-articulation agent to run stakeholder elicitation sessions, assign REQ-IDs, validate each requirement against SMART criteria, define acceptance criteria, and produce the business-requirements-set.md ready for Gate B." <commentary> Business requirements elicitation is the first action in Phase 2 — without a complete and SMART-validated requirements set, all downstream design and build work is at risk. </commentary> </example> <example> Context: The requirements set has been drafted but several requirements are vague or overlap, and one stakeholder group has not been consulted. user: "Some requirements feel ambiguous and we're not sure all stakeholder groups have been covered — can you review the set?" assistant: "I'll use the requirements-articulation agent to audit the existing requirements for SMART compliance, identify ambiguous or conflicting entries, flag missing stakeholder inputs, and produce a revised business-requirements-set.md with all issues resolved." <commentary> Requirements quality gates catch ambiguity early — vague requirements at this stage translate into scope gaps and rework in build and test. </commentary> </example>
Use this agent when performing security design review, privacy design review, constructing the control matrix, authoring the AI control design note, and assembling the Gate C design approval pack at Phase 3 (Architecture and Solution Design) of the waterfall lifecycle. <example> Context: Detailed design (sub-phase 3.2) has delivered the LLD, interface specifications, data flow design, AI/ML design package, test design package, and operational design package. The team needs to assure the design through security review, privacy review, control matrix construction, AI governance, and Gate C submission assembly. user: "Detailed design is complete — we need to run the security and privacy reviews, build the control matrix, author the AI controls note, and prepare the Gate C pack" assistant: "I'll use the control-design agent to build the control matrix mapping every risk in the risk register to at least one control with type, implementation, owner, and test reference; run the security design review covering threat model, attack surface, controls, and residual risks; conduct the privacy design review including PII inventory, GDPR article mapping, data flows for personal data, and privacy controls; author the AI control design note for bias monitoring, drift detection, model governance, explainability, and human oversight; and assemble the Gate C design approval pack with all 8 mandatory artefacts assessed for completeness and all 8 exit criteria evaluated." <commentary> Gate C is the governance checkpoint that authorises Phase 4 to begin. Assembling the design approval pack without first completing the control matrix and security/privacy reviews is a governance failure — Phase 4 should not start on a design that has unmitigated CRITICAL threats or uncontrolled risks. </commentary> </example> <example> Context: The security design review has identified a CRITICAL threat: the inference API endpoint lacks rate limiting and authentication, exposing the model to adversarial probing. The control matrix shows that this risk has no assigned control because it was not in the Phase 3 risk register. user: "Security review found a CRITICAL unmitigated threat on the inference API — it's not in the risk register and has no control" assistant: "I'll use the control-design agent to address this blocking finding: add the adversarial probing risk to the risk register with CRITICAL severity and OPEN status; add a corresponding control to the control matrix — preventive control implementing rate limiting and API key authentication on the inference endpoint, owned by the Security Architect, with test reference in the security test scope; update the security design review to record the finding as mitigated with the control in place; and re-evaluate Gate C readiness — this finding must be resolved before the design approval pack can be submitted as gate-ready." <commentary> A CRITICAL unmitigated threat discovered at control-design is the right time to find it — far cheaper than discovering it at penetration testing or in production. The control-design agent's role is to force this kind of finding to the surface and block gate submission until it is resolved. </commentary> </example>
Use this agent when expanding HLD components into implementation-ready specifications, defining interface contracts, data flow design, AI/ML design, test design, and operational design at Phase 3 (Architecture and Solution Design) of the waterfall lifecycle. <example> Context: Solution architecture (sub-phase 3.1) has delivered the HLD, ADR set, integration diagram, and security architecture. The team now needs to expand each HLD component to implementation-level specification before control-design and Gate C. user: "HLD and ADRs are complete — we need to go from high-level design to implementation-ready specs for all components" assistant: "I'll use the detailed-design agent to expand each HLD component into an LLD with data model, API contracts, sequence diagrams, and error handling; define complete interface specifications for all endpoints; produce data flow diagrams with protection classification; author the AI/ML design package covering model architecture, training pipeline, inference, and monitoring; define the test design package with scope, test types, and entry/exit criteria; and produce the operational design package covering deployment, monitoring, alerting, and DR/BC procedures." <commentary> The LLD and interface specifications are the primary inputs for Phase 4 build — any ambiguity or incompleteness here translates directly into implementation rework. Completeness at the interface contract level (request/response schemas, error codes, versioning) is the single most important quality criterion for this sub-phase. </commentary> </example> <example> Context: The AI/ML design package review finds that the monitoring design for the inference pipeline does not define fallback behaviour when the model returns a low-confidence prediction, and the test design package does not include performance testing scope despite a strict 200ms latency NFR. user: "The AI/ML design package has no fallback for low-confidence predictions and the test design package is missing performance testing despite the 200ms NFR" assistant: "I'll use the detailed-design agent to address both gaps: update the AI/ML design package to define the fallback implementation (e.g., rule-based fallback, confidence threshold, human escalation path), add monitoring instrumentation for prediction confidence scores, and update the test design package to include a performance test scope covering the 200ms latency NFR with defined test environment, load profile, and pass/fail criteria aligned to NFR-PERF-001." <commentary> AI/ML fallback implementation is a design-level decision — leaving it undefined at this stage means engineers will make ad hoc decisions during build that may not align with governance expectations. Performance test scope omission at design level typically means performance testing is added as an afterthought without proper environment or baseline definition. </commentary> </example>
Use this agent when translating a frozen requirements baseline into a high-level design, defining the architectural structure of the system, and assembling the HLD and ADR set that will govern all subsequent design and build work at Phase 3 (Architecture and Solution Design) of the waterfall lifecycle. <example> Context: Gate B has been approved and the frozen requirements baseline is available. The team needs to produce the HLD, ADR set, integration diagram, environment strategy, and security architecture before detailed design can begin. user: "Gate B is approved — we have the frozen requirements baseline and need to start Phase 3 with the high-level design" assistant: "I'll use the solution-architecture agent to decompose the requirements baseline into HLD components, document every major architectural decision as an ADR with rejected alternatives, produce the context and integration diagrams, define the environment strategy for dev/test/staging/production, and author the security architecture covering authentication, authorisation, and encryption. The resulting HLD and ADR set form the architectural contract for sub-phases 3.2 and 3.3." <commentary> The HLD and ADR set are the governance anchor for Phase 3 — all subsequent detailed design must conform to the architectural decisions made here. Documenting rejected alternatives in ADRs prevents re-litigating closed decisions during build. </commentary> </example> <example> Context: An ADR review reveals that the chosen event-streaming architecture conflicts with two NFRs around data consistency and recovery point objective, and the security architect has flagged that the integration diagram does not show authentication on two external API calls. user: "The ADR review found conflicts with NFR-consistency and NFR-RPO, and two external API calls are missing authentication in the integration diagram" assistant: "I'll use the solution-architecture agent to address both issues: re-evaluate the event-streaming ADR against the NFR constraints, assess whether a CQRS/saga pattern resolves the consistency and RPO conflict or whether a different architectural pattern must be selected, update the ADR with revised decision and consequences, and update the integration diagram to document the authentication mechanism on both flagged external API calls before the HLD is handed over to detailed-design." <commentary> Conflicts between architectural decisions and NFRs discovered at sub-phase 3.1 are far cheaper to resolve than conflicts discovered at build or test. The ADR set's explicit rejected-alternatives record makes trade-off re-evaluation structured rather than ad hoc. </commentary> </example>
Use this agent when the user asks to generate, create, or fill a lifecycle artefact or template. <example> Context: Project is in Phase 1 (Opportunity and Feasibility) and needs the initial feasibility study artefact to satisfy Gate A evidence. user: "Generate the feasibility study for our new ERP replacement initiative" assistant: "I'll use the artefact-generator to create the feasibility study from the Phase 1 template, populated with your initiative context and validated against the schema." <commentary> Artefact creation request with clear phase context — artefact-generator selects the Phase 1 template, gathers mandatory field data, and produces a validated document. </commentary> </example> <example> Context: Gate C review requires a completed architecture decision record that has not been produced yet. user: "We need the architecture decision record for the database technology choice — can you generate it?" assistant: "I'll use the artefact-generator to produce the architecture decision record using the Phase 3 template, incorporating your technology evaluation notes." <commentary> Artefact required for gate evidence — artefact-generator maps the gate requirement to the correct template and generates a fully validated document. </commentary> </example>
Use this agent when the user asks to run a gate review, formal approval checkpoint, or quality gate in the waterfall lifecycle. <example> Context: Phase 2 (Requirements and Baseline) work is complete and the team needs Gate B approval before starting Phase 3 (Architecture and Solution Design). user: "We've completed all Phase 2 deliverables — can you run Gate B review?" assistant: "I'll use the gate-reviewer agent to assess Gate B evidence against the gate criteria reference and produce a formal gate review report with pass/fail/waived determination." <commentary> Formal gate review with specific gate identifier — gate-reviewer reads criteria, checks artefact-to-obligation matrix, and issues a tri-state decision. </commentary> </example> <example> Context: Sponsor wants to assess Gate F readiness before approving the Release and Transition phase. user: "Check if we're ready for Gate F — I need to brief the Sponsor before sign-off" assistant: "I'll use the gate-reviewer to evaluate Gate F readiness, checking all required artefacts against the gate criteria reference and identifying any evidence gaps." <commentary> Pre-gate readiness check for stakeholder communication — gate-reviewer provides evidence status, gap analysis, and sign-off authority guidance. </commentary> </example>
Use this agent when the user asks about lifecycle status, current phase, navigation between phases, gate progression, or needs routing to the appropriate phase agent. <example> Context: Project team is returning after a break and wants to know their current position in the waterfall lifecycle. user: "What phase are we in and what should we do next?" assistant: "I'll use the lifecycle-orchestrator agent to read your lifecycle state, report the current phase and gate status, and route you to the appropriate phase agent." <commentary> The user needs lifecycle orientation — the orchestrator reads lifecycle-state.json and routes to the right phase agent with full context. </commentary> </example> <example> Context: Project Manager received Gate C approval and wants to start Phase 4 (Build and Integration). user: "Gate C was approved, how do I kick off Phase 4?" assistant: "I'll use the lifecycle-orchestrator to confirm the Gate C outcome, update lifecycle state, and route you to the Phase 4 build-planning agent." <commentary> Phase transition request requires state validation and routing — exactly what the orchestrator handles across the 8-phase waterfall model. </commentary> </example>
Use this agent when the user asks for metrics, reports, performance analysis, KPI tracking, trend analysis, or delivery health checks across any phase. <example> Context: End of Phase 4 (Build and Integration) and the Project Manager wants a metrics health check before Gate D. user: "Give me a metrics summary for Phase 4 — are we on track for Gate D?" assistant: "I'll use the metrics-analyst to assess Phase 4 delivery metrics, build quality indicators, and Gate D readiness based on current lifecycle data." <commentary> Phase-end metrics review with gate readiness angle — analyst reads lifecycle state and artefacts to produce a health dashboard with GREEN/AMBER/RED status per metric. </commentary> </example> <example> Context: Phase 7 (Operate, Monitor and Improve) periodic review requires analysis of operational performance against baseline SLOs. user: "How are we performing against our Phase 7 operational targets? Any anomalies?" assistant: "I'll use the metrics-analyst to compare current operational and quality metrics against the SLOs defined in the operations baseline, and flag any trend anomalies requiring attention." <commentary> Operational metrics review with anomaly detection — analyst compares actuals against baseline, identifies deviations, and recommends corrective actions. </commentary> </example>
Use this agent when the user asks to add, update, or query risks, assumptions, clarifications, dependencies, or evidence across any phase. <example> Context: Project is in Phase 3 (Architecture and Solution Design) and the team has identified that a key vendor may not deliver the integration library on time. user: "We might not get the vendor integration library before Phase 4 starts — how do we track this?" assistant: "I'll use the risk-assumption-tracker to log this as a dependency and a blocking assumption, assess its impact on the Phase 4 start date, and set a resolution deadline before Gate C." <commentary> New dependency and assumption identified — tracker logs both entries with impact rating, owner, and resolution plan tied to the Gate C deadline. </commentary> </example> <example> Context: Pre-Gate E review requires a register summary to confirm no HIGH/CRITICAL items will block gate passage. user: "Generate a register summary for Gate E — I need to know if any risks or assumptions will block the review" assistant: "I'll use the risk-assumption-tracker to produce a consolidated summary across all 5 registers, flagging all HIGH/CRITICAL risks and any stale or unresolved assumptions before Gate E." <commentary> Pre-gate register consolidation — tracker surfaces blockers and provides the gate-reviewer with a clean evidence summary. </commentary> </example>
This skill should be used when creating, completing, or reviewing a waterfall lifecycle artefact. Ensures artefacts follow templates, satisfy completeness requirements, and are ready for gate evidence submission with correct closure obligations.
This skill should be used when preparing for or executing a gate review. Provides tri-state pass/fail/waived checklists for all 8 gates (A–H) of the waterfall-lifecycle framework. Use this skill as a pre-flight check before invoking the gate-reviewer agent.
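A minimal sketch of the tri-state evaluation described above, assuming hypothetical names (CriterionStatus, evaluateGate) rather than the skill's actual checklist format:

```typescript
// Hypothetical tri-state gate evaluation: every criterion must be met or
// formally waived for the gate to pass. Names are illustrative.
type CriterionStatus = "met" | "not-met" | "waived";
type GateDecision = "pass" | "fail" | "waived";

function evaluateGate(criteria: CriterionStatus[]): GateDecision {
  // Any unmet criterion fails the gate outright.
  if (criteria.some((c) => c === "not-met")) return "fail";
  // A gate carried entirely on waivers is itself recorded as waived.
  return criteria.every((c) => c === "waived") ? "waived" : "pass";
}
```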
This skill should be used when preparing, validating, or executing inter-phase handovers in the waterfall lifecycle. Covers artefact inventory, open items log, risk register transfer, phase transition readiness checklist, and formal handover sign-off.
This skill should be used when enforcing compliance with a signed waterfall phase contract — checking mandatory fields, verifying exit criteria completeness before a gate review, and blocking gate requests when contract gaps are found. Distinct from the phase-contract skill (which creates contracts); this skill enforces them.
This skill should be used when creating or validating a waterfall phase contract — the formal agreement that gates phase entry and exit. Phase contracts in waterfall are binding: no phase starts without approved entry criteria and no gate review without met exit criteria.
This skill should be used when authoring, reviewing, or validating requirements in a waterfall lifecycle project — including functional requirements, AI requirements, NFRs, and acceptance criteria. Covers ID assignment, SMART criteria, category assignment, and requirement quality gates.
This skill should be used when identifying, assessing, logging, or reviewing risks in a waterfall lifecycle project. Covers all 5 register types: risk, assumption, clarification, dependency, evidence — with waterfall-specific risk categories and formal escalation paths for gate reviews.
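A minimal sketch of a shared entry shape across the five registers; the plugin's real register schemas may differ, and every field name here is an assumption:

```typescript
// Hypothetical common shape across the five registers -- this only
// illustrates the shared escalation fields used at gate reviews.
type RegisterType =
  "risk" | "assumption" | "clarification" | "dependency" | "evidence";
type Severity = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";

interface RegisterEntry {
  id: string;             // e.g. "RISK-007"
  register: RegisterType;
  description: string;
  severity: Severity;     // HIGH/CRITICAL open entries block gate reviews
  owner: string;
  status: "open" | "mitigated" | "closed";
  dueBy?: string;         // resolution deadline, typically the next gate date
}
```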
This skill should be used when creating or maintaining a Requirements Traceability Matrix (RTM), validating traceability coverage, detecting orphaned requirements, and preparing the RTM for Gate B submission.
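A small sketch of orphan detection over RTM rows; the row shape and the "AI-" ID convention are assumptions for illustration, not the skill's actual format:

```typescript
// Hypothetical RTM row: each requirement should trace to at least one
// acceptance criterion and one test reference before Gate B.
interface RtmRow {
  reqId: string;
  acceptanceCriteria: string[];
  testRefs: string[];
  parentReqId?: string;   // AI/NFR requirements trace back to a business req
}

// Orphans: rows missing test coverage, or derived requirements with no
// parent business requirement (assumed "AI-" ID prefix convention).
function findOrphans(rtm: RtmRow[]): RtmRow[] {
  return rtm.filter((r) =>
    r.testRefs.length === 0 ||
    (r.reqId.startsWith("AI-") && !r.parentReqId));
}
```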