From product-lens
Use when the user wants to evaluate whether an existing app should add a specific feature. Analyzes demand fit, journey contribution, build cost, and strategic value. Produces a GO/DEFER/KILL verdict with conditional follow-up: integration map (GO) or alternative directions (DEFER/KILL). Works on local projects only (requires code access).
npx claudepluginhub n0rvyn/indie-toolkit --plugin product-lens

This skill uses the workspace's default tool permissions.
Parse the input to identify:
Project path — where the app's code is located
Feature description — what feature is being evaluated
This skill requires a local project with code access. If the user specifies an external app name without a path, inform them: "Feature assessment requires code access to analyze build cost and integration points. Please provide a path to a local project."
Read the project's README (or equivalent top-level docs) to confirm the product description. If unclear, ask for a one-sentence product description.
Check for platform indicators:
- .xcodeproj, .xcworkspace, or Package.swift with an iOS platform → iOS
- package.json, next.config, vite.config → Web
- pubspec.yaml → Flutter (cross-platform)
- android/ directory → Android

Locate the plugin's reference files by searching for **/product-lens/references/feature-assess/_calibration.md. From the same parent directory, resolve absolute paths to:

- _calibration.md (always)
- _verdict.md (always)
- dimensions/01-demand-fit.md through dimensions/04-strategic-value.md (all 4)
- modules/integration-map.md (always)
- modules/alternative-directions.md (always)

Dispatch two agents in a single message (parallel execution):
Agent 1: app-context-scanner with:
Agent 2: market-scanner with:
Wait for both agents to complete before proceeding. The dimension evaluators need both the app context and market data.
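The platform-detection and reference-file-resolution steps above can be sketched in Python. This is a sketch only: the function names are assumptions, the precedence order when multiple indicators match is not specified by the skill, and the directory layout is inferred from the file list above.

```python
from pathlib import Path

def detect_platform(root: str) -> str:
    """Map the indicator files listed above to a platform label.
    Sketch only: precedence when several indicators match is assumed."""
    r = Path(root)
    if any(r.glob("*.xcodeproj")) or any(r.glob("*.xcworkspace")):
        return "iOS"
    pkg = r / "Package.swift"
    if pkg.is_file() and "iOS" in pkg.read_text():
        return "iOS"
    if (r / "pubspec.yaml").is_file():
        return "Flutter"
    if (r / "android").is_dir():
        return "Android"
    if (r / "package.json").is_file():
        return "Web"
    return "Unknown"

def find_references(search_root: str) -> dict:
    """Resolve absolute paths to the reference files, anchored on
    _calibration.md as the search key (per the step above)."""
    hits = sorted(Path(search_root).resolve().rglob(
        "product-lens/references/feature-assess/_calibration.md"))
    if not hits:
        raise FileNotFoundError("_calibration.md not found under " + search_root)
    base = hits[0].parent
    return {
        "calibration": base / "_calibration.md",
        "verdict": base / "_verdict.md",
        "dimensions": sorted((base / "dimensions").glob("0[1-4]-*.md")),
        "modules": [base / "modules" / "integration-map.md",
                    base / "modules" / "alternative-directions.md"],
    }
```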
Read _calibration.md once — this will be injected as preamble into every feature-dimension-evaluator prompt.
Determine the platform variant to use:
- iOS: the ### iOS sections from each dimension file
- All other platforms: the default sections

For each of the 4 dimension files, read the file and extract:

- The core question (under **Core question:**)
- The universal sub-questions (under ## Universal Sub-Questions)
- The platform-specific sub-questions (the ### [Platform] section within ## Platform-Specific Sub-Questions). If iOS, also extract the iOS core question variant (the blockquote under the ### iOS heading).
- The signal anchors (under ## Signal Anchors)
- The evidence sources (under ## Evidence Sources)

For each dimension, merge the universal + platform-specific sub-questions into a single numbered list (universal questions first, then platform-specific, numbered sequentially).
Result: 4 self-contained dimension payloads.
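The merge rule above can be sketched as follows. The data shapes are assumptions: each sub-question is treated as a plain string, and the function name is hypothetical.

```python
def merge_sub_questions(universal: list, platform_specific: list) -> str:
    """Merge universal + platform-specific sub-questions into a single
    sequentially numbered list, universal questions first."""
    merged = universal + platform_specific
    return "\n".join(f"{i}. {q}" for i, q in enumerate(merged, start=1))
```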
Also prepare market data excerpts per dimension from the market-scanner output:
Dispatch all 4 feature-dimension-evaluator agents in a single message (parallel execution). Each agent receives:
- _calibration.md

Wait for all 4 to complete.
For each of the 4 returned results, verify:
- A ## [Dimension Name] heading exists
- The signal is one of Positive, Neutral, Negative
- The confidence is one of High, Medium, Low
- The number of ### Q sub-sections matches the expected sub-question count for that dimension
- Each sub-section has **Evidence:** and **Assessment:** fields
- An **Anchor match:** field exists in the Dimension Signal section

If any dimension fails validation:
> ⚠️ This dimension's output did not fully comply with the evaluation template.

Extract from each valid result:
Read _verdict.md. Apply the verdict rules to the 4 dimension signals:
KILL if ANY of:
GO if ALL of:
DEFER if:
For DEFER: list the conditions that could flip the verdict. Identify which Low-confidence signals are most likely to change with more information.
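The verdict rules follow an any/all shape (KILL if ANY kill condition holds, GO if ALL go conditions hold, otherwise DEFER). A minimal sketch of that structure, with the concrete conditions (which live in _verdict.md and are not reproduced here) passed in as hypothetical predicates over the four dimension signals:

```python
def decide_verdict(signals: dict, kill_conditions, go_conditions) -> str:
    """Apply the any/all verdict structure. The predicate lists stand in
    for the actual rules defined in _verdict.md (hypothetical shape)."""
    if any(cond(signals) for cond in kill_conditions):
        return "KILL"
    if all(cond(signals) for cond in go_conditions):
        return "GO"
    return "DEFER"
```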
Read the appropriate module file from references/feature-assess/modules/. If the module has platform additions (### iOS), extract them and append to the base instructions.
If verdict is GO:
Dispatch feature-followup-generator with:
- integration-map.md (pre-merged with platform additions)

If verdict is DEFER or KILL:
Dispatch feature-followup-generator with:
- alternative-directions.md

Wait for completion.
If Integration Map: verify:
- ## Integration Map section exists
- ### Reusable Infrastructure section exists
- ### New Infrastructure Required section exists
- ### Modification Scope section with table exists
- ### Integration Points section exists
- ### Implementation Sequence section with numbered steps exists

If Alternative Directions: verify:

- ## Alternative Directions section exists
- ### Lower-Cost Variants, ### Complementary Features, and ### Clarify Before Deciding sections exist

If validation fails: re-dispatch once with a correction note. If still non-compliant, include the output with a warning annotation.
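A heading checklist like the one above can be implemented as a simple line scan. The section names come from the Integration Map list above; the function and constant names are assumptions.

```python
REQUIRED_INTEGRATION_MAP = [
    "## Integration Map",
    "### Reusable Infrastructure",
    "### New Infrastructure Required",
    "### Modification Scope",
    "### Integration Points",
    "### Implementation Sequence",
]

def missing_sections(markdown: str, required: list) -> list:
    """Return the required headings that do not open any line of the
    generated output (empty list means the output passed)."""
    lines = markdown.splitlines()
    return [h for h in required
            if not any(line.strip().startswith(h) for line in lines)]
```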
Assemble the final report:
# Feature Assessment: [Feature Name] for [App Name]
> **Feature:** [feature description]
> **App:** [app name] — [one-sentence app description]
> **Platform:** [platform]
## Verdict: [GO / DEFER / KILL]
[One paragraph explaining the verdict, citing the key signals that drove it]
(If DEFER: list the conditions that could flip the verdict)
## Signal Overview
| Dimension | Signal | Confidence | Key Evidence |
|-----------|--------|------------|--------------|
| Demand Fit (需求契合) | [signal] | [confidence] | [key evidence] |
| Journey Contribution (旅程贡献) | [signal] | [confidence] | [key evidence] |
| Build Cost (实现代价) | [signal] | [confidence] | [key evidence] |
| Strategic Value (战略价值) | [signal] | [confidence] | [key evidence] |
## Dimension Details
[All 4 dimension evaluation results in order, each preserving its internal structure
(## heading, ### QN sections, ### Dimension Signal)]
## [Integration Map OR Alternative Directions]
[from feature-followup-generator output]
Display the assembled report.
Post-processing: