# product-lens

Use when the user wants to evaluate a product — assess demand, market viability, moat, and execution quality from an indie developer perspective. Works on local projects (by reading code) or external apps (via web search).

Install: `npx claudepluginhub n0rvyn/indie-toolkit --plugin product-lens`. This skill uses the workspace's default tool permissions.
Parse the input to determine evaluation target:
- **Local project** — a filesystem path (e.g. `/path/to/project`)
- **External app** — a product name (e.g. "Bear", "Notion")

If the target is a local project, read its README (or equivalent top-level docs) to understand what it does. If no README exists or the product purpose is unclear, ask the user for a one-sentence product description.
Local project — check for platform indicators:

- `.xcodeproj`, `.xcworkspace`, or `Package.swift` with an iOS platform declaration → iOS
- `package.json`, `next.config`, `vite.config` → Web
- `pubspec.yaml` → Flutter (cross-platform)
- `android/` directory → Android

External app:
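The indicator checks above can be sketched in Python. This is a minimal sketch: the function name and the iOS-first check order are my assumptions, and parsing `Package.swift` for an iOS platform declaration is omitted.

```python
from pathlib import Path

def detect_platform(root: str) -> str:
    """Map the file indicators listed above to a platform label."""
    p = Path(root)
    # iOS checked first: an iOS repo may also carry a package.json for
    # tooling. (Package.swift platform parsing omitted for brevity.)
    if any(p.glob("*.xcodeproj")) or any(p.glob("*.xcworkspace")):
        return "iOS"
    if (p / "pubspec.yaml").exists():
        return "Flutter"
    if (p / "android").is_dir():
        return "Android"
    if (p / "package.json").exists():
        return "Web"
    return "Unknown"
```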
Locate the plugin's reference files by searching for `**/product-lens/references/_calibration.md`. From the same parent directory, resolve absolute paths to:
- `_calibration.md` (always)
- `_scoring.md` (always)
- `dimensions/01-demand-authenticity.md` through `dimensions/06-execution-quality.md` (all 6)
- `modules/kill-criteria.md`, `modules/feature-audit.md`, `modules/elevator-pitch.md`, `modules/pivot-directions.md`, `modules/validation-playbook.md` (all 5)

Dispatch the market-scanner agent with:
Wait for market-scanner to complete before proceeding. The dimension evaluators need this data.
Read `_calibration.md` once — this will be injected as a preamble into every dimension-evaluator prompt.
Determine the platform variant to use:

- iOS → the `### iOS` sections from each dimension file
- otherwise → the `### Default` sections

For each of the 6 dimension files, read the file and extract:
- the core question (the blockquote under **Core question:**)
- the universal sub-questions (under `## Universal Sub-Questions`)
- the platform-specific sub-questions (the `### [Platform]` section within `## Platform-Specific Sub-Questions`); if iOS, also extract the iOS core question variant (the blockquote under the `### iOS` heading)
- the scoring anchors (under `## Scoring Anchors`)
- the evidence sources (under `## Evidence Sources`)

For each dimension, merge the universal and platform-specific sub-questions into a single numbered list (universal questions first, then platform-specific, numbered sequentially).
Result: 6 self-contained dimension payloads.
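The merge step above (universal first, then platform-specific, numbered sequentially) can be sketched as follows; the `Q{i}.` prefix format is my assumption, not specified by the skill.

```python
def merge_sub_questions(universal: list[str], platform_specific: list[str]) -> list[str]:
    """Universal questions first, then platform-specific, numbered Q1..Qn."""
    merged = universal + platform_specific
    return [f"Q{i}. {q}" for i, q in enumerate(merged, start=1)]
```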
Also prepare market data excerpts per dimension from the market-scanner output:
Dispatch all 6 dimension-evaluator agents in a single message (parallel execution). Each agent receives:
- the `_calibration.md` content (as preamble)
- `standard`

Wait for all 6 to complete.
For each of the 6 returned results, verify:
- the `## [Dimension Name]` heading exists
- the number of `### Q` sub-sections matches the expected sub-question count for that dimension
- each sub-section has **Evidence:** and **Assessment:** fields
- an **Anchor match:** field exists in the Dimension Score section
- a `### Next Action` section exists after `### Dimension Score`

If any dimension fails validation:
> ⚠️ This dimension's output did not fully comply with the evaluation template.

Extract from each valid result:
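The structural checks on each dimension result can be sketched as a small helper. The heading and field names come from the checklist itself; the function shape (returning a list of problems, empty when compliant) is my assumption, and the "after `### Dimension Score`" ordering check is omitted.

```python
import re

def validate_dimension(result: str, expected_q: int) -> list[str]:
    """Return a list of template-compliance problems (empty if valid)."""
    problems = []
    if not re.search(r"^## .+$", result, re.M):
        problems.append("missing ## [Dimension Name] heading")
    if len(re.findall(r"^### Q\d+", result, re.M)) != expected_q:
        problems.append("### Q sub-section count mismatch")
    for field in ("**Evidence:**", "**Assessment:**", "**Anchor match:**"):
        if field not in result:
            problems.append(f"missing {field} field")
    if "### Next Action" not in result:
        problems.append("missing ### Next Action section")
    return problems
```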
Read the 5 module files from `references/modules/`. For each module file that has a `## Platform Additions` or `## Platform Constraints` section:

- extract the `### iOS` section and append it to the module's base instructions

Dispatch the extras-generator agent with:
Wait for completion.
Read `_scoring.md`. Compute the weighted total from the 6 dimension scores using default equal weights (or a user-specified preset, if requested).
Assemble the final report by combining dimension results and extras output:
# Product Lens: [Product Name]
> [One-sentence product description]
## Elevator Pitch Test
[from extras-generator output — the Elevator Pitch Test section]
## Evaluation Overview
| Dimension | Score | Justification |
|-----------|-------|---------------|
| Demand Authenticity (需求真伪) | [stars] | [justification from dimension result] |
| Journey Completeness (逻辑闭环) | [stars] | [justification] |
| Market Space (市场空间) | [stars] | [justification] |
| Business Viability (商业可行) | [stars] | [justification] |
| Moat (护城河) | [stars] | [justification] |
| Execution Quality (执行质量) | [stars] | [justification] |
| **Weighted Total** | **X.X** | |
## Priority Actions
[Ordered list of Next Actions from dimensions scored ≤3★, sorted by score ASC (lower = more urgent).
Each item: "**[Dimension Name] (N★):** [Next Action text from dimension-evaluator output]"
If no dimensions scored ≤3★, write: "All dimensions scored ≥4★. See individual dimension Next Actions for consolidation opportunities."]
## Dimension Details
[All 6 dimension evaluation results in order, each preserving its internal structure
(## heading, ### QN sections, ### Dimension Score)]
## Feature Necessity Audit
[from extras-generator output]
## Kill Criteria
[from extras-generator output]
## Pivot Directions
[from extras-generator output]
## Validation Playbook
[from extras-generator output — experiments targeting dimensions scored ≤3★]
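The Priority Actions ordering described in the template above (dimensions scored ≤3★, sorted ascending so lower scores come first) can be sketched as below; the tuple shape is my assumption.

```python
def priority_actions(dims: list[tuple[str, int, str]]) -> list[str]:
    """dims: (dimension name, star score, next-action text) tuples."""
    # Stable sort keeps the original order for dimensions with equal scores.
    flagged = sorted((d for d in dims if d[1] <= 3), key=lambda d: d[1])
    if not flagged:
        return ["All dimensions scored ≥4★. See individual dimension "
                "Next Actions for consolidation opportunities."]
    return [f"**{name} ({score}★):** {action}" for name, score, action in flagged]
```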
Validate extras output:

- `## Elevator Pitch Test` section exists with a **Verdict:** field
- `## Kill Criteria` section exists with ≥3 numbered items
- `## Feature Necessity Audit` section exists (content or skip notice)
- `## Pivot Directions` section exists with ≥2 named directions
- `## Validation Playbook` section exists with ≥2 numbered experiments (each with Do/Success/Fail/Timeline fields), or a skip notice if all dimensions scored ≥4★

If validation fails: re-dispatch extras-generator once with a correction note. If the output is still non-compliant, include it with a warning annotation.
If the evaluation target is a local project:

- Check that the `docs/08-product-evaluation/` directory exists. If not, create it.
- Write the report to `docs/08-product-evaluation/{YYYY-MM-DD}-evaluate.md`.
- Check for a previous evaluation (`*-evaluate.md`). If found, append a `## Changes Since Last Evaluation` section comparing dimension scores and highlighting movements of ≥1★, with a `| Dimension | Previous | Current | Change |` table.

Display the assembled report.
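The `## Changes Since Last Evaluation` comparison can be sketched as a small table builder. The dict shapes and the signed-delta formatting are my assumptions; a dimension absent from the previous report is treated as unchanged.

```python
def change_table(prev: dict[str, int], curr: dict[str, int]) -> str:
    """Render the | Dimension | Previous | Current | Change | table."""
    rows = ["| Dimension | Previous | Current | Change |",
            "|-----------|----------|---------|--------|"]
    for dim, new in curr.items():
        old = prev.get(dim, new)  # unseen dimension: no change
        rows.append(f"| {dim} | {old} | {new} | {new - old:+d} |")
    return "\n".join(rows)
```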
Post-processing:

- report saved to `docs/08-product-evaluation/` (if local project)