From product-lens
Use for a quick demand reality check on a product idea or project. Runs only the demand validation dimension and Elevator Pitch test. Fast first-pass filter before committing to build.
Install: `npx claudepluginhub n0rvyn/indie-toolkit --plugin product-lens`

This skill uses the workspace's default tool permissions.
Lightweight evaluation: only runs Demand Authenticity and the Elevator Pitch test. No full agent dispatch; runs directly in the main context for speed. This is a 5-minute first-pass filter.
Do not deep-dive into code. This is a surface-level scan.
Same platform detection logic as the evaluate skill:

- `.xcodeproj` / `Package.swift` → iOS
- `package.json` → Web

Locate reference files by searching for `**/product-lens/references/dimensions/01-demand-authenticity.md`.
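The detection rules above could be sketched as follows. This is an illustrative helper, not the skill's actual code; the function name and return values are assumptions:

```python
from pathlib import Path

def detect_platform(root: str) -> str:
    # Hypothetical sketch of the platform detection described above:
    # Xcode project or SwiftPM manifest -> iOS, package.json -> Web,
    # anything else falls back to the Default reference sections.
    p = Path(root)
    if any(p.glob("*.xcodeproj")) or (p / "Package.swift").exists():
        return "iOS"
    if (p / "package.json").exists():
        return "Web"
    return "Default"
```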
Read:

- `dimensions/01-demand-authenticity.md` — extract the relevant section (`### iOS` if iOS detected, otherwise `### Default`).
- `modules/elevator-pitch.md` — extract the relevant section (`### iOS` under `## Platform Constraints` if iOS detected).

Answer the dimension's sub-questions using available evidence:
Score 1-5 stars with one-sentence justification.
Attempt to write the pitch (tagline and first sentence of the description):
Judge: Clear / Vague / Cannot articulate
Run 2-3 WebSearches:

- "best [category] app [year]"
- "[problem description] alternative"
- "[category] indie app" (if relevant)

Note the top 3-5 alternatives and their approach. This is not a full market scan — just enough to answer "is someone already doing this well?"
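Assembling those queries from the product's category and problem statement might look like this minimal sketch (the function name and signature are illustrative, not part of the skill):

```python
def build_queries(category: str, problem: str, year: int) -> list[str]:
    # Mirrors the three search templates above; the third query is
    # optional in the spec ("if relevant"), included here for completeness.
    return [
        f"best {category} app {year}",
        f"{problem} alternative",
        f"{category} indie app",
    ]
```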
Output one of three verdicts:
| Verdict | Criteria | Meaning |
|---|---|---|
| Pass | Demand >=3★ AND Pitch not "Cannot articulate" AND alternatives are beatable | Green light: proceed to full evaluation or building |
| Caution | Demand 3★ with Pitch "Cannot articulate", OR Demand 2★ with Pitch "Clear", OR dominant alternative exists but has clear gaps | Investigate further: run full /evaluate before committing |
| Fail | Demand <=2★ AND no differentiation angle, OR Pitch "Cannot articulate" AND problem already solved well by free tools | Reconsider: this idea needs significant rethinking |
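The table's decision rules reduce roughly to the sketch below. It is a simplification: the "alternatives are beatable" and "no differentiation angle" judgments are collapsed into boolean inputs, which is an assumption, and the nuanced Caution sub-cases are handled by falling through:

```python
def demand_verdict(demand_stars: int, pitch: str, alternatives_beatable: bool) -> str:
    # Simplified mapping of the verdict table above; edge cases such as
    # "dominant alternative exists but has clear gaps" still need human judgment.
    if demand_stars >= 3 and pitch != "Cannot articulate" and alternatives_beatable:
        return "Pass"
    if demand_stars <= 2 and pitch == "Cannot articulate":
        return "Fail"
    return "Caution"
```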
When the verdict is Caution, append a ## Validation Playbook section to the output. Select 2-3 methods from the list below based on what is uncertain:
Each selected method must include: method description, success criteria, failure criteria, and timeline (all <=2 weeks).
If the evaluation target is a local project:

- Check that the `docs/08-product-evaluation/` directory exists. If not, create it.
- Write the report to `docs/08-product-evaluation/{YYYY-MM-DD}-demand-check.md`:

# Demand Check: [Product Name]
## Elevator Pitch
> **Tagline:** [<=30 chars]
> **Description:** [first sentence]
> **Verdict:** [Clear / Vague / Cannot articulate] — [reasoning]
## Demand Authenticity [★★★☆☆]
[Sub-question answers with evidence]
## Alternatives Found
| Alternative | Approach | Pricing | Threat Level |
|-------------|----------|---------|--------------|
| [name] | [how it solves the problem] | [price] | [High/Medium/Low] |
## Verdict: [Pass / Caution / Fail]
[One paragraph explaining the verdict and suggested next steps]
## Validation Playbook
(Only present when verdict is Caution)
Before committing to build, validate these uncertainties:
1. **[Uncertainty]:** [Method] — Success: [criteria] / Fail: [criteria] — Timeline: [<=2 weeks]
2. **[Uncertainty]:** [Method] — Success: [criteria] / Fail: [criteria] — Timeline: [<=2 weeks]
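Deriving the dated output path described above could be sketched like this (the helper name is hypothetical; the directory layout follows the spec):

```python
from datetime import date
from pathlib import Path

def demand_check_path(project_root: str) -> Path:
    # Ensure docs/08-product-evaluation/ exists, then return the
    # {YYYY-MM-DD}-demand-check.md report path for today.
    out_dir = Path(project_root) / "docs" / "08-product-evaluation"
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / f"{date.today():%Y-%m-%d}-demand-check.md"
```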