Product decision plugin for indie developers: idea checks, product evaluation, portfolio intelligence, recent feature review, and PKOS-first local fact publishing
npx claudepluginhub n0rvyn/indie-toolkit --plugin product-lens
Scans a local project codebase to produce a structured app context summary. Reports facts about the app's core features, data models, user journey, tech stack, and architectural patterns. Does not evaluate or judge. Examples: <example> Context: Need app context before evaluating whether to add a feature. user: "Scan app context for /path/to/my-ios-app" assistant: "I'll use the app-context-scanner agent to analyze the codebase." </example>
Evaluates a single product dimension from an indie developer perspective. Receives pre-merged sub-questions (universal + platform-specific) and scoring anchors directly in its prompt. Produces structured per-sub-question analysis with evidence citations and a 1-5 star dimension score. Examples: <example> Context: Evaluating Demand Authenticity for a local iOS project. user: "Evaluate Demand Authenticity for Delphi at /path/to/project" assistant: "I'll use the dimension-evaluator agent to assess demand authenticity." </example> <example> Context: Deep-dive teardown of Moat for an external app. user: "Teardown moat for Bear notes app" assistant: "I'll use the dimension-evaluator agent in deep mode to analyze the moat." </example>
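The exact shape of the evaluator's structured output is not specified above; a minimal sketch of what one per-sub-question record and its enclosing dimension result might look like (all class and field names here are hypothetical, not the agent's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class SubQuestionAnalysis:
    # Hypothetical record for one pre-merged sub-question.
    question: str
    finding: str
    evidence: list[str]  # citations into code or market data

@dataclass
class DimensionResult:
    dimension: str  # e.g. "Demand Authenticity"
    analyses: list[SubQuestionAnalysis] = field(default_factory=list)
    stars: int = 3  # 1-5 dimension score against the supplied anchors

    def is_valid(self) -> bool:
        # A dimension score must stay on the 1-5 star scale.
        return 1 <= self.stars <= 5

result = DimensionResult(dimension="Demand Authenticity", stars=4)
```

The point of a fixed record like this is that downstream consumers (the extras generator, the verdict-delta check) can read scores without re-parsing prose.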
Generates the mandatory extra modules for a product evaluation: Kill Criteria, Feature Necessity Audit, Elevator Pitch Test, Pivot Directions, and Validation Playbook. Receives dimension scores and module-specific instructions from the skill. Examples: <example> Context: Full evaluation complete, need extra modules. user: "Generate extras for Delphi evaluation" assistant: "I'll use the extras-generator agent to produce Kill Criteria, Feature Audit, Elevator Pitch, and Pivot Directions." </example> <example> Context: External app evaluation, no code access. user: "Generate extras for Bear notes evaluation" assistant: "I'll use the extras-generator agent. Feature Audit will be skipped since this is an external evaluation." </example>
Use this agent to group recent file and commit changes into likely feature slices so `recent-feature-review` can judge coherent chunks of work instead of raw files.
Evaluates a single feature assessment dimension from an indie developer perspective. Receives pre-merged sub-questions, app context, and signal anchors directly in its prompt. Produces structured per-sub-question analysis with a signal (Positive/Neutral/Negative) and confidence level (High/Medium/Low). Examples: <example> Context: Evaluating Demand Fit for adding a tagging feature to a notes app. user: "Evaluate Demand Fit for adding tags to NoteApp at /path/to/project" assistant: "I'll use the feature-dimension-evaluator agent to assess demand fit." </example> <example> Context: Evaluating Build Cost for adding sync to a todo app. user: "Evaluate Build Cost for adding cloud sync to TodoApp" assistant: "I'll use the feature-dimension-evaluator agent to assess build cost." </example>
Generates follow-up content for a feature assessment: either an Integration Map (for GO verdicts) or Alternative Directions (for DEFER/KILL verdicts). Receives module-specific instructions, app context, and dimension signals from the skill. Examples: <example> Context: Feature assessment returned GO verdict, need integration map. user: "Generate integration map for adding tags to NoteApp" assistant: "I'll use the feature-followup-generator agent to produce the integration map." </example> <example> Context: Feature assessment returned KILL verdict, need alternatives. user: "Generate alternative directions for adding social features to NoteApp" assistant: "I'll use the feature-followup-generator agent to suggest alternatives." </example>
Use this agent to package product-lens results into PKOS exchange artifacts. It writes stable frontmatter and body sections, but never chooses final vault destinations or PKOS tags beyond the exchange schema.
Use this agent to gather market data for product evaluation. Searches for competitors, pricing, market signals, and discovery channels. Returns structured data (not opinions) for dimension-evaluator agents to consume. Examples: <example> Context: Need market data before evaluating a note-taking app. user: "Gather market data for a Markdown note-taking app targeting iOS" assistant: "I'll use the market-scanner agent to research the market." </example> <example> Context: Comparing apps and need competitor landscape. user: "Find competitors for a habit tracking app" assistant: "I'll use the market-scanner agent to scan the competitive landscape." </example>
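The "structured data, not opinions" contract could be sketched as a plain record like the following (the field names and example values are illustrative assumptions, not the agent's actual schema):

```python
# Hypothetical shape of a market-scanner result: observable facts only,
# no scores or judgments -- those belong to the dimension evaluators.
market_data = {
    "query": "Markdown note-taking app targeting iOS",
    "competitors": [
        {"name": "Bear", "pricing_model": "freemium subscription"},
        {"name": "iA Writer", "pricing_model": "one-time purchase"},
    ],
    "market_signals": ["active community discussion", "App Store chart presence"],
    "discovery_channels": ["App Store search", "Product Hunt"],
}

# A downstream dimension evaluator consumes the facts, not verdicts:
competitor_names = [c["name"] for c in market_data["competitors"]]
```

Keeping the scanner opinion-free means the same data can feed several dimensions (Demand Authenticity, Moat) without one agent's judgment contaminating another's.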
Use this agent to gather repository facts for portfolio scans and progress pulses. It reports observable signals only: repo markers, recent activity clues, docs/tests movement, TODO density, and obvious product-shipping indicators.
Use this agent to compare an older product-lens verdict with new evidence and explain whether the prior conclusion should stand, improve, weaken, or reverse.
Use when the user wants to compare multiple products or projects to decide which to focus on. Evaluates each app and produces a scoring matrix with recommendations.
Use for a quick demand reality check on a product idea or project. Runs only the demand validation dimension and Elevator Pitch test. Fast first-pass filter before committing to build.
Use when the user wants to evaluate a product — assess demand, market viability, moat, and execution quality from an indie developer perspective. Works on local projects (by reading code) or external apps (via web search).
Use when the user wants to evaluate whether an existing app should add a specific feature. Analyzes demand fit, journey contribution, build cost, and strategic value. Produces a GO/DEFER/KILL verdict with conditional follow-up: integration map (GO) or alternative directions (DEFER/KILL). Works on local projects only (requires code access).
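How the per-dimension signals roll up into a GO/DEFER/KILL verdict is the skill's job and is not documented here; one plausible, purely illustrative aggregation rule, weighting High-confidence signals more heavily, could look like this:

```python
# Hypothetical roll-up of (signal, confidence) pairs -- one per assessed
# dimension -- into a feature verdict. Weights and thresholds are invented.
WEIGHT = {"High": 2.0, "Medium": 1.0, "Low": 0.5}
VALUE = {"Positive": 1, "Neutral": 0, "Negative": -1}

def verdict(signals: list[tuple[str, str]]) -> str:
    """signals: (signal, confidence) pairs, e.g. ("Positive", "High")."""
    score = sum(VALUE[s] * WEIGHT[c] for s, c in signals)
    if score >= 2:
        return "GO"     # follow-up: Integration Map
    if score <= -2:
        return "KILL"   # follow-up: Alternative Directions
    return "DEFER"      # follow-up: Alternative Directions
```

Under this sketch, two confident Positive dimensions outvote a Low-confidence Negative one, which matches the intuition that weak counter-signals should defer rather than kill a feature.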
Use for periodic scans over a project root such as ~/Code. Builds a root-level picture of active projects, current risks, and PKOS exchange artifacts for downstream ingestion.
Unified entry point for product decisions. Use when the user or another AI asks whether to pursue an idea, evaluate a product, review recent features, reprioritize projects, or refresh a prior verdict.
Use when the system needs observable progress facts for one or more projects. Reports acceleration, stalls, and drift without inventing completion percentages.
Use when the system needs to judge recently built features or recent commit slices. Reviews whether recent work strengthens the core loop or creates drift.
Use when the system must decide what to focus on next across multiple projects. Converts recent signals into portfolio decisions with blockers and next actions.
Use when the user wants a deep dive into a specific evaluation dimension for a product. Examples: "teardown moat", "teardown journey". Goes deeper than the standard evaluation on one dimension.
Use when prior product-lens conclusions need to be checked against new evidence. Produces delta-oriented judgments instead of full re-evaluations.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile frameworks, plus infrastructure, DevOps, security, and testing skills. Features a progressive disclosure architecture for 50% faster loading.
Use this agent when you need expert assistance with React Native development tasks including code analysis, component creation, debugging, performance optimization, or architectural decisions. Examples: <example>Context: User is working on a React Native app and needs help with a navigation issue. user: 'My stack navigator isn't working properly when I try to navigate between screens' assistant: 'Let me use the react-native-dev agent to analyze your navigation setup and provide a solution' <commentary>Since this is a React Native specific issue, use the react-native-dev agent to provide expert guidance on navigation problems.</commentary></example> <example>Context: User wants to create a new component that follows the existing app structure. user: 'I need to create a custom button component that matches our app's design system' assistant: 'I'll use the react-native-dev agent to create a button component that aligns with your existing codebase structure and design patterns' <commentary>The user needs React Native component development that should follow existing patterns, so use the react-native-dev agent.</commentary></example>
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research