Business monetization evaluation and strategy — analyzes projects or business descriptions using evidence-based tier evaluation (Strong/Possible/Weak) with explicit assumption chains, legal/compliance gating, and user-reviewed market research. Produces an honest business case where every revenue figure traces back to cited sources and stated assumptions. Use this skill whenever: evaluating monetization options, pricing strategy, revenue models, business model analysis, or when the user says "monetize", "revenue model", "pricing strategy", "how to make money", "business model", "open source monetization", "SaaS pricing", "freemium strategy", or any variation of business monetization planning. Also trigger when the user asks about competitor pricing, market sizing, or go-to-market strategy.
From `cks`. Install: `npx claudepluginhub cardinalconseils/claude-starter --plugin cks`

This skill is limited to using the following tools: Read, Write, Grep, Glob, Agent, WebSearch, WebFetch, Bash, AskUserQuestion.
Bundled files: references/cost-categories.md, references/models-catalog.md, references/report-template.md, workflows/cost-analysis.md, workflows/discover.md, workflows/evaluate.md, workflows/report.md, workflows/research.md, workflows/roadmap.md.
This skill evaluates monetization models for projects and businesses using evidence-based tier evaluation — not numeric scores. Revenue projections are explicit assumption chains where every variable cites its source. Research is user-reviewed before evaluation. Legal/compliance constraints are first-class filters that can block models before scoring. The output is an honest business case designed to surface the right questions, not just polished answers.
/monetize → discover → research → cost-analysis → evaluate → report → roadmap → /cks:new handoff
Each phase is independently invokable via /monetize:{phase}.
Each phase dispatches a dedicated agent:
| Phase | Agent | Role |
|---|---|---|
| discover | monetize-discoverer | Scans codebase, asks business context questions |
| research | monetize-researcher | Queries Perplexity/WebSearch for market intelligence |
| cost-analysis | cost-researcher → cost-analyzer | Researches tech stack costs, builds unit economics |
| evaluate | monetize-evaluator | Evidence-based tier evaluation with assumption chains |
| report | monetize-reporter | Combines all artifacts into business case |
| roadmap | (inline) | Creates phase briefs and updates ROADMAP.md |
Determine input mode from arguments:
| Condition | Mode | Behavior |
|---|---|---|
| No arguments | A — Self-analyze | Scan current project codebase |
| Argument is a local directory path | B — Analyze target | Scan target project codebase |
| Argument is quoted text / description | C — Business description | Pure strategy, no code scan |
Mode B accepts local paths only (not URLs). For remote repos, the user should clone first.
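The mode-dispatch rule above can be sketched as a small shell function; the function name and exact checks are illustrative, not the skill's actual implementation:

```shell
# Illustrative mode detection mirroring the table above.
detect_mode() {
  arg="$1"
  if [ -z "$arg" ]; then
    echo "A"   # Mode A: self-analyze the current project
  elif [ -d "$arg" ]; then
    echo "B"   # Mode B: scan the target local directory
  else
    echo "C"   # Mode C: treat the argument as a business description
  fi
}
```

Note that a URL is not a local directory, so this sketch falls through to Mode C; per the rule above, the user should clone the repo first instead.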
Before starting, check if .monetize/ exists:
If it exists, read .monetize/context.md for the date. Ask: "Previous assessment found (dated {date}). Archive and start fresh, or update with new research?"

To archive: `mkdir -p .monetize/archive/{date} && mv .monetize/*.md .monetize/archive/{date}/`

The /cks:monetize command orchestrates the flow by dispatching agents in sequence.
Each agent loads this skill via `skills: monetize` for domain expertise.
Individual phases can be invoked via /cks:monetize-{phase} sub-commands.
Each phase saves its output. If interrupted, the next /monetize invocation detects
existing artifacts and resumes from the last incomplete phase.
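The resume logic can be sketched as walking the phases in order and restarting at the first one whose artifact is missing. This is a sketch under the assumption that each phase's primary artifact is the file listed in the output-files table; the function name is made up:

```shell
# Illustrative resume detection: return the first phase whose artifact
# does not exist yet, or "done" if every artifact is present.
next_phase() {
  for pair in \
    "discover:.monetize/context.md" \
    "research:.monetize/research.md" \
    "cost-analysis:.monetize/cost-analysis.md" \
    "evaluate:.monetize/evaluation.md" \
    "report:docs/monetization-assessment.md" \
    "roadmap:docs/ROADMAP.md"
  do
    phase="${pair%%:*}"   # text before the first colon
    file="${pair#*:}"     # text after the first colon
    if [ ! -f "$file" ]; then
      echo "$phase"       # resume here
      return 0
    fi
  done
  echo "done"
}
```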
Before starting any phase, verify its prerequisites exist:
| Phase | Requires |
|---|---|
| Research | .monetize/context.md |
| Cost Analysis | .monetize/context.md (+ .monetize/research.md recommended) |
| Evaluate | .monetize/context.md + .monetize/research.md + .monetize/cost-analysis.md |
| Report | .monetize/evaluation.md |
| Roadmap | docs/monetization-assessment.md |
If missing, prompt: "Run /monetize:{missing_phase} first."
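As an illustration, a prerequisite gate for the evaluate phase, per the table above, might look like this sketch (the function name is hypothetical):

```shell
# Illustrative prerequisite check for the evaluate phase.
check_evaluate_prereqs() {
  for f in .monetize/context.md .monetize/research.md .monetize/cost-analysis.md; do
    if [ ! -f "$f" ]; then
      echo "Missing $f. Run the earlier phase first." >&2
      return 1
    fi
  done
}
```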
| File | When to Read |
|---|---|
| references/models-catalog.md | During evaluate phase — contains the 12 models with criteria |
| references/cost-categories.md | During cost-analysis phase — cost category detection and provider reference |
| references/report-template.md | During report phase — template for the assessment document |
| Failure | Behavior |
|---|---|
| PERPLEXITY_API_KEY missing | Fall back to WebSearch-based research (no API key needed). Note "Source: WebSearch" in research output |
| Perplexity rate limit / timeout | Retry once after 5s. On 2nd failure, save partial results, flag gaps in report |
| Codebase scan finds nothing (A/B) | Fall back to Mode C behavior — full questionnaire |
| User abandons mid-questionnaire | Save partial context. Next run offers to resume or restart |
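The Perplexity retry rule (retry once after 5 seconds, then save partial results) could be sketched as follows; `with_retry` and `RETRY_DELAY` are illustrative names, with the delay made configurable for testing:

```shell
# Illustrative retry wrapper: one retry after a delay, then give up so
# the caller can save partial results and flag gaps in the report.
RETRY_DELAY="${RETRY_DELAY:-5}"
with_retry() {
  "$@" && return 0        # first attempt
  sleep "$RETRY_DELAY"
  "$@" && return 0        # single retry
  return 1                # second failure: caller degrades gracefully
}
```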
This skill ships with opinionated defaults. Review and adapt to your needs:
- references/models-catalog.md
- references/cost-categories.md
- workflows/report.md
- workflows/evaluate.md

| File | Purpose |
|---|---|
| .monetize/context.md | Discovery context |
| .monetize/research.md | Perplexity research with citations |
| .monetize/cost-research-raw.md | Raw provider pricing data |
| .monetize/cost-analysis.md | Unit economics, margins, scaling curves |
| .monetize/evaluation.md | Evidence-based tier evaluation + stack recommendation |
| docs/monetization-assessment.md | Final business case report |
| docs/ROADMAP.md | Updated with monetization phases (preview entries) |
| .monetize/phases/*.md | PRD-ready phase briefs for /cks:new |