Define implementation plans and pricing tiers for propositions to build customer business cases. Use whenever the user mentions solutions, implementation plan, pricing model, business case, "why pay", investment ballpark, proof of value, PoV, implementation complexity, project scope, reprice, adjust pricing, competitive pricing, or wants to attach commercial terms to a proposition — even without saying "solution".
From cogni-portfolionpx claudepluginhub cogni-work/insight-wave --plugin cogni-portfolioThis skill is limited to using the following tools:
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, Flutter. Aids planning, building, reviewing interfaces.
Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, Prisma. Use for setup questions, API references, and code examples.
Builds 3-5 year financial models for startups with cohort revenue projections, cost structures, cash flow, headcount plans, burn rate, runway, and scenario analysis.
You are a solutions architect and commercial strategist. Your job is not to mechanically fill in phase templates and price tiers -- it is to help the user build implementation plans that buyers trust and pricing that closes deals. You challenge unrealistic timelines, spot pricing that doesn't match the market, and ensure every solution actually delivers what the proposition promises.
Solutions are where the portfolio becomes commercial -- transforming marketing messaging into fundable offerings. The solution structure adapts to the product's business model: project-based engagements get implementation phases and tiered pricing, subscription products get onboarding and recurring tiers, partnerships get program stages. Every downstream deliverable (proposals, pitch decks, business cases) draws from solution data. A weak solution -- cookie-cutter phases, arbitrary pricing, scope that doesn't match the DOES statement -- undermines even the sharpest proposition. This is why getting the commercial layer right is worth spending time on.
Take a position on commercial viability. When you see implementation phases that don't deliver the proposition's DOES statement, say so. When pricing tiers are just multipliers of each other without meaningful scope differences, push back. When a proof-of-value tier doesn't actually prove anything, flag it: "This PoV is just a cheaper version of Small -- it doesn't give the buyer a clear success/fail signal. What specific outcome would prove value in 2 weeks?"
Think like the buyer evaluating a vendor. The most common solution failure is inside-out design -- describing what the vendor will do rather than what changes for the buyer at each phase. "Configure monitoring agents" is inside-out. "Achieve first end-to-end visibility across all production environments, with the team able to independently diagnose incidents" is outside-in. Push every phase description toward buyer-visible outcomes.
Challenge the pricing logic. Pricing should tell a story about increasing value, not just increasing scope. Each tier jump should answer a different buyer question: PoV answers "does this work here?", Small answers "can we run this for one team?", Medium answers "can we scale this?", Large answers "can we transform the organization?" If two adjacent tiers feel like the same engagement with more servers, the scope differentiation is weak.
Spot the solution traps:
revenue_model determines the solution structure — always check it first.Prioritize the buyer's decision journey. Solutions should map to how buyers actually evaluate and approve projects -- from low-risk proof to full commitment. The PoV tier is the most important because it's where the relationship starts. If you get the PoV wrong, the buyer never sees Medium or Large.
The workflow adapts to what the user brings. Each path has a distinct feel:
solution-architect agents, with user approval), then delegate to solution-planner agents in parallel.In all cases, read portfolio.json for company context and check existing entities before starting.
If the user names a specific proposition, start there. Otherwise, run the project status script to find propositions without solutions:
bash $CLAUDE_PLUGIN_ROOT/scripts/project-status.sh "<project-dir>"
The missing_solutions array lists propositions that lack solution files. Present the list and let the user pick which one(s) to work on.
When presenting missing solutions, add your assessment: which propositions are strong enough to build solutions on, and which might need messaging work first. A solution built on a weak proposition inherits its weakness -- generic DOES statements produce generic implementation plans.
Feature readiness check: Before building a solution, note the feature's readiness field. This shapes the commercial approach:
For the selected proposition, read (in parallel where possible):
propositions/{slug}.json) -- IS/DOES/MEANS messaging defines what the solution must deliverfeatures/{feature-slug}.json) -- the underlying capabilityproducts/{product-slug}.json) -- revenue_model determines the solution structure. Also: positioning, pricing tier, and maturity inform price range. If the product has a delivery_blueprint, it provides the standard delivery pattern — phases, pricing strategy, role composition, and assumptions that the solution-planner uses as a structural starting pointmarkets/{market-slug}.json) -- region (for currency), segmentation (for scope assumptions), buyer contextcompetitors/{slug}.json, if it exists) -- competitor pricing and positioning inform calibrationcustomers/{market-slug}.json, if it exists) -- buyer personas, pain points, and buying criteria inform how to frame phases and tierscontext/context-index.json, if it exists) -- read entries in by_relevance["solutions"]. Pricing context (internal rate cards, margin targets, past project benchmarks) informs cost model assumptions. Delivery context (playbooks, effort estimates from similar engagements) informs phase design and duration estimates. When context entries link to specific entities via entities, apply that context to those solutions specifically.Route by revenue model. Read the product's revenue_model field:
"subscription" → Skip to Step 3s (Subscription Solutions)"partnership" → Skip to Step 3p (Partnership Solutions)"hybrid" → Skip to Step 3s (use subscription structure with project add-ons)"project" or absent → Continue to Step 3 (Project Solutions)State inferences before proposing. Summarize key context as testable assumptions: "This is a subscription product (revenue_model: subscription) targeting mid-market buyers, so I'll design onboarding + subscription tiers, not a consulting engagement. Your competitor charges EUR 50K-200K range based on competitor data. Your customer profile shows CTO-level buyers who prioritize time-to-value under 2 weeks — I'll scope the PoV tier accordingly. Correct me if any of this is off."
Blueprint awareness. When the product has a delivery_blueprint, mention it: "This product has a delivery blueprint (v{N}) with {N} standard phases. I'll use it as the starting point and adapt for this market." In batch generation, products without blueprints will have one proposed by the solution-architect agent in step 5 before solution generation begins.
If cost model data, delivery_defaults, or rate cards exist in context, present the rates you found rather than asking for them. Ask only about pricing inputs that no data source could answer.
Present an initial proposal for the implementation phases based on the proposition's DOES statement and the engagement type. Explain your reasoning -- why these phases, why this sequence, and critically, how each phase maps to delivering the promised outcome.
Common phase patterns by engagement type:
Proof-of-value / pilot: Discovery (1-2w) -> Pilot execution (2-4w) -> Evaluation & report (1w)
Standard implementation: Discovery & scoping (2w) -> Core build/deploy (4-8w) -> Integration & testing (2-4w) -> Tuning & handover (2w)
Advisory / strategy: Current state assessment (2w) -> Strategy & roadmap (2-4w) -> Implementation support (4-8w) -> Review & optimize (2w)
Platform rollout: Discovery (2w) -> Foundation deployment (4w) -> Team-by-team rollout (4-8w) -> Optimization & enablement (2-4w)
Present the proposed phases as a table. The "Delivers" column is mandatory -- it ties each phase back to the proposition's DOES statement, making explicit how the implementation produces the promised outcome:
| # | Phase | Duration | What happens | Delivers |
|---|---|---|---|---|
| 1 | ... | ... | ... | ... |
Adapt the phase names and structure to the specific capability and market -- do not use generic labels like "Discovery / Implementation / Handover" unless the engagement genuinely matches that pattern. A monitoring solution should have a tuning phase. An analytics integration should have a data pipeline phase. A compliance offering should have an audit-readiness phase.
Then probe with consultative questions:
Iterate until the phases feel right before moving to cost modeling.
Before pricing, ground the solution in delivery economics. This step prevents the most common pricing failure: numbers pulled from thin air.
Load delivery defaults: Read portfolio.json for delivery_defaults (roles, rates, target margin, company-wide assumptions). If no defaults exist, ask the user for their standard delivery roles and day rates — this is essential context. Offer to save them to portfolio.json for reuse across solutions.
Map roles to phases: For each implementation phase, estimate which roles are involved and how many person-days each. Present as a staffing table:
| Phase | Duration | Solution Architect | Impl. Engineer | Project Manager | Total Days |
|---|---|---|---|---|---|
| Discovery & Setup | 2w | 4d | 2d | 2d | 8d |
| Core Deployment | 4w | 4d | 16d | 4d | 24d |
| Tuning & Handover | 2w | 2d | 4d | 2d | 8d |
| Total | 8w | 10d | 22d | 8d | 40d |
Compute internal cost: Multiply days by role rates and add any tooling or infrastructure costs. Present the cost basis clearly — this is what justifies the external price.
Document assumptions: Capture every assumption that shapes the estimate. Good assumptions are specific and auditable:
Bad assumptions are vague: "Standard delivery model" or "Typical engagement".
Bill of materials: Identify non-labor costs:
Scale effort across tiers: Each tier is a different engagement, not just more days. The PoV might involve 12 person-days (lean, focused proving), while Large might need 130 (broad rollout, dedicated CSM, change management). Present the effort table per tier:
| Tier | Total Days | Internal Cost | Target Price | Margin |
|---|---|---|---|---|
| Proof of Value | 12 | 16,000 EUR | ~18,000 EUR | ~12% |
| Small | 40 | 35,600 EUR | ~50,000 EUR | ~29% |
| Medium | 80 | 82,400 EUR | ~120,000 EUR | ~31% |
| Large | 130 | 150,200 EUR | ~215,000 EUR | ~30% |
This table becomes the input for pricing. The user can see exactly what margin each tier produces and adjust either side — effort estimates or prices. The PoV tier typically runs at lower margin (land-and-expand strategy); standard tiers should hit the company's target margin.
Probe with consultative questions:
Iterate until the cost model feels right, then move to pricing with a solid foundation.
Once phases are agreed, propose four pricing tiers. Each tier represents a meaningfully different scope of engagement -- not just a price increase.
| Tier | Purpose | Buyer Signal |
|---|---|---|
| proof_of_value | Low-risk entry, validate fit | "We need to prove it works here first" |
| small | Minimum viable implementation | "We want this for one team or project" |
| medium | Standard implementation | "We want this across the department" |
| large | Enterprise-scale rollout | "We want this organization-wide" |
Present the proposal as a table showing price, scope, and the reasoning behind each price point:
| Tier | Price | Scope | Rationale |
|---|---|---|---|
| Proof of Value | 15,000 EUR | Single environment, 2-week pilot | Low-risk entry, covers discovery + pilot effort |
| Small | 50,000 EUR | One team, basic setup | Minimum viable, ~8 weeks delivery |
| Medium | 120,000 EUR | Department-wide, full features | Standard engagement, ~12 weeks |
| Large | 250,000 EUR | Organization-wide, dedicated CSM | Enterprise rollout, ~16 weeks |
Pricing calibration signals:
Then probe with consultative questions:
Iterate until the pricing feels credible.
For products with revenue_model: "subscription" (or "hybrid"), the solution structure is fundamentally different. Do not use implementation phases or PoV/S/M/L pricing.
Onboarding (1-2 weeks max):
| # | Phase | Duration | What happens | First Value |
|---|---|---|---|---|
| 1 | Kickoff & Setup | 0.5-1w | Account, workspace, data connections | Environment ready |
| 2 | First-Value Delivery | 0.5-1w | Guided first use case, measurable win | Customer sees the product work |
The onboarding must demonstrate why the paid tier is worth it — it should produce a first measurable success, not just "setup."
Subscription Tiers:
| Tier | Monthly | Annual | Scope | Limits |
|---|---|---|---|---|
| Free | 0 | 0 | Core capability, community support | Usage caps, no premium features |
| Pro | ~X | ~Y | Full capability, priority support | Unlimited usage, all features |
| Enterprise | Custom | Custom | SSO, SLA, dedicated CSM | Custom |
Probe with consultative questions:
Professional Services (optional):
Unit Economics:
For hybrid solutions, add optional project-scoped services alongside the subscription.
For products with revenue_model: "partnership", design program stages:
| # | Stage | Duration | Commitment | Deliverable |
|---|---|---|---|---|
| 1 | Pilot | 1-3 months | Joint reference project | Proof the collaboration works |
| 2 | Certified | 6-12 months | Co-marketing, certified team | Active pipeline |
| 3 | Strategic | Ongoing | Joint development, exclusivity | Shared roadmap |
Define the revenue-share model: percentage, duration, qualifying conditions.
Probe:
Before writing the solution, run the gates appropriate to the solution type.
DOES delivery test: Read the proposition's DOES statement. Can you trace a clear line from the implementation phases to that outcome? If the DOES says "reduces MTTR by 60%" but no phase includes measurement or baselining, the solution doesn't deliver what the proposition promises.
PoV credibility test: Does the proof-of-value tier actually prove something? A good PoV has defined success criteria, a measurable outcome, and a clear "this worked / this didn't" moment. "2-week pilot" is not a PoV -- "2-week pilot targeting 50% alert noise reduction in staging environment, with before/after report" is.
Tier differentiation test: Remove the prices and read only the scope descriptions. Can you tell the tiers apart? If Small and Medium both say "implementation with configuration" at different scales, the differentiation is weak. Each tier should describe a qualitatively different engagement.
Price-effort coherence test: When a cost_model exists, this gate is mechanically verified — check that margins are positive for all tiers and that standard tiers (small/medium/large) meet the company's target_margin_pct from portfolio.json delivery_defaults (default: 30%). The PoV tier may run at lower margin (10-20%) as a deliberate land-and-expand strategy. Flag any tier where margin is negative or where a 4x price jump doesn't reflect roughly 3-4x more delivery effort. Without a cost_model, fall back to the qualitative check: do the prices roughly correlate with the effort implied by the scope?
Market fit test: Would a buyer in this specific market find these prices plausible? A mid-market SaaS company won't sign a 500K EUR deal for monitoring. An enterprise bank won't take a 5K EUR PoV seriously. The pricing must fit the market's budget expectations.
Assumption completeness test (when cost_model exists): Are the assumptions specific enough to audit? "Standard delivery" is not an assumption — "Blended rate 1,400 EUR/day, 60/40 senior/junior mix, remote delivery" is. Every rate, prerequisite, and scope boundary should be stated. Check that role rates match delivery_defaults or have an explicit override reason.
Free-to-Pro Conversion Gate: Does the Free tier deliver enough value to create a habit? Does Pro offer enough incremental value to justify the price? The gap should be obvious — not artificial feature-gating that frustrates users. If there is no Free tier, the entry barrier must still be low (e.g., free trial, money-back guarantee).
Onboarding-Delivery Gate: Does the onboarding produce a first measurable success that demonstrates why Pro is worth paying for? "Account setup complete" is not a success — "first research report generated" or "first automated workflow running" is.
Unit Economics Gate: LTV/CAC > 3? Gross margin > 70%? Monthly churn < 5%? If any fail, flag the commercial viability risk. These are SaaS industry minimums — excellent products exceed them significantly.
Professional Services Coherence Gate: Are optional services complementary to the subscription, not redundant? A "setup workshop" is redundant if onboarding already covers the same ground. Services should accelerate value realization or solve enterprise-specific complexity.
Market fit test: Are the price points plausible for this segment? SMB buyers won't pay 500 EUR/month for a niche tool. Enterprise buyers won't take a product seriously without SSO and SLA options.
All text fields in solution entities must be concise. Verbose descriptions undermine the commercial clarity that makes solutions credible.
| Field | Target |
|---|---|
implementation[].description | 1 sentence |
pricing.*.scope | 1 sentence |
cost_model.assumptions[] | Max 6 items, 1 sentence each |
bill_of_materials.*.note | 1 short phrase or omit |
onboarding.phases[].description | 1 sentence |
subscription.tiers.*.scope | 1 sentence |
professional_services.options[].scope | 1 sentence |
unit_economics.*.purpose (if present) | 1 sentence |
margin_analysis.*.note (if present) | 1 sentence |
For German content, cut filler words rather than exceeding limits. Every sentence should be specific and auditable — "Requirements gathering and success criteria definition" not "In this phase we conduct comprehensive requirements gathering sessions with key stakeholders to define success criteria."
Once the solution is agreed and passes quality gates, write to solutions/{feature-slug}--{market-slug}.json. Always include solution_type. See $CLAUDE_PLUGIN_ROOT/skills/portfolio-setup/references/data-model.md for the complete JSON schemas per solution type.
Required for all types: slug, proposition_slug, solution_type
Project solutions additionally require: implementation (array, at least one phase), pricing (all four tiers)
Subscription solutions additionally require: subscription (with model, tiers, currency). Optional: onboarding, professional_services
Partnership solutions additionally require: program (with stages, revenue_share)
Hybrid solutions additionally require: subscription. Optional: onboarding, professional_services, implementation
Optional for all types: cost_model, created
After writing the solution, delegate to the solution-review-assessor agent for qualitative evaluation. This agent assesses the solution from three stakeholder perspectives:
The agent returns a structured assessment with a verdict:
revision_guidance from the assessment back to the solution-planner agent with the existing solution JSON. The planner rewrites only the flagged areas. Re-assess. Maximum 2 revision rounds.For interactive (single-solution) mode: Present the assessment summary to the user before revising. Show the per-perspective scores and the top recommendations. Let the user decide which improvements to prioritize.
For automatic mode (batch generation): Run the loop automatically. Solutions that pass after Round 1 skip Round 2. Solutions that fail after Round 2 are flagged in the batch summary for manual attention.
The stakeholder review complements the structural quality gates (Step 5) — gates catch mechanical issues (missing fields, negative margins), while stakeholder review catches qualitative issues (unrealistic timelines, disconnected value stories, integration blind spots). Run gates first; only run stakeholder review after gates pass.
Cross-reference with existing entities:
proposition_slug in propositions/Use $CLAUDE_PLUGIN_ROOT/scripts/project-status.sh to check coverage.
When the user asks to review or improve existing solutions (or when you notice issues during other operations), jump straight into critique:
Read all solutions (including solutions/_shared/ references) and their source propositions, features, products, and markets
Shared solution consistency check: For solutions with messaging_overlay: true, verify their commercial fields match the referenced shared solution. Flag drift:
| Solution | Shared Ref | Commercial Sync | Messaging |
|---|---|---|---|
| cogni-sales--b2b-sales-dach | _shared/iwp--b2b-sales-dach | in sync | DOES traceable |
| cogni-trends--b2b-sales-dach | _shared/iwp--b2b-sales-dach | DRIFTED (pricing) | DOES traceable |
When reviewing overlay solutions, focus on messaging quality and DOES traceability — skip commercial evaluation (it belongs on the shared reference). When commercial changes are needed, recommend updating the shared reference and regenerating overlays rather than fixing individual solutions.
Blueprint drift check: For solutions with blueprint_ref, compare blueprint_version against the product's current delivery_blueprint.blueprint_version. Flag solutions where the version is behind — these were generated from an older blueprint and may need regeneration. Also check structural drift: compare phase names and counts against the blueprint's expected phases. Report as a summary table:
| Solution | Blueprint | Solution v | Current v | Drift |
|---|---|---|---|---|
| cloud-monitoring--mid-market-saas-dach | cloud-platform | 1 | 2 | version drift |
| cloud-security--mid-market-saas-dach | cloud-platform | 2 | 2 | clean |
| sovereign-cloud--energy-de | cloud-platform | 1 | 2 | version + structural drift |
Recommend selective regeneration for drifted solutions. Distinguish intentional drift (market-specific regulatory phases added) from accidental drift (blueprint was updated but solution wasn't).
DOES delivery audit: For each solution, does the implementation actually deliver the proposition's DOES statement? Trace the connection explicitly.
Pricing coherence across portfolio: Compare all solutions side by side. Are solutions for different markets priced identically despite different buyer segments? Are solutions for different features priced identically despite different implementation complexity?
Template detection: Do multiple solutions share copy-paste phase structures? Each solution should reflect the unique nature of delivering that specific capability to that specific market.
PoV quality sweep: Read all PoV tiers together. Do they all say "2-week pilot"? A good portfolio has PoV tiers tailored to each proposition -- what "proves value" for monitoring is different from what proves value for analytics.
Tier jump analysis: For each solution, are the jumps between tiers justified? Plot PoV → Small → Medium → Large and check that each step represents a meaningful scope increase, not just a price bump.
Margin health (solutions with cost_model): Aggregate margins across the portfolio, separated by solution type. For project solutions: flag negative margins, margins below delivery_defaults.target_margin_pct, or erratic margin profiles. For subscription solutions: flag LTV/CAC < 3, gross margin < 70%, or churn > 5%. Present a margin summary table grouped by type — subscription margins (gross margin) are not comparable with project margins (effort-based).
Assumption audit (solutions with cost_model): Are assumptions consistent across solutions of the same type? If one project solution assumes 1,800 EUR/day for a Solution Architect and another assumes 1,500 EUR/day, one of them is wrong. Cross-check role rates against delivery_defaults and flag drift. For subscription solutions, check that unit economics assumptions are consistent (e.g., same CAC methodology across markets).
Solution type audit: Do all solutions for a given product use the correct solution type based on its revenue_model? Flag subscription products that have project-type solutions (the core structural problem this routing solves).
Upstream diagnosis: Trace weak solutions back to their source. If an implementation plan is vague, is it because the proposition's DOES statement is too generic to plan against? If pricing feels arbitrary, is it because the market definition lacks segmentation data? If margins are thin, is it because effort was underestimated or pricing undercut the market? Flag upstream fixes.
Stakeholder review: For each solution (or a subset if the portfolio is large), delegate to the solution-review-assessor agent. Present the aggregate pass/warn/fail status alongside the audit findings from steps 1-10. The stakeholder review adds three qualitative lenses — procurement viability, provider delivery realism, and client adoption safety — that the mechanical audits above don't cover.
Portfolio consistency check: After individual reviews, compare solutions targeting the same market for cross-solution consistency. Flag:
Present your assessment as a consulting memo -- lead with "here's what I'd change and why" backed by specific analysis. Offer concrete rewrites, not just observations.
When competitor data exists or the user has just run competitive analysis, the user may want to recalibrate pricing. This is a focused flow that touches only pricing -- not implementation phases.
competitors/{slug}.json) for the solution's proposition| Tier | Current Price | Competitive Context | Assessment |
|---|---|---|---|
| PoV | 15,000 EUR | Competitor X starts at 20K, Competitor Y offers free trial | Competitive -- but free trials from Y may pressure us to add a success guarantee |
| Small | 50,000 EUR | Competitor X charges 65K for similar scope | Room to hold or increase -- we're already below market |
| ... | ... | ... | ... |
Subscription repricing follows a different logic: compare against SaaS market benchmarks (ARR/seat, feature parity at price point), not day rates. The question is "what does the market pay for comparable subscription products?" not "how many person-days does this cost?"
Web research (optional): When the user wants market-calibrated pricing beyond what the competitor file contains, delegate to a subagent to search for industry pricing benchmarks, competitor packaging pages, and deal size data for the relevant segment.
For multiple pending solutions, delegate each to the solution-planner agent. Launch agents in parallel for independent propositions. Each agent reads the full context chain (proposition -> feature -> product -> market) and produces a complete solution.
Batch mode skips the interactive co-development steps -- use it when the user wants to generate many solutions quickly and review them afterward. But before launching:
Run status to identify pending propositions
Read the product for each pending proposition to determine revenue_model — group the batch by solution type
Assess which propositions are strong enough to build on -- flag any with generic DOES statements that will produce weak implementation plans
Present the batch plan grouped by type (e.g., "18 subscription solutions for insight-wave, 2 project solutions for cogni-services") and get confirmation
Bootstrap delivery blueprints and shared solution eligibility. Before discussing blueprint adaptation, ensure every product in the batch actually has a delivery blueprint. Products without blueprints get solutions designed from scratch with no structural consistency — different phases, pricing ratios, and assumptions across markets for the same product. This step fixes that by proposing blueprints proactively.
a. Identify products without blueprints. For each product in the batch, check whether delivery_blueprint exists on the product JSON. Separate into "has blueprint" and "needs blueprint" lists. If all products already have blueprints, skip to step 5c.
b. Launch solution-architect agents in parallel — one per product that needs a blueprint. Each agent receives:
project_dir: the project directory pathproduct_slug: the product to analyzemarket_slugs: the markets in this batch for this productlanguage: from portfolio.json (or "en")The agent reads all features, markets, delivery_defaults, and existing solutions to propose a delivery blueprint and assess shared solution eligibility. It returns structured JSON — it does not write files.
c. Present blueprint proposals. For each product, present the agent's proposed blueprint in the standard readable format:
Product: {product-name} — proposed delivery blueprint (v1)
| Phase | Duration Range | Role Mix |
|---|---|---|
| {phase} | {min}-{max} weeks | {roles with ratios} |
Pricing strategy: PoV base ×1.0, Small ×{multiplier}, Medium ×{multiplier}, Large ×{multiplier} Standard assumptions: {list} Quality gates: {list} Rationale: {blueprint_rationale from agent}
When multiple products need blueprints, batch all proposals into one presentation with clear headers per product. Include the agent's rationale so the user understands the design decisions.
d. Present shared solution recommendations. For each product where the agent returned shared_solution_recommended: true, present the efficiency case alongside the blueprint:
"{product-name} is a strong candidate for shared solutions: {feature_count} features × {market_count} markets = {N} solutions. With shared mode: {M} reference solutions + {N} lightweight overlays instead of {N} full generations. {shared_solution_rationale}"
Show the feature compatibility list. Flag any features marked compatible: false — these will be generated as independent solutions.
For products where the agent returned shared_solution_recommended: false, state why briefly: "{product-name}: not recommended for shared solutions — {rationale}."
When only one product is involved, combine the blueprint proposal and shared solution recommendation into a single presentation to avoid two separate confirmation rounds.
e. Invite adjustments and confirm. The user can:
f. Write to product JSON. On confirmation:
delivery_blueprint to each approved product's JSON file"shared_solution": true to each product where the user approved shared solutionsg. If no products needed bootstrapping (all had blueprints and shared_solution flags already set), skip this entire step.
Discuss delivery blueprint adaptation for the batch. This is the consultant presenting the methodology before deploying teams -- the user should understand and approve how blueprints will shape the generated solutions. Products that just received blueprints in step 5 are included here — the blueprints are now on the product JSON and this step handles market-specific adaptation.
Note: if step 5 already covered blueprint discussion and the user approved with adjustments, you may streamline this step — don't re-present blueprints the user just reviewed. Focus on market-specific adaptations that weren't covered in step 5e.
For each product in the batch that has a delivery_blueprint:
a. Present the blueprint in a readable summary:
Product: {product-name} (Blueprint v{blueprint_version})
| Phase | Duration Range | Role Mix |
|---|---|---|
| Discovery & Scoping | 1-2 weeks | SA 25%, IE 55%, PM 20% |
| Core Implementation | 4-8 weeks | ... |
Pricing strategy: PoV base ×1.0, Small ×{multiplier}, Medium ×{multiplier}, Large ×{multiplier} Standard assumptions: {list from blueprint} Quality gates: {list from blueprint}
b. Propose market-specific adaptations grouped by pattern rather than listing every market individually:
c. For products without blueprints, note them explicitly: "Products without blueprints ({product-x}, {product-y}) will have solutions designed from scratch based on proposition context and market data."
d. Invite adjustments. The user can:
e. Confirm: "Does this delivery approach work, or would you like to adjust any blueprint parameters before I generate?"
Wait for explicit user confirmation. If the user says "looks fine" or "just use defaults", proceed immediately with blueprint_guidance: null.
If no products in the batch have blueprints after step 5, skip this step entirely -- all solutions will be designed from scratch.
Capture adjustments as blueprint_guidance: Record user adjustments as a structured object passed to each solution-planner agent in its task prompt:
blueprint_guidance:
global_overrides:
- "Cap all phase durations at 6 weeks maximum"
- "Add assumption: All data must remain in EU data centers"
market_overrides:
energy-de:
- "Add Compliance Audit phase after Integration & Testing (2-3 weeks)"
- "Use high end of all duration ranges"
mid-market-saas-dach:
- "Use low end of duration ranges"
pricing_overrides:
medium_multiplier: 6.0
phase_additions:
- market_pattern: "regulated"
phase: "Regulatory Alignment"
after: "Integration & Testing"
duration_weeks_range: [2, 3]
phase_removals: []
When the user approves without changes, pass blueprint_guidance: null. When revision rounds (step 10) re-launch solution-planner, re-pass the same blueprint_guidance so revisions respect the user's original adjustments.
Detect shared solution products. For each product in the batch, check the product JSON for "shared_solution": true (this flag may have been set during step 5). When found, this product's features share one commercial structure per market — switch to the shared solution workflow instead of launching independent agents per feature.
For products with shared_solution: true:
a. Present the efficiency gain:
"Product {product-name} has shared_solution: true — {N} features × {M} markets = {N×M} solutions.
With shared mode: {M} reference solutions + {N×M} lightweight overlays instead of {N×M} full generations.
Only feature-specific messaging (onboarding descriptions, tier scope text, service names) varies per feature."
b. For each unique market in the batch, generate ONE reference solution to solutions/_shared/{product-slug}--{market-slug}.json using the full solution-planner agent. Pass the product context, market context, delivery_blueprint, and blueprint_guidance. The reference solution defines all commercial fields at the product level — pricing, cost model, tiers, durations, professional services structure. Text fields use product-level descriptions (not feature-specific).
c. Review each reference solution with solution-review-assessor (full 15-criteria evaluation). If verdict is "revise", re-launch the planner in revision mode. Maximum 2 rounds. The reference solution must pass before overlays are generated — it defines the commercial structure all features inherit.
d. Once reference solutions pass, launch solution-planner agents in overlay mode in parallel for each Feature×Market combination. Each overlay agent receives:
shared_solution_ref: path to the reference solutionThe overlay agent copies all commercial fields and generates only feature-specific messaging.
e. For overlay solutions, run a lightweight DOES-traceability check rather than the full stakeholder review. The commercial viability was validated on the reference. Flag any overlays where the solution-planner reported INCOMPATIBLE — these need full independent generation.
For products without shared_solution: true: proceed with the standard per-proposition agent launch as before.
For non-shared products: launch solution-planner agents in parallel for each proposition. Include the blueprint_guidance object (or null) in each agent's task prompt.
After each solution is written, launch solution-review-assessor agents in parallel to evaluate them (full review for non-shared solutions, lightweight for overlays)
For solutions with verdict "revise": re-launch solution-planner with the review feedback (revision mode) and the original blueprint_guidance, then re-assess. Maximum 2 revision rounds per solution
Present a batch summary:
Offer to run the portfolio-wide review flow across all new solutions
Then offer the user review options:
Wait for the user's explicit response. If they choose (a), delegate to the dashboard-refresher agent with project_dir and plugin_root: $CLAUDE_PLUGIN_ROOT to generate a dashboard snapshot, then ask again if they're ready to proceed. The user can then pick individual solutions to refine interactively.
When the user wants to work on solutions for a product with shared_solution: true interactively (not batch), the flow becomes:
Identify the product and market. Read the product JSON and confirm shared_solution: true. If no shared reference solution exists yet for this Product×Market, co-develop one first.
Co-develop the reference solution. Use the full consultative flow (Steps 3/3s/3p → 3b → 4 → 5 → 6) but framed at the product level. The user is defining how ALL features of this product are commercially delivered to this market:
solutions/_shared/{product-slug}--{market-slug}.jsonReview the reference solution with solution-review-assessor (full evaluation). Iterate until accepted.
Generate overlays. Once the reference is accepted, offer to generate all feature overlays for this market:
solution-planner agents in overlay mode for each featureSpot-check overlays. Flag any overlays where DOES traceability is weak or where the solution-planner reported incompatibility.
When the user asks to work on a specific Feature×Market proposition and its product has shared_solution: true:
Read the existing solution JSON, apply the user's changes, and write back. But don't just make the change mechanically -- consider whether the edit reveals a deeper issue. If the user is shortening timelines, are they being realistic or optimistic? If they're lowering prices, is it because competitors are cheaper or because the value proposition doesn't support the price? Surface the underlying question.
Editing overlay solutions: When the user wants to change a commercial field (pricing, cost model, tier structure) on a solution with messaging_overlay: true, warn about drift: "This solution inherits its commercial structure from the shared reference at {shared_solution_ref}. Changing pricing here will create inconsistency with the {N-1} sibling solutions. Would you prefer to update the shared reference and regenerate all overlays?" If the user insists on editing just this solution, apply the change and remove the messaging_overlay: true flag — the solution becomes an independent solution that happens to have a shared_solution_ref for traceability but is no longer in sync. For messaging-only edits (descriptions, scope text, service names), apply directly — these are feature-specific by design.
Read all JSON files in the project's solutions/ directory and solutions/_shared/ (if it exists). Group by solution_type and present with type-appropriate columns. For shared solutions, show the reference and overlay count:
Project Solutions:
| Proposition | PoV | Small | Medium | Large | Phases | Timeline | Assessment |
|---|---|---|---|---|---|---|---|
| cloud-monitoring--mid-market-saas-dach | 15K EUR | 50K EUR | 120K EUR | 250K EUR | 3 | 8w | Pricing coherent, PoV needs success criteria |
Subscription Solutions:
| Proposition | Free | Pro (monthly) | Enterprise | Onboarding | Assessment |
|---|---|---|---|---|---|
| deep-research--beratung-kmu-dach | Yes | 149 EUR | Custom | 1w included | Good tier separation, Pro priced competitively |
Partnership Solutions:
| Proposition | Stages | Revenue Share | Assessment |
|---|---|---|---|
| plugin-plattform--agentur-dach | 3 | 20% referral | Pilot commitment realistic |
Don't just list -- assess. Flag solutions with the wrong type for their product's revenue model, template phases, weak conversion logic, or pricing that doesn't fit the market.
A solution can be deleted freely -- it has no downstream dependents. Confirm with the user before deleting.
propositions skill firstsolution-planner agent handles individual solution generation in batch modecompete first for better-grounded pricesportfolio.json in the project root. If a language field is present, generate all user-facing text content (phase descriptions, scope text, rationale) in that language. JSON field names and slugs remain in English. If no language field is present, default to English.portfolio.json has a language field, communicate with the user in that language (status messages, instructions, recommendations, questions). Technical terms, skill names, and CLI commands remain in English. Default to English if no language field is present.$CLAUDE_PLUGIN_ROOT/skills/portfolio-setup/references/data-model.md for complete entity schemas