Resume, continue, or check status of a portfolio project. Use whenever the user mentions "continue portfolio", "resume portfolio", "pick up where I left off", "portfolio status", "what's next", "show progress", "where was I", "how far along", or opens a session that involves an existing cogni-portfolio project — even if they don't say "resume" explicitly.
From cogni-portfolio:

```shell
npx claudepluginhub cogni-work/insight-wave --plugin cogni-portfolio
```

This skill is limited to using the following tools:

- Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, Flutter. Aids planning, building, and reviewing interfaces.
- Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, Prisma. Use for setup questions, API references, and code examples.
- Builds 3-5 year financial models for startups with cohort revenue projections, cost structures, cash flow, headcount plans, burn rate, runway, and scenario analysis.
Session entry point for returning to portfolio work. This skill orients the user by showing where they left off and what to do next — think of it as the dashboard view that keeps multi-session projects on track.
Portfolio projects span multiple sessions and skills. Without a clear re-entry point, users lose context between sessions and waste time figuring out what they already did. This skill bridges that gap: it reads the project state, surfaces progress at a glance, and recommends the most valuable next step. The goal is to get the user back into productive flow within seconds.
Scan the workspace for portfolio projects:

```shell
find . -maxdepth 3 -name "portfolio.json" -path "*/cogni-portfolio/*"
```
Each match represents a project (extract the slug from the directory name). If no projects are found, say so and suggest the setup skill.
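The scan and slug extraction can be sketched as a small helper (a minimal sketch; `scan_projects` is a hypothetical name, not part of the plugin):

```shell
# Minimal sketch of the scan step: find portfolio.json manifests under
# */cogni-portfolio/* and derive each project slug from the name of the
# directory that contains the manifest.
scan_projects() {
  local root="${1:-.}"
  find "$root" -maxdepth 3 -name "portfolio.json" -path "*/cogni-portfolio/*" |
  while IFS= read -r manifest; do
    # Slug = directory holding portfolio.json
    basename "$(dirname "$manifest")"
  done
}
```

If the function prints nothing, no projects exist and the setup skill should be suggested.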
```shell
bash $CLAUDE_PLUGIN_ROOT/scripts/project-status.sh "<project-dir>" --health-check
```
The script returns JSON with counts, phase, next_actions, completion, claims, and stale_entities. The --health-check flag enables staleness detection — it compares upstream updated dates (or file mtimes as fallback) against downstream entities and flags propositions/solutions that may need refresh.
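For orientation, the status JSON might look roughly like this. Field names are taken from this document; the values, nesting, and any fields not named here are illustrative, not the script's exact contract:

```json
{
  "phase": "enrichment",
  "completion": 75,
  "counts": {
    "products": 2,
    "features": 9,
    "markets": 3,
    "propositions": 18,
    "expected_propositions": 24,
    "excluded_pairs": 3,
    "context_entries": 6,
    "uploads": 1
  },
  "claims": { "verified": 40, "deviated": 2, "unverified": 12, "pending_stale": 5 },
  "stale_entities": [
    { "id": "prop-example", "reason": "upstream feature updated" }
  ],
  "next_actions": [
    { "priority": 1, "skill": "features", "note": "refresh stale upstream" }
  ]
}
```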
Show a concise, scannable dashboard. Lead with the company name and project slug, then the progress table:
| Entity | Count | Status |
|---|---|---|
| Products | N | |
| Features | N | |
| Markets | N | |
| Propositions | N / expected (E excluded) | pct% |
| Solutions | N / propositions | pct% |
| Packages | N / packageable | pct% |
| Competitors | N / propositions | pct% |
| Customers | N / markets | pct% |
| Claims | N total | V verified, D deviated, U unverified, P pending propagation. If claims.pending_stale > 0, append: "(S on stale entities — deferred)" |
| Communicate | N files | A accepted, R revise, J rejected (if > 0), STALE if upstream changed |
| Architecture | exists/missing | STALE if products/features changed since last generation |
| Purpose | N / total features | coverage percentage — low coverage limits architecture and customer narrative quality |
| Context | N entries | breakdown by category (e.g., 3 pricing, 2 competitive, 1 strategic) |
| Sources | N (D docs, U urls) | S stale, C current (only if source_lineage.has_registry is true) |
| Uploads | N | pending ingestion (if > 0) |
The Propositions row uses counts.expected_propositions as the denominator — this value already subtracts excluded pairs. Do NOT compute your own expected count by multiplying features × markets. Show as N / expected (E excluded) where E is counts.excluded_pairs. Only show the "(E excluded)" suffix when E > 0. When N equals expected, show 100% — excluded pairs are design decisions, not gaps.
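The row-formatting rule above can be sketched as a tiny formatter (`propositions_row` is a hypothetical helper; the real dashboard is rendered by the assistant, not by a script):

```shell
# Format the Propositions row from counts.propositions (n),
# counts.expected_propositions (expected), and counts.excluded_pairs (excluded).
# The "(E excluded)" suffix appears only when excluded > 0; expected already
# subtracts excluded pairs, so n == expected renders as 100%.
propositions_row() {
  local n="$1" expected="$2" excluded="$3"
  local pct=$(( n * 100 / expected ))
  if [ "$excluded" -gt 0 ]; then
    printf '%s / %s (%s excluded) | %s%%\n' "$n" "$expected" "$excluded" "$pct"
  else
    printf '%s / %s | %s%%\n' "$n" "$expected" "$pct"
  fi
}
```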
If margin_health is present in the status output and has solutions_with_cost_model > 0, add a margin health line after the table.
If solutions_by_type is present, show the type breakdown: "N project, N subscription, N partnership".
If blueprint_status is present and has version_drifted > 0, add a blueprint drift line listing the affected solutions (drifted_solutions). Recommend: "Run the solutions skill in review mode to check drift and selectively regenerate." Also show blueprint coverage: "N products have delivery blueprints, N solutions were generated from blueprints."

After the table:
- Translate the phase value into plain language (see the phase reference below).
- If quality_audit is present and has flagged entities (features_flagged or propositions_flagged non-empty), show them grouped by issue type before stale entities. Present actionable summaries, not raw data.
- If source_lineage.has_registry is true and drift is detected, show this section BEFORE stale entities (since source drift is often the root cause of entity staleness):
  - If source_lineage.changed_uploads is non-empty: "N source documents have been re-uploaded with changes (list filenames). These affect M entities." Group affected entities by source. Recommend: "Run portfolio-lineage to assess impact, or portfolio-ingest to re-process the updated documents."
  - If source_lineage.new_uploads is non-empty: "N new files in uploads/ have not been ingested yet." Distinguish these from changed re-uploads.
  - If source_lineage.stale_sources > 0 and there are no changed_uploads: "N source entries are marked as stale in the registry." Recommend running a portfolio-lineage check to investigate.
  - If source_lineage.untracked_entities > 0: "N entities have no source lineage tracking." This is informational, not urgent — mention it after other drift warnings. Recommend running portfolio-lineage to backfill.
- If stale_entities is non-empty, show them as priority actions before the regular next steps. Group by reason type: "N propositions need refresh because their upstream features were updated" is more useful than listing each one. If a stale entity also has quality warnings, lead with the quality issue (fix the root cause first, then refresh the proposition). When stale entities AND unverified claims coexist, note the interaction: if claims.pending_stale > 0, explain that those claims sit on entities about to be refreshed — verifying them now would be wasted work, since the refresh will generate new claims. This helps the user understand why verify isn't the first recommended step despite hundreds of pending claims.
- If communicate.stale is true, highlight this prominently: "Communicate files may need refresh — upstream data changed since they were generated." Present the reason from communicate.stale_reason and recommend running portfolio-communicate to regenerate. This appears alongside stale entity warnings since it represents the same class of problem (downstream output invalidated by upstream changes).
- If architecture.stale is true, mention that the architecture diagram may be outdated because products or features changed since it was generated; recommend running portfolio-architecture to refresh. If architecture.exists is false and features exist, suggest generating the architecture diagram as a visual checkpoint.
- If purpose_coverage.total_features > 0 and purpose_coverage.with_purpose is less than half of total_features, note low purpose coverage: "N of M features have purpose statements. Adding purpose improves architecture diagrams and customer-facing materials." Recommend running the features skill to add purpose statements.
- If counts.context_entries > 0, mention the available context entries with a category breakdown. Read context/context-index.json for the by_category map to show counts per category; this helps the user understand what intelligence is available for downstream skills. If context exists but downstream skills haven't run yet, highlight this: "N context entries from ingested documents are ready — these will automatically inform propositions, solutions, and other skills."
- If counts.uploads > 0, always mention pending files regardless of phase. When source_lineage.has_registry is true, distinguish new uploads (source_lineage.new_uploads) from re-uploads (source_lineage.changed_uploads): "N new uploads (never ingested) + M re-uploads (source changed since last ingestion)".
- Present excluded_pairs from the script output FIRST. These are confirmed design decisions recorded in feature files with explicit reasons — not guesses. If non-empty, state definitively: "N Feature × Market pairs deliberately excluded (design decision)." Never use speculative language like "presumably" or "possibly" for excluded pairs.
- Trust missing_propositions — this array already excludes excluded pairs, so any entries here are genuine gaps. List them as actionable items.
- If missing_propositions is empty, the proposition matrix is complete. Do NOT list excluded pairs as missing or suggest creating them; a brief mention of the exclusion count in the table row is sufficient.
- If the script reports counts.excluded_pairs: 0 but missing_propositions contains pairs, cross-check by reading the feature files for excluded_markets arrays as a fallback — the script may have failed to detect them.

Keep the tone warm and oriented toward action — this is a welcome-back moment, not a status report. The user should feel oriented, not overwhelmed.
Present entries from next_actions sorted by priority (ascending). Lower priority numbers represent upstream work that must complete before higher-numbered downstream actions can produce quality output.
Presentation rules:
Common dependency pairs — explain these when both appear:
The claims.pending_stale count tells you how many claims fall into this category.

If the phase is complete, congratulate the user and suggest reviewing outputs or running portfolio-communicate for additional deliverables. If communicate files are stale (indicated by a communicate action in next_actions), mention that portfolio-communicate should be re-run to refresh customer-facing documentation.
| Phase | Meaning | What to do |
|---|---|---|
| products | No products defined yet | Run products skill |
| features | Products exist, no features | Run features skill |
| markets | Features defined, no markets | Run markets skill |
| customers | Markets defined, no customer profiles yet | Run customers skill (or skip to propositions for weaker messaging) |
| propositions | Feature × Market pairs need messaging | Run propositions skill |
| enrichment | Propositions exist, solution/competitor gaps remain | Run solutions, compete, and/or customers for remaining markets |
| verification | Unverified or deviated claims pending | Run verify skill |
| propagation | Resolved claims with corrections not yet applied to entity files | Run verify skill (Step 8 propagates corrections) |
| communicate | All entities complete, claims clean, corrections propagated | Run communicate skill |
| complete | All workflow stages finished | Review outputs or refresh communicate if upstream data changed |
This skill is the recommended re-entry point after heavy sessions. Portfolio work naturally spans multiple sessions — batch proposition generation, competitive analysis, solution design, and dashboard generation each consume significant context. Other portfolio skills proactively recommend /portfolio-resume when they detect a heavy session (multiple batch operations, 3+ skills invoked, or capstone operations like portfolio-dashboard/portfolio-communicate completed).
When presenting the status summary, acknowledge what the user accomplished in previous sessions if recent entity timestamps suggest a productive recent session. This continuity helps users feel their work persists and builds confidence in the multi-session workflow.
Check portfolio.json in the project root for a language field. If one is present, communicate with the user in that language (status messages, instructions, recommendations, questions); technical terms, skill names, and CLI commands remain in English. If no language field is present, default to English.
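A minimal sketch of the language lookup (`portfolio_language` is a hypothetical helper; it greps the JSON with sed rather than using a real JSON parser, which a production implementation should prefer):

```shell
# Read an optional "language" field from portfolio.json; default to "en".
# Assumes the field, if present, is a simple top-level "language": "xx" string.
portfolio_language() {
  local manifest="$1" lang
  lang=$(sed -n 's/.*"language"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' \
         "$manifest" | head -n1)
  echo "${lang:-en}"
}
```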