From the career-navigator plugin
Builds and refreshes CareerNavigator/StoryCorpus.json by extracting interview story candidates from raw sources (journals, PKM notes, debriefs, resumes, and related documents). Runs as a one-time/offline preprocessing pass and as an incremental refresh when new source files are detected in {user_dir}.
```
npx claudepluginhub tmargolis/career-navigator --plugin career-navigator
```

This skill uses the workspace's default tool permissions.
Create and maintain a persistent interview story corpus so downstream interview skills never need to read full raw journals repeatedly.
{user_dir} and required paths

Use:

- {user_dir}/CareerNavigator/StoryCorpus.json (target corpus)
- {user_dir} (raw source discovery root)

If StoryCorpus.json is missing, create it using the schema in step 5.
Scan {user_dir} recursively for likely story-bearing files, prioritizing:
Exclude:

- {user_dir}/CareerNavigator/*.json

If StoryCorpus.json already exists:
If no prior corpus metadata exists, run full build once.
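The discovery and incremental-refresh logic above can be sketched as a single pass that compares each file's mtime against the recorded source_index. This is a minimal sketch, not the skill's actual implementation; the extension filter and the CareerNavigator/ exclusion are assumptions about the layout:

```python
import datetime
from pathlib import Path

def changed_sources(user_dir: str, source_index: list[dict],
                    exts: tuple = (".md", ".txt")) -> list[Path]:
    """Return files under user_dir that are new or modified relative to
    the mtimes recorded in source_index entries ({"path", "mtime"})."""
    root = Path(user_dir)
    seen = {e["path"]: e["mtime"] for e in source_index}
    changed = []
    for p in sorted(root.rglob("*")):
        if not p.is_file() or p.suffix not in exts:
            continue
        rel = p.relative_to(root)
        if "CareerNavigator" in rel.parts:
            continue  # never re-mine the corpus directory itself
        mtime = datetime.datetime.fromtimestamp(p.stat().st_mtime).isoformat()
        if seen.get(str(rel)) != mtime:
            changed.append(p)
    return changed
```

An empty source_index yields a full build; a fully up-to-date index yields an empty list, so the same function covers both the one-time pass and the refresh.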
For each new/changed source:
"Extract any anecdote, decision, challenge, outcome, or project detail from this entry. Output structured JSON."
Each candidate should include:
StoryCorpus.json (Layer 2)

Use this top-level shape:

```json
{
  "meta": {
    "created": "YYYY-MM-DD",
    "updated": "YYYY-MM-DD",
    "version": "1.0",
    "description": "Interview story corpus extracted from user-owned sources for prep and mock interview retrieval."
  },
  "stories": [
    {
      "story_id": "story-uuid",
      "source": "journal | pkm | debrief | resume | other",
      "source_path": "relative/path/to/file",
      "source_entry_ref": "date heading or chunk id",
      "date": "YYYY-MM-DD",
      "raw_summary": "Concise evidence summary from extraction.",
      "themes": ["technical_leadership", "crisis_management"],
      "competencies": ["problem_solving", "ownership", "cross_functional"],
      "result_signal": true,
      "ownership_signal": true,
      "star_ready": false,
      "star": {
        "situation": "",
        "task": "",
        "action": "",
        "result": ""
      },
      "quality": {
        "clarity": "low | medium | high",
        "specificity": "low | medium | high",
        "credibility": "low | medium | high"
      },
      "embedding": [],
      "score_hint": 0.0,
      "last_refreshed": "YYYY-MM-DD"
    }
  ],
  "source_index": [
    {
      "path": "relative/path",
      "mtime": "ISO-8601",
      "status": "processed | skipped",
      "last_processed": "ISO-8601"
    }
  ]
}
```
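Creating the file when it is missing (per the {user_dir} and required paths section) amounts to writing an empty instance of this shape. A sketch, assuming the file path is passed in:

```python
import datetime
import json
from pathlib import Path

def load_or_init_corpus(path: str) -> dict:
    """Load StoryCorpus.json, or create it with the empty top-level
    shape (meta, stories, source_index) if it does not exist yet."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    today = datetime.date.today().isoformat()
    corpus = {
        "meta": {
            "created": today,
            "updated": today,
            "version": "1.0",
            "description": ("Interview story corpus extracted from "
                            "user-owned sources for prep and mock "
                            "interview retrieval."),
        },
        "stories": [],
        "source_index": [],
    }
    p.parent.mkdir(parents=True, exist_ok=True)  # CareerNavigator/ may not exist
    p.write_text(json.dumps(corpus, indent=2))
    return corpus
```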
Merge behavior:
Keep story_id stable where the same source entry is re-processed.

After merge:
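The merge can be sketched as an upsert keyed on (source_path, source_entry_ref), carrying the old story_id forward so downstream references stay valid. A minimal sketch, not the skill's actual merge code:

```python
def merge_stories(existing: list[dict], extracted: list[dict]) -> list[dict]:
    """Merge newly extracted candidates into the corpus, reusing the
    existing story_id when the same source entry is re-processed."""
    def key(story: dict) -> tuple:
        return (story["source_path"], story["source_entry_ref"])

    by_entry = {key(s): s for s in existing}
    for cand in extracted:
        prev = by_entry.get(key(cand))
        if prev is not None:
            cand["story_id"] = prev["story_id"]  # keep the id stable
        by_entry[key(cand)] = cand  # insert new or overwrite re-processed
    return list(by_entry.values())
```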
Report:

- count of stories marked star_ready

When this runs during launch/setup, suggest running story-retrieval inside prep workflows rather than re-mining.