Install: npx claudepluginhub diegouis/provectus-marketplace --plugin proagent-hr
Or install by user path: npx claudepluginhub u/[userId]/[slug]

proagent-hr — Run <operation> [options]

Execute HR operations: draft-job-description, plan-interview, create-onboarding, performance-review, compensation-analysis, validate-cvs, generate-prescreening, score-prescreening, evaluate-hr-interview, evaluate-technical-interview, synthesize-final-recommendation, generate-resume, or analyze-growth.
Execute human resources and talent management operations based on the specified mode.
Variables
- mode: $1
- target: $2 (optional - role title, employee name, team name, Google Drive folder path, or file path depending on mode)
- options: $3 (optional - additional configuration such as scoring weights or batch size)
Instructions
Read the mode variable and execute the corresponding workflow below.
Mode: draft-job-description
Draft a comprehensive job description for an open role.
- Gather Role Requirements
- If `target` is provided, use it as the role title (e.g., "Senior Backend Engineer")
- Ask the hiring manager for: team context, reporting structure, key responsibilities, must-have vs. nice-to-have qualifications, and budget range
- Identify the seniority level and map it to the company's career ladder framework
- Research Market Context
- Reference industry benchmarks for similar roles at comparable companies
- Identify competitive differentiators to highlight in the description
- Determine appropriate compensation range based on level, location, and market data
- Draft the Job Description
Structure the document with these sections:
- Role Summary: 2-3 sentence overview of the position and its impact
- About the Team: Team mission, size, tech stack or domain focus
- Key Responsibilities: 5-8 bullet points describing primary duties
- Required Qualifications: Must-have skills, experience, and credentials
- Preferred Qualifications: Nice-to-have skills that strengthen a candidacy
- What Success Looks Like: Measurable outcomes for the first 6-12 months
- Compensation & Benefits: Salary range, equity, benefits summary
- Equal Opportunity Statement: Standard EEO language
- Validate and Output
- Review for bias-free, inclusive language (remove gendered terms, unnecessary jargon)
- Ensure compliance with salary transparency requirements
- Output the job description as a formatted document ready for Google Docs publishing
- If Google Docs MCP is available, create the document directly in the designated hiring folder
Mode: plan-interview
Design a structured interview process for a specific role.
- Define Interview Stages
- If `target` is provided, use it as the role title to tailor the process
- Design a multi-stage pipeline appropriate for the role level:
- Recruiter Screen (30 min): Role fit, salary expectations, availability
- Hiring Manager Interview (45 min): Experience deep-dive, team alignment
- Technical Assessment (60 min): Role-specific skills evaluation
- Behavioral Interview (45 min): Collaboration, problem-solving, values alignment
- Final Panel (60 min): Cross-functional stakeholder evaluation
- Generate Question Banks
For each stage, create 8-12 questions covering:
- Technical competency aligned to job description requirements
- Problem-solving approach and critical thinking
- Collaboration style and conflict resolution
- Growth mindset and learning orientation
- Leadership potential (for senior roles)
- Include follow-up probes for each question
- Build Evaluation Scorecards
- Create a scorecard for each interview stage
- Define 4-6 evaluation criteria per stage with weighted scoring (1-5 scale)
- Include behavioral anchors for each score level (what a 1, 3, and 5 looks like)
- Add a section for red flags and green flags
- Coordinate Scheduling
- Generate a suggested timeline from posting to offer (target: 3-4 weeks)
- If Google Calendar MCP is available, create calendar holds for panel members
- Draft candidate communication templates for each stage transition
- Include rejection and advancement email templates
- Output
Deliver the complete interview kit:
- Interview stage overview with owners and durations
- Question banks per stage (formatted for interviewer use)
- Evaluation scorecards per stage
- Scheduling timeline and communication templates
Mode: create-onboarding
Create a comprehensive onboarding plan for a new hire.
- Gather New Hire Context
- If `target` is provided, use it as the new hire's role or name
- Determine: start date, role, team, manager, location (remote/hybrid/onsite)
- Identify role-specific tools, systems, and access requirements
- Generate Day-One Checklist
- Equipment and workspace setup (laptop, monitors, peripherals)
- Account provisioning (email, Slack, GitHub/GitLab, cloud services, HR systems)
- Policy acknowledgments (code of conduct, security policy, NDA, benefits enrollment)
- Welcome meeting with manager and buddy introduction
- Office tour or virtual workspace orientation
- Build 30/60/90-Day Plan
First 30 Days - Learn:
- Complete all mandatory training modules
- Shadow team members on key workflows
- Attend team meetings and learn recurring ceremonies
- Have 1:1s with each direct team member
- Complete first small deliverable or contribution
Days 31-60 - Contribute:
- Take ownership of defined work items independently
- Participate actively in team planning and retrospectives
- Begin building cross-functional relationships
- Complete role-specific certifications or training
- Receive and act on first informal feedback from manager
Days 61-90 - Own:
- Lead a project or initiative independently
- Contribute to process improvements or documentation
- Establish personal development goals with manager
- Complete 90-day review with formal feedback
- Transition from onboarding to ongoing performance cadence
- Schedule Milestones
- If Google Calendar MCP is available, create recurring check-in events
- Schedule buddy meetings (weekly for first month, biweekly thereafter)
- Schedule manager 1:1s (weekly)
- Set reminders for 30-day, 60-day, and 90-day review conversations
- Generate Welcome Communications
- Draft welcome email from manager with first-day logistics
- Create team introduction message for Slack with new hire bio
- Prepare onboarding resources packet with links to key documentation
- If Gmail MCP is available, send welcome email directly
Mode: performance-review
Facilitate a performance review cycle for an individual or team.
- Set Up Review Cycle
- If `target` is provided, use it as the employee name or team scope
- Determine the review type: annual, mid-year, quarterly check-in, or probationary
- Identify review participants: self, peers (3-5), manager, skip-level (optional)
- Set a timeline with deadlines for each phase
- Generate Review Templates
Create structured templates covering:
- Self-Assessment: Goal achievement, key accomplishments, challenges faced, development areas, career aspirations
- Peer Feedback: Collaboration effectiveness, technical contributions, communication quality, areas for growth
- Manager Evaluation: Performance against objectives, competency assessment, potential rating, promotion readiness
- Align all templates to the company competency framework and role-level expectations
- Synthesize Feedback
- Aggregate feedback across all sources into a unified narrative
- Identify consistent themes in strengths and development areas
- Flag discrepancies between self-assessment and peer/manager feedback
- Quantify goal achievement against defined OKRs or KPIs
- Draft Review Narrative
Structure the review document:
- Summary: Overall performance level and key achievements
- Strengths: 3-5 demonstrated strengths with specific examples
- Development Areas: 2-3 areas for improvement with actionable guidance
- Goal Achievement: Status of each objective with evidence
- Development Plan: Recommended actions for the next review period
- Compensation Recommendation: If applicable, include merit increase or promotion rationale
- Prepare Calibration Materials
- Generate calibration summary with performance distribution across the team
- Create comparison view of similar-level employees for consistency
- If Google Docs MCP is available, publish review documents to the HR folder
Mode: compensation-analysis
Analyze compensation for a role, individual, or team.
- Define Analysis Scope
- If `target` is provided, use it as the role title, individual name, or team name
- Determine the analysis type: market benchmarking, internal equity audit, or adjustment modeling
- Research Market Data
- Identify comparable roles at peer companies based on level, function, and location
- Gather compensation data points: base salary, bonus target, equity grants, total compensation
- Note data sources and freshness (flag data older than 6 months)
- Analyze Internal Position
- Map the role to internal pay bands and salary grades
- Calculate compa-ratio (actual pay / midpoint of pay band)
- Compare against peers at the same level within the organization
- Identify any pay equity concerns across demographics or tenure
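The compa-ratio arithmetic above can be sketched as follows. This is a minimal illustration of the formula (actual pay / band midpoint), not company tooling; the band figures are hypothetical examples.

```python
def compa_ratio(actual_pay: float, band_midpoint: float) -> float:
    """Compa-ratio = actual pay / midpoint of the assigned pay band."""
    if band_midpoint <= 0:
        raise ValueError("band midpoint must be positive")
    return actual_pay / band_midpoint

def range_penetration(actual_pay: float, band_min: float, band_max: float) -> float:
    """Position in band: 0.0 at the band minimum, 1.0 at the band maximum."""
    return (actual_pay - band_min) / (band_max - band_min)

# Hypothetical example: pay of 128,000 in a 110,000-150,000 band (midpoint 130,000)
ratio = compa_ratio(128_000, 130_000)                        # ~0.98, slightly below midpoint
penetration = range_penetration(128_000, 110_000, 150_000)   # 0.45
```

A compa-ratio below 1.0 means pay sits under the band midpoint, which feeds directly into the gap analysis and retention-risk sections of the report.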
- Build Compensation Report
Structure the analysis:
- Market Positioning: Where the role/individual sits relative to market (25th, 50th, 75th percentile)
- Internal Equity: Compa-ratio analysis and peer comparison
- Total Compensation Breakdown: Base, bonus, equity, benefits valued in total
- Gap Analysis: Difference between current and target positioning
- Recommendations: Specific adjustment proposals with budget impact
- Risk Assessment: Retention risk if compensation is below market
- Model Scenarios
- Create adjustment scenarios (e.g., move to 50th percentile, match competitor offer)
- Calculate budget impact for each scenario
- Provide timeline recommendation for implementing adjustments
Mode: validate-cvs
Validate candidate CVs against a job description using a multi-agent orchestration pipeline with blind review.
- Gather Inputs
- If `target` is provided, use it as the Google Drive folder path containing CVs
- Ask for:
- Job Description source: Google Drive path or local file path to the JD
- CV folder: Google Drive folder containing candidate CV PDFs (if not provided as `target`)
- Scoring weights (optional): custom weights for Skills/Experience/Education/Certifications (default: 35/35/15/15)
- Batch size (optional): number of CVs to process per batch (default: all)
- Use Google Drive MCP to list files in the CV folder and confirm the count with the user
- Extract Job Requirements
- Read the job description from Google Drive via MCP
- Parse into a structured requirements rubric:
- Must-Have Requirements: Minimum qualifications (pass/fail)
- Should-Have Requirements: Strong preferences
- Nice-to-Have Requirements: Bonus qualifications
- Extract scoring dimensions: skills, experience level, education, certifications, industry alignment
- Present the extracted rubric to the user and STOP for confirmation before proceeding
- Parse CVs (Sequential — cv-parser agent)
- For each CV in the folder, dispatch the `cv-parser` agent via the Task tool
- The parser extracts structured data and separates PII into an identity envelope
- Store parsed profiles with anonymized candidate IDs (Candidate #001, #002, etc.)
- Track progress: "Parsed X of Y CVs"
- If a CV cannot be parsed, log the failure and continue with the remaining CVs
- Analyze Candidates (Parallel Fan-Out)
For each parsed candidate, dispatch these agents in parallel via the Task tool:
- cv-skills-matcher: Score technical and professional skill match against JD
- cv-experience-validator: Validate work history timeline, progression, and relevance
- cv-red-flag-detector: Check for factual inconsistencies and integrity concerns
Each agent receives ONLY the anonymized profile (no PII) plus the JD rubric. All three agents run simultaneously for each candidate.
- Aggregate Scores (Fan-In — cv-scoring-aggregator agent)
- Dispatch the `cv-scoring-aggregator` agent via the Task tool
- The aggregator collects all parallel results and produces:
- Per-candidate composite scorecards
- Candidate comparison matrix ranked by overall score
- Tier classifications (Tier 1-4)
- Batch summary statistics
- Write results to a Markdown file and optionally to Google Sheets via MCP
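The composite scoring and tier classification could look like the sketch below. The default weights (35/35/15/15) come from this command; the tier thresholds are hypothetical placeholders, since the aggregator agent defines the real ones.

```python
# Default dimension weights from the validate-cvs mode (35/35/15/15).
DEFAULT_WEIGHTS = {"skills": 35, "experience": 35, "education": 15, "certifications": 15}

def composite_score(dimension_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted average of per-dimension scores (each 0-100)."""
    total_weight = sum(weights.values())
    return sum(dimension_scores[d] * w for d, w in weights.items()) / total_weight

def tier(score: float) -> int:
    """Map a 0-100 composite score to Tier 1-4. Thresholds here are illustrative."""
    if score >= 85:
        return 1
    if score >= 70:
        return 2
    if score >= 50:
        return 3
    return 4

candidate = {"skills": 90, "experience": 80, "education": 70, "certifications": 60}
score = composite_score(candidate)   # (90*35 + 80*35 + 70*15 + 60*15) / 100 = 79.0
```

Custom weights passed via `options` would replace `DEFAULT_WEIGHTS` before aggregation.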
- Recruiter Review Gate — STOP
- Present the comparison matrix and batch summary to the user
- STOP and wait for recruiter approval before generating recommendations
- Options:
- [A]pprove: Proceed to recommendation generation
- [E]dit: Adjust weights, re-score specific candidates, or modify tier thresholds
- [R]edo: Re-run analysis with different parameters
- Generate Recommendations (cv-recommendation-generator agent)
- Only after recruiter approval, dispatch the `cv-recommendation-generator` agent
- The agent produces:
- Ranked shortlist with per-candidate advancement rationale
- Interview focus areas for each advancing candidate
- Pipeline summary with scheduling recommendations
- Action items for hiring manager, recruiter, and interviewers
- Reunite candidate numbers with names from the identity envelope in the final output
- If Gmail MCP is available, draft outreach emails for advancing candidates
- If Slack MCP is available, post completion notification to the hiring channel
- Persist Session State
Throughout the pipeline, maintain a session state file (`cv_validation_session.json`):

```json
{
  "started_at": "ISO timestamp",
  "job_description": "source path",
  "cv_folder": "source path",
  "weights": { "skills": 35, "experience": 35, "education": 15, "certifications": 15 },
  "total_cvs": 0,
  "parsed_cvs": [],
  "scored_cvs": [],
  "current_step": "parsing|analyzing|aggregating|reviewing|recommending",
  "status": "in_progress|paused_for_review|completed|failed"
}
```

- Update the file after each candidate is fully processed
- If the session is interrupted, resume from the last completed candidate on restart
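The resume-on-restart behavior can be sketched like this. Field names follow the session schema in this command; the loading logic and error handling are simplified assumptions.

```python
import json
from pathlib import Path

def load_or_init_session(path: str, job_description: str, cv_folder: str) -> dict:
    """Load an interrupted cv_validation_session.json if present, else start fresh."""
    p = Path(path)
    if p.exists():
        session = json.loads(p.read_text())
        if session.get("status") == "in_progress":
            return session  # resume from the last completed candidate
    return {
        "started_at": "ISO timestamp",  # fill with the real timestamp at runtime
        "job_description": job_description,
        "cv_folder": cv_folder,
        "weights": {"skills": 35, "experience": 35, "education": 15, "certifications": 15},
        "total_cvs": 0,
        "parsed_cvs": [],
        "scored_cvs": [],
        "current_step": "parsing",
        "status": "in_progress",
    }

def remaining_cvs(session: dict, all_cvs: list) -> list:
    """CVs not yet parsed — the work left after a restart."""
    done = set(session["parsed_cvs"])
    return [cv for cv in all_cvs if cv not in done]
```

On restart, `remaining_cvs` is what keeps the pipeline from re-parsing candidates that already completed.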
Mode: generate-prescreening
Generate prescreening questionnaires and internal scoring rubrics for candidates advancing from CV screening.
- Gather Inputs
- Locate the `cv_validation_session.json` from the most recent CV screening batch
- Identify candidates advancing to prescreening (Tier 1 and optionally Tier 2 candidates)
- Read each advancing candidate's:
- Anonymized CV profile
- CV screening scorecard (gaps, focus areas, red flags)
- The JD rubric used during screening
- If `target` is provided, use it as the path to the CV screening output folder
- Generate Questionnaires (Parallel Fan-Out)
For each advancing candidate, dispatch the `prescreening-question-generator` agent via the Task tool:
- Pass the candidate's anonymized profile, CV scorecard, and JD rubric
- The agent produces:
- Candidate-facing questionnaire (8-12 questions targeting CV gaps + standard questions)
- Internal scoring rubric (recruiter-only, with behavioral anchors per question)
- All candidates for the same role are processed in parallel
- Quality Review
After all questionnaires are generated:
- Verify no illegal questions were included (age, marital status, children, religion, nationality, citizenship, disability, health, military service, arrest/criminal record, salary history)
- Verify each questionnaire targets the specific gaps identified in that candidate's CV screening
- Present a summary of generated questionnaires to the recruiter:
- Number of questionnaires generated
- Questions per candidate
- Common themes across candidates
- Any questions flagged for review
- Output and Distribution
- Write questionnaires to individual files: `prescreening/candidate-NNN-questionnaire.md`
- Write rubrics to individual files: `prescreening/candidate-NNN-rubric.md` (internal only)
- If Gmail MCP is available, draft emails with questionnaire content for each candidate
- If Slack MCP is available, notify the hiring channel that prescreening questionnaires are ready
- Update Pipeline Session
- Create or update `candidate_pipeline_session.json` with:
- A reference to the `cv_validation_session.json`
- Per-candidate prescreening stage status: "questionnaire_sent"
- File paths for questionnaires and rubrics
Mode: score-prescreening
Score candidate prescreening responses against internal rubrics and produce comparison matrix.
- Gather Inputs
- Read `candidate_pipeline_session.json` for the active pipeline
- For each candidate with prescreening responses:
- Read the candidate's prescreening responses
- Read the corresponding internal scoring rubric
- Read the original CV screening scorecard for gap tracking
- If `target` is provided, use it as the path to the folder containing candidate responses
- Score Responses (Parallel Fan-Out)
For each candidate with responses, dispatch the `prescreening-response-scorer` agent via the Task tool:
- Pass the candidate's responses, scoring rubric, and CV scorecard
- The agent produces:
- Per-question scores (1-5) with rationale
- Gap resolution tracking (Resolved/Partial/Unresolved)
- Consistency flags against CV claims
- Overall prescreening score (0-100)
- Advance/Hold/Decline recommendation
- All candidates are scored in parallel
- Produce Comparison Matrix
After all candidates are scored:
- Generate a prescreening comparison matrix sorted by overall score
- Include gap resolution summary per candidate
- Highlight consistency flags that need recruiter attention
- Write to `prescreening/prescreening-comparison-matrix.md`
- If Google Sheets MCP is available, write to Sheets for team sharing
- Recruiter Gate — STOP
Present the comparison matrix and scoring summary to the recruiter:
- STOP and wait for recruiter approval before advancing candidates
- Options:
- [A]pprove: Advance recommended candidates to interview stage
- [E]dit: Adjust recommendations for specific candidates
- [R]edo: Re-score with different rubric parameters
- Update Pipeline Session
- Update `candidate_pipeline_session.json` with:
- Per-candidate prescreening scores and gap resolution
- File paths for scorecards
- Recruiter gate approval status
- Candidates advancing to interview stage
Mode: evaluate-hr-interview
Evaluate HR interview outcomes with bias scanning and produce evaluation scorecards.
- Gather Inputs
- Read `candidate_pipeline_session.json` for the active pipeline
- For each candidate with completed HR interviews:
- Read interviewer notes, scorecards, or free-form feedback
- Read the candidate's prescreening scorecard (for cross-stage coherence)
- Read the JD rubric for context
- If `target` is provided, use it as the path to the folder containing interview notes
- Evaluate Interviews (Parallel Fan-Out)
For each candidate, dispatch the `hr-interview-evaluator` agent via the Task tool:
- Pass the interview notes, prescreening scorecard, and JD rubric
- The agent performs (in sequence):
- Bias scan on interviewer notes — flags protected characteristic references, unanchored culture fit language, demographic-correlated adjectives
- Presents bias flags for review
- Evaluation scoring after bias review — scores Communication (20%), Role Motivation (20%), Collaboration (25%), Problem Solving (20%), Culture Alignment (15%)
- Cross-stage coherence check against prescreening responses
- Bias Review Gate — STOP
Before finalizing evaluations, present all bias flags across candidates:
- STOP and wait for recruiter review of bias findings
- For each flagged item, the recruiter must:
- Acknowledge: Flag is valid, exclude this evidence from scoring
- Dismiss: Flag is a false positive with documented reasoning
- Escalate: Flag requires additional investigation or re-interview
- Evaluations are only finalized after all flags are reviewed
- HR Reviewer Gate — STOP
After bias review, present finalized HR evaluation scorecards:
- STOP and wait for HR reviewer approval
- Options:
- [A]pprove: Accept evaluations as-is
- [E]dit: Modify specific dimension scores with justification
- [R]e-interview: Request re-interview for specific candidates
- Output and Update
- Write evaluation scorecards: `evaluations/candidate-NNN-hr-evaluation.md`
- Update `candidate_pipeline_session.json` with HR evaluation scores, bias flag counts, and gate approvals
- If Slack MCP is available, notify the hiring channel of HR evaluation completion
Mode: evaluate-technical-interview
Evaluate technical interview outcomes with bias scanning and JD coverage analysis.
- Gather Inputs
- Read `candidate_pipeline_session.json` for the active pipeline
- For each candidate with completed technical interviews:
- Read technical interview notes, coding exercise scores, take-home feedback
- Read the JD rubric (must-have requirements for coverage mapping)
- Read previous stage scorecards for context
- If `target` is provided, use it as the path to the folder containing technical interview materials
- Evaluate Interviews (Parallel Fan-Out)
For each candidate, dispatch the `technical-interview-evaluator` agent via the Task tool:
- Pass the technical interview materials, JD rubric, and previous scorecards
- The agent performs (in sequence):
- Bias scan — detects pedigree bias, style bias, familiarity bias, speed bias
- Presents bias flags for review
- JD coverage mapping — maps which must-have requirements were tested
- Evaluation scoring after bias review — scores Technical Depth (30%), Problem Solving Approach (20%), Code Quality (20%), System Design (20%), Technical Communication (10%)
- Bias Review Gate — STOP
Before finalizing evaluations, present all bias flags across candidates:
- STOP and wait for technical reviewer review of bias findings (pedigree, style, familiarity, speed bias)
- For each flagged item, the reviewer must:
- Acknowledge: Flag is valid, exclude this evidence from scoring
- Dismiss: Flag is a false positive with documented reasoning
- Escalate: Flag requires additional investigation or re-interview
- Evaluations are only finalized after all flags are reviewed
- Coverage Gap Gate — STOP
After bias review, present coverage analysis across all candidates:
- Identify must-have JD requirements that were NOT tested in any candidate's interview
- STOP and wait for technical reviewer decision:
- Accept: Coverage is sufficient, proceed with available data
- Re-interview: Schedule targeted follow-up for specific untested requirements
- Waive: Mark specific requirements as waived for this hiring round
- Technical Reviewer Gate — STOP
After coverage gap resolution, present finalized technical evaluation scorecards:
- STOP and wait for technical reviewer approval
- Options:
- [A]pprove: Accept evaluations as-is
- [E]dit: Modify specific dimension scores with justification
- [R]e-interview: Request re-interview for specific candidates
- Output and Update
- Write evaluation scorecards: `evaluations/candidate-NNN-technical-evaluation.md`
- Write the coverage map: `evaluations/interview-coverage-map.md`
- Update `candidate_pipeline_session.json` with technical evaluation scores, bias flag counts, coverage gaps, and gate approvals
- If Slack MCP is available, notify the hiring channel of technical evaluation completion
Mode: synthesize-final-recommendation
Synthesize all stage scorecards into final hire/reject recommendations.
- Gather Inputs
- Read `candidate_pipeline_session.json` for the active pipeline
- For each candidate with completed evaluations, collect:
- CV screening scorecard
- Prescreening scorecard (if conducted)
- HR interview evaluation
- Technical interview evaluation
- Read the stage weights (default: CV 15%, Prescreening 10%, HR 30%, Technical 45%)
- If `target` is provided, use it as the path to the pipeline session file
- Synthesize Recommendations (Sequential — for Auditability)
For each candidate, dispatch the `cross-stage-synthesizer` agent via the Task tool sequentially (one at a time, not in parallel):
- Pass all stage scorecards, stage weights, and the JD rubric
- The agent performs:
- Score normalization and weighted composite calculation
- Cross-stage consistency analysis (trending up/down/variable)
- Legal defensibility checklist — must pass before recommendation
- Final recommendation: Hire—Strong, Hire—Standard, Hire—Conditional, Hold, or Reject
- Post-decision documentation: onboarding flags (if hiring) or candidate communication guidance (if rejecting)
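The weighted composite calculation with the default stage weights (CV 15%, Prescreening 10%, HR 30%, Technical 45%) can be sketched as below. Renormalizing the weights when a stage was skipped is an assumption for illustration; the synthesizer agent defines the actual policy.

```python
# Default stage weights from the synthesize-final-recommendation mode.
DEFAULT_STAGE_WEIGHTS = {"cv": 0.15, "prescreening": 0.10, "hr": 0.30, "technical": 0.45}

def weighted_composite(stage_scores: dict, weights: dict = DEFAULT_STAGE_WEIGHTS) -> float:
    """Composite of 0-100 stage scores; weights of missing stages are
    redistributed proportionally across the stages that were conducted."""
    present = {s: w for s, w in weights.items() if stage_scores.get(s) is not None}
    total = sum(present.values())
    return sum(stage_scores[s] * w for s, w in present.items()) / total

# All four stages conducted:
full = weighted_composite({"cv": 80, "prescreening": 75, "hr": 85, "technical": 90})   # 85.5
# Prescreening skipped — remaining weights renormalize over 0.90:
partial = weighted_composite({"cv": 80, "prescreening": None, "hr": 85, "technical": 90})
```

Keeping this calculation explicit per candidate is what makes the sequential, one-at-a-time synthesis auditable.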
- Batch Summary
After all candidates are synthesized:
- Produce a batch recommendation summary with all candidates ranked
- Run disparate impact analysis if demographic data is available (post identity reunion)
- Generate pipeline statistics (advancement rates, score distributions, stage-by-stage trends)
- Write to `recommendations/final-recommendation-summary.md`
- If Google Sheets MCP is available, write final results to Sheets
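A minimal sketch of the disparate impact analysis, using the common four-fifths (80%) rule: each group's selection rate is compared against the highest group's rate, and ratios below 0.8 are flagged. Group labels and counts here are hypothetical.

```python
def selection_rate(advanced: int, total: int) -> float:
    """Fraction of a group's applicants who advanced."""
    return advanced / total if total else 0.0

def impact_ratios(groups: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 indicates potential disparate impact (four-fifths rule)."""
    rates = {g: selection_rate(a, t) for g, (a, t) in groups.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical counts: (advanced, applied) per demographic group
groups = {"group_a": (6, 10), "group_b": (4, 10)}
ratios = impact_ratios(groups)                      # group_a: 1.0, group_b: ~0.67
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["group_b"]
```

Because candidates are anonymized through screening, this check only becomes possible after identity reunion, as the step above notes.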
- Hiring Manager Gate — STOP
Present the final recommendation summary:
- STOP and wait for hiring manager approval
- Options:
- [A]pprove All: Accept all recommendations
- [A]pprove with Changes: Modify specific candidate decisions with justification
- [R]eview Individual: Deep-dive into specific candidate's cross-stage data
- Post-Approval Actions
After hiring manager approval:
- Reunite candidate numbers with names from identity envelopes
- If Gmail MCP is available:
- Draft offer emails for Hire candidates
- Draft decline emails for Reject candidates using the communication guidance
- If Slack MCP is available:
- Post final hiring decisions to the hiring channel
- Notify relevant team leads of incoming hires
- If Rube/Composio MCP is available:
- Update candidate status in ATS (Greenhouse, Lever, BambooHR)
- Update `candidate_pipeline_session.json` with:
- Final decisions per candidate
- Pipeline status: "completed"
- Hiring manager gate approval
Mode: generate-resume
Generate a tailored, ATS-optimized resume for a specific job description.
Reference skill: awesome-claude-skills/tailored-resume-generator/SKILL.md
- Gather Inputs
- If `target` is provided, use it as the path to the job description file
- Ask for:
- Job Description: Full text or file path to the target job posting
- Candidate Background: Existing resume file, or work history, education, skills, and achievements
- Format Preference (optional): Markdown, plain text, or formatting guidance for Word/PDF
- Resume Style (optional): Chronological, functional, or hybrid (default: chronological)
- Analyze Job Requirements
- Extract must-have qualifications, key skills, soft skills, industry knowledge, and ATS keywords
- Prioritize requirements into critical (deal-breakers), important (strongly desired), and nice-to-have (bonus)
- Identify company values and cultural fit indicators from the job description
- Map Experience to Requirements
- For each job requirement, identify matching experience from the candidate's background
- Find transferable skills for career transitions where no direct match exists
- Note gaps to address or de-emphasize
- Identify unique strengths to highlight
- Generate the Tailored Resume
Structure the resume with:
- Professional Summary: 3-4 lines leading with years of experience, top required skills, industry experience, and unique value proposition
- Technical/Core Skills: Grouped by category matching job requirements, using exact terminology from the JD
- Professional Experience: Achievements quantified with metrics, reordered to prioritize most relevant, using action verbs and job description keywords
- Education: Degrees, certifications, relevant coursework
- Optional Sections: Certifications, publications, awards, projects as applicable
- Optimize for ATS
- Use standard section headings (Professional Experience, Education, Skills)
- Incorporate exact keywords from the job description naturally
- Avoid tables, graphics, or complex formatting
- Include both acronyms and full terms (e.g., "SQL (Structured Query Language)")
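The keyword step above can be illustrated with a simple coverage check that accepts either the acronym or the full term as a match. The keyword list is a hypothetical example; real ATS parsers vary, and naive substring matching is only a rough screen.

```python
def keyword_coverage(resume_text: str, keywords: dict) -> dict:
    """keywords maps acronym -> full term; a hit on either form counts as covered."""
    text = resume_text.lower()
    return {
        acro: (acro.lower() in text) or (full.lower() in text)
        for acro, full in keywords.items()
    }

resume = "Designed SQL (Structured Query Language) pipelines and CI workflows."
coverage = keyword_coverage(resume, {
    "SQL": "Structured Query Language",
    "CI/CD": "continuous integration and delivery",
})
missing = [k for k, hit in coverage.items() if not hit]  # ["CI/CD"]
```

Keywords that come back missing are candidates for the gap analysis in the strategic recommendations step.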
- Provide Strategic Recommendations
- Strengths analysis: what makes the candidate competitive
- Gap analysis: requirements not fully met with suggestions to address them
- Interview preparation tips and key talking points
- Cover letter hooks: 2-3 opening lines for the cover letter
Mode: analyze-growth
Analyze a developer's recent coding patterns and generate a personalized growth report.
Reference skill: awesome-claude-skills/developer-growth-analysis/SKILL.md
- Access Chat History
- Read the chat history from `~/.claude/history.jsonl` (JSONL format with `display`, `project`, `timestamp`, `pastedContents` fields)
- Filter for entries from the past 24-48 hours based on the current timestamp
- If `target` is provided, use it to scope the analysis to a specific project or time range
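The filtering step can be sketched as below, using the field names listed above. The timestamp encoding in the real file is an assumption (epoch milliseconds here); adjust if the actual format differs.

```python
import json
import time
from pathlib import Path

def recent_entries(history_path: str, hours: int = 48) -> list:
    """Return JSONL records from the past `hours`, skipping malformed lines."""
    cutoff_ms = (time.time() - hours * 3600) * 1000  # assumes epoch-ms timestamps
    entries = []
    for line in Path(history_path).read_text().splitlines():
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate truncated or corrupt lines rather than aborting
        if record.get("timestamp", 0) >= cutoff_ms:
            entries.append(record)
    return entries
```

Scoping by `target` would add a filter on the `project` field before analysis.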
- Analyze Work Patterns
Extract and analyze:
- Projects and Domains: Backend, frontend, DevOps, data, etc.
- Technologies Used: Languages, frameworks, and tools in conversations
- Problem Types: Performance optimization, debugging, feature implementation, refactoring, setup
- Challenges Encountered: Repeated questions, multi-attempt problems, knowledge gap indicators
- Approach Patterns: Methodical, exploratory, experimental problem-solving styles
- Identify Improvement Areas
- Identify 3-5 specific, evidence-based, actionable improvement areas prioritized by impact
- Each area must include: why it matters, what was observed (specific evidence from chat history), concrete recommendation, and time-to-skill-up estimate
- Generate Growth Report
Structure the report:
- Work Summary: 2-3 paragraphs on projects, technologies, and focus areas
- Improvement Areas (Prioritized): Each with why it matters, evidence, recommendation, and time estimate
- Strengths Observed: 2-3 things the developer is doing well
- Action Items: Priority-ordered list derived from improvement areas
- Curate Learning Resources
- Use Rube MCP (`RUBE_SEARCH_TOOLS`) to search HackerNews for articles related to each improvement area
- For each area, include 2-3 relevant articles with title, date, relevance description, and link
- Prioritize posts with high engagement (comments, upvotes)
- Deliver the Report
- Present the complete report in the CLI
- Use Rube MCP (`RUBE_MANAGE_CONNECTIONS`, `RUBE_MULTI_EXECUTE_TOOL`) to send the report to the developer's Slack DMs
- Break the report into logical sections for Slack formatting
- Confirm delivery in the CLI output
Error Handling
- If required context is missing (role title, employee name), prompt the user with specific questions before proceeding
- If Google Docs or Gmail MCP is unavailable, output documents as formatted Markdown for manual publishing
- If market compensation data cannot be retrieved, note the limitation and use available internal data with caveats
- If a CV file cannot be read or parsed, log the error and continue with remaining candidates — do not abort the batch
- If a parallel analysis agent fails for a candidate, note the gap in the aggregation and proceed with available results
- If the Google Drive MCP is unavailable, fall back to reading local file paths
- If `~/.claude/history.jsonl` is not found or empty, inform the user that developer growth analysis requires Claude Code chat history
- If HackerNews search via Rube MCP returns no results, provide generic learning resource recommendations based on the identified improvement areas
- If the candidate provides insufficient background for resume generation, ask targeted follow-up questions for work history, skills, and achievements
- All outputs include a "Next Steps" section with clear action items and owners