From jobops
Performs expert HR assessment of candidates against job postings using custom scoring rubrics, domain knowledge, and resume analysis.
Install: `npx claudepluginhub reggiechan74/jobops --plugin jobops`

This skill uses the workspace's default tool permissions.
Read `.jobops/config.json`. If missing, stop with:

JOBOPS NOT CONFIGURED
Run /jobops:setup to initialize your workspace.
Use config.directories.<key> for all file paths in this skill.
Use config.preferences.cultural_profile if this skill generates resume-style content.
Use config.preferences.default_jurisdiction if this skill has jurisdiction-sensitive logic (crisis/legal skills accept --jurisdiction=<ISO-3166-2> to override).
For each template used by this skill, resolve the full path as:
{config.templates.base_dir}/{config.templates.active.<template_name>}/
Templates referenced by this skill: assessment_rubric_framework, evidence_verification_framework, assessment_report_structure
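The template path convention above can be sketched in Python. This is an illustrative sketch only: the function name and the sample config values are hypothetical, and the config shape is assumed from the `{config.templates.base_dir}/{config.templates.active.<template_name>}/` rule in this file.

```python
from pathlib import Path

def resolve_template(config: dict, template_name: str) -> Path:
    """Resolve {base_dir}/{active.<template_name>}/ from a loaded config dict."""
    t = config["templates"]
    return Path(t["base_dir"]) / t["active"][template_name]

# Hypothetical config shaped like .jobops/config.json; in a real run it would
# be loaded with json.loads(Path(".jobops/config.json").read_text())
config = {
    "templates": {
        "base_dir": ".jobops/templates",
        "active": {"assessment_rubric_framework": "v2"},
    }
}
path = resolve_template(config, "assessment_rubric_framework")
# path.as_posix() -> ".jobops/templates/v2"
```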
This skill writes to a per-application folder. Before writing any output:

1. Derive {app_slug} as {Company}_{Role}_{YYYYMMDD} from the job-posting filename, or honor --app=<slug> if supplied.
2. The application folder is {config.directories.applications_root}/{app_slug}/ with resume/, cover-letter/, assessment/, and interview/ subfolders. mkdir -p it, then copy
   {config.directories.job_postings}/{{ARG1}} → {app_slug}/job_posting.md
   so the pinned JD cannot silently change under completed work.
3. To run a second assessment against the same posting, supply --app=<distinct-slug>.

MAXIMUM ASSESSMENT REPORT LIMIT: 20,000 TOKENS
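The slug derivation and JD-pinning steps above can be sketched as follows. This is a hedged illustration, not the skill's actual implementation: the function names and the sanitization rule (collapsing non-alphanumerics to hyphens) are assumptions.

```python
import re
import shutil
from datetime import date
from pathlib import Path

def _clean(s: str) -> str:
    # Hypothetical sanitizer: collapse non-alphanumerics to single hyphens
    return re.sub(r"[^A-Za-z0-9]+", "-", s).strip("-")

def derive_app_slug(company: str, role: str, when: date) -> str:
    """Build {Company}_{Role}_{YYYYMMDD} for the application folder name."""
    return f"{_clean(company)}_{_clean(role)}_{when:%Y%m%d}"

def pin_job_posting(applications_root: Path, app_slug: str, jd_source: Path) -> Path:
    """Create the app folder tree and pin the JD copy so it cannot drift."""
    app_dir = applications_root / app_slug
    for sub in ("resume", "cover-letter", "assessment", "interview"):
        (app_dir / sub).mkdir(parents=True, exist_ok=True)
    pinned = app_dir / "job_posting.md"
    if not pinned.exists():  # never overwrite an already-pinned JD
        shutil.copy2(jd_source, pinned)
    return pinned

slug = derive_app_slug("Acme Corp", "Staff Engineer", date(2025, 1, 15))
# slug -> "Acme-Corp_Staff-Engineer_20250115"
```

The `if not pinned.exists()` guard is the design point: completed assessments stay auditable against the exact JD they were scored under.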
Your assessment report output MUST NOT EXCEED 20,000 tokens. This is a hard limit.
Important Clarifications:
Strategies to stay within limit for assessment report:
Create a job-specific scoring rubric from the {{ARG1}} job posting, then evaluate the candidate using this customized rubric to provide a detailed assessment report.
Resume Source Location:
{config.directories.resume_source}

This workflow uses 5 phases with parallel dispatch where possible.
Phase 1 (Sequential, fast): Load templates + Validate inputs + Load job posting
Phase 2 (PARALLEL subagents): Domain Research ‖ Candidate Profile Generation
Phase 3 (Sequential): Create Rubric (synthesizes job posting + domain research)
Phase 4 (Sequential, visible): Score Cat 1 → 2 → 3 → 4 → 5
Phase 5 (Sequential): Generate Report → Save Files
Dependency Rules:
Before starting any work, create all tasks for user visibility. Use TaskCreate for each task below, then use TaskUpdate to mark in_progress when starting and completed when done.
Create these tasks immediately at the start:
| # | Task Subject | activeForm | Phase |
|---|---|---|---|
| 1 | Load assessment framework templates | Loading assessment framework templates | 1 |
| 2 | Validate job posting and resume source | Validating job posting and resume source | 1 |
| 3 | Generate candidate profile | Generating candidate profile from source materials | 2 |
| 4 | Research domain and industry context | Researching domain and industry context | 2 |
| 5 | Create job-specific scoring rubric | Creating job-specific scoring rubric | 3 |
| 6 | Score Skills Inventory (Category 1) | Scoring Skills Inventory against rubric | 4 |
| 7 | Score Experience Relevance (Category 2) | Scoring Experience Relevance against rubric | 4 |
| 8 | Score Demonstrated Impact (Category 3) | Scoring Demonstrated Impact against rubric | 4 |
| 9 | Score Credentials (Category 4) | Scoring Credentials against rubric | 4 |
| 10 | Score Fit & Readiness (Category 5) | Scoring Fit & Readiness against rubric | 4 |
| 11 | Generate assessment report | Generating comprehensive assessment report | 5 |
| 12 | Save rubric and assessment files | Saving rubric and assessment files | 5 |
Task Update Rules:
- Mark each task in_progress BEFORE starting work on it
- Mark it completed AFTER finishing it
- If a step is skipped, mark its task completed with a note

Every markdown artifact you create (rubric and assessment report) must start with YAML metadata populated with real values.
Rubric metadata (at the top of {applications_root}/{app_slug}/assessment/rubric.md):
---
job_file: {config.directories.job_postings}/{{ARG1}}
role: <role title>
company: <company name>
role_variant: <Technical IC | People Manager | Executive>
total_points: 200
generated_by: /assessjob rubric
generated_on: <ISO8601 timestamp>
output_type: rubric
status: final
version: 2.0
---
Assessment metadata (at the top of {applications_root}/{app_slug}/assessment/assessment.md):
---
job_file: {config.directories.job_postings}/{{ARG1}}
resume_source: {{ARG2}} or {config.directories.resume_source}
rubric_file: {applications_root}/{app_slug}/assessment/rubric.md
role: <role title>
company: <company name>
role_variant: <Technical IC | People Manager | Executive>
candidate: <full candidate name>
generated_by: /assessjob
generated_on: <ISO8601 timestamp>
output_type: assessment
status: draft
version: 2.0
overall_score: <XX/200>
normalized_score: <XX%>
---
Insert the appropriate block before any headings, updating timestamps and scores, and bump version if you rerun the analysis.
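Rendering these metadata blocks is mechanical and can be sketched in a few lines. The helper name is hypothetical; the only requirements taken from this file are the `---` delimiters, `key: value` lines, and a real ISO8601 timestamp.

```python
from datetime import datetime, timezone

def frontmatter(fields: dict) -> str:
    """Render a YAML metadata block to prepend before any headings."""
    lines = ["---"] + [f"{k}: {v}" for k, v in fields.items()] + ["---"]
    return "\n".join(lines)

block = frontmatter({
    "output_type": "rubric",
    "status": "final",
    "version": "2.0",
    "generated_on": datetime.now(timezone.utc).isoformat(timespec="seconds"),
})
```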
Tasks: Mark task 1 in_progress, then mark task 2 in_progress after templates load.
CRITICAL: Read all three framework templates in a single parallel batch:
- {config.templates.base_dir}/{config.templates.active[assessment_rubric_framework]}/assessment_rubric_framework.md - Master 200-point rubric structure with role variants
- {config.templates.base_dir}/{config.templates.active[evidence_verification_framework]}/evidence_verification_framework.md - Evidence-based scoring protocols
- {config.templates.base_dir}/{config.templates.active[assessment_report_structure]}/assessment_report_structure.md - Assessment report format

Use three parallel Read tool calls. These templates define the mandatory structure for rubrics and assessments.
Task: Mark task 1 completed.
Task: Mark task 2 in_progress.
Resume Source Path Resolution:
Determine source path: {{ARG2}} if provided, otherwise {config.directories.resume_source}.

Validate source path:
Load job posting:
{config.directories.job_postings}/{{ARG1}} (add .md extension if needed)

Task: Mark task 2 completed.
CRITICAL: Dispatch both tasks simultaneously using parallel Task tool calls in a SINGLE message. Mark tasks 3 and 4 as in_progress before dispatching.
Phase 2 launches two independent subagents that run concurrently. Neither depends on the other. Both need only the job posting content (loaded in Phase 1).
NOTE: Skip this step (mark task 3 completed immediately) if resume source is a single file.
For folder-based sources, dispatch a subagent to generate/load the candidate profile:
Check for existing profile: Look for [resume-source-folder]/.profile/candidate_profile.json (if found, use it and mark task 3 completed).

Dispatch profile generation subagent:
Use Task tool with subagent_type=resume-summarizer, model=sonnet, and prompt:
"Read all files in [resume-source-folder]/ directory and create a structured JSON
candidate profile following the schema in .claude/agents/resume-summarizer.md.
Save output to [resume-source-folder]/.profile/candidate_profile.json and
[resume-source-folder]/.profile/extraction_log.md"
Expected outcome:
- [resume-source-folder]/.profile/candidate_profile.json (8K-10K tokens)
- [resume-source-folder]/.profile/extraction_log.md

Dispatch a domain research subagent to run concurrently with profile generation:
Use Task tool with subagent_type=general-purpose, model=sonnet, and prompt:
"Research the following for the role of [ROLE TITLE] at [COMPANY NAME]:
1. Industry standards and typical role expectations for this specific position
2. Required vs nice-to-have skills based on current market standards
3. Typical responsibilities and seniority indicators for this role level
4. Company context: culture, values, technology stack, recent developments, size, reputation
5. Current market conditions: salary ranges, demand, competitive landscape
6. Industry-specific terminology, certifications, and best practices
7. What differentiates strong vs average candidates for this type of role
Provide a structured research summary organized by these 7 areas.
Focus on actionable intelligence that would help calibrate a scoring rubric.
Be specific - cite sources and data points where possible."
IMPORTANT: Both subagents (2.1 and 2.2) MUST be dispatched in the SAME message using parallel Task tool calls. Do NOT wait for one to complete before starting the other.
Task: When each subagent completes, mark its respective task (3 or 4) as completed.
Task: Mark task 5 in_progress. Prerequisite: Domain research (task 4) must be completed. Candidate profile (task 3) is NOT needed yet.
Combine the job posting requirements with domain research findings to create an informed rubric. The domain research helps you:
Analyze the job posting to select the appropriate weight variant:
Technical Individual Contributor - Select if:
People Manager - Select if:
Executive - Select if:
Document selected variant in rubric header.
Extract and categorize:
MANDATORY DETAILED SCORING REQUIREMENT The dynamic rubric you create MUST include granular point allocation for every single criterion. This is NON-NEGOTIABLE. You cannot create simplified rubrics or skip detailed breakdowns. Every skill, every experience level, every responsibility MUST have the complete scoring framework with specific, measurable criteria.
CRITICAL REQUIREMENT: Follow the structure defined in the assessment rubric framework template exactly. Customize all [bracketed] content with job-specific requirements from the job posting, informed by domain research findings.
200-Point Scoring Categories (Default Weights):
ENFORCEMENT CHECK: After creating the rubric, verify:
FAILURE TO INCLUDE DETAILED BREAKDOWNS VIOLATES THE COMMAND REQUIREMENTS
MANDATORY QUALITY CHECK: Before saving, verify the rubric includes ALL required detailed breakdowns:
IF ANY SECTION LACKS DETAILED BREAKDOWNS, THE RUBRIC IS INCOMPLETE AND MUST BE REGENERATED
Save the generated rubric to: {applications_root}/{app_slug}/assessment/rubric.md
Task: Mark task 5 completed.
Prerequisites: Rubric (task 5) AND candidate profile (task 3) must both be completed.
Load candidate materials (depends on source type):
- Folder-based source: [resume-source-folder]/.profile/candidate_profile.json
- Single-file source: the resume file itself

Evidence Verification Protocol (from the evidence verification framework template):
For folder-based profiles: When citing specific achievements or skills:
For single file resumes: When citing specific achievements or skills:
Task: Mark task 6 in_progress.
Map each required/preferred skill from the custom rubric to specific evidence in candidate's history using 0-6 proficiency scale.
Score each criterion with confidence level:
| Criterion | Score | Confidence | Evidence Citation |
|---|---|---|---|
| [Skill 1] | X/6 | HIGH/MED/LOW | "exact quote" or inference note |
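One row of the evidence table above can be modeled as a small record that enforces the 0-6 scale and the confidence vocabulary. A sketch under assumed names (the class and helper are illustrative, not part of this skill):

```python
from dataclasses import dataclass

CONFIDENCE_LEVELS = ("HIGH", "MED", "LOW")

@dataclass
class CriterionScore:
    """One row of the Category 1 evidence table (0-6 proficiency scale)."""
    criterion: str
    score: int        # 0-6 proficiency
    confidence: str   # HIGH / MED / LOW
    evidence: str     # exact quote or inference note

    def __post_init__(self):
        if not 0 <= self.score <= 6:
            raise ValueError("proficiency score must be 0-6")
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError("confidence must be HIGH, MED, or LOW")

def to_markdown_row(row: CriterionScore) -> str:
    """Render the row in the table format used above."""
    return f"| {row.criterion} | {row.score}/6 | {row.confidence} | {row.evidence} |"

row = CriterionScore("Python", 5, "HIGH", '"built ETL pipelines in Python"')
# to_markdown_row(row) -> '| Python | 5/6 | HIGH | "built ETL pipelines in Python" |'
```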
Task: Mark task 6 completed.
Task: Mark task 7 in_progress.
Evaluate against the specific years, industry, and domain requirements identified in the rubric (5-level scoring).
Task: Mark task 7 completed.
Task: Mark task 8 in_progress.
Verify quantified achievements against the expected outcomes defined in the rubric. This is the highest-weighted category - be thorough.
Task: Mark task 8 completed.
Task: Mark task 9 in_progress.
Check against the specific education and certification requirements listed in the custom rubric.
Task: Mark task 9 completed.
Task: Mark task 10 in_progress.
Assess based on the company values, work environment, and role-specific readiness factors.
Normalized Score Calculation:
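Given the fixed 200-point total, the normalized_score in the assessment frontmatter is a plain percentage. A minimal sketch (function name is illustrative):

```python
def normalize_score(total_points: int, max_points: int = 200) -> str:
    """Convert a raw rubric total (out of 200) to the normalized_score percentage."""
    if not 0 <= total_points <= max_points:
        raise ValueError("total_points must be between 0 and max_points")
    return f"{round(100 * total_points / max_points)}%"

normalize_score(150)  # -> "75%"  (e.g. overall_score: 150/200)
```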
Task: Mark task 10 completed.
Task: Mark task 11 in_progress.
CRITICAL: Follow the report structure defined in the assessment report structure template exactly. Use the detailed 3-level format with rubric criteria attribution for all scores.
Required Report Elements:
Task: Mark task 11 completed.
Task: Mark task 12 in_progress.
File Save Locations (app-centric layout):
Resolve {app_slug} per the Application Path Resolution protocol at the top of this skill, and ensure {applications_root}/{app_slug}/assessment/ exists (mkdir -p).
Save Dynamic Rubric: {applications_root}/{app_slug}/assessment/rubric.md
Save Assessment Report: {applications_root}/{app_slug}/assessment/assessment.md
No timestamped audit sub-folder is required — the app folder itself is the self-contained audit container, and the pinned job_posting.md copy guarantees the JD cannot drift after the fact.
Save Steps:
1. Resolve {app_slug} from the job-posting filename (or --app=<slug> override)
2. mkdir -p {applications_root}/{app_slug}/assessment/
3. Copy {config.directories.job_postings}/{{ARG1}} to {applications_root}/{app_slug}/job_posting.md if not already present
4. Save the rubric to {applications_root}/{app_slug}/assessment/rubric.md
5. Save the assessment report to {applications_root}/{app_slug}/assessment/assessment.md

Task: Mark task 12 completed.
Before finalizing assessment:
If issues are encountered:
ABSOLUTE REQUIREMENTS FOR EVERY RUBRIC:
VIOLATION CONSEQUENCES:
VERIFICATION CHECKLIST - BEFORE SAVING ANY RUBRIC: