From jobops
Assesses candidates against job postings using pre-created scoring rubrics, generating detailed reports with scores and evidence.
`npx claudepluginhub reggiechan74/jobops --plugin jobops`

This skill uses the workspace's default tool permissions.
Read `.jobops/config.json`. If missing, stop with:

JOBOPS NOT CONFIGURED
Run /jobops:setup to initialize your workspace.
- Use `config.directories.<key>` for all file paths in this skill.
- Use `config.preferences.cultural_profile` if this skill generates resume-style content.
- Use `config.preferences.default_jurisdiction` if this skill has jurisdiction-sensitive logic (crisis/legal skills accept `--jurisdiction=<ISO-3166-2>` to override).
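A minimal sketch of the config gate described above (the helper name is hypothetical; only the `.jobops/config.json` location and the stop message come from this skill):

```python
import json
from pathlib import Path

def load_config(workspace: str = ".") -> dict:
    """Load .jobops/config.json, stopping loudly if the workspace
    was never initialized with /jobops:setup."""
    config_path = Path(workspace) / ".jobops" / "config.json"
    if not config_path.exists():
        raise SystemExit(
            "JOBOPS NOT CONFIGURED\n"
            "Run /jobops:setup to initialize your workspace."
        )
    return json.loads(config_path.read_text())
```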
For each template used by this skill, resolve the full path as:
{config.templates.base_dir}/{config.templates.active.<template_name>}/
Templates referenced by this skill: assessment_rubric_framework, assessment_report_structure, evidence_verification_framework
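The resolution rule above can be sketched as follows (the config shape is assumed from the path pattern; the function name is illustrative):

```python
def resolve_template_dir(config: dict, template_name: str) -> str:
    """Resolve {base_dir}/{active variant}/ for one named template."""
    base = config["templates"]["base_dir"]
    variant = config["templates"]["active"][template_name]
    return f"{base}/{variant}/"
```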
This skill writes to a per-application folder. Before writing any output:

1. Derive the app slug as `{Company}_{Role}_{YYYYMMDD}` from the job-posting filename, or honor `--app=<slug>` if supplied.
2. The app folder is `{config.directories.applications_root}/{app_slug}/` with subfolders `resume/`, `cover-letter/`, `assessment/`, and `interview/`.
3. `mkdir -p` it, then copy
   `{config.directories.job_postings}/(unknown)` → `{app_slug}/job_posting.md`
so the pinned JD cannot silently change under completed work. To target a different application folder, supply `--app=<distinct-slug>`.

MAXIMUM OUTPUT LIMIT: 20,000 TOKENS
Your complete assessment report MUST NOT EXCEED 20,000 tokens. This is a hard limit.
Strategies to stay within limit:
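One cheap pre-flight guard, using the rough ~4-characters-per-token heuristic (an approximation, not the model's actual tokenizer):

```python
def within_token_limit(report_text: str, limit: int = 20_000) -> bool:
    """Rough pre-flight check: assume ~4 characters per token on average."""
    estimated_tokens = len(report_text) / 4
    return estimated_tokens <= limit
```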
Use the existing scoring rubric from {{ARG1}} to evaluate the candidate against the {{ARG2}} job posting, providing a detailed assessment report with scores and evidence.
Phase 1 (Parallel batch): Load templates + rubric + job posting (4 parallel reads)
Phase 2 (PARALLEL): Candidate profile gen (subagent) ‖ Validate rubric alignment
Phase 3 (Sequential): Optional domain research (if rubric is stale)
Phase 4 (Sequential, visible): Score Cat 1 → 2 → 3 → 4 → 5 → 6
Phase 5 (Sequential): Generate report → Save files
Dependency Rules:
Before starting any work, create all tasks for user visibility:
| # | Task Subject | activeForm |
|---|---|---|
| 1 | Load templates, rubric, and job posting | Loading templates, rubric, and job posting |
| 2 | Generate candidate profile | Generating candidate profile from source materials |
| 3 | Validate rubric-job alignment | Validating rubric-job posting alignment |
| 4 | Score Technical Skills & Competencies | Scoring Technical Skills & Competencies |
| 5 | Score Relevant Experience | Scoring Relevant Experience |
| 6 | Score Key Responsibilities | Scoring Key Responsibilities alignment |
| 7 | Score Achievements & Impact | Scoring Achievements & Impact |
| 8 | Score Education & Certifications | Scoring Education & Certifications |
| 9 | Score Cultural Fit | Scoring Cultural Fit |
| 10 | Generate assessment report | Generating comprehensive assessment report |
| 11 | Save assessment files | Saving assessment and rubric files |
Task Update Rules:
- Mark a task `in_progress` BEFORE starting work on it.
- Mark a task `completed` AFTER finishing it.
- If a task requires no work, mark it `completed` immediately.

The generated assessment in `{applications_root}/{app_slug}/assessment/assessment.md` must start with:
```yaml
---
job_file: {config.directories.job_postings}/{{ARG2}}
rubric_file: {{ARG1}}
role: <role title>
company: <company name>
candidate: <full candidate name>
generated_by: /assesscandidate
generated_on: <ISO8601 timestamp>
output_type: assessment
status: draft
version: 1.0
overall_score: <XX/100>
---
```
Insert this block before any headings and update timestamps, scores, and versioning on reruns.
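Generating that block programmatically might look like this (a sketch; field values are illustrative, and `overall_score` is filled in only after scoring completes):

```python
from datetime import datetime, timezone

def build_frontmatter(meta: dict) -> str:
    """Render the required YAML frontmatter, stamping a fresh
    ISO8601 timestamp on every (re)generation."""
    fields = {
        "job_file": meta["job_file"],
        "rubric_file": meta["rubric_file"],
        "role": meta["role"],
        "company": meta["company"],
        "candidate": meta["candidate"],
        "generated_by": "/assesscandidate",
        "generated_on": datetime.now(timezone.utc).isoformat(),
        "output_type": "assessment",
        "status": "draft",
        "version": meta.get("version", "1.0"),
        "overall_score": f"{meta['overall_score']}/100",
    }
    body = "\n".join(f"{k}: {v}" for k, v in fields.items())
    return f"---\n{body}\n---"
```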
Task: Mark task 1 `in_progress`.
Read all files in a single parallel batch:
- `{config.templates.base_dir}/{config.templates.active[evidence_verification_framework]}/evidence_verification_framework.md` — evidence-based scoring protocols
- `{config.templates.base_dir}/{config.templates.active[assessment_report_structure]}/assessment_report_structure.md` — assessment report format
- `{{ARG1}}` — pre-created rubric file. Accept either an absolute path, a path relative to `{applications_root}/{app_slug}/assessment/rubric.md`, or a bare filename to resolve inside the current app folder.
- `{config.directories.job_postings}/{{ARG2}}` (add `.md` extension if needed). If the job posting doesn't exist in `{config.directories.job_postings}/`, check the root directory for legacy files.
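The three accepted rubric-argument forms could be resolved like this (a sketch; the relative-path semantics are an assumption, and the function name is hypothetical):

```python
from pathlib import Path

def resolve_rubric_path(arg1: str, applications_root: str, app_slug: str) -> Path:
    """Resolve {{ARG1}}: absolute path as-is, relative path as given,
    or a bare filename inside the app's assessment/ folder."""
    candidate = Path(arg1)
    if candidate.is_absolute():
        return candidate
    if len(candidate.parts) > 1:  # relative path containing directories
        return candidate
    return Path(applications_root) / app_slug / "assessment" / arg1
```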
Task: Mark task 1 `completed`.

CRITICAL: Dispatch both tasks simultaneously in a SINGLE message. Mark tasks 2 and 3 as `in_progress` before dispatching.
Check for existing profile: look for `{config.directories.resume_source}/.profile/candidate_profile.json` (if a fresh profile exists, mark task 2 `completed` immediately). If regeneration is needed, dispatch a subagent:
Use Task tool with subagent_type=resume-summarizer, model=sonnet, and prompt:
"Read all files in {config.directories.resume_source}/ directory and create a structured JSON
candidate profile following the schema in .claude/agents/resume-summarizer.md.
Save output to {config.directories.resume_source}/.profile/candidate_profile.json and
{config.directories.resume_source}/.profile/extraction_log.md"
While profile generates, validate the rubric:
Task: Mark task 3 `completed` when validation is done. Task: Mark task 2 `completed` when the profile subagent returns (or immediately if a fresh profile exists).
If rubric validation identified stale context or missing information:
Skip this phase if rubric is current and complete.
Prerequisites: Candidate profile (task 2) AND rubric validation (task 3) must both be `completed`.
Read the candidate profile from {config.directories.resume_source}/.profile/candidate_profile.json
Evidence Verification Protocol: When citing specific achievements or skills:
CRITICAL: Apply the evidence verification framework from the evidence verification framework template to all scoring decisions.
Task: Mark task 4 `in_progress`.
Use the specific required/preferred skills from the rubric to map against candidate evidence.
Task: Mark task 4 `completed`.
Task: Mark task 5 `in_progress`.
Evaluate against the years, industry, and domain requirements defined in the rubric.
Task: Mark task 5 `completed`.
Task: Mark task 6 `in_progress`.
Match candidate experience to the primary duties extracted in the rubric.
Task: Mark task 6 `completed`.
Task: Mark task 7 `in_progress`.
Verify metrics against the expected outcomes defined in the rubric.
Task: Mark task 7 `completed`.
Task: Mark task 8 `in_progress`.
Check against the specific requirements listed in the rubric.
Task: Mark task 8 `completed`.
Task: Mark task 9 `in_progress`.
Assess based on the company values and work environment from the rubric.
Task: Mark task 9 `completed`.
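With all six categories scored, the overall score is a weighted combination; a sketch assuming rubric-defined weights that sum to 1.0 (the example weights in the usage below are illustrative, not the rubric's):

```python
def combine_scores(category_scores: dict, weights: dict) -> int:
    """Weighted sum of per-category scores (each 0-100) into the
    overall 0-100 score reported in the frontmatter."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return round(sum(category_scores[c] * weights[c] for c in weights))
```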
Task: Mark task 10 `in_progress`.
Follow the report structure defined in the assessment report structure template exactly.
CRITICAL FORMAT REQUIREMENTS:
Task: Mark task 10 `completed`.
Task: Mark task 11 `in_progress`.
File Save Location (app-centric layout):
Resolve {app_slug} per the Application Path Resolution protocol at the top of this skill, and ensure {applications_root}/{app_slug}/assessment/ exists (mkdir -p).
Pin the rubric inside the app folder: if {{ARG1}} points to a rubric outside the app folder, copy it to {applications_root}/{app_slug}/assessment/rubric.md (if one is not already pinned there). The pinned rubric is the authoritative copy used for audit.
Save Assessment Report: {applications_root}/{app_slug}/assessment/assessment.md
The app folder itself is the self-contained audit container — no timestamped audit sub-folder is needed.
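The pinning step might be sketched as follows (helper name hypothetical; only the `assessment/rubric.md` destination and the copy-once rule come from this skill):

```python
import shutil
from pathlib import Path

def pin_rubric(rubric_path: Path, assessment_dir: Path) -> Path:
    """Copy an external rubric into the app folder once; the pinned
    copy at assessment/rubric.md is the authoritative audit copy."""
    assessment_dir.mkdir(parents=True, exist_ok=True)
    pinned = assessment_dir / "rubric.md"
    if not pinned.exists() and rubric_path.resolve() != pinned.resolve():
        shutil.copy2(rubric_path, pinned)
    return pinned
```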
Provide a summary of:
Task: Mark task 11 `completed`.
Before finalizing assessment:
If issues are encountered: