# hiring
Automates candidate preparation for engineering interviews via PDF extraction, web research, value scoring, STAR question generation, and YAML output. Use when preparing interview materials, researching candidates, or generating interview configurations.
Install:

```bash
npx claudepluginhub andercore-labs/claudes-kitchen --plugin hiring
```

This skill uses the workspace's default tool permissions.
```bash
# Setup
cat interview-assistant/data/existing-candidate.yaml | head -150

# Execute workflow
CANDIDATE="lastname-firstname"
# Resume: use em/$CANDIDATE/resume.pdf if present, else data/intake/resume.pdf
RESUME="em/$CANDIDATE/resume.pdf"
[ -f "$RESUME" ] || RESUME="data/intake/resume.pdf"
OUTPUT_DIR="em/$CANDIDATE"
YAML_TARGET="interview-assistant/data/$CANDIDATE.yaml"
```

Workflow: PDF → extract → research → score → questions → YAML → validate
Trigger: "prep candidate" | "research candidate" | "interview preparation"
Input: Resume PDF
Output: YAML config + research artifacts
| Tool | Purpose | Check |
|---|---|---|
| document-skills:pdf | PDF extraction | Available in marketplace |
| WebSearch | Validation research | Claude Code builtin |
| interview-assistant | Target application | localhost:3000, localhost:8000 |
| Phase | Input | Action | Output |
|---|---|---|---|
| 0. Setup | candidate-name | Verify format → Check examples → Validate paths | lastname-firstname ID |
| 1. Extract | resume.pdf | PDF parse → Structure data | Candidate profile |
| 2. Research | profile | Web search → LinkedIn/GitHub → Validate | research.md |
| 3. Score | profile + research | Value assessment → Evidence → Confidence | scores.md + value_scores |
| 4. Questions | profile + scores | Generate STAR → Augment → Validate | questions.md + questions array |
| 5. Generate | all artifacts | Assemble YAML → Copy structure | candidate.yaml |
| 6. Validate | YAML | API tests → Browser check | ✓ Success |
Naming convention: lastname-firstname

```bash
# User says: "Homer Simpson"
CANDIDATE_ID="simpson-homer"   # lastname-firstname format

# Check existing format
ls interview-assistant/data/ | head -5
# Expected: simpson-homer.yaml, simpson-bart.yaml

# Read template structure
cat interview-assistant/data/simpson-homer.yaml | head -150
```
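The naming step above can be sketched as a tiny helper (hypothetical, not part of the skill; it assumes two-part Western-style names and treats the final token as the surname):

```python
def candidate_id(full_name: str) -> str:
    """Normalize 'Homer Simpson' to the lastname-firstname ID 'simpson-homer'."""
    parts = full_name.strip().lower().split()
    # Treat the final token as the last name; hyphenated or multi-part
    # surnames (e.g. "van der Berg") need manual review.
    return "-".join([parts[-1]] + parts[:-1])

print(candidate_id("Homer Simpson"))  # simpson-homer
```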
Path validation:

```
# Resume location options
data/intake/resume.pdf    → cp to em/$CANDIDATE/resume.pdf
em/$CANDIDATE/resume.pdf  → use directly

# Output locations
em/$CANDIDATE/                             # Reference artifacts
interview-assistant/data/$CANDIDATE.yaml   # PRIMARY (app requirement)
```
DO NOT proceed without reading an existing YAML example.
Extract via document-skills:pdf:

```
// Target data structure
{
  personal: { name, location, current_role, current_company },
  work_history: [{ company, role, dates, achievements[] }],
  technical_skills: string[],
  education: [{ degree, institution, date }],
  achievements: [{ metric, context }],
  open_source: { github_username, projects[] },
  leadership: [{ team_size, scope, impact }]
}
```
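To make the target shape concrete, here is one way to type a subset of it in Python (a sketch only; the field names mirror the structure above, but the dataclass layout is my own assumption):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WorkEntry:
    company: str
    role: str
    dates: str
    achievements: List[str] = field(default_factory=list)

@dataclass
class Profile:
    name: str
    location: str
    current_role: str
    current_company: str
    work_history: List[WorkEntry] = field(default_factory=list)
    technical_skills: List[str] = field(default_factory=list)
    github_username: Optional[str] = None  # from open_source.github_username

profile = Profile(name="Homer Simpson", location="Springfield",
                  current_role="Senior EM", current_company="SNPP")
print(profile.current_company)  # SNPP
```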
Extraction priorities:
| Priority | Field | Pattern |
|---|---|---|
| CRITICAL | Quantified achievements | "\d+% improvement" |
| HIGH | Current role + company | First page, top section |
| HIGH | GitHub username | github.com/username pattern |
| MEDIUM | Tech stack | Skills section, tool mentions |
| LOW | Education | Degrees, dates (often irrelevant for senior roles) |
MANDATORY: Execute web searches, document findings AND gaps.
```
# LinkedIn (validate role)
"{full_name}" "{current_company}" site:linkedin.com

# GitHub (verify ownership)
"{full_name}" site:github.com
→ Check: bio matches location/company
→ Activity: last 12 months contribution graph
→ Projects: stars, forks, documentation quality

# Company context
"{company_name}" company stage industry
→ Size: Startup | Scale-up | Enterprise
→ Context: growth phase, challenges, outcomes

# Technical writing
"{full_name}" blog | writing | article | presentation
→ Thought leadership, communication, expertise areas
```
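Since the query patterns above are plain strings, they can be generated mechanically; a minimal sketch (the helper names are hypothetical):

```python
def linkedin_query(full_name: str, company: str) -> str:
    """Build the LinkedIn role-validation query from the pattern above."""
    return f'"{full_name}" "{company}" site:linkedin.com'

def github_query(full_name: str) -> str:
    """Build the GitHub ownership-verification query from the pattern above."""
    return f'"{full_name}" site:github.com'

print(linkedin_query("Homer Simpson", "SNPP"))
# "Homer Simpson" "SNPP" site:linkedin.com
```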
Documentation format:
| Source | Found | Validated | Confidence | Evidence |
|---|---|---|---|---|
| LinkedIn | Yes/No | Role matches? | High/Med/Low | URL or "Not found" |
| GitHub | Yes/No | Ownership verified? | High/Med/Low | Profile URL + verification details |
| Company | Yes/No | Context gathered? | High/Med/Low | Stage, size, tech stack |
| Writing | Yes/No | Quality assessed? | High/Med/Low | URLs or "None found" |
Output: em/$CANDIDATE/research.md

```markdown
## Validation Summary

| Claim | Source | Status | Evidence |
|-------|--------|--------|----------|
| Current role: Senior EM at Company | Resume | ✓ Verified | LinkedIn profile matches |
| 60% lead time improvement | Resume | ✗ Not verified | No public data |
| GitHub: username | Resume | ✓ Verified | Profile bio matches location |

## Confidence: Medium
- LinkedIn private → lower role-verification confidence
- Metrics unverifiable → require interview validation
```
Score against 7 company values (1-5 scale).
See advanced.md for Andercore-specific values and detailed scoring rubrics.
Generic scoring pattern:

```
Value → Resume evidence → Research validation → Score + Confidence

Score 5: Exceptional signals, multiple verified examples
Score 4: Strong signals, reasonable verification
Score 3: Average signals (baseline "meets expectations")
Score 2: Limited signals or unverified claims
Score 1: No signals or concerning patterns
```
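Assuming the overall score in the final YAML is an unweighted mean of the seven value scores (the skill may weight them differently), the roll-up looks like this:

```python
def overall_score(value_scores: dict) -> float:
    """Unweighted mean of the 1-5 value scores, rounded to one decimal."""
    return round(sum(value_scores.values()) / len(value_scores), 1)

# Hypothetical scores for all 7 values (only own_it/dive_deep are named in this doc).
scores = {"own_it": 4.5, "dive_deep": 4.0, "value_3": 4.0, "value_4": 3.5,
          "value_5": 4.0, "value_6": 4.5, "value_7": 4.0}
print(overall_score(scores))  # 4.1
```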
Output format (em/$CANDIDATE/scores.md):

```markdown
## Value: Own It (4.5/5.0)

**Evidence:**
- Took ownership of X initiative (resume line 42)
- Drove Y transformation with Z impact (resume line 58)

**Research Validation:**
- GitHub shows X project ownership ✓
- LinkedIn timeline matches ✓
- Metrics unverified ⚠

**Questions to Explore:**
- How did you get buy-in for X?
- What resistance did you face?

**Confidence:** Medium (metrics unverified)
```
Output: em/$CANDIDATE/scores.md + candidate.value_scores in YAML
Pattern: Base template → Augment with candidate details → Validation points

```yaml
- id: "own_it_1_custom"
  title: "Taking Responsibility Beyond Role"
  weight: "high"
  time_limit: 7
  source_question: "Share an example of when you stepped outside your formal role to take on a high-impact initiative. What drove you to get involved, and what was the result?"
  augmented_question: "I noticed you [specific achievement from resume, line X]. This isn't typically a [their role] responsibility. Walk me through how you recognized this was needed, how you got buy-in, and what resistance you encountered."
  follow_ups:
    - "You mentioned working with [person/team from background] - who did you need to convince?"
    - "At [their company], what would have happened if you hadn't taken this on?"
  validation_points:
    - "References [specific project from resume]"
    - "Mentions [technology from their stack]"
    - "Discusses [specific metric they claimed]"
  strong_signals:
    - "Clear ownership despite not being their job"
    - "Measurable results matching resume"
  red_flags:
    - "Can't provide details matching resume"
    - "Contradicts resume timeline"
```
Question requirements:
| Field | Rule |
|---|---|
| augmented_question | MUST reference specific resume details (line numbers) |
| validation_points | MUST be specific enough to catch fabrication |
| follow_ups | MUST probe areas NOT on resume |
| red_flags | MUST include resume consistency checks |
Output: em/$CANDIDATE/questions.md + andercore_values[].questions[] in YAML
CRITICAL: Copy structure from existing example FIRST.
```bash
# Read template
cat interview-assistant/data/simpson-homer.yaml | head -150
```

Generate to the PRIMARY location: interview-assistant/data/$CANDIDATE.yaml
Required fields (frontend/backend requirements):
```yaml
candidate:
  id: "lastname-firstname-2025"
  name: "Full Name"
  summary:
    overall_score: 4.1        # REQUIRED (frontend)
    max_score: 5.0
    recommendation: "YES"
    confidence: "Medium"
  value_scores:               # REQUIRED (frontend)
    own_it: 4.5
    dive_deep: 4.0
    # ... 7 values total
  key_achievements:           # REQUIRED (frontend)
    - metric: "60% lead time improvement"
      validated: false
      source: "resume"
  areas_to_explore:           # REQUIRED (frontend)
    - "Verify metrics claims"
  research_files:
    - "research.md"
  interview_structure:
    total_minutes: 60
    sections: [...]
andercore_values:             # REQUIRED (backend)
  - name: "Own It"
    description: "..."
    candidate_score: 4.5
    color: "#..."
    mandatory: true
    questions: [...]          # Array of question objects
```
DO NOT guess structure. Copy from simpson-homer.yaml.
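Before running the curl tests, a quick structural pre-check can catch missing required fields. This sketch assumes the YAML has already been parsed into a dict (e.g. with PyYAML); the required-path list comes from the REQUIRED annotations above, and nesting summary under candidate follows the d['candidate']['summary'] access used by the API tests:

```python
# Required paths, per the "REQUIRED" annotations in the structure above.
REQUIRED = [
    "candidate.summary.overall_score",
    "candidate.value_scores",
    "candidate.key_achievements",
    "candidate.areas_to_explore",
    "andercore_values",
]

def missing_fields(doc: dict) -> list:
    """Return the REQUIRED paths that are absent from a parsed YAML dict."""
    missing = []
    for path in REQUIRED:
        node = doc
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                missing.append(path)
                break
            node = node[key]
    return missing

doc = {"candidate": {"summary": {"overall_score": 4.1},
                     "value_scores": {"own_it": 4.5}}}
print(missing_fields(doc))
# ['candidate.key_achievements', 'candidate.areas_to_explore', 'andercore_values']
```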
MANDATORY: Test before declaring success.
```bash
CANDIDATE="lastname-firstname"

# Test 1: Candidate endpoint
curl -s "http://localhost:8000/api/candidate/$CANDIDATE" | \
  python3 -c "import sys,json; d=json.load(sys.stdin); print('✅ Score:', d['candidate']['summary']['overall_score'])"

# Test 2: Values endpoint
curl -s "http://localhost:8000/api/values?candidate_id=$CANDIDATE" | \
  python3 -c "import sys,json; d=json.load(sys.stdin); print('✅ Values:', len(d['mandatory'])+len(d['optional']))"

# Test 3: PDF endpoint
curl -I "http://localhost:8000/api/pdf/$CANDIDATE" 2>&1 | grep "200 OK" && echo "✅ PDF loads"
```
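The inline python3 one-liners above can be hard to debug; the same extraction as Test 1, written out against an illustrative sample body:

```python
import json

def candidate_score(body: str) -> float:
    """Extract overall_score from a /api/candidate JSON response body."""
    d = json.loads(body)
    return d["candidate"]["summary"]["overall_score"]

sample = '{"candidate": {"summary": {"overall_score": 4.1}}}'
print("✅ Score:", candidate_score(sample))  # ✅ Score: 4.1
```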
Validation requirements:
- ALL 3 tests pass → Success
- ANY test fails → fix the YAML structure and re-test:
  - Review simpson-homer.yaml structure
  - Check field names match exactly
  - Verify data types (numbers vs strings)

Final check:
1. All curl tests pass?
2. Browser test: open localhost:3000 → select candidate → verify it loads?
3. User confirms working?

NO → Debug YAML structure
YES → Complete
```
em/$CANDIDATE/
├── research.md      # Validation findings
├── scores.md        # Value assessments
├── questions.md     # STAR questions rationale
└── candidate.yaml   # Reference copy

interview-assistant/data/$CANDIDATE.yaml   # PRIMARY (app loads this)
```
| Error | Action |
|---|---|
| PDF parse fails | Try document-skills:pdf → Manual extraction if still fails |
| Web search fails | Continue with resume-only, note gaps in research.md |
| GitHub not found | Note in research, score conservatively |
| API tests fail | Fix YAML structure → Re-read template → Re-test |
| Insufficient data | Mark fields as "NEEDS_VALIDATION" in YAML |
BEFORE declaring success:
- [ ] All value scores have evidence OR marked "needs validation"
- [ ] 2+ custom questions per mandatory value
- [ ] Augmented questions reference specific resume details
- [ ] No placeholder text (TODO, FIXME, etc.)
- [ ] All resume metrics cited in questions
- [ ] API tests pass (3/3)
- [ ] Browser test confirms candidate loads
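The placeholder-text item in the checklist above can be automated; a minimal sketch (the token list is my own guess at what counts as a placeholder):

```python
import re

PLACEHOLDER = re.compile(r"\b(TODO|FIXME|TBD|XXX)\b")

def has_placeholders(text: str) -> bool:
    """True if generated YAML/markdown still contains placeholder markers."""
    return bool(PLACEHOLDER.search(text))

print(has_placeholders("augmented_question: TODO tailor to resume"))  # True
print(has_placeholders("Walk me through the migration you led."))     # False
```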
For more detail, see advanced.md (Andercore-specific values and scoring rubrics), examples.md, and reference.md.