This skill should be used when the user wants a technical interview preparation roadmap, coding interview study plan, or DSA practice plan tailored to a specific company and role. Trigger phrases include "technical interview roadmap", "coding interview prep for", "DSA roadmap for", "DSA study plan", "leetcode prep for", "what problems should I practice for", "interview study plan", "prep me for the technical rounds", "technical prep for", "what should I study for", "coding prep plan", "roadmap from this JD", "prep me for this role [URL]", or providing a JD URL with a request for technical interview preparation.
`npx claudepluginhub luqmannurhakimbazman/ashford`

This skill uses the workspace's default tool permissions.
Generate a company-specific technical interview study plan from a JD URL or pasted job description. Extracts DSA-relevant signals directly from the JD, researches the company's engineering domain, consults the leetcode-teacher learner profile, and outputs a curated LeetCode problem list with phased study timeline. Output goes to `hojicha/<company>-<role>-resume/technical-roadmap.md`.
- Pattern taxonomy: `references/frameworks/problem-patterns.md` from the leetcode-teacher skill. Use the exact pattern names: Two Pointers, Sliding Window, Binary Search, Dynamic Programming, DFS/BFS, Backtracking, Greedy, Hash Table, Heap / Priority Queue, Union-Find.
- Learner profile: `~/.local/share/claude/leetcode-teacher-profile.md`. Read it for calibration only.
- Web retrieval: `crawling_exa` as primary. Fall back to WebFetch if Exa is unavailable. If neither works, ask the user to paste the JD text directly.
- JD input: The user provides a JD URL or pastes JD text. This is the only JD input — the skill does not read notes.md or any resume-builder output for JD analysis.
If a JD URL or pasted JD text is provided: proceed to Step 1b.
If neither is provided: prompt the user to either provide a JD URL or paste the JD text directly.
Also read these optional files if they exist:
- ~/.local/share/claude/leetcode-teacher-profile.md (learner profile for calibration)
- hojicha/candidate-context.md (discovery interview context — technical background, project details)
Runs every time — this is the core input processing step.
Use `crawling_exa` (primary) or WebFetch (fallback) to retrieve the page content. Extract the job description text from the fetched content — strip navigation, footers, and unrelated page elements.

Parse the JD content for:

- Hard skills
- Domain keywords
- Role level
- Company name and role title
- Raw JD text (retained for downstream steps)
The extracted signals (hard skills, domain keywords, role level, company name, role title, raw JD text) are used directly by Step 2 and Step 8. No intermediate file is written.
Use the Hard Skills and Domain Keywords extracted in Step 1b.2. If hojicha/candidate-context.md exists, also scan it for additional technical signals (languages, frameworks, project domains) that may inform pattern prioritization.
Focus on what JD keywords imply for coding interviews:
Map these signals to DSA patterns using `references/jd-signal-mapping.md`. Example: "distributed systems" → Graph algorithms, BFS/DFS; "real-time processing" → Sliding Window, Heap.

Use Exa MCP tools (`web_search_exa`, `crawling_exa`) as primary research tools. Fall back to WebSearch/WebFetch if Exa is unavailable.
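A minimal sketch of the keyword-to-pattern lookup this mapping implies. The dictionary entries and the `patterns_for_jd` helper are illustrative assumptions; the authoritative mapping lives in references/jd-signal-mapping.md, and pattern names follow the leetcode-teacher taxonomy.

```python
# Illustrative sketch only -- the real mapping is references/jd-signal-mapping.md.
JD_SIGNAL_MAP = {
    "distributed systems": ["DFS/BFS"],  # graph algorithms
    "real-time processing": ["Sliding Window", "Heap / Priority Queue"],
    "caching": ["Hash Table"],
    "scheduling": ["Greedy", "Heap / Priority Queue"],
}

def patterns_for_jd(jd_text: str) -> list[str]:
    """Return de-duplicated patterns whose trigger keyword appears in the JD."""
    lowered = jd_text.lower()
    found: list[str] = []
    for keyword, patterns in JD_SIGNAL_MAP.items():
        if keyword in lowered:
            for p in patterns:
                if p not in found:
                    found.append(p)
    return found
```

Substring matching is deliberately crude here; in practice the mapping file's richer signal definitions would drive this step.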
Query strategy (max 5 queries):

- "<company> engineering blog" — tech stack, problem domains, engineering culture
- "<company>" site:github.com — languages, open-source projects, infrastructure choices
- "<company> tech stack" — confirm and expand the technology picture
- "<company> <role> technical interview" — public interview process information

Extract core engineering challenges (Critical Rule 8). From the research, identify the 3-5 core engineering problems the company's team solves daily.
These engineering challenges drive Step 5 topic prioritization and Step 6b problem selection. If research is thin, infer challenges from the company archetype and JD signals.
Hard ban: No Glassdoor, Blind, LeetCode Discuss company tags, or any paywalled source. If a search result comes from a banned source, skip it.
Thin research fallback: If fewer than 3 substantive results are found, use `references/company-archetypes.md` to select the best-fit company archetype as a proxy.

No web search fallback: If no web search tools are available at all (neither Exa nor WebSearch/WebFetch), skip live research entirely. Rely on JD signals (Step 2) and company archetypes (`references/company-archetypes.md`). Note prominently in the output that no live research was performed and recommendations are based on JD analysis and archetype matching only.
Read ~/.local/share/claude/leetcode-teacher-profile.md if it exists. Extract:
If the profile does not exist: skip calibration and fall back to the role-level difficulty defaults in the Quick Reference.
Silent calibration: Use the profile to adjust difficulty and topic priority internally. Do not dump raw profile contents into the output. Reference specific observations only when they directly inform a recommendation (e.g., "Linked list problems are prioritized because pointer mechanics are a tracked weakness").
Synthesize Steps 2-4 into a prioritized topic list. Reason backward from the company's engineering challenges (Critical Rule 8): what does the company build → what concepts do their engineers use daily → what DSA patterns test those concepts → which problems exercise those patterns?
Each topic gets:
Use references/domain-topic-mapping.md for domain → topic mapping. Use the core engineering challenges from Step 3 as the primary driver for Tier 1 selection — domain-topic-mapping is supplementary context, not the sole source.
Pattern names MUST match the leetcode-teacher taxonomy. See the Quick Reference section below for the canonical list.
Note: Stack / Monotonic Stack problems are also included in the curated bank. These are not a top-level pattern in the leetcode-teacher taxonomy but appear frequently in interviews. Topological Sort is classified under DFS/BFS.
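Because labels must match the taxonomy exactly, a small guard can catch drift before the roadmap is written. A sketch, assuming the canonical list from the Quick Reference; `invalid_patterns` is a hypothetical helper name:

```python
# Canonical leetcode-teacher pattern names (see Quick Reference table).
CANONICAL_PATTERNS = {
    "Two Pointers", "Sliding Window", "Binary Search", "Dynamic Programming",
    "DFS/BFS", "Backtracking", "Greedy", "Hash Table",
    "Heap / Priority Queue", "Union-Find",
}

def invalid_patterns(names):
    """Return labels that are not canonical top-level patterns.
    Topological Sort must be filed under DFS/BFS; Stack / Monotonic Stack
    appears in the curated bank but is not a top-level pattern label."""
    return [n for n in names if n not in CANONICAL_PATTERNS]
```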
This step provides supplementary signal when available. The roadmap's core value comes from JD analysis (Step 2), company research (Step 3), and learner calibration (Step 4). If enrichment fails, proceed without it — the roadmap quality is not meaningfully affected.
Before selecting from the curated bank, attempt to fetch company-specific problem frequency data:
Use `references/company-slug-map.md` to look up the slug (e.g., "Goldman Sachs" → goldman-sachs, "Meta" → meta). If the company isn't in the map, derive the slug by lowercasing and replacing spaces/special chars with hyphens.

Use WebFetch to retrieve:

https://raw.githubusercontent.com/snehasishroy/leetcode-companywise-interview-questions/master/<slug>/thirty-days.csv
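The slug fallback described above (lowercase, hyphenate spaces and special characters) can be sketched as:

```python
import re

def company_slug(name: str) -> str:
    """Fallback slug when a company is missing from references/company-slug-map.md:
    lowercase, then collapse runs of non-alphanumeric characters into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```

This matches the map's own examples (e.g. "Goldman Sachs" → goldman-sachs). Companies with unusual punctuation or accents may still need explicit entries in the slug map rather than this rule.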
If the fetch fails (404, empty, or network error), skip enrichment entirely and proceed to 6b with the curated bank only.

The CSV columns are `ID,URL,Title,Difficulty,Acceptance %,Frequency %`. Extract (ID, Title, Difficulty, Frequency %) from each row, skipping the header. Sort by Frequency % descending so the most-asked problems are considered first. Match the sorted rows against `references/curated-problem-bank.md` by LeetCode number; a frequent problem that is not in the bank is listed as company-frequent (untagged) with its difficulty and frequency%.

In the output, note the data source: "Company problem frequency data: sourced from public GitHub dataset (last 30 days / last 6 months)." If enrichment failed, note instead: "Company-specific frequency data: not available for <company>. This is normal — the public dataset covers ~40 major companies. Problem selection uses JD signals, company archetype, and curated bank."

Select 15-25 specific LeetCode problems. Use both `references/curated-problem-bank.md` and the enrichment data (if available) from Step 6a.
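The CSV parsing and sorting described in Step 6a can be sketched as follows. The column layout is the one stated above; `parse_frequency_csv` is a hypothetical helper name.

```python
import csv
import io

def parse_frequency_csv(text: str):
    """Parse thirty-days.csv: the header is consumed by DictReader,
    each row yields (ID, Title, Difficulty, Frequency %), and results
    are sorted by frequency descending so the most-asked come first.
    Assumes columns: ID,URL,Title,Difficulty,Acceptance %,Frequency %."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        rows.append((
            int(rec["ID"]),
            rec["Title"],
            rec["Difficulty"],
            float(rec["Frequency %"].rstrip("%")),
        ))
    rows.sort(key=lambda r: r[3], reverse=True)  # most-asked first
    return rows
```

If the real dataset's header ever differs from this assumed layout, the `rec[...]` keys would need adjusting; treat a `KeyError` the same as a failed fetch and skip enrichment.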
Selection priority (highest → lowest):
Additional selection criteria (applied within each priority level):
Every problem gets:
Organize the curated problems into 3 phases:
Phase 1: Foundations
Phase 2: Core Depth
Phase 3: Edge Sharpening
Timeline: Work backward from interview date if known. If no date is provided, default to a 2-3 week plan. Adjust phase durations proportionally.
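One way to back-schedule the three phases, as a sketch: split the available days proportionally. The 40/40/20 split and the 17-day default are illustrative assumptions, not values this skill prescribes.

```python
from datetime import date

# Assumed phase proportions (not prescribed by the skill).
PHASE_RATIOS = {"Foundations": 0.4, "Core Depth": 0.4, "Edge Sharpening": 0.2}

def phase_days(today: date, interview=None) -> dict:
    """Work backward from the interview date; default to ~2-3 weeks (17 days)."""
    total = (interview - today).days if interview else 17
    return {name: max(1, round(total * r)) for name, r in PHASE_RATIOS.items()}
```

The `max(1, ...)` floor keeps every phase non-empty even on very short runways, which matches the intent of adjusting phase durations proportionally rather than dropping a phase.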
Write technical-roadmap.md to hojicha/<company>-<role>-resume/:
hojicha/<company>-<role>-resume/
technical-roadmap.md # Generated by Step 8
Follow the output template in references/output-template.md.
hojicha/<company>-<role>-resume/technical-roadmap.md
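A sketch of the output-path construction, assuming company and role names are slugified by lowercasing and hyphenating (the same convention the examples follow); `roadmap_path` is a hypothetical helper name.

```python
import re

def roadmap_path(company: str, role: str) -> str:
    """Build hojicha/<company>-<role>-resume/technical-roadmap.md from free text."""
    def slug(s: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    return f"hojicha/{slug(company)}-{slug(role)}-resume/technical-roadmap.md"
```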
Examples:
- hojicha/google-ml-engineer-resume/technical-roadmap.md
- hojicha/stripe-backend-engineer-resume/technical-roadmap.md
- hojicha/citadel-quantitative-developer-resume/technical-roadmap.md

Pattern names must match exactly:
| Pattern | Key Signals |
|---|---|
| Two Pointers | Sorted arrays, pair finding, palindromes |
| Sliding Window | Contiguous subarray/substring, "at most K" |
| Binary Search | Sorted input, monotonic condition, "find minimum X" |
| Dynamic Programming | Overlapping subproblems, "number of ways", "minimum cost" |
| DFS/BFS | Tree/graph traversal, connected components, shortest path |
| Backtracking | "All combinations/permutations", constraint satisfaction |
| Greedy | Local optimal → global optimal, interval scheduling |
| Hash Table | O(1) lookup, frequency counting, complement finding |
| Heap / Priority Queue | "Kth largest/smallest", merge K sorted, streaming |
| Union-Find | Connected components, cycle detection, dynamic connectivity |
Note: Stack / Monotonic Stack problems (e.g., Valid Parentheses, Daily Temperatures) are also included in the curated problem bank. These are not a top-level pattern in the leetcode-teacher taxonomy but appear frequently in interviews and should be selected when JD signals or company archetypes indicate relevance. Topological Sort is classified under DFS/BFS.
| Role Level | JD Signals | Minimum Distribution (15-25 problems) | Competitive Firm Uplift |
|---|---|---|---|
| Junior / Entry | "0-2 years", "new grad", "entry-level", "associate" | 5 Easy + 12 Medium + 3 Hard | +2-3 additional Hard problems |
| Mid | "3-5 years", "engineer II", "software engineer" | 3 Easy + 14 Medium + 3 Hard | +2-4 additional Hard problems |
| Senior | "5+ years", "senior", "lead", "staff", "principal" | 0 Easy + 12 Medium + 8 Hard | +2-3 additional Hard problems |
These are minimums. If OA ground truth is available (user reports actual problems from the company's assessment), override with observed difficulty. Real OAs routinely exceed expected difficulty regardless of role level — companies use shared assessment platforms (HackerRank, Codility, CodeSignal) with problem pools not calibrated to role level.
Competitive firms are archetypes with ≤20% Easy-Medium in their difficulty distribution: FAANG / Big Tech, Quant / HFT / Prop Trading, AI Labs / ML-First Companies, Government / Defense Tech. Check references/company-archetypes.md for the archetype's difficulty distribution.
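The role-level minimums and competitive uplift in the table above can be sketched as a lookup. The exact uplift values chosen here (one representative value from each "+2-3" / "+2-4" range) are illustrative assumptions.

```python
# Minimum difficulty distributions from the table above.
DISTRIBUTIONS = {
    "junior": {"Easy": 5, "Medium": 12, "Hard": 3},
    "mid":    {"Easy": 3, "Medium": 14, "Hard": 3},
    "senior": {"Easy": 0, "Medium": 12, "Hard": 8},
}
# Representative picks from the uplift ranges (+2-3, +2-4, +2-3) -- assumption.
HARD_UPLIFT = {"junior": 2, "mid": 3, "senior": 2}

def target_distribution(level: str, competitive: bool) -> dict:
    """Minimum counts for the 15-25 problem list; add Hard for competitive firms."""
    dist = dict(DISTRIBUTIONS[level])
    if competitive:
        dist["Hard"] += HARD_UPLIFT[level]
    return dist
```

Remember these are floors: observed OA ground truth overrides the computed distribution.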
- Exa MCP tools (`web_search_exa`, `crawling_exa`) — primary
- WebSearch / WebFetch — fallback if Exa unavailable
- `references/company-archetypes.md` — proxy when research is thin