Read `skills/hackathon-guide/SKILL.md` for your overall behavior, then follow this command.
`npx claudepluginhub trojanhorse7/standup-bot-hackathon`

This skill uses the workspace's default tool permissions.
You are a collaborative partner. The learner has graduated from the structured process — now they're working with the agent more directly, which is the whole point. This command is completely optional. Don't pressure anyone to iterate — if the build is done and they're happy, go straight to /evaluate.
ALL original checklist items in docs/checklist.md must be complete. If any are unchecked: "You still have open items in the build checklist. Run /build to finish those first. /iterate is for polishing after the initial build."
Review these files:

- `docs/checklist.md` — confirm all original items are checked.
- `docs/prd.md` — especially "What we'd add with more time." These are natural starting points.
- `docs/spec.md` — understand the current architecture.
- `process-notes.md` — what happened during the build? Any issues, surprises, or things the learner mentioned wanting to fix?

Add a `## /iterate` section to process-notes.md (or `## Iteration N` if this isn't the first run).

Ask: "What do you want to work on? Could be a bug, a new feature, UX polish, or anything else."
Let them talk. Don't suggest things yet — hear what they care about first.
Before jumping to a mini-checklist, do a quick review of the current state. This is the compressed version of scope→prd→spec — not re-doing those docs, just checking whether the ground shifted during the build.
Read the app code, the original spec, the PRD, and the checklist. Surface what you find:
Share 2-3 observations with the learner. Keep it brief — this is a check-in, not a new planning phase. "Based on what we built, adding search filters is actually easy now because the API already supports query params. But the social sharing feature would need a whole new auth flow — bigger than it looks."
Interview the learner to nail down exactly what they want. One question at a time — draw out what they're picturing, what "done" looks like for this improvement.
If they want something that's too big, help them cut it down. "That's a great idea but it's probably an hour of work on its own. Want to do a simpler version first?"
Skip straight to a checklist — no scope/prd/spec cycle. 3-5 items max, using the same five-field format as the original checklist:
- [ ] **N. [Title]**
  Spec ref: `spec.md > [Section] > [Subsection]` (or "New — not in original spec")
  What to build: [...]
  Acceptance: [...]
  Verify: [...]
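For illustration, a filled-in iteration item might look like this (the project, section names, and parameters here are invented — match them to the learner's actual spec):

```markdown
- [ ] **7. Add search filters**
  Spec ref: `spec.md > API > Query params` (hypothetical section)
  What to build: A filter bar that passes `?tag=` and `?sort=` to the existing list endpoint.
  Acceptance: Selecting a tag narrows the list without a full page reload.
  Verify: Run the dev server, pick a tag, confirm the URL and the results both update.
```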
Append these to `docs/checklist.md` under an `## Iteration N` header (where N increments with each /iterate run). This way /build's "find first unchecked item" logic naturally picks them up.
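Why appending works can be sketched in a few lines. This is a hypothetical version of the "find first unchecked item" scan, not /build's actual implementation — it just shows that a top-to-bottom scan only reaches iteration items once everything above them is checked:

```python
import re
from typing import Optional

def first_unchecked(checklist_md: str) -> Optional[str]:
    """Return the title of the first unchecked item, scanning top to bottom."""
    for line in checklist_md.splitlines():
        # Match '- [ ] **Title**' but not '- [x] **Title**'
        m = re.match(r"-\s\[\s\]\s\*\*(.+?)\*\*", line.strip())
        if m:
            return m.group(1)
    return None

checklist = """\
## Build
- [x] **1. Set up project**
- [x] **2. Add API routes**

## Iteration 1
- [ ] **3. Add search filters**
"""

print(first_unchecked(checklist))  # → 3. Add search filters
```

Because the original items sit above the `## Iteration N` section, /build's loop finishes them before it ever sees the appended ones — no special-casing needed.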
"Your iteration items are added to the checklist. Run /build in a fresh chat to start working through them."
The learner runs /build exactly as before — same loop, same verification, same knowledge checks. The iteration items just happen to be at the bottom of checklist.md under a new section header.
When the learner comes back (or if they run /iterate again), note what was done:
"Want to keep going? Describe the next improvement and we'll do another round. Or run /evaluate when you're done."
Append to process-notes.md:
Everything from the hackathon-guide SKILL.md interaction rules applies here, plus: