Design, evaluate, and optimize prompts for LLMs. Covers system prompts, few-shot patterns, chain-of-thought, structured output, guardrails, meta-prompting, and prompt evaluation. Produces Zettelkasten-ready knowledge artifacts and branded deliverables. [EXPLICIT] Trigger: "prompt engineering", "system prompt", "few-shot", "chain of thought", "prompt design"
From jm-adknpx · claudepluginhub · javimontano/jm-adk-alfa

This skill is limited to using the following tools:

- agents/guardian.md
- agents/lead.md
- agents/specialist.md
- agents/support.md
- evals/evals.json
- knowledge/body-of-knowledge.md
- knowledge/knowledge-graph.md
- prompts/meta.md
- prompts/primary.md
- prompts/variations/audit.md
- prompts/variations/beginner.md
- prompts/variations/deep.md
- prompts/variations/expert.md
- prompts/variations/quick.md
- templates/output.docx.md
- templates/output.html
- templates/output.xlsx.md

> "A prompt is not a question — it is an architecture for reasoning."
Design, evaluate, and optimize prompts for any LLM. This skill covers the full lifecycle: understanding the task → selecting the right pattern (few-shot, CoT, system prompt, meta-prompt) → writing the prompt → evaluating output quality → iterating. Produces Zettelkasten-ready knowledge artifacts and branded deliverables (HTML, DOCX, XLSX). [EXPLICIT]
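The "write the prompt" step of this lifecycle can be sketched as a small template builder that follows the role-context-task-format pattern (with optional few-shot examples). The function and section names below are illustrative assumptions, not a standard API.

```python
# Minimal sketch of the role-context-task-format pattern with optional
# few-shot examples. All names here are illustrative, not a standard API.

def build_prompt(role, context, task, output_format, examples=()):
    """Assemble a structured prompt from labeled sections."""
    parts = [f"Role: {role}", f"Context: {context}"]
    # Few-shot examples go between context and task so the model sees
    # the expected input/output shape before the actual request.
    for i, (inp, out) in enumerate(examples, 1):
        parts.append(f"Example {i}:\nInput: {inp}\nOutput: {out}")
    parts.append(f"Task: {task}")
    parts.append(f"Output format: {output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a senior technical editor.",
    context="The user submits release notes for review.",
    task="Rewrite the notes for clarity and consistent tense.",
    output_format="A markdown bullet list, one item per change.",
    examples=[("fixed bug", "- Fixed a crash when saving empty files.")],
)
```

Keeping each section labeled and separated makes the prompt easy to diff and iterate on during the evaluation step.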
| Agent | Role in Triad | File |
|---|---|---|
| prompt-lead | Designs and writes the prompt | agents/lead.md |
| prompt-support | Reviews for bias, edge cases, injection risk | agents/support.md |
| prompt-guardian | Evaluates output quality, validates evidence | agents/guardian.md |
| prompt-specialist | Deep expertise in advanced patterns (meta, constitutional) | agents/specialist.md |
- knowledge/body-of-knowledge.md for the pattern catalog
- knowledge/knowledge-graph.md for related concepts

| Anti-Pattern | Why It's Bad | Do This Instead |
|---|---|---|
| "Just ask nicely" | No structure = inconsistent results | Use role-context-task-format pattern |
| Massive single prompt | Exceeds attention, dilutes focus | Decompose into chain of focused prompts |
| No examples | Model guesses output format | Add 2-3 few-shot examples |
| Ignoring the model | Claude ≠ GPT ≠ Llama | Adapt syntax to target model |
| No evaluation | "It looks right" isn't evidence | Test with diverse inputs, score metrics |
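The "test with diverse inputs, score metrics" advice in the table above can be sketched as a tiny evaluation loop. The `call_model` stub and the keyword-match metric are hypothetical placeholders; substitute your real LLM client and a metric appropriate to your task.

```python
# Sketch of a prompt evaluation loop: run diverse inputs through a model,
# score each output, and report an aggregate. `call_model` is a stand-in
# for whatever LLM client you actually use.

def call_model(prompt: str) -> str:
    # Placeholder: replace with a real API call.
    return prompt.upper()

def score(output: str, expected_keyword: str) -> float:
    """Crude metric: 1.0 if the expected keyword appears, else 0.0."""
    return 1.0 if expected_keyword.lower() in output.lower() else 0.0

def evaluate(prompt_template: str, cases: list[tuple[str, str]]) -> float:
    """Average score over (input, expected_keyword) test cases."""
    scores = [score(call_model(prompt_template.format(inp=inp)), kw)
              for inp, kw in cases]
    return sum(scores) / len(scores)

avg = evaluate("Summarize: {inp}", [("rain tomorrow", "rain"),
                                    ("stocks fell 3%", "stocks")])
```

Even a crude scored loop like this beats eyeballing: it turns "it looks right" into a number you can compare across prompt revisions.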
Related skills:

- ai-safety — Guardrails and output validation
- structured-output — JSON mode, schema-constrained generation
- context-window-management — Token budgeting for long prompts
- rag-patterns — Prompts that integrate retrieved context
- llm-evaluation — Systematic prompt evaluation methods

Resources:

- knowledge/knowledge-graph.md — Zettelkasten concept map
- knowledge/body-of-knowledge.md — Pattern catalog and references
- templates/output.html — Branded HTML prompt documentation
- templates/output.docx.md — Word document spec for prompt library
- templates/output.xlsx.md — Evaluation matrix spreadsheet

Example invocations:
| Scenario | Handling |
|---|---|
| Empty or minimal input | Request clarification before proceeding |
| Conflicting requirements | Flag conflicts explicitly, propose resolution |
| Out-of-scope request | Redirect to appropriate skill or escalate |
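The edge-case table above can be expressed as a pre-flight check run before any prompt work begins. The conflict heuristic, messages, and `in_scope` flag are illustrative assumptions, not part of the skill's actual interface.

```python
# Sketch of a pre-flight check mirroring the edge-case table:
# empty input -> clarify; contradictory requirements -> flag conflict;
# out-of-scope -> redirect. All messages are illustrative.

def preflight(request: str, requirements: list[str],
              in_scope: bool = True) -> str:
    if not request.strip():
        return "clarify: request is empty; ask the user for details"
    # Toy conflict detection: "not X" contradicts a bare "X" requirement.
    conflicts = [r for r in requirements
                 if r.startswith("not ") and r[4:] in requirements]
    if conflicts:
        return f"conflict: {conflicts} contradicts another requirement"
    if not in_scope:
        return "redirect: route to the appropriate skill"
    return "ok"
```

Running a check like this first keeps the lead agent from designing against ambiguous or contradictory specs.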