Creates, reviews, evolves, repairs, and ports system prompts across LLM platforms using the Playbook format, a 10-criterion evaluation rubric, and context engineering principles. Activates when the user says "create a system prompt", "review this prompt", "optimize this prompt", "port this prompt to GPT", or "fix my prompt". [EXPLICIT] Also triggers on mentions of prompt engineering, prompt evaluation, prompt porting, or Playbook format. [EXPLICIT] Use this skill even if the user just pastes a prompt without instructions — it defaults to review mode. [EXPLICIT]
This skill ships with the following bundled files:

- agents/guardian.md
- agents/lead.md
- agents/specialist.md
- agents/support.md
- evals/evals.json
- knowledge/body-of-knowledge.md
- knowledge/knowledge-graph.md
- prompts/meta.md
- prompts/primary.md
- prompts/variations/deep.md
- prompts/variations/quick.md
- references/domain-knowledge.md
- templates/output.docx.md
- templates/output.html

Turn vague assistant ideas into structured, high-performance system prompts — portable across Claude, GPT, and Gemini. [EXPLICIT]
| Mode | Trigger | Action |
|---|---|---|
| Create | "Create a prompt for...", "I need an assistant that..." | Full forge cycle: capture, draft, evaluate, refine, deliver |
| Review | "Review this prompt", "Is this any good?" | Evaluate against rubric, deliver scorecard + prioritized fixes |
| Evolve | "Make this better", "Optimize this prompt" | Identify weaknesses, apply targeted improvements |
| Repair | "This isn't working", "The AI keeps doing X" | Diagnose failure pattern, apply surgical fix |
| Port | "Convert this for Claude/GPT/Gemini" | Adapt format, constraints, and features for target platform |
Default behavior: Generate first, confirm after. Produce a strong v1, then iterate on feedback. Do not ask 20 questions before writing.
For deep explanations and examples, read references/design-principles.md. [EXPLICIT]
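The mode table above is effectively a dispatch rule: match the user's phrasing against known triggers, and fall back to review when a prompt is pasted with no instructions. A minimal sketch of that dispatch, assuming hypothetical names and a simplified phrase list (the skill itself matches intent, not literal substrings):

```python
# Illustrative mode dispatch for the table above. MODE_TRIGGERS and
# detect_mode are hypothetical names; the phrase list is abbreviated.

MODE_TRIGGERS = {
    "create": ["create a prompt", "i need an assistant"],
    "review": ["review this prompt", "is this any good"],
    "evolve": ["make this better", "optimize this prompt"],
    "repair": ["isn't working", "keeps doing"],
    "port":   ["convert this for", "port this"],
}

def detect_mode(request: str) -> str:
    """Match the first trigger phrase; default to review, the skill's
    fallback when a prompt is pasted without instructions."""
    text = request.lower()
    for mode, phrases in MODE_TRIGGERS.items():
        if any(p in text for p in phrases):
            return mode
    return "review"
```

Note the default return: it encodes the "generate/evaluate first, ask later" behavior rather than forcing the user to name a mode.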
Universal output template for generated prompts:
# [Assistant Name] — v[X.Y]
## Role & Archetype
[Composite expert identity: domain + methodology + communication style]
## Objective
[What the user achieves — 1-2 sentences]
## Parameters
- Model: [Target model(s)]
- Temperature: [Recommended setting]
- Context window usage: [Strategy]
## Interaction Flow
### Phase 1: Discovery
[How the assistant gathers context]
### Phase 2: Execution
[How it processes and produces output]
### Phase 3: Delivery
[How it formats and presents results]
## Constraints
[Hard boundaries — what the assistant must NOT do]
## Key Questions
[3-5 questions for ambiguous context]
## Output Template
[Exact format with placeholders]
## Self-Correction Triggers
[Patterns that signal recalibration]
For per-section guidance, read references/playbook-template.md. [EXPLICIT]
Every prompt is scored 1-10 on each dimension. Target: 8+ on all ten for production quality. [EXPLICIT]
| # | Criterion | Measures |
|---|---|---|
| 1 | Foundation | Clear archetype, objective, and constraints? |
| 2 | Accuracy | All claims, frameworks, and techniques correct? |
| 3 | Quality | Writing professional, precise, filler-free? |
| 4 | Density | Maximum value per token? |
| 5 | Simplicity | Non-expert could understand structure? |
| 6 | Clarity | Instructions unambiguous? One interpretation? |
| 7 | Precision | Constraints specific enough to enforce? |
| 8 | Depth | Handles edge cases, failures, advanced scenarios? |
| 9 | Coherence | All sections reinforce each other? |
| 10 | Value | User gets meaningfully better results? |
For detailed scoring and repair protocols, read references/evaluation-rubric.md. [EXPLICIT]
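The production gate described above (8+ on all ten criteria) can be sketched in a few lines. Names and structure here are illustrative assumptions, not part of the skill's own files:

```python
# Hypothetical sketch of the rubric gate: a prompt is production-ready
# only if every one of the ten criteria scores 8 or higher.

CRITERIA = [
    "Foundation", "Accuracy", "Quality", "Density", "Simplicity",
    "Clarity", "Precision", "Depth", "Coherence", "Value",
]

def production_ready(scorecard: dict[str, int]) -> bool:
    """True only when all ten criteria score 8 or higher."""
    return all(scorecard.get(c, 0) >= 8 for c in CRITERIA)

def weakest_criteria(scorecard: dict[str, int], n: int = 3) -> list[str]:
    """Lowest-scoring criteria first, to prioritize repairs."""
    return sorted(CRITERIA, key=lambda c: scorecard.get(c, 0))[:n]
```

Sorting by score before repairing mirrors the review mode's "scorecard + prioritized fixes" deliverable: fix the lowest dimension first.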
Modern prompt design manages everything the model sees, not just instruction text. [EXPLICIT]
| Layer | Scope | Example |
|---|---|---|
| L1: Hot | System prompt + current turn | The Playbook itself |
| L2: Warm | Uploaded docs, knowledge base | Reference PDFs, style guides |
| L3: Cold | RAG retrieval, tool outputs | Dynamic data from APIs |
Design for all three layers. A great L1 with poor L2 design underperforms. [EXPLICIT]
For context hierarchy patterns and token optimization, read references/context-engineering.md. [EXPLICIT]
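One way to picture the three layers working together: L1 and L2 are fixed and curated, while L3 is dynamic and must be trimmed to whatever room remains. A minimal sketch, assuming a character budget as a stand-in for token counting and hypothetical function and parameter names:

```python
# Illustrative assembly of the three context layers above.
# The budget heuristic (characters, cold layer trimmed last) is an
# assumption for the sketch, not an API defined by this skill.

def assemble_context(hot: str, warm: list[str], cold: list[str],
                     budget_chars: int = 8000) -> str:
    """Compose L1 (system prompt), L2 (knowledge), L3 (retrieval),
    spending any remaining budget on the cold layer last."""
    parts = [hot]                      # L1: always included verbatim
    parts += warm                      # L2: curated docs, assumed pre-trimmed
    remaining = budget_chars - sum(len(p) for p in parts)
    for chunk in cold:                 # L3: dynamic retrieval, trimmed to fit
        if len(chunk) > remaining:
            break
        parts.append(chunk)
        remaining -= len(chunk)
    return "\n\n".join(parts)
```

The ordering is the design point: a great L1 still underperforms if L2 and L3 are dumped in unbudgeted, which is exactly the failure the layer table warns about.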
| Platform | System Prompt | Knowledge Base | Tools |
|---|---|---|---|
| Claude Projects | Project instructions | Project knowledge files | MCP servers |
| ChatGPT Custom GPTs | Instructions field | Uploaded files | Actions (API) |
| Gemini Gems | System instructions | Google Docs/Drive | Extensions |
| API / Code | system parameter | RAG pipeline | Function calling |
For platform-specific limits and deployment guides, read references/platform-guides.md. [EXPLICIT]
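For the "API / Code" row, the generated Playbook goes into the request's `system` parameter while the conversation stays in `messages`. This sketch builds the payload only; the field layout follows the Anthropic Messages API shape as an assumption, and the model string is a placeholder:

```python
# Hypothetical request builder for the "API / Code" deployment row.
# Payload shape mirrors a Messages-style chat API; no network call is made.

def build_request(playbook: str, user_turn: str,
                  model: str = "claude-sonnet-4") -> dict:
    """Playbook goes in `system`; the conversation stays in `messages`."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": playbook,            # the full Playbook text
        "messages": [{"role": "user", "content": user_turn}],
    }
```

Keeping the Playbook out of `messages` matters: on most platforms the system slot is cached and weighted differently from user turns, which is why each platform column above names a distinct home for it.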
| Problem | Bad Pattern | Fix |
|---|---|---|
| Wall of text | 2000-word flat instruction | Break into Playbook sections with flow |
| Vague role | "You are helpful" | Composite archetype: domain + method + style |
| No output format | "Help users with X" | Define exact deliverable template |
| Over-constraining | 50 rules in ALL CAPS | Fewer rules; explain the why behind each constraint |
| Platform-blind | Same text everywhere | Adapt to each platform's affordances |
| No feedback loop | Static, never improved | Build self-correction triggers + versioning |
Before delivering a prompt, confirm:
- references/design-principles.md — Deep dive into the 7 principles with examples and antipatterns
- references/evaluation-rubric.md — Full scoring parameters, failure criteria, repair protocols
- references/playbook-template.md — Complete template with per-section guidance
- references/platform-guides.md — Platform-specific formatting, limits, deployment for Claude, GPT, Gemini, API
- references/context-engineering.md — Context hierarchy, token optimization, RAG integration

Author: Javier Montaño | Last updated: March 12, 2026
Example invocations:
- "Create a system prompt for..."
- "Review this prompt"
- "Port this prompt to GPT"
- "Fix my prompt"