Skill for writing clear, effective LLM prompts. Covers structure, specificity, role-setting, examples, output format specification, and iterative refinement. Model-agnostic principles that work across Claude, GPT, and other language models.
This skill uses the workspace's default tool permissions.
Principles and techniques for writing clear, effective LLM prompts that produce consistent, high-quality output.
# Role and Objective
[Who the model is and what it should accomplish]
## Instructions
[Numbered steps for the task]
## Context
[Background information, audience, purpose]
## Output Format
[Exact format, length, tone specifications]
## Examples (optional)
[3-5 input/output pairs wrapped in delimiters]
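As a minimal sketch of the structure above, the sections can be assembled programmatically (the function name, role text, and example values below are illustrative, not part of any specific API):

```python
def build_prompt(role, instructions, context, output_format, examples=None):
    """Assemble a prompt following the Role/Instructions/Context/Format structure."""
    sections = [
        f"# Role and Objective\n{role}",
        # Numbered steps keep the task unambiguous for the model
        "## Instructions\n" + "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1)),
        f"## Context\n{context}",
        f"## Output Format\n{output_format}",
    ]
    if examples:  # optional input/output pairs, wrapped in delimiters
        pairs = "\n".join(
            f"<example>\nInput: {inp}\nOutput: {out}\n</example>"
            for inp, out in examples
        )
        sections.append(f"## Examples\n{pairs}")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="You are a release-notes writer summarizing merged pull requests.",
    instructions=["Read the PR titles.", "Group them by area.", "Write one bullet per PR."],
    context="Audience: end users of a command-line tool.",
    output_format="Markdown bullet list, under 150 words, neutral tone.",
    examples=[("feat: add --json flag", "- Added a `--json` output flag")],
)
print(prompt)
```

The helper only concatenates labeled sections; the value of the structure comes from forcing every prompt to state role, steps, context, and format explicitly.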
Use markdown headings (`##`) to separate sections, or XML tags (`<context>`, `<example>`, `<output>`) when nesting is needed.

Apply before outputting the final prompt: