From thinking-frameworks-skills
Transforms vague prompts into structured ones with roles, task decomposition, output formats, constraints, and quality checks. Useful for inconsistent AI outputs, multi-step reasoning, or safety guardrails.
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
This skill uses the workspace's default tool permissions.
Copy this checklist and track your progress:
Meta-Prompt Engineering Progress:
- [ ] Step 1: Analyze current prompt
- [ ] Step 2: Define role and goal
- [ ] Step 3: Add structure and steps
- [ ] Step 4: Specify constraints
- [ ] Step 5: Add quality checks
- [ ] Step 6: Test and iterate
Step 1: Analyze current prompt
Identify weaknesses: vague instructions, missing constraints, no structure, inconsistent outputs. Document specific failure modes. Use resources/template.md as a starting structure.
Step 2: Define role and goal
Specify who the AI is (expert, assistant, critic) and what success looks like. Clear persona and objective improve output quality. See Common Patterns for role examples.
Step 3: Add structure and steps
Break complex tasks into numbered steps or sections. Define expected output format (JSON, markdown, sections). For advanced structuring techniques, see resources/methodology.md.
Step 4: Specify constraints
Add explicit limits: length, tone, content restrictions, format requirements. Include domain-specific rules. See Guardrails for constraint patterns.
Step 5: Add quality checks
Include self-evaluation criteria, chain-of-thought requirements, and explicit uncertainty expression. Build in prevention for known failure modes.
Step 6: Test and iterate
Run the prompt multiple times, then measure consistency and quality using resources/evaluators/rubric_meta_prompt_engineering.json. Refine based on observed failure modes.
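Step 6 can be sketched as a small consistency check. This is a minimal illustration, not part of the skill itself: `run_model` is a hypothetical stand-in for your actual model call, and the scoring method (majority-output fraction) is one simple choice among many.

```python
# Sketch of Step 6: run a prompt repeatedly and measure output consistency.
# `run_model` is a hypothetical stand-in for a real model call.
from collections import Counter

def consistency_score(run_model, prompt, trials=5):
    """Fraction of trials that produced the most common output."""
    outputs = [run_model(prompt) for _ in range(trials)]
    most_common_count = Counter(outputs).most_common(1)[0][1]
    return most_common_count / trials

# Example with a deterministic stub in place of a real model:
score = consistency_score(lambda p: "stable answer", "Summarize X", trials=5)
```

A score near 1.0 means the prompt yields consistent outputs; low scores are a signal to tighten structure or constraints before the next iteration.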
Role Specification Pattern:
You are a [role] with expertise in [domain].
Your goal is to [specific objective] for [audience].
You should prioritize [values/principles].
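The role pattern above can be filled mechanically. A minimal sketch, with all concrete values (role, domain, objective, audience, principles) chosen purely for illustration:

```python
# Filling the Role Specification Pattern with concrete, illustrative values.
role_spec = (
    "You are a {role} with expertise in {domain}.\n"
    "Your goal is to {objective} for {audience}.\n"
    "You should prioritize {principles}."
).format(
    role="senior technical editor",
    domain="API documentation",
    objective="rewrite draft docs for clarity",
    audience="junior developers",
    principles="accuracy and concise wording",
)
print(role_spec)
```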
Task Decomposition Pattern:
To complete this task:
1. [Step 1 with clear deliverable]
2. [Step 2 building on step 1]
3. [Step 3 synthesizing 1 and 2]
4. [Final step with output format]
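The decomposition template expands naturally into a numbered prompt. A short sketch, with the step texts invented purely as an example of a fact-checking task:

```python
# Expanding the Task Decomposition Pattern into a concrete prompt (illustrative).
steps = [
    "List the key claims in the source text",
    "Check each claim against the provided references",
    "Combine steps 1 and 2 into a verified summary",
    "Return the summary as a markdown bullet list",
]
decomposition = "To complete this task:\n" + "\n".join(
    f"{i}. {step}" for i, step in enumerate(steps, start=1)
)
print(decomposition)
```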
Constraint Specification Pattern:
Requirements:
- [Format constraint]: Output must be [structure]
- [Length constraint]: [min]-[max] [units]
- [Tone constraint]: [style] appropriate for [audience]
- [Content constraint]: Must include [required elements] / Must avoid [prohibited elements]
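Constraint blocks like the one above can be kept as data and rendered into any prompt. A minimal sketch; the specific constraints shown are hypothetical examples, not requirements of this skill:

```python
# Rendering the Constraint Specification Pattern from a dict (illustrative).
constraints = {
    "Format": "Output must be valid JSON with keys 'title' and 'body'",
    "Length": "50-150 words",
    "Tone": "Plain, direct style appropriate for non-experts",
    "Content": "Must include a one-line summary / Must avoid marketing language",
}
constraint_block = "Requirements:\n" + "\n".join(
    f"- {name} constraint: {rule}" for name, rule in constraints.items()
)
print(constraint_block)
```

Keeping constraints as structured data makes it easy to reuse, audit, or toggle individual rules across prompts.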
Quality Check Pattern:
Before finalizing, verify:
- [ ] [Criterion 1 with specific check]
- [ ] [Criterion 2 with measurable standard]
- [ ] [Criterion 3 with failure mode prevention]
If any check fails, revise before responding.
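The "revise before responding" rule can be sketched as a loop that re-runs failed checks. Everything here is illustrative: `revise` is a hypothetical stand-in for asking the model to fix the listed failures, and the two checks are made-up examples.

```python
# Sketch of the Quality Check Pattern as a revise-until-pass loop.
# `checks` maps a criterion name to a predicate over the draft;
# `revise` is a hypothetical stand-in for a model revision call.
def finalize(draft, checks, revise, max_rounds=3):
    for _ in range(max_rounds):
        failed = [name for name, ok in checks.items() if not ok(draft)]
        if not failed:
            return draft
        draft = revise(draft, failed)
    return draft

checks = {
    "has summary": lambda d: d.strip().startswith("Summary:"),
    "under 200 chars": lambda d: len(d) <= 200,
}
result = finalize("no summary here", checks,
                  revise=lambda d, failed: "Summary: " + d)
```

Capping the loop with `max_rounds` prevents an unrevisable draft from cycling forever.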
Few-Shot Pattern:
Here are examples of good outputs:
Example 1:
Input: [example input]
Output: [example output with annotation]
Example 2:
Input: [example input]
Output: [example output with annotation]
Now apply the same approach to:
Input: [actual input]
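The few-shot pattern above is easy to assemble programmatically. A minimal sketch, with arithmetic examples invented purely for illustration:

```python
# Assembling the Few-Shot Pattern from (input, output) pairs (illustrative).
def few_shot_prompt(examples, actual_input):
    parts = ["Here are examples of good outputs:"]
    for i, (inp, out) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nInput: {inp}\nOutput: {out}")
    parts.append(f"Now apply the same approach to:\nInput: {actual_input}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    [("2 + 2", "4"), ("3 + 5", "8")],
    "7 + 6",
)
print(prompt)
```

Generating the block from pairs keeps example numbering consistent and makes it trivial to swap examples while testing for robustness.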
Guardrails:
- Avoid over-specification
- Test for robustness
- Prevent common failures
- Balance specificity and flexibility
- Iterate based on failures
Resources:
- resources/template.md - Structured prompt template with all components
- resources/methodology.md - Advanced techniques for complex prompts
- resources/evaluators/rubric_meta_prompt_engineering.json - Quality criteria for prompt evaluation
Output:
- meta-prompt-engineering.md in current directory
Success Criteria:
Quick Prompt Improvement Checklist:
Common Improvements: