Evaluates prompt quality and optimizes prompts using 58 techniques such as chain-of-thought (CoT), few-shot learning, and role-play. Useful for improving clarity, specificity, and structure, or for generating variations.
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-2 --plugin sundial-org-awesome-openclaw-skills-4

This skill uses the workspace's default tool permissions.
Evaluate prompt quality, provide targeted improvement suggestions, and generate optimized versions using 58 proven prompting techniques. This skill systematically analyzes prompts across multiple quality dimensions and applies evidence-based optimization patterns.
For most optimization tasks, follow this workflow:
When a user asks to optimize or evaluate a prompt:
Read references/quality-framework.md to understand evaluation dimensions:
Evaluate the prompt against each dimension:
For each quality dimension:
1. Identify strengths (what works well)
2. Identify weaknesses (what's missing or unclear)
3. Rate quality (Poor/Fair/Good/Excellent)
4. Note specific improvement opportunities
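The four-step loop above can be sketched in code. This is an illustrative sketch only: the dimension name, the rating heuristic, and the report fields are hypothetical, not the skill's actual framework (the real evaluation dimensions live in references/quality-framework.md).

```python
# Illustrative per-dimension evaluation loop. The rating heuristic and
# report fields are hypothetical, not the skill's actual framework.
RATINGS = ("Poor", "Fair", "Good", "Excellent")

def evaluate_dimension(name, strengths, weaknesses):
    """Rate one quality dimension from its strengths and weaknesses."""
    # Simple heuristic: more strengths than weaknesses pushes the rating up.
    score = max(0, min(3, 1 + len(strengths) - len(weaknesses)))
    return {
        "dimension": name,
        "strengths": strengths,
        "weaknesses": weaknesses,
        "rating": RATINGS[score],
        # Each weakness becomes a specific improvement opportunity.
        "improvements": [f"Address: {w}" for w in weaknesses],
    }

report = evaluate_dimension(
    "Specificity",
    strengths=["names the target audience"],
    weaknesses=["no output length given", "no format specified"],
)
print(report["rating"])  # Poor
```

Running the same loop over every dimension yields a structured report that feeds directly into technique selection.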
Load references/prompt-techniques.md and identify techniques that address the identified weaknesses.
Example mapping:
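One way to express such a mapping is a lookup table from weakness to candidate techniques. The entries below are hypothetical illustrations; the authoritative catalog is references/prompt-techniques.md.

```python
# Hypothetical weakness-to-technique mapping; the real catalog lives in
# references/prompt-techniques.md.
TECHNIQUE_MAP = {
    "vague instructions": ["role-play", "explicit constraints"],
    "no examples of desired output": ["few-shot learning"],
    "multi-step reasoning required": ["chain-of-thought (CoT)"],
    "missing background": ["context injection"],
}

def select_techniques(weaknesses):
    """Collect techniques that address the identified weaknesses, de-duplicated."""
    selected = []
    for w in weaknesses:
        for t in TECHNIQUE_MAP.get(w, []):
            if t not in selected:
                selected.append(t)
    return selected

print(select_techniques(["vague instructions", "no examples of desired output"]))
# ['role-play', 'explicit constraints', 'few-shot learning']
```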
Create a structured optimization plan:
Apply the selected techniques to create an improved version:
For common optimization scenarios, use these proven patterns:
When prompt lacks clarity:
When prompt is too broad:
When prompt lacks background:
When prompt provides vague guidance:
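As an illustration of the first pattern above (a prompt that lacks clarity), here is a hypothetical before/after rewrite; it is an example of the kind of output optimization produces, not actual output from this skill's scripts.

```python
# Hypothetical before/after for the "lacks clarity" pattern.
before = "Write something about climate change."

after = (
    "You are an environmental science writer.\n"           # role-play
    "Write a 500-word explainer on how climate change "    # explicit scope
    "affects coastal cities, for a general audience.\n"
    "Structure: problem, two concrete examples, outlook."  # explicit structure
)
```

The rewrite adds a role, a concrete topic and length, an audience, and an output structure, each targeting one weakness of the original.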
For consistent, repeatable evaluation:
python3 scripts/evaluate.py "Your prompt here"
This provides:
For automatic optimization generation:
python3 scripts/optimize.py "Your prompt here" --techniques "few-shot,cot"

This generates:
Note: Scripts should be used for automation or when you need deterministic results. For complex optimization tasks, use the manual workflow for more nuanced analysis.
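For automation, the two bundled scripts can be driven programmatically. The sketch below assumes they are invoked from the skill's root directory and print their results to stdout; the wrapper names are hypothetical.

```python
# Sketch of automating the bundled scripts; assumes they run from the
# skill's root directory and write results to stdout.
import subprocess

def build_command(script, prompt, *extra):
    """Assemble the argv list for a bundled script."""
    return ["python3", f"scripts/{script}", prompt, *extra]

def run_skill_script(script, prompt, *extra):
    """Invoke a bundled script and capture its output; raises on failure."""
    result = subprocess.run(
        build_command(script, prompt, *extra),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# evaluation = run_skill_script("evaluate.py", "Your prompt here")
# optimized = run_skill_script(
#     "optimize.py", "Your prompt here", "--techniques", "few-shot,cot"
# )
```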
Complete catalog of 58 prompting techniques including:
Load this when you need to identify applicable techniques for a specific optimization task.
Detailed evaluation framework with:
Load this before any evaluation task to ensure consistent assessment.
Collection of proven optimization patterns including:
Load this when optimizing common prompt types (essays, code generation, analysis, etc.).
This skill should be activated when: