From contextstellar
Analyze recent prompt patterns and suggest optimizations. Use when the user wants to improve their prompting patterns, reduce token costs, or see what the RLM flywheel has learned about their project.
npx claudepluginhub sunnypatneedi/claude-code-contextstellar

This skill uses the workspace's default tool permissions.
Scores prompts across 7 dimensions and restructures them using 8 Anthropic techniques such as XML tags and chain-of-thought. Auto-triggers on PreToolUse for unstructured subagent prompts; can be run manually via /reprompt-orator.
Optimizes prompts for AI performance via chain-of-thought, few-shot examples, token reduction, RAG integration, and model-specific tuning for models such as GPT-4 or Claude. Activates on improve/refine/engineering requests.
Evaluates prompt quality and optimizes it using 58 techniques such as CoT, few-shot learning, and role-play. Useful for improving clarity, specificity, and structure, or for generating variations.
Analyze the user's recent scoring patterns and provide actionable optimization recommendations. Start by fetching the project's scoring stats:
# Fetch per-project scoring stats; CONTEXTSTELLAR_BASE_URL falls back to the public endpoint.
curl -s "${CONTEXTSTELLAR_BASE_URL:-https://contextstellar.com}/api/v1/hooks/stats?projectId=${CONTEXTSTELLAR_PROJECT_ID}" \
  -H "Authorization: ${CONTEXTSTELLAR_API_KEY}"
Organize the recommendations into these sections:
Grade Distribution Analysis:
RLM Learned Weights (if available):
Token Utilization (if low):
Structural Clarity (if low): suggest XML section tags (e.g., <context>, <instructions>, <output_format>); see the sketch after this list.
Specificity (if low):
Content Density (if low):
Cache-Friendliness (if low):
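As referenced under Structural Clarity, a hedged illustration of the XML-tag restructuring; the tag names come from the list above, while the sample prompt and its wording are hypothetical, not the skill's actual transform:

# Illustrative only: wrap an unstructured prompt in the suggested XML sections.
# The prompt text below is a made-up example.
prompt="Summarize the attached log and list any errors."
cat <<EOF
<context>Log-analysis request for this project</context>
<instructions>${prompt}</instructions>
<output_format>Bulleted error list, then a one-line summary</output_format>
EOF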
If $ARGUMENTS contains a prompt, score it and show before/after with specific edits.
End by noting the user's trend direction and encouraging continued improvement.