From skillkit-creative
Generates diverse creative content (blog posts, captions, stories, and ideas) using Verbalized Sampling to increase output variation 1.6-2.1× for brainstorming or repetitive tasks.
```shell
npx claudepluginhub rfxlamia/skillkit --plugin skillkit-creative
```

This skill uses the workspace's default tool permissions.
This skill teaches agents how to use **Verbalized Sampling (VS)** - a research-backed prompting technique that dramatically increases output diversity (1.6-2.1× improvement) without sacrificing quality.
**The Problem:** Standard aligned LLMs suffer from "mode collapse": they generate overly similar, safe, predictable outputs because of typicality bias in training data.

**The Solution:** Instead of asking for single instances ("write a blog post"), VS prompts the model to verbalize a probability distribution over multiple responses ("generate 5 blog post ideas with their probabilities").

**Core Principle:** Different prompt types collapse to different modes. Distribution-level prompts recover the diverse base model distribution, while instance-level prompts collapse to the most typical output.
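The instance-level vs. distribution-level distinction can be sketched in code. This is an illustrative sketch only; the function names are invented here and are not part of the skill's API:

```python
def instance_prompt(task: str) -> str:
    # Instance-level: tends to collapse to the single most typical output
    return f"Write {task}."

def distribution_prompt(task: str, k: int = 5) -> str:
    # Distribution-level (VS): asks the model to verbalize k candidates
    # with estimated probabilities, recovering more of the base distribution
    return (
        f"Generate {k} responses to: {task}\n"
        'Return a JSON object with key "responses" (list of dicts), '
        'each with "text" and "probability" (0.0-1.0).'
    )

print(distribution_prompt("a blog post about AI", k=5))
```

The only change between the two prompts is the request shape, which is why VS needs no fine-tuning or special API access.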
Detect user intent, route to appropriate reference:
| User Request Pattern | Route To | Description |
|---|---|---|
| "Generate diverse [content]" | references/vs-core-technique.md | Learn VS basics, prompt templates, execution |
| "Write 5 blog posts / captions / ideas" | references/task-workflows.md | Task-specific workflows pre-configured |
| "Need higher quality" or "too wild" | references/advanced-techniques.md | VS-CoT, VS-Multi, parameter tuning |
| "Save to file" or "batch process 50 items" | references/tool-integration.md | VS + File tools, batch workflows |
| "VS outputs too similar" or errors | references/troubleshooting.md | Common pitfalls and solutions |
| "Which model works best?" | references/research-findings.md | Benchmarks, model compatibility |
Default workflow: Load vs-core-technique.md first, then load additional references as needed.
Use VS when user requests:
Use VS for these content types:
DON'T use VS for:
For agents who need VS immediately:
User wants multiple variations → Use VS
```
Generate {k} responses to: {user_request}

Return JSON format with key "responses" (list of dicts).
Each dict must include:
  • text: the response string only
  • probability: estimated probability (0.0-1.0)

Give ONLY the JSON object, no extra text.
```
```python
import json

data = json.loads(llm_output)
candidates = data["responses"]
# Present to user ranked by probability (highest first)
candidates.sort(key=lambda c: c["probability"], reverse=True)
```
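A slightly fuller parsing sketch, with defensive handling for models that wrap JSON in markdown fences. The helper name, the fence-stripping, and the optional threshold are assumptions added here, not part of the skill itself:

```python
import json

def parse_vs_output(llm_output: str, threshold: float = 0.0) -> list[dict]:
    """Parse a VS response, rank by probability, optionally drop the tail."""
    # Models sometimes wrap JSON in markdown fences; strip them defensively
    cleaned = (
        llm_output.strip()
        .removeprefix("```json")
        .removeprefix("```")
        .removesuffix("```")
        .strip()
    )
    data = json.loads(cleaned)
    candidates = [c for c in data["responses"] if c["probability"] >= threshold]
    return sorted(candidates, key=lambda c: c["probability"], reverse=True)

sample = (
    '{"responses": [{"text": "Brewed awakening", "probability": 0.2},'
    ' {"text": "Bean there", "probability": 0.5}]}'
)
ranked = parse_vs_output(sample)
print(ranked[0]["text"])  # prints "Bean there" (highest probability first)
```

Sorting before presenting lets the agent lead with the most typical candidate while still surfacing the creative tail.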
For detailed instructions: Load references/vs-core-technique.md
Recommended loading sequence:

1. references/vs-core-technique.md
2. references/task-workflows.md
3. references/advanced-techniques.md (VS-CoT, VS-Multi)
4. references/tool-integration.md (Write, batch processing)
5. references/troubleshooting.md (Pitfalls & fixes)
6. references/research-findings.md (Benchmarks)

Copy this for quick lookup:
| Parameter | Default Value | When to Adjust |
|---|---|---|
| k (candidates) | 5 | Use 3 for quick, 10 for exploration |
| Temperature | 0.7-1.0 | Combine with VS for extra diversity |
| Probability threshold | 0.10 (optional) | Lower (0.01) for more creative outputs |
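The probability threshold from the table trades predictability for creativity: a lower cutoff keeps more of the low-probability tail. A minimal sketch, using an invented example distribution:

```python
candidates = [
    {"text": "Safe, typical idea", "probability": 0.45},
    {"text": "Solid variation",    "probability": 0.25},
    {"text": "Unusual angle",      "probability": 0.12},
    {"text": "Wild tail idea",     "probability": 0.03},
]

def apply_threshold(cands, threshold):
    # Lower thresholds keep low-probability (more creative) tail outputs
    return [c for c in cands if c["probability"] >= threshold]

print(len(apply_threshold(candidates, 0.10)))  # default 0.10 keeps 3
print(len(apply_threshold(candidates, 0.01)))  # creative 0.01 keeps all 4
```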
Troubleshooting shortcuts:

- references/advanced-techniques.md
- references/research-findings.md

Quality checklist before presenting:
This skill uses progressive disclosure for optimal token efficiency:
Documentation loaded on-demand based on agent needs:
Pattern: Agent loads SKILL.md first (routing), then loads specific references as needed during execution.
User: "Give me 5 tagline ideas for a coffee shop"
Agent workflow:
- Load references/vs-core-technique.md (if not already loaded)

User: "Write 10 blog post ideas about AI, I need them saved to a file"
Agent workflow:
- Load references/vs-core-technique.md + references/tool-integration.md

User: "These are good but need more polish for production use"
Agent workflow:
- Load references/advanced-techniques.md

Ready to start? Load references/vs-core-technique.md to begin using VS.