Creates .md skill files for Output framework's lazy-loaded LLM instruction system. Use when adding reusable instructions to prompts, configuring skill loading, or debugging resolution.
```
npx claudepluginhub growthxai/output --plugin outputai
```
This skill documents how to create `.md` skill files for the Output framework's skills system. Skills are lazy-loaded instruction packages that keep prompts lightweight. The LLM sees a list of skill names and descriptions in the system message, then calls a `load_skill` tool to retrieve full instructions on demand.
**Important:** These are framework skills (`.md` files loaded by LLMs at runtime), not Claude Code plugin skills. The naming is similar, but the systems are separate.
Skill files live in a `skills/` folder next to the prompt file. Output auto-discovers them with no configuration needed:
```
src/workflows/{workflow-name}/
├── workflow.ts
├── steps.ts
├── types.ts
└── prompts/
    ├── writing_assistant@v1.prompt
    └── skills/
        ├── clarity_guidelines.md
        ├── response_format.md
        └── structure_guide.md
```
The skills/ folder is relative to the prompt file location, not the workflow root.
Skill files are markdown documents with an optional YAML frontmatter block:
```markdown
---
name: clarity_guidelines
description: Rules for writing clear, readable technical content
---

# Clarity Guidelines

When reviewing or writing technical content for clarity:

1. **Sentence length**: Keep sentences under 25 words when possible.
   Break complex ideas into multiple sentences.
2. **Active voice**: Prefer active voice ("The function returns X")
   over passive ("X is returned by the function").
3. **Jargon**: Define technical terms on first use.
   Avoid unnecessary acronyms without explanation.
4. **Concrete examples**: Every abstract concept should have
   a concrete example.

When applying this skill, flag any violations you find
and suggest improvements.
```
| Field | Required | Default | Description |
|---|---|---|---|
| `name` | No | Filename without `.md` | Identifier the LLM uses with `load_skill` |
| `description` | No | Same as `name` | Shown in the system message; helps the LLM decide when to load |
| Body | Yes | - | Full instructions returned when the LLM calls `load_skill` |
If you omit the frontmatter entirely, the filename (without `.md`) is used as both the name and description. A file named `clarity_guidelines.md` with no frontmatter gets `name: "clarity_guidelines"` and `description: "clarity_guidelines"`.
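The fallback behavior can be pictured as a small helper. This is an illustrative sketch only — `defaultSkillMeta` is not part of the framework's API:

```typescript
import { basename } from 'node:path';

interface SkillMeta {
  name: string;
  description: string;
}

// Hypothetical sketch of how name/description default from the filename
// when frontmatter fields are missing.
function defaultSkillMeta(
  filePath: string,
  frontmatter: Partial<SkillMeta> = {}
): SkillMeta {
  // basename with a suffix argument strips the .md extension
  const base = basename(filePath, '.md');
  const name = frontmatter.name ?? base;
  // description falls back to the resolved name, not the raw filename
  return { name, description: frontmatter.description ?? name };
}
```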
Write good descriptions. They appear in the system message and are what the LLM uses to decide whether to load a skill. "Rules for writing clear, readable technical content" is better than "clarity_guidelines".
Place .md files in a skills/ folder next to your prompt file. Output discovers them automatically. The prompt file needs no special configuration:
```
---
provider: anthropic
model: claude-sonnet-4
maxTokens: 2048
---
<system>
You are an expert technical writing assistant.
Use load_skill to get full instructions for any skill before applying it.
</system>
<user>
Review the following {{ content_type }} content focusing on {{ focus }}.

Content:
{{ content }}
</user>
```
At runtime, Output finds the colocated `skills/` directory, loads all `.md` files, injects the skill names and descriptions into the system message, and registers a `load_skill` tool the LLM can call.
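Conceptually, the registered tool behaves like the hypothetical handler below. The names `makeLoadSkillHandler` and `SkillMap` are illustrative, not `@outputai/llm` exports — the framework wires this up automatically:

```typescript
type SkillMap = Map<string, { description: string; instructions: string }>;

// Hypothetical load_skill handler: given a skill name, return the full
// instruction body; otherwise report the available skill names.
function makeLoadSkillHandler(skills: SkillMap) {
  return (args: { name: string }): string => {
    const skill = skills.get(args.name);
    if (!skill) {
      const known = [...skills.keys()].join(', ');
      return `Unknown skill "${args.name}". Available: ${known}`;
    }
    return skill.instructions; // full markdown body returned to the LLM
  };
}
```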
Reference specific skill files or directories in the prompt YAML frontmatter. Paths resolve relative to the prompt file:
```
---
provider: anthropic
model: claude-sonnet-4
skills:
  - ./skills/
  - ../shared_skills/tone_guide.md
---
```
When `skills:` is set in frontmatter, auto-discovery is skipped. Only the listed paths are loaded.
Create skills programmatically with the `skill()` function from `@outputai/llm`:
```typescript
import { skill } from '@outputai/llm';

const audienceSkill = skill({
  name: 'audience_adaptation',
  description: 'Tailor feedback for the specified expertise level',
  instructions: `# Audience Adaptation

When the target audience is specified, adjust your feedback:

**Beginner**: Flag jargon as high-priority issues.
**Expert**: Focus on accuracy and completeness.

Always mention the audience level in your summary.`
});
```
Pass inline skills to generateText or Agent:
```typescript
const { result } = await generateText({
  prompt: 'writing_assistant@v1',
  variables: { content_type: 'documentation', focus: 'clarity', content: input.content },
  skills: [audienceSkill],
  maxSteps: 5
});
```
Inline skills are merged with any file-based skills.
Skills are resolved in this order:
1. If `skills:` is set in the prompt frontmatter, those paths are loaded
2. With no `skills:` in frontmatter, the `skills/` directory next to the prompt file is scanned
3. Caller-provided skills (`skills: [...]` in `generateText` or `Agent`) are always merged in

Frontmatter paths and colocated auto-discovery are mutually exclusive. Setting `skills:` in frontmatter disables auto-discovery. Caller-provided skills are always added regardless of which file-based method is used.
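The precedence rules can be sketched as follows. This is a hypothetical illustration — `resolveSkills` and its loader callbacks are not framework APIs:

```typescript
interface Skill {
  name: string;
  description: string;
  instructions: string;
}

// Illustrative resolution order: frontmatter paths and colocated
// auto-discovery are mutually exclusive; caller skills always merge in.
function resolveSkills(
  frontmatterPaths: string[] | undefined,
  loadFromPaths: (paths: string[]) => Skill[],
  scanColocated: () => Skill[],
  callerSkills: Skill[] = []
): Skill[] {
  const fileBased =
    frontmatterPaths !== undefined
      ? loadFromPaths(frontmatterPaths) // skills: [] loads nothing
      : scanColocated();                // no skills: key → auto-discovery
  return [...fileBased, ...callerSkills];
}
```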
Set skills: [] in the prompt frontmatter to opt out of auto-discovery:
```
---
provider: anthropic
model: claude-haiku-4-5-20251001
skills: []
---
```
This is useful when you have a skills/ directory for other prompts in the same folder, but a specific prompt should not load any skills.
```markdown
---
name: response_format
description: Standard format requirements for all review responses
---

# Response Format

Every response MUST end with the exact string "OUTPUT_COMPLETE" on its own line.

Structure your review as follows:

1. **Summary**: 2-3 sentence overview of the content quality
2. **Issues**: Numbered list of specific problems found
3. **Suggestions**: Actionable improvements for each issue
4. **Score**: Overall quality score from 0-100

OUTPUT_COMPLETE
```
```
---
provider: anthropic
model: claude-sonnet-4
maxTokens: 2048
---
<system>
You are an expert technical writing assistant.
Use load_skill to get the full instructions for any skill before applying it.
After reviewing, provide structured feedback with specific issues and suggestions.
</system>
<user>
Review the following {{ content_type }} content focusing on {{ focus }}.

Content:
{{ content }}
</user>
```
```typescript
import { step, z } from '@outputai/core';
import { Agent, Output } from '@outputai/llm';

export const reviewContent = step({
  name: 'reviewContent',
  description: 'Review content using skills for specialized expertise',
  inputSchema: z.object({
    content: z.string(),
    content_type: z.string(),
    focus: z.string()
  }),
  outputSchema: z.object({
    summary: z.string(),
    issues: z.array(z.string()),
    suggestions: z.array(z.string()),
    score: z.number()
  }),
  fn: async (input) => {
    const agent = new Agent({
      prompt: 'writing_assistant@v1',
      variables: input,
      output: Output.object({
        schema: z.object({
          summary: z.string().describe('2-3 sentence overview'),
          issues: z.array(z.string()).describe('Specific problems found'),
          suggestions: z.array(z.string()).describe('Actionable improvements'),
          score: z.number().describe('Quality score 0-100')
        })
      }),
      maxSteps: 5
    });
    const { output } = await agent.generate();
    return output;
  }
});
```
Each skill should cover one area of expertise. Prefer multiple focused skills over one large skill:
```
skills/
├── clarity_guidelines.md   # Writing clarity
├── structure_guide.md      # Document structure
└── response_format.md      # Output formatting
```
The description appears in the system message. Make it clear when the LLM should load this skill:
```
---
name: clarity_guidelines
description: Rules for writing clear, readable technical content
---
```

Not:

```
---
name: clarity_guidelines
description: clarity_guidelines
---
```
Use markdown headers and lists for scannable instructions:
```markdown
# Clarity Guidelines

## Rules

1. Keep sentences under 25 words
2. Prefer active voice
3. Define jargon on first use

## When to Flag

- Sentences over 30 words
- Passive voice in instructions
- Undefined acronyms
```
Tell the LLM what to do with the skill, not just what the skill is about:
When applying this skill, flag any violations you find and suggest improvements.
If skills are not loading or resolving as expected, check that:

- Skill files use the `.md` format in a `skills/` directory next to the prompt file
- Each skill has a clear `description` in frontmatter
- The system message mentions `load_skill` (when using auto-discovery)
- The frontmatter does not set `skills: []`, which intentionally disables auto-discovery
- The call allows enough `maxSteps` (default 10) to allow tool loop iterations

Related skills:

- `output-dev-prompt-file` - Creating `.prompt` files that use skills
- `output-dev-agent-class` - Using the Agent class with skills
- `output-dev-step-function` - Using skills in step functions
- `output-dev-folder-structure` - Understanding skill file locations