Multi-Format Deliverable Production
TL;DR: Produces project deliverables in multiple formats (Markdown, HTML, Mermaid diagrams) while maintaining brand consistency, evidence tagging, and quality standards. Manages the deliverable production pipeline from draft through review to approved final versions.
Principio Rector (Guiding Principle)
A quality deliverable is invisible in its format and visible in its content. The format must serve the content, not the other way around. Markdown-first ensures versionability; HTML adds interactivity and branding; PDF ensures faithful distribution. The production pipeline applies consistent quality checks regardless of format.
Assumptions & Limits
- Assumes content is authored in Markdown as primary format [SUPUESTO]
- Assumes APEX branding tokens are available in canonical-tokens.md [SUPUESTO]
- Breaks if content lacks evidence tags — output engineering cannot add evidence post-hoc [PLAN]
- Scope limited to format production and quality enforcement; content creation is handled by domain skills [PLAN]
- Does not generate content — transforms and formats content produced by other skills [PLAN]
Usage
/pm:output-engineering $SOURCE_FILE --format=html --brand=apex
/pm:output-engineering $SOURCE_FILE --format=markdown,html --status=WIP
/pm:output-engineering $PROJECT_DIR --batch --format=html --quality-check
Parameters:
| Parameter | Required | Description |
|---|---|---|
| $SOURCE_FILE | Yes | Path to source Markdown file or project directory |
| --format | No | markdown / html / both (default: markdown) |
| --brand | No | apex / minimal (default: apex) |
| --status | No | WIP / Aprobado (default: WIP) |
| --batch | No | Process all deliverables in directory |
| --quality-check | No | Run Excellence Loop validation |
Service Type Routing
{TIPO_PROYECTO}: All project types produce deliverables. Format selection depends on audience (technical = Markdown, executive = HTML/PDF, regulatory = PDF with signatures).
Before Producing Output
- Read source content — verify completeness and evidence tagging [PLAN]
- Read references/ontology/canonical-tokens.md — load design tokens for HTML production [PLAN]
- Check naming convention — apply {fase}_{entregable}_{proyecto}_{status}.{ext} [PLAN]
- Verify approval status — only {Aprobado} after governance approval [PLAN]
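The naming convention above can be sketched as a small builder and validator. The pattern and the {WIP}/{Aprobado} statuses come from this document; the helper names and the two-digit phase prefix are assumptions for illustration:

```python
import re

# {fase}_{entregable}_{proyecto}_{status}.{ext} — convention from this skill.
# Two-digit fase and the md/html extensions are illustrative assumptions.
NAME_RE = re.compile(
    r"^(?P<fase>\d{2})_(?P<entregable>[A-Za-z]+)_(?P<proyecto>[A-Za-z0-9]+)"
    r"_\{(?P<status>WIP|Aprobado)\}\.(?P<ext>md|html)$"
)

def build_name(fase: str, entregable: str, proyecto: str, status: str, ext: str) -> str:
    """Compose a deliverable filename following the canonical convention."""
    if status not in ("WIP", "Aprobado"):
        raise ValueError(f"unknown status: {status}")
    return f"{fase}_{entregable}_{proyecto}_{{{status}}}.{ext}"

def is_canonical(name: str) -> bool:
    """Return True if the filename matches the convention."""
    return NAME_RE.match(name) is not None
```

The regex doubles as documentation of the convention: anything that fails it (for example the "schedule_v3_final_FINAL.docx" anti-pattern shown later) is rejected before production.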
Entrada (Input Requirements)
- Draft content in Markdown
- Target format(s) required
- Branding requirements (APEX tokens)
- Evidence tags for content verification
- Approval workflow requirements
Proceso (Protocol)
- Content review — Verify draft content completeness and accuracy
- Evidence tagging — Ensure all assertions have evidence tags
- Format selection — Determine target format(s) based on audience
- Template application — Apply appropriate template per format
- Brand compliance — Verify brand colors, fonts, and layout
- Diagram rendering — Render Mermaid diagrams to appropriate format
- Quality check — Apply Excellence Loop criteria
- Version tagging — Apply {WIP} or {Aprobado} naming convention
- Slug naming — Apply {fase}_{entregable}_{proyecto}_{status}.{ext} convention
- Distribution — Deliver to appropriate channels per communication plan
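The protocol above can be sketched as a staged pipeline that collects findings per stage and blocks promotion when anything fails. The deliverable dict shape and the stage signature are illustrative assumptions, not a fixed API:

```python
from typing import Callable

# Each stage inspects the deliverable and returns a list of issues (empty = pass).
Check = Callable[[dict], list[str]]

def run_pipeline(deliverable: dict, stages: list[tuple[str, Check]]) -> dict[str, list[str]]:
    """Run each stage in order and collect findings keyed by stage name."""
    findings: dict[str, list[str]] = {}
    for name, check in stages:
        issues = check(deliverable)
        if issues:
            findings[name] = issues
    # Any finding keeps the deliverable at {WIP}; promotion requires zero issues.
    if findings:
        deliverable["status"] = "WIP"
    return findings

def completeness_check(d: dict) -> list[str]:
    """Content review stage: a draft with no body cannot move forward."""
    return [] if d.get("content", "").strip() else ["empty draft content"]
```

Further stages (evidence tagging, brand compliance, naming) plug in with the same signature, which keeps the quality checks consistent across formats, as the guiding principle requires.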
Edge Cases
- Format conversion losing content or structure — Validate output against source; flag any content loss; provide both formats if conversion is lossy.
- Evidence tags missing from content — Return to content author; do not produce final format without evidence compliance.
- Deliverable blocked in approval workflow — Maintain {WIP} tag; do not allow {Aprobado} without governance sign-off.
- Batch processing with mixed quality levels — Produce quality report per deliverable; do not batch-approve inconsistent quality.
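The missing-evidence edge case can be enforced with a simple gate before final-format production. The rule that every bullet assertion carries a bracketed tag follows the examples in this document; the function name is hypothetical:

```python
import re

# Tag vocabulary taken from the Output Configuration section of this skill.
EVIDENCE = re.compile(r"\[(PLAN|SCHEDULE|METRIC|INFERENCIA|SUPUESTO|STAKEHOLDER)\]")

def untagged_lines(markdown: str) -> list[int]:
    """Return 1-based line numbers of bullet assertions missing an evidence tag."""
    missing = []
    for i, line in enumerate(markdown.splitlines(), start=1):
        if line.lstrip().startswith("- ") and not EVIDENCE.search(line):
            missing.append(i)
    return missing
```

A non-empty result means the draft goes back to the content author, since output engineering cannot add evidence post-hoc.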
Example: Good vs Bad
Good Output Engineering:
| Attribute | Value |
|---|---|
| Naming | 03_Schedule_ProyectoAlfa_{WIP}.md — correct convention [PLAN] |
| Evidence | Every assertion tagged: [PLAN], [METRIC], [SCHEDULE], [STAKEHOLDER] |
| Brand compliance | APEX tokens applied; zero green indicators; correct fonts [PLAN] |
| Quality check | 10/10 Excellence Loop criteria passed |
| Format | Markdown source + HTML branded version produced simultaneously |
Bad Output Engineering:
File named "schedule_v3_final_FINAL.docx" with no evidence tags, random colors, no branding, and a "final" label on a version that is actually still in draft. No naming convention, no quality check, no version control.
Salida (Deliverables)
- Formatted deliverable in target format(s)
- Quality check report
- Version history
- Distribution confirmation
Validation Gate
Escalation Triggers
- Format conversion losing content or structure
- Brand compliance failures
- Deliverable blocked in approval workflow
- Format not supported by recipient platform
Additional Resources
| Resource | When to Read | Location |
|---|---|---|
| Body of Knowledge | When applying deliverable production best practices | references/body-of-knowledge.md |
| State of the Art | When implementing automated deliverable pipelines | references/state-of-the-art.md |
| Knowledge Graph | When mapping output to pipeline deliverable requirements | references/knowledge-graph.mmd |
| Use Case Prompts | When producing deliverables for specific project types | prompts/use-case-prompts.md |
| Metaprompts | When adapting output for non-APEX branding | prompts/metaprompts.md |
| Sample Output | When reviewing expected deliverable quality | examples/sample-output.md |
Output Configuration
- Language: Spanish (Latin American, business register)
- Evidence: [PLAN], [SCHEDULE], [METRIC], [INFERENCIA], [SUPUESTO], [STAKEHOLDER]
- Branding: #2563EB royal blue, #F59E0B amber (NEVER green), #0F172A dark
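The "NEVER green" rule can be approximated mechanically when linting HTML/CSS output. The green-dominance heuristic below is my own assumption; the palette values come from the tokens above:

```python
import re

# Flag any 6-digit hex color whose green channel dominates both red and blue.
# This is a rough heuristic, not a formal definition of "green".
HEX = re.compile(r"#([0-9A-Fa-f]{6})\b")

def green_violations(css_or_html: str) -> list[str]:
    """Return hex codes that read as green under the dominance heuristic."""
    flagged = []
    for match in HEX.finditer(css_or_html):
        h = match.group(1)
        r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
        if g > r and g > b:
            flagged.append("#" + h.upper())
    return flagged
```

The three canonical tokens (#2563EB, #F59E0B, #0F172A) all pass this check, so it can run as a brand-compliance stage without false positives on the approved palette.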
Sub-Agents
Format Converter
Core Responsibility
Converts Markdown to HTML, DOCX, and XLSX formats. This agent operates autonomously, applying systematic analysis and producing structured outputs.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis.
- Analyze Context. Assess the project context, methodology, phase, and constraints.
- Apply Framework. Apply the appropriate analytical framework or model.
- Generate Findings. Produce detailed findings with evidence tags and quantified impacts.
- Validate Results. Cross-check findings against related artifacts for consistency.
- Formulate Recommendations. Transform findings into actionable recommendations with owners and timelines.
- Deliver Output. Produce the final structured output with executive summary, analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags and severity ratings.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
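A minimal sketch of the conversion step, handling only headings and paragraphs; a production converter would use a full Markdown library, and the brand wrapper with its accent default is an illustrative assumption built on the APEX palette:

```python
import html
import re

def md_to_html(md: str, accent: str = "#2563EB") -> str:
    """Convert a tiny Markdown subset (headings, paragraphs) to branded HTML."""
    blocks = []
    for block in re.split(r"\n\s*\n", md.strip()):
        m = re.match(r"(#{1,6})\s+(.*)", block)
        if m:
            level = len(m.group(1))
            blocks.append(f"<h{level}>{html.escape(m.group(2))}</h{level}>")
        else:
            blocks.append(f"<p>{html.escape(block)}</p>")
    body = "\n".join(blocks)
    # Accent border stands in for full APEX token application.
    return f'<div style="border-top:4px solid {accent}">\n{body}\n</div>'
```

Escaping every text node keeps source content faithful in the target format, which is the property the format-conversion edge case asks the converter to validate.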
Ghost Menu Injector
Core Responsibility
Injects navigation ghost menus into deliverables. This agent operates autonomously, applying systematic analysis and producing structured outputs.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis.
- Analyze Context. Assess the project context, methodology, phase, and constraints.
- Apply Framework. Apply the appropriate analytical framework or model.
- Generate Findings. Produce detailed findings with evidence tags and quantified impacts.
- Validate Results. Cross-check findings against related artifacts for consistency.
- Formulate Recommendations. Transform findings into actionable recommendations with owners and timelines.
- Deliver Output. Produce the final structured output with executive summary, analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags and severity ratings.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
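Ghost-menu injection can be sketched as prepending a navigation fragment built from section titles to the HTML body. The anchor-slug scheme and the ghost-menu class name are assumptions:

```python
import html

def inject_ghost_menu(body_html: str, sections: list[str]) -> str:
    """Prepend a nav menu linking to anchors derived from section titles."""
    items = "\n".join(
        f'  <li><a href="#{s.lower().replace(" ", "-")}">{html.escape(s)}</a></li>'
        for s in sections
    )
    menu = f'<nav class="ghost-menu">\n<ul>\n{items}\n</ul>\n</nav>'
    return menu + "\n" + body_html
```

Because the menu is additive, the original body survives unchanged, so injection cannot trigger the content-loss edge case.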
Multi Format Packager
Core Responsibility
Packages deliverables in multiple formats for distribution. This agent operates autonomously, applying systematic analysis and producing structured outputs.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis.
- Analyze Context. Assess the project context, methodology, phase, and constraints.
- Apply Framework. Apply the appropriate analytical framework or model.
- Generate Findings. Produce detailed findings with evidence tags and quantified impacts.
- Validate Results. Cross-check findings against related artifacts for consistency.
- Formulate Recommendations. Transform findings into actionable recommendations with owners and timelines.
- Deliver Output. Produce the final structured output with executive summary, analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags and severity ratings.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
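Packaging every format of one deliverable into a single distribution archive can be sketched with the stdlib zipfile module. The flattened archive layout and the slug-based naming rule are assumptions:

```python
import zipfile
from pathlib import Path

def package_deliverable(files: list[Path], out_dir: Path, slug: str) -> Path:
    """Bundle all format variants of one deliverable into <slug>.zip."""
    out_dir.mkdir(parents=True, exist_ok=True)
    archive = out_dir / f"{slug}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=f.name)  # flatten paths inside the archive
    return archive
```

Using the canonical slug as the archive name keeps the {WIP}/{Aprobado} status visible on the distributed bundle, not just on the individual files.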
Template Processor
Core Responsibility
Processes templates with project data. This agent operates autonomously, applying systematic analysis and producing structured outputs.
Process
- Gather Inputs. Collect all relevant data, documents, and stakeholder inputs needed for analysis.
- Analyze Context. Assess the project context, methodology, phase, and constraints.
- Apply Framework. Apply the appropriate analytical framework or model.
- Generate Findings. Produce detailed findings with evidence tags and quantified impacts.
- Validate Results. Cross-check findings against related artifacts for consistency.
- Formulate Recommendations. Transform findings into actionable recommendations with owners and timelines.
- Deliver Output. Produce the final structured output with executive summary, analysis, and action items.
Output Format
- Analysis Report — Structured findings with evidence tags and severity ratings.
- Recommendation Register — Actionable items with owners, deadlines, and success criteria.
- Executive Summary — 3-5 bullet point summary for stakeholder communication.
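Template processing can be sketched with stdlib string.Template; the placeholder names ($proyecto, $fase) are illustrative, not a fixed schema:

```python
from string import Template

def render_template(template_text: str, data: dict[str, str]) -> str:
    """Substitute project data; raises KeyError if a placeholder is unfilled."""
    return Template(template_text).substitute(data)
```

substitute (rather than safe_substitute) fails loudly on a missing field, which matches the pipeline's rule of never shipping an incomplete deliverable.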