Get expert guidance on Orchestr8 optimization strategies and resource creation
/plugin marketplace add seth-schultz/orchestr8
/plugin install orchestr8@orchestr8
[topic] - What do you need help with? (e.g., 'create agent', 'optimize workflow', 'jit loading')
I'll provide expert guidance on Orchestr8 optimization and resource creation based on your request.
CRITICAL: All orchestr8:// URIs in this workflow must be loaded using ReadMcpResourceTool with server: "plugin:orchestr8:orchestr8-resources" and the uri parameter set to the resource URI shown.
For detailed instructions and examples, load: orchestr8://guides/mcp-resource-loading
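As a hedged illustration, the two parameters named above can be packaged as follows. Note that `read_mcp_resource_params` is a hypothetical helper for clarity only; ReadMcpResourceTool is invoked by the model, not from user code.

```python
def read_mcp_resource_params(uri: str) -> dict:
    """Build the two-field payload described above: the fixed server
    identifier plus the orchestr8:// resource URI to load."""
    return {
        "server": "plugin:orchestr8:orchestr8-resources",
        "uri": uri,
    }

params = read_mcp_resource_params("orchestr8://guides/mcp-resource-loading")
print(params["server"])  # plugin:orchestr8:orchestr8-resources
```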
Traditional Approach (WASTEFUL):
JIT Approach (OPTIMAL):
→ Load: orchestr8://agents/orchestr8-expert
Let me understand what you need help with:
Analyzing request: "$ARGUMENTS"
Common topics I can help with:
→ Checkpoint: Request understood, expert loaded
Based on your topic, I'll load targeted expertise:
→ Load: orchestr8://skills/orchestr8-optimization-patterns
Guidance areas:
→ Load: orchestr8://skills/jit-loading-progressive-strategies
Guidance areas:
→ Load: orchestr8://skills/fragment-creation-workflow
Guidance areas:
→ Load: orchestr8://skills/fragment-metadata-optimization
Guidance areas:
→ Checkpoint: Specific expertise loaded
If examples would help, I'll load relevant ones:
→ Load (if applicable): orchestr8://match?query=$ARGUMENTS+example&mode=index&maxResults=5
This will show you:
→ Checkpoint: Examples provided (if needed)
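Match URIs like the one above can be assembled programmatically. A minimal sketch, assuming the `query`, `mode`, and `maxResults` parameter names shown in this document (not a published schema); `build_match_uri` is an illustrative name:

```python
from urllib.parse import urlencode

def build_match_uri(query: str, mode: str = "index", max_results: int = 5) -> str:
    """Assemble an orchestr8://match URI; urlencode turns spaces into
    '+' just as the examples in this document show."""
    params = urlencode({"query": query, "mode": mode, "maxResults": max_results})
    return f"orchestr8://match?{params}"

print(build_match_uri("typescript async"))
# orchestr8://match?query=typescript+async&mode=index&maxResults=5
```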
Now I'll help you apply the optimization strategies:
1. JIT Loading (91-97% reduction)
Use orchestr8://match queries with maxTokens, e.g.:
orchestr8://match?query=typescript+async&mode=index&maxResults=5
2. Fragment-Based Organization (25-40% additional)
3. Index-Based Lookup (85-95% reduction)
4. Progressive Loading (50-78% savings)
5. Catalog-First Mode (54% savings)
Use mode=catalog to explore metadata; avoid mode=full unless specifically needed.
6. Hierarchical Families (56% savings)
For new resources:
□ Size: 500-1000 tokens (or <1500 max)
□ useWhen: 5-20 keyword-rich scenarios
□ Tags: 5-15 specific, searchable
□ Capabilities: 3-8 action-oriented
□ Cross-refs: 3-10 related resources
□ Content: Practical, copy-paste examples
□ Structure: Clear headings, best practices
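The countable items in the checklist above can be screened mechanically. A sketch under the assumption that resource metadata is available as a dict with the field names used in this document (`useWhen`, `tags`, etc.); the function name is illustrative:

```python
def check_resource_metadata(meta: dict) -> list[str]:
    """Return checklist violations for a new resource; ranges are the
    targets stated in the checklist above (1500 is the hard size cap)."""
    problems = []
    if not 500 <= meta.get("tokens", 0) <= 1500:
        problems.append("size outside 500-1500 token range")
    if not 5 <= len(meta.get("useWhen", [])) <= 20:
        problems.append("useWhen should list 5-20 scenarios")
    if not 5 <= len(meta.get("tags", [])) <= 15:
        problems.append("tags should number 5-15")
    if not 3 <= len(meta.get("capabilities", [])) <= 8:
        problems.append("capabilities should number 3-8")
    if not 3 <= len(meta.get("crossRefs", [])) <= 10:
        problems.append("cross-refs should number 3-10")
    return problems
```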
For workflows/commands:
□ Uses orchestr8://match for JIT loading
□ maxTokens specified for each query
□ Phases clearly defined (0-X%, X-Y%, etc.)
□ Token budget tracked per phase
□ Checkpoints after each phase
□ Conditional loading for optional expertise
□ Total budget: 3,000-6,000 tokens target
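The per-phase budget tracking and total-budget items above can be sketched as a small check; the dict keys and function name here are hypothetical, and 6,000 is the upper end of the target range stated in the checklist:

```python
def check_workflow_budget(phase_tokens: dict, target_max: int = 6_000) -> tuple:
    """Sum per-phase token usage and report whether the workflow stays
    within its total budget target."""
    total = sum(phase_tokens.values())
    return total, total <= target_max

total, ok = check_workflow_budget({"phase1": 900, "phase2": 1_500, "phase3": 600})
print(total, ok)  # 3000 True
```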
For fragments:
□ Focused on single concern
□ Related concepts split into family
□ Core vs Advanced split if >1500 tokens
□ Cross-references for JIT navigation
□ Index rebuilt after changes
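The Core vs Advanced split rule above reduces to a one-line threshold check; a sketch, with the function name chosen for illustration:

```python
def needs_split(token_count: int, limit: int = 1500) -> bool:
    """Per the fragment checklist, resources above ~1500 tokens should
    be split into Core vs Advanced siblings."""
    return token_count > limit

print(needs_split(1800))  # True -- split this fragment
print(needs_split(1200))  # False -- fine as a single fragment
```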
→ Checkpoint: Complete - Ready to implement!
This command's token usage:
vs Traditional approach: ~23,000 tokens
Savings: 81-88% reduction 🎯
Based on your topic "$ARGUMENTS", here's what to do next:
Run npm run build-index after creating or editing resources, then verify discovery with orchestr8://match queries.
For detailed guidance on specific topics:
# Agent creation
/orchestr8:orchestr8-expert "How do I create an optimized agent?"
# Workflow optimization
/orchestr8:orchestr8-expert "Optimize my workflow with JIT loading"
# Fragment sizing
/orchestr8:orchestr8-expert "Should I split this 1800 token resource?"
# Metadata optimization
/orchestr8:orchestr8-expert "Writing better useWhen scenarios"
# Index performance
/orchestr8:orchestr8-expert "Why isn't my resource matching?"
For browsing resources directly:
// Explore the expert agent
orchestr8://agents/orchestr8-expert
// Quick reference patterns
orchestr8://skills/orchestr8-optimization-patterns
// Fragment creation workflow
orchestr8://skills/fragment-creation-workflow
// JIT loading strategies
orchestr8://skills/jit-loading-progressive-strategies
Remember these principles:
You're now ready to create world-class token-efficient resources for Orchestr8! 🚀
The following resources were dynamically loaded during this command execution:
Phase 1:
Phase 2 (based on your topic):
Phase 3 (conditional):
Total tokens loaded: 2,750-4,350 tokens (vs 23,000 traditional)
Efficiency achieved: 81-88% reduction through JIT loading
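The efficiency figures quoted above follow directly from the token counts; a quick arithmetic check (function name illustrative):

```python
def savings_pct(jit_tokens: int, traditional_tokens: int = 23_000) -> float:
    """Percentage reduction of JIT loading vs the traditional approach."""
    return round((1 - jit_tokens / traditional_tokens) * 100, 1)

print(savings_pct(4_350))  # 81.1 -- worst case of the 2,750-4,350 range
print(savings_pct(2_750))  # 88.0 -- best case
```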