# Claude Experiments
A curated marketplace for Claude Code plugins featuring state machine-based prompt optimization and developer productivity tools.
## What is This?
This repository is a Claude Code plugin marketplace that provides tools to reduce LLM token consumption and improve prompt engineering workflows through deterministic preprocessing and specialized LLM agents. Plugins in this marketplace help you work more efficiently with Claude by optimizing how prompts are constructed and executed.
## Installation

Install this marketplace in Claude Code:

```
/plugin install jtsylve/claude-experiments
```
Once installed, all plugins in this marketplace will be available for use in your Claude Code environment.
## Available Plugins

### meta-prompt (v1.0.0)
State machine-based optimization infrastructure achieving 40-60% token reduction through deterministic preprocessing and template-based routing.
The meta-prompt plugin implements a state machine architecture with three specialized LLM agents that work together to optimize prompt execution. The system uses deterministic bash scripts for orchestration (zero tokens) and invokes LLM agents only for targeted work: template selection, prompt optimization, and task execution.
#### Architecture
- State Machine: Deterministic bash orchestration with zero-token overhead
- Template Selector Agent: Lightweight classifier for automatic template detection (Haiku model)
- Prompt Optimizer Agent: Extracts variables and populates templates (Sonnet model)
- Template Executor Agent: Executes optimized prompts with domain-specific skills (Sonnet model)
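The division of labor above can be sketched as a small bash loop. This is a hedged illustration, not the plugin's actual script: `invoke_agent` and `run_pipeline` are invented stubs that only mark where the token-costing agent calls would occur, while every state transition stays deterministic.

```shell
#!/usr/bin/env bash
# Sketch of the three-state pipeline. invoke_agent is a stand-in stub, NOT the
# real plugin API; a real call here would be the only place tokens are spent.
invoke_agent() { printf '[%s] %s\n' "$1" "$2"; }

run_pipeline() {
  local task=$1 state=select template prompt
  while [ "$state" != done ]; do
    case "$state" in
      select)   # Haiku-class classifier picks a template
        template=$(invoke_agent template-selector "$task"); state=optimize ;;
      optimize) # Sonnet-class optimizer fills the template's variables
        prompt=$(invoke_agent prompt-optimizer "$template"); state=execute ;;
      execute)  # Sonnet-class executor runs the optimized prompt
        invoke_agent template-executor "$prompt"; state=done ;;
    esac
  done
}

run_pipeline "Generate pytest tests for the user service"
# prints: [template-executor] [prompt-optimizer] [template-selector] Generate pytest tests for the user service
```

Because the loop itself is plain bash, the orchestration layer consumes no tokens regardless of how many states the task passes through.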
#### Key Features
- Token Reduction: 40-60% overall, 100% for orchestration
- Classification Accuracy: 90%+ for hybrid template routing
- Performance: <100ms deterministic overhead
- Templates: Domain-specific templates covering common development patterns
- Zero-Token Orchestration: State machine routing eliminates LLM orchestration costs
- Specialized Agents: Three focused agents for selection, optimization, and execution
- Hybrid Classification: Keyword-based routing with LLM fallback for edge cases
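A minimal sketch of what hybrid classification could look like, with invented keyword lists (the plugin's actual routing rules are not shown here): cheap pattern matching handles the common cases, and only unmatched tasks would be handed to the lightweight classifier agent.

```shell
# Hypothetical keyword router; the keyword lists are illustrative, not the
# plugin's real tables. "llm-fallback" marks where the Haiku agent takes over.
classify() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    *review*|*audit*|*vulnerab*)   echo "code-review" ;;
    *test*|*coverage*)             echo "test-generation" ;;
    *refactor*|*implement*)        echo "code-refactoring" ;;
    *document*|*readme*)           echo "documentation-generator" ;;
    *)                             echo "llm-fallback" ;;  # edge case
  esac
}

classify "Analyze security vulnerabilities in the auth module"  # → code-review
classify "Sort something odd in my repo"                        # → llm-fallback
```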
#### Commands

- `/prompt <task>` - Optimize and execute a prompt with automatic template selection
- `/prompt --template=<name> <task>` - Use a specific template (`--code`, `--review`, `--test`, `--docs`, `--extract`, `--compare`, `--custom`)
- `/prompt --plan <task>` - Create execution plan and get approval before running
- `/prompt --return-only <task>` - Generate optimized prompt without executing
#### Templates Included

| Template | Use Cases | Key Variables |
|---|---|---|
| code-refactoring | Modify code, fix bugs, implement features | TASK_REQUIREMENTS, TARGET_PATTERNS |
| code-review | Security audits, quality analysis, feedback | PATHS, REVIEW_FOCUS, LANGUAGE_CONVENTIONS |
| test-generation | Generate unit tests, test suites, coverage | CODE_CONTEXT, FOCUS_AREAS, TEST_FRAMEWORK |
| documentation-generator | API docs, READMEs, docstrings, user guides | TARGET_FILES, DOCUMENTATION_STYLE, AUDIENCE |
| data-extraction | Extract data from logs, JSON, HTML, text | INPUT_SOURCE, EXTRACTION_PATTERN, FORMAT |
| code-comparison | Compare code snippets, check equivalence | FIRST_CODE, SECOND_CODE, COMPARISON_FOCUS |
| custom | Novel tasks (LLM fallback) | TASK_DESCRIPTION |
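As an illustration of how template population might work, here is a naive placeholder-substitution sketch in bash; the `fill` helper and the template text are invented for this example and are not the plugin's real template format.

```shell
# Hypothetical sketch: replace each ${NAME} placeholder in a template string
# with a value the optimizer agent extracted. fill is an invented helper.
fill() {
  local out=$1 kv name
  shift
  for kv in "$@"; do
    name=${kv%%=*}
    # quoted pattern so ${name} is matched literally, not expanded
    out=${out//"\${$name}"/${kv#*=}}
  done
  printf '%s\n' "$out"
}

fill 'Review ${PATHS} for ${REVIEW_FOCUS} issues.' PATHS=src/auth REVIEW_FOCUS=security
# → Review src/auth for security issues.
```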
#### Template Variables

Each template uses specific variables that are automatically extracted from your task description; they are listed in the Key Variables column of the table above.
#### Quick Start

```
# Auto-detect template and execute
/prompt "Analyze security vulnerabilities in the authentication module"

# Use explicit template with planning mode
/prompt --review --plan "Check code for security issues"

# Generate tests with specific template
/prompt --test "Generate pytest tests for user service"

# Create optimized prompt without executing
/prompt --code --return-only "Refactor user service to use dependency injection"
```
#### Performance Metrics

| Metric | Target | Status |
|---|---|---|
| Token reduction | 40-60% | Met |
| Orchestration tokens | 0 | Met |
| Classification accuracy | 90%+ | Met |
| Deterministic overhead | <100ms | Met |
#### Documentation

See `meta-prompt/README.md` for complete documentation, including:
- Architecture overview
- Template authoring guide
- Script development guide
- Examples and use cases
- Contribution guidelines
## Contributing to the Marketplace
We welcome contributions of new plugins and improvements to existing ones!