# Claude Commands - LLM Capital Efficiency Framework Implementation
Transforms Claude Code into autonomous development infrastructure that generates 5-20x velocity improvements through systematic workflow automation and multi-agent orchestration.
/plugin marketplace add jleechanorg/claude-commands
/plugin install claude-commands@claude-commands-marketplace

When this command is invoked, YOU (Claude) must execute these steps immediately: This is NOT documentation - these are COMMANDS to execute right now. Use TodoWrite to track progress through multi-phase workflows.
Action Steps: Transform Claude Code from a productivity tool into development infrastructure that generates measurable velocity improvements. This command system demonstrates the paradigm shift from AI consumption to systematic workflow automation, achieving 5-20x development velocity through proven automation patterns.
Action Steps:
Our /orch command demonstrates this pillar by deploying multiple autonomous agents in isolated tmux sessions:
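A minimal sketch of how those isolated sessions might be launched (the task-agent-* session names mirror the monitoring examples later in this document; the branch names, worktree paths, and prompt text are illustrative assumptions, not part of the exported commands):

```bash
# Hypothetical illustration: one detached tmux session per autonomous agent,
# each in its own git worktree and branch so parallel work never collides.
for agent in frontend backend testing; do
  git worktree add "../wt-$agent" -b "task/$agent-notifications" || true
  tmux new-session -d -s "task-agent-$agent" -c "../wt-$agent" \
    "claude -p 'Act as the $agent agent and implement your slice of the notifications task'"
done
tmux ls   # task-agent-frontend, task-agent-backend, task-agent-testing
```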
Action Steps:
1. Analyze the issue manually
2. Write code manually
3. Test manually
4. Create PR manually
5. Handle review comments manually
Action Steps:
/pr "fix authentication bug" # โ think โ execute โ push โ copilot โ review
/copilot # โ analyze PR โ fix all issues autonomously
/execute "add user dashboard" # โ plan โ auto-approve โ implement
/orch "implement notifications" # โ multi-agent parallel development
### Phase 9: 💡 Your First Steps as a Capital Allocator
**Action Steps:**
1. **Track Your "Touch Rate"**: Notice manual interventions needed - each is an automation opportunity
2. **Build Your First Slash Command**: Automate one repetitive AI workflow
3. **Calculate Your Arbitrage Score**: Compare manual time vs AI cost for true capital efficiency (see the sketch below)
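As a rough illustration of that comparison (the formula, times, and rates below are assumptions for the sketch, not outputs of any command in this repository):

```bash
# Hypothetical arbitrage score: value of engineer time saved divided by AI spend.
manual_minutes=120      # how long the task would take by hand (assumed)
hourly_rate=100         # engineer cost in $/hour (assumed)
ai_cost_dollars=0.05    # per-task agent cost, in line with the /orch cost range below
awk -v m="$manual_minutes" -v r="$hourly_rate" -v c="$ai_cost_dollars" \
  'BEGIN { printf "Arbitrage score: %.0fx\n", (m / 60) * r / c }'   # prints 4000x
```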
This isn't just command sharing - it's **cognitive capital transformation** through the power of systematic AI orchestration.
## REFERENCE DOCUMENTATION
# Claude Commands - LLM Capital Efficiency Framework Implementation
⚠️ **PROTOTYPE WIP REPOSITORY** - This is an experimental command system exported from a working development environment. Use as reference but note it hasn't been extensively tested outside of the original workflow. Expect adaptation needed for your specific setup.
### 🎯 Framework Metrics (Development Environment Results)
- **900+ PRs generated** in 30-day validation period during active development
- **42-minute median merge time** with autonomous review pipeline (vs ~2-4 hours manual)
- **78% test coverage** maintained automatically through command workflows
- **156+ custom slash commands** creating systematic automation patterns
- **4-stage AI code review** pipeline reducing manual review cycles
> *Metrics collected during intensive development period using automated tracking. Results vary based on project complexity, team adoption, and workflow integration depth. Individual results may differ.*
### 💡 Capital Allocator vs Consumer Mindset
| Consumer Approach | **Capital Allocator Approach** |
|-------------------|--------------------------------|
| Goal: Minimize AI costs | **Goal: Maximize value arbitrage** |
| Focus: Reducing token usage | **Focus: Maximizing throughput of shippable work** |
| Action: Trim prompts, cache results | **Action: Build automated, multi-model workflows** |
| Outcome: 1.2x productivity | **Outcome: 5-20x development velocity** |
> *Development velocity improvements measured through PR throughput, review cycle time, and automation coverage during 30-day validation. Actual results vary significantly based on project complexity, team adoption, and workflow integration. The "5-20x" range represents observed improvements in specific metrics like PR merge time (42min vs 2-4hr) and review automation, not guaranteed universal outcomes.*
Transform Claude Code into an autonomous development powerhouse through simple command hooks that enable complex workflow orchestration and systematic automation deployment.
## CLAUDE CODE SELF-SETUP
```bash
# Point Claude Code to this repository and let it set up what you need
"I want to use the commands from https://github.com/jleechanorg/claude-commands - please analyze what's available and set up the ones that would be useful for my project"
Claude Code will intelligently analyze your project, recommend relevant commands, and configure everything automatically.
This isn't just a collection of commands - it's a complete cognitive capital allocation system that transforms how you develop software using the LLM Capital Efficiency Framework's three foundational pillars.
Each command operates in isolated branch environments, enabling conflict-free parallel development:
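One simple way to get that isolation is a git worktree per task, so each command or agent works against its own checkout and branch (a sketch; the paths and branch names are assumptions, though a create_worktree.sh helper is among the scripts /scaffold copies):

```bash
# Hypothetical branch isolation: each task gets its own worktree and branch,
# so parallel commands never touch each other's working directory.
git worktree add ../myrepo-fix-auth  -b fix/auth-timeout
git worktree add ../myrepo-dashboard -b feat/user-dashboard
git worktree list   # main checkout plus the two isolated environments
```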
Command composition creates sophisticated autonomous workflows:
- /pr chains: /think → /execute → /push → /copilot → /review
- /copilot orchestrates: analyze → fix → test → document → verify
- /execute combines: plan → approve → implement → validate
# Multi-command composition in single request (like /fake command)
"/arch /thinku /devilsadvocate /diligent" # → comprehensive code analysis
# Sequential workflow chains
"/think about auth then /execute the solution" # → analysis → implementation
# Conditional execution flows
"/test login flow and if fails /fix then /pr" # → test → fix → create PR
### /cerebras - Hybrid Code Generation (19.6x Faster)
What It Does: Revolutionary hybrid workflow using Cerebras Inference API for 19.6x faster code generation (500ms vs 10s), with Claude as ARCHITECT and Cerebras as BUILDER.
Why Cerebras Is Faster: Cerebras achieves breakthrough performance through their Wafer Scale Engine (WSE-3) with 44GB of on-chip SRAM and 21 petabytes/second memory bandwidth - 7,000x more than traditional GPUs. By storing entire models on-chip, Cerebras eliminates the memory bandwidth bottleneck that limits GPU inference to hundreds of tokens/second, achieving over 2,100 tokens/second on large models.
Hybrid Architecture: Claude acts as the ARCHITECT (requirements analysis, specification, integration) and Cerebras as the BUILDER (high-speed code generation), with decisions logged to docs/{branch}/cerebras_decisions.md
Perfect For: Well-defined code generation, boilerplate, templates, unit tests, algorithms, documentation, repetitive patterns
Real Example:
/cerebras "create React component for user dashboard with TypeScript"
↓
Claude: Analyzes requirements → Creates detailed specification
Cerebras: Generates component code at 19.6x speed (500ms via WSE-3)
Claude: Integrates, validates, and documents decision
Speed Comparison: roughly 500ms via Cerebras (WSE-3) vs ~10s for Claude-only generation (about 19.6x faster on typical tasks)
### /execute - Plan-Approve-Execute Composition
What It Does: Combines /plan → /preapprove → /autoapprove → execute in one seamless workflow with TodoWrite tracking.
3-Phase Workflow:
1. Planning: /plan creates a TodoWrite checklist and displays the execution plan
2. Approval chain: /preapprove validation followed by /autoapprove with the message "User already approves - proceeding with execution"
3. Implementation: work through the checklist to completion

Real Example:
/execute "fix login button styling"
↓
Phase 1 - Planning (/plan): Creates TodoWrite checklist and execution plan
Phase 2 - Approval Chain: /preapprove → /autoapprove → "User already approves - proceeding"
Phase 3 - Implementation: [Check styles → Update CSS → Test → Commit]
### /plan - Manual Approval Development Planning
What It Does: Structured development planning with explicit user approval gates.
Perfect For: Complex architectural changes, high-risk modifications, learning new patterns.
Workflow: create a structured plan → present it for explicit user approval → execute only after approval is given
### /pr - Complete Development Lifecycle
What It Does: Executes the complete 5-phase development lifecycle: /think → /execute → /push → /copilot → /review
Mandatory 5-Phase Workflow:
Phase 1: Think - Strategic analysis and approach planning
↓
Phase 2: Execute - Implementation using /execute workflow
↓
Phase 3: Push - Commit, push, and create PR with details
↓
Phase 4: Copilot - Auto-executed PR analysis and issue fixing
↓
Phase 5: Review - Comprehensive code review and validation
Real Example:
/pr "fix login timeout issue"
↓
Think: Analyze login flow and timeout causes →
Execute: Implement timeout fix systematically →
Push: Create PR with comprehensive details →
Copilot: Fix any automated feedback →
Review: Complete code review and validation
### /copilot - Universal PR Composition with Execute
What It Does: Targets current branch PR by default, delegates to /execute for intelligent 6-phase autonomous workflow.
6-Phase Autonomous System:
All six phases (GitHub status verification, comment collection, fix resolution, comment replies, reply verification, final push sync) run through /execute optimization.
Perfect For: Full autonomous PR management without manual intervention.
Real Example:
/copilot # Auto-targets current branch PR
↓
🎯 Targeting current branch PR: #123 →
Delegating to /execute for intelligent workflow →
Analyze → Fix → Test → Document → Reply → Verify
### /orch - Multi-Agent Task Delegation System
What It Does: Delegates tasks to autonomous tmux-based agents working in parallel across different branches.
Multi-Agent Architecture:
Real Example:
/orch "add user notifications system"
↓
Frontend Agent: notification UI components (parallel)
Backend Agent: notification API endpoints (parallel)
Testing Agent: notification test suite (parallel)
Opus-Master: architecture review and integration
↓
All agents work independently → Create individual PRs → Integration verification
Cost: $0.003-$0.050 per task (highly efficient)
Monitoring:
/orch monitor agents # Check agent status
/orch "What's running?" # Current task overview
tmux attach-session -t task-agent-frontend # Direct agent access
Each command is a simple .md file that Claude Code reads as executable instructions. When you type /pr "fix bug", Claude reads .claude/commands/pr.md and executes the workflow it describes.
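For instance, a brand-new command can be defined with nothing more than a markdown file; the command name and steps below are a made-up illustration, not the actual pr.md shipped with this export:

```bash
# Hypothetical minimal slash command: the file name becomes the command name.
mkdir -p .claude/commands
cat > .claude/commands/hello.md <<'EOF'
# /hello - Example Command

When this command is invoked, YOU (Claude) must execute these steps immediately:
1. Run `git status --short` and summarize any uncommitted changes.
2. Greet the user and suggest the next workflow command to run.
EOF
```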
You can chain multiple commands in one request:
# Sequential execution
"/think about authentication then /arch the solution then /execute it"
# Conditional execution
"/test the login flow and if it fails /fix it then /pr the changes"
# Parallel analysis
"/debug the performance issue while /research best practices then /plan implementation"
# Full workflow composition
"/analyze the codebase /design a solution /execute with tests /pr with documentation then /copilot any issues"
### /copilot - 6-Layer Universal Composition System
Layer 1: Universal Composition Bridge
├── /execute - Intelligent workflow optimization
├── Task complexity analysis (PR size, comment count, CI failures)
├── Execution strategy determination (parallel vs sequential)
├── Resource allocation and optimization decisions
└── Orchestrates all 6 phases through universal composition
Layer 2: GitHub Status Verification (Phase 1 - MANDATORY)
├── gh pr view - Fresh GitHub state verification
├── Status evaluation - CI, mergeable, comment analysis
├── Skip condition assessment - Optimization opportunity detection
└── Execution path determination - Full vs optimized workflow
Layer 3: Data Collection Layer (Phase 2 - CONDITIONAL)
├── /commentfetch - Complete comment/review data gathering
│   ├── GitHub API pagination handling
│   ├── Comment threading analysis
│   └── Review status compilation
├── Optimization bypass - Skip when zero comments detected
└── Smart verification - Quick check before full collection
Layer 4: Resolution Engine (Phase 3 - CONDITIONAL)
├── /fixpr - CI failure and conflict resolution
│   ├── Test failure analysis and automatic fixes
│   ├── Merge conflict detection and resolution
│   ├── Build error correction
│   └── Code quality improvements
├── Skip logic - Bypass when CI passing and mergeable
└── Status verification - Always check before skipping
Layer 5: Communication Layer (Phase 4 - CONDITIONAL)
├── /commentreply - Enhanced context comment responses
│   ├── Comment threading with ID references
│   ├── Commit hash inclusion for proof of work
│   ├── Technical context enhancement
│   └── Status marker integration (✅ DONE / ❌ NOT DONE)
├── Optimization - Skip when zero unresponded comments
└── Delegation trust - Let commentreply handle verification
Layer 6: Validation & Sync (Phases 5-6 - CONDITIONAL/MANDATORY)
├── /commentcheck - Enhanced context reply verification (Phase 5)
│   ├── Coverage validation for processed comments
│   ├── Context quality assessment
│   └── Threading completeness verification
├── /pushl - Final synchronization (Phase 6 - MANDATORY)
│   ├── Local to remote sync with verification
│   ├── GitHub API confirmation
│   └── Push success validation
└── Merge approval protocol integration - Zero tolerance enforcement
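The Layer 2 status check corresponds to ordinary gh CLI queries; a sketch of the kind of call it implies (the exact fields inspected are an assumption, not a quote from the command file):

```bash
# Hypothetical fresh-state check used to choose between the full and optimized paths.
gh pr view --json state,mergeable,reviewDecision,statusCheckRollup \
  --jq '{state, mergeable, reviewDecision,
         failing: [.statusCheckRollup[] | select(.conclusion == "FAILURE") | .name]}'
```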
### /execute - 3-Layer Orchestration System
Layer 1: Planning & Analysis
├── /think - Task decomposition
├── /arch - Technical approach
└── /research - Background investigation
Layer 2: Auto-Approval & Setup
├── TodoWrite initialization
├── Progress tracking setup
└── Error recovery preparation
Layer 3: Implementation Loop
├── /plan - Detailed execution steps
├── /test - Continuous validation
├── /fix - Issue resolution
├── /integrate - Change integration
└── /pushl - Completion with sync
### /pr - 4-Layer Development Lifecycle
Layer 1: Analysis
├── /debug - Issue identification
├── /arch - Solution architecture
└── /research - Context gathering
Layer 2: Implementation
├── /execute - Code changes
├── /test - Validation
└── /coverage - Quality verification
Layer 3: Git Workflow
├── /newbranch - Branch management
├── /pushl - Push with verification
└── /integrate - Merge preparation
Layer 4: PR Creation & Management
├── GitHub PR creation
├── /reviewstatus - Status monitoring
└── /copilot - Autonomous issue handling
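Layer 4's GitHub PR creation step maps onto a standard gh call; roughly (the title and body strings are placeholders, not what the command emits):

```bash
# Hypothetical final step of the /pr lifecycle: push the branch and open the PR.
git push -u origin "$(git branch --show-current)"
gh pr create --title "Fix authentication timeout" \
  --body "Automated PR created by the /pr workflow; see commits for details."
```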
### /orch - Multi-Agent Delegation
Agent Assignment Layer:
├── Frontend Agent (/execute frontend tasks)
├── Backend Agent (/execute API tasks)
├── Testing Agent (/execute test tasks)
└── Opus-Master (/arch + integration)
Coordination Layer:
├── Redis-based communication
├── Task dependency management
└── Resource allocation
Integration Layer:
├── Individual PR creation per agent
├── Cross-agent validation
└── Final integration verification
TodoWrite Integration: All commands break down into trackable steps
/execute "build dashboard"
# Internally creates: [plan task] → [implement components] → [add tests] → [create PR]
Memory Enhancement: Commands learn from previous executions
/learn "React patterns" then /execute "build React component"
# Second command applies learned patterns automatically
Git Workflow Integration: Automatic branch management and PR creation
/pr "fix authentication"
# Internally: /newbranch → code changes → /pushl → GitHub PR creation
Error Recovery: Smart handling of failures and retries
/copilot # If tests fail → /fix → /test → retry until success
### /scaffold - Repository Setup and Infrastructure Deployment
What It Does: Rapidly scaffolds essential development infrastructure by copying proven development scripts from the claude-commands repository into any target repository with intelligent technology stack adaptation.
Core Infrastructure Components:
- create_worktree.sh, integrate.sh, schedule_branch_work.sh
- claude_mcp.sh, claude_start.sh, codebase_loc.sh, coverage.sh, create_snapshot.sh, loc.sh, push.sh, resolve_conflicts.sh, run_lint.sh, run_tests_with_coverage.sh, setup-github-runner.sh, sync_branch.sh

LLM Adaptation Intelligence:
Real Example:
/scaffold
↓
1. Copies 17 essential development scripts to your project
2. Detects your tech stack (e.g., Node.js with TypeScript)
3. Adapts run_lint.sh to use 'npm run lint' or 'npx eslint'
4. Adapts run_tests_with_coverage.sh to use 'jest --coverage'
5. Updates package.json with scaffold script shortcuts
6. Provides integration guidelines for your specific stack
Perfect For: New project setup, standardizing development workflows, rapid infrastructure deployment, team onboarding acceleration
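As an illustration of that adaptation step, an adapted run_lint.sh for the Node.js/TypeScript case above might end up looking roughly like this (a sketch, not the script the command actually emits):

```bash
#!/usr/bin/env bash
# Hypothetical run_lint.sh after /scaffold adapts it for a Node.js + TypeScript stack.
set -euo pipefail
if [ -f package.json ] && grep -q '"lint"' package.json; then
  npm run lint                   # prefer the project's own lint script when defined
else
  npx eslint . --ext .ts,.tsx    # fall back to a direct eslint invocation
fi
```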
The testing framework demonstrates LLM-Native Testing patterns that work across any web application or system, using AI to create, execute, and validate complex test scenarios.
Each test follows a structured .md format designed for LLM execution:
# Test: [Component/Feature Name]
## Pre-conditions
- Server requirements, test data setup, environment configuration
## Expected Results
**PASS Criteria**: Specific conditions for test success
**FAIL Indicators**: What indicates test failure
## Bug Analysis
**Root Cause**: Analysis of why test fails
**Fix Location**: Files/components that need changes
The framework works across any web application:
# Test: E-commerce Checkout Flow
# Test: Social Media Login
# Test: API Documentation Interface
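To make the format concrete, here is a hypothetical filled-in test for the first scenario above, written as an .md file an LLM could execute (the directory name and every detail of the content are invented for illustration):

```bash
# Hypothetical LLM-native test following the structured .md format shown earlier.
mkdir -p testing_llm
cat > testing_llm/test_checkout_flow.md <<'EOF'
# Test: E-commerce Checkout Flow

## Pre-conditions
- Dev server running locally with seeded test products and a logged-in test user

## Expected Results
**PASS Criteria**: Order confirmation page shows the correct total and an order ID
**FAIL Indicators**: Payment step errors, cart emptied unexpectedly, missing confirmation

## Bug Analysis
**Root Cause**: (filled in by the LLM when the test fails)
**Fix Location**: (files/components identified during analysis)
EOF
```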
### Integration with Command Composition
Meta-testing integrates seamlessly with the command system:
```bash
# Red-Green-Refactor with LLM tests
/tdd "authentication flow" # Creates failing LLM test
/testuif test_auth.md # Execute test with Playwright MCP
/fix "implement OAuth flow" # Fix code to make test pass
/testuif test_auth.md # Verify test now passes
```

LLM tests incorporate comprehensive matrix testing.
The orchestration system is an active development prototype that demonstrates autonomous multi-agent development workflows.
Agent Assignment Layer:
├── Frontend Agent (/execute frontend tasks)
├── Backend Agent (/execute API tasks)
├── Testing Agent (/execute test tasks)
└── Opus-Master (/arch + integration)
Coordination Layer:
├── Redis-based communication
├── Task dependency management
└── Resource allocation
Integration Layer:
├── Individual PR creation per agent
├── Cross-agent validation
└── Final integration verification
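The Redis-based coordination noted above could be as simple as per-agent task queues; a sketch under that assumption (the queue names and message shape are invented for illustration and require a reachable redis-server):

```bash
# Hypothetical agent coordination over Redis lists.
redis-cli LPUSH queue:task-agent-frontend '{"task":"notification UI components","branch":"feat/notify-ui"}'
redis-cli LPUSH queue:task-agent-backend  '{"task":"notification API endpoints","branch":"feat/notify-api"}'
# Each agent blocks until its next assignment arrives:
redis-cli BRPOP queue:task-agent-frontend 0
```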
# Basic task delegation
/orch "implement user dashboard with tests and documentation"
# Complex multi-component feature
/orch "add notification system with real-time updates, email integration, and admin controls"
# System monitoring
/orch monitor agents # Check agent status
/orch "What's running?" # Current task overview
tmux attach-session -t task-agent-frontend # Direct agent access
✅ Working Features:
🚧 In Development:
🔮 Future Roadmap:
This export contains 144 commands that transform Claude Code from a productivity tool into a cognitive capital allocation platform:
Note: Command count is automatically updated during export to reflect the actual number of commands, libraries, and utilities included.
# Let Claude Code analyze and set up what you need
"I want to use the commands from https://github.com/jleechanorg/claude-commands - please analyze what's available and set up the ones that would be useful for my project"
Claude Code will:
- Copy the relevant commands into .claude/commands/
- Configure .claude/settings.json as needed
- Recommend core workflow commands to start with (/execute, /pr, /copilot)
# After setup, use powerful workflow commands
/execute "implement user authentication" # โ Full implementation workflow
/pr "fix performance issues" # โ Analysis โ fix โ PR creation
/copilot # โ Fix PR conflicts and comments
### /secondo - Multi-Model Second Opinions
What It Does: Get comprehensive feedback from 5 AI models (Cerebras, Gemini, Perplexity, OpenAI, Grok) with synthesized recommendations for design decisions, code reviews, and bug analysis.
Installation (from Your Project repository):
# Clone the repository with the secondo plugin
git clone https://github.com/jleechanorg/your-project.com.git
cd your-project.com
# Install the complete plugin suite (includes secondo MCP server)
/plugin install .
# Authenticate with AI Universe backend (required for secondo)
node scripts/auth-cli.mjs login
What Gets Installed:
- /secondo command: Quick multi-model feedback interface
- /second_opinion command: Detailed second opinion workflow

Usage:
# Get comprehensive feedback (all types)
/secondo
# Specific feedback type
/secondo design # Design review
/secondo code-review # Code quality analysis
/secondo bugs # Bug detection
# With custom question
/secondo "Should I use Redis or in-memory caching for rate limiting?"
Features:
Architecture:
Claude Code → Secondo MCP Server (port 3003) → secondo-cli.sh → AI Universe Backend
Requirements:
- One-time authentication via node scripts/auth-cli.mjs login

Troubleshooting:
# Check authentication status
node scripts/auth-cli.mjs status
# Re-authenticate if needed
node scripts/auth-cli.mjs login
# Check MCP server logs
tail -f /tmp/secondo-mcp-test.log
Commands contain placeholders that need adaptation:
- $PROJECT_ROOT/ → Your project's main directory
- your-project.com → Your domain/project name
- $USER → Your username
- TESTING=true python → Your test execution pattern

Before (exported):
TESTING=true python $PROJECT_ROOT/test_file.py
After (adapted):
npm test src/components/test_file.js
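If you prefer to adapt those placeholders in bulk rather than file by file, something like this works (a sketch assuming GNU sed/xargs and that the commands live under .claude/commands; review git diff afterwards and substitute your own project name):

```bash
# Hypothetical bulk placeholder adaptation across the exported command files.
grep -rl 'PROJECT_ROOT' .claude/commands | xargs -r sed -i "s|\$PROJECT_ROOT|$(pwd)|g"
grep -rl 'your-project.com' .claude/commands | xargs -r sed -i 's|your-project.com|myapp.example.com|g'
```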
Workflow Orchestrators: /pr, /copilot, /execute, /orch - Complete multi-step workflows
Cognitive Commands: /think, /arch, /debug, /learn - Analysis and planning
Infrastructure Commands: /scaffold - Repository setup and development environment configuration
Operational Commands: /headless, /handoff, /orchestrate - Protocol enforcement
This is a reference export from a working Claude Code project. Commands may need adaptation for your specific environment, but Claude Code excels at helping you customize them.
Transform your development process from consuming AI tools to deploying cognitive capital for exponential leverage where single commands handle complex multi-phase processes.
The productivity gains available right now represent the largest arbitrage opportunity in software development. Most developers are still thinking like consumers, leaving exponential leverage on the table.
This framework is how you seize that opportunity.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>