Engineer, test, version, and optimize prompts for LLMs. Covers prompt design patterns (few-shot, chain-of-thought, ReAct, tree-of-thought), structured output, system prompt design, prompt injection prevention, A/B testing, and evaluation.
From godmodenpx claudepluginhub arbazkhan971/godmodegodmode//promptCrafts and reviews prompts for subagents and humans. FIX mode (default) outputs improved prompt; REVIEW mode provides analysis and verdict.
/promptManages superego prompts for the project: lists available (default), switches between code/writing/learning modes, shows current details and backups.
/promptGenerate Google Stitch-ready prompts from briefs or spec files using the authoring skill
/promptGenerate a structured planning prompt from a raw idea. Analyzes codebase, discovers skills, builds detailed prompt for bishx-plan.
/promptManages superego prompts for the project: lists available (default), switches between code/writing/learning modes, shows current details and backups.
Engineer, test, version, and optimize prompts for LLMs. Covers prompt design patterns (few-shot, chain-of-thought, ReAct, tree-of-thought), structured output, system prompt design, prompt injection prevention, A/B testing, and evaluation.
/godmode:prompt # Full prompt engineering workflow
/godmode:prompt --pattern few-shot # Design with specific pattern
/godmode:prompt --pattern cot # Chain-of-thought prompt
/godmode:prompt --pattern react # ReAct agent prompt
/godmode:prompt --pattern tot # Tree-of-thought prompt
/godmode:prompt --model claude # Target specific model
/godmode:prompt --optimize # Analyze and improve existing prompt
/godmode:prompt --test # Run prompt test suite
/godmode:prompt --compare v1 v2 # A/B compare prompt versions
/godmode:prompt --harden # Audit and fix injection defenses
/godmode:prompt --json # Design for structured JSON output
/godmode:prompt --eval # Full evaluation suite
/godmode:prompt --version # Show prompt version registry
/godmode:prompt --export # Export prompt spec as YAML
prompts/<task>/prompt-spec.yamlprompts/<task>/system-prompt.mdprompts/<task>/examples.yamlprompts/<task>/tests.yaml"prompt: <task> — v<version>, <pattern>, accuracy=<val>, <N> test cases"After prompt engineering: /godmode:eval to run comprehensive evaluation, /godmode:rag to add retrieval context, or /godmode:agent to build an agent around the prompt.
/godmode:prompt Design a prompt to classify support tickets
/godmode:prompt --pattern cot Design a prompt for multi-step reasoning
/godmode:prompt --optimize Our extraction prompt is only 72% accurate
/godmode:prompt --harden Audit our chatbot for injection vulnerabilities
/godmode:prompt --compare v1.1 v1.2 Which prompt version is better?
/godmode:prompt --json Design a prompt that outputs structured JSON