Prompt Architect
Transform vague prompts into expert-level, structured prompts using 27 research-backed frameworks across 7 intent categories.
Works with Claude Code, ChatGPT, Gemini CLI, Cursor, GitHub Copilot, Windsurf, OpenAI Codex, and 30+ Agent Skills-compatible tools.

Quick Start
npx @ckelsoe/prompt-architect
The interactive installer detects your AI agents (Claude Code, Gemini CLI, Cursor, Copilot, Codex, and more) and lets you choose where to install.
Important: Use npx, not npm install. The npx command runs the interactive multi-agent installer; npm install installs only to Claude Code, silently, via the postinstall hook.
Requires .npmrc with @ckelsoe:registry=https://npm.pkg.github.com and a GitHub token with read:packages scope.
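The registry requirement above can be captured in a project-level .npmrc. The second line follows npm's standard `_authToken` convention and reads the token from an environment variable (the variable name `GITHUB_TOKEN` is just an example):

```ini
@ckelsoe:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```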
Table of Contents
Overview
Prompt Architect is an Agent Skills-compatible skill that elevates your prompting capabilities through:
- Intelligent Analysis - Evaluates prompts across 5 quality dimensions (clarity, specificity, context, completeness, structure)
- Framework Recommendation - Suggests the best framework(s) for your specific use case with clear reasoning
- Guided Dialogue - Asks targeted clarifying questions to gather missing information progressively
- Systematic Application - Applies selected framework to transform your prompt
- Iterative Refinement - Continues improving based on feedback until perfect
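The analysis step above can be pictured as scoring a prompt on the five quality dimensions. The sketch below is purely illustrative: the heuristics, weights, and names are assumptions of this example, not the skill's actual implementation.

```python
from dataclasses import dataclass

# The five dimensions named in the README.
DIMENSIONS = ("clarity", "specificity", "context", "completeness", "structure")

@dataclass
class PromptScore:
    scores: dict  # dimension -> 0..10

    @property
    def overall(self) -> float:
        return sum(self.scores.values()) / len(self.scores)

    def weakest(self) -> str:
        # Dimension with the lowest score; a real analyzer would use this
        # to pick which clarifying questions to ask first.
        return min(self.scores, key=self.scores.get)

def analyze(prompt: str) -> PromptScore:
    # Toy heuristics standing in for the real LLM-driven analysis.
    words = prompt.split()
    scores = {
        "clarity": 10 if len(words) >= 5 else 4,
        "specificity": min(10, sum(w[0].isupper() for w in words)),
        "context": 8 if "because" in prompt.lower() else 3,
        "completeness": min(10, len(words) // 3),
        "structure": 9 if "\n" in prompt else 5,
    }
    return PromptScore(scores)

score = analyze("Summarize this report")
print(score.weakest(), score.overall)
```

A vague one-liner like "Summarize this report" scores poorly on specificity and completeness, which is exactly the gap the guided dialogue step is meant to close.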
Target Audience:
- Developers using AI coding agents (Claude Code, Gemini CLI, Cursor, Copilot, etc.)
- Prompt engineers optimizing LLM interactions
- AI practitioners seeking systematic prompt improvement
- Teams wanting consistent, high-quality prompts
Key Features
27 Research-Backed Frameworks Across 7 Intent Categories
| Framework | Best For | Complexity |
|---|---|---|
| CO-STAR | Content creation, writing tasks | High |
| RISEN | Multi-step processes, procedures | High |
| CRISPE | Comprehensive prompts with multiple output variants | High |
| BROKE | Business deliverables with OKR-style measurable outcomes | Medium |
| RISE-IE | Data analysis, transformations (Input-Expectation) | Medium |
| RISE-IX | Content creation with examples (Instructions-Examples) | Medium |
| TIDD-EC | High-precision tasks with explicit dos/don'ts | Medium |
| RACE | Expert tasks requiring role + context + outcome clarity | Medium |
| CARE | Constraint-driven tasks with explicit rules and examples | Medium |
| CTF | Simple tasks where situational context drives the prompt | Low |
| RTF | Simple, focused tasks where expertise framing matters | Low |
| APE | Ultra-minimal one-off prompts | Low |
| BAB | Rewriting, refactoring, transforming existing content | Low |
| Tree of Thought | Decisions requiring exploration of multiple approaches | Medium |
| ReAct | Agentic / tool-use tasks with iterative reasoning | Medium |
| Skeleton of Thought | Structured long-form content (outline-first) | Medium |
| Step-Back | Principle-grounded reasoning (abstract first, then specific) | Medium |
| Least-to-Most | Compositional multi-hop problems (simplest first) | Medium |
| Plan-and-Solve (PS+) | Zero-shot numerical/calculation reasoning | Low |
| Chain of Thought | Reasoning, problem-solving | Medium |
| Chain of Density | Iterative refinement, summarization | Medium |
| Self-Refine | Iterative output quality improvement (any task) | Medium |
| CAI Critique-Revise | Principle-based critique and revision (Anthropic) | Medium |
| Devil's Advocate | Strongest opposing argument against a position | Low |
| Pre-Mortem | Assume failure, identify specific causes | Low |
| RCoT | Verify reasoning by reconstructing the question | Medium |
| RPEF | Recover/reconstruct a prompt from an existing output | Low |
| Reverse Role Prompting | AI interviews you before executing | Low |
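To give a flavor of the low-complexity end of the table: RTF is commonly expanded as Role-Task-Format. A minimal illustrative prompt (the wording is my own example, not output from the skill):

```text
Role: You are a senior Python code reviewer.
Task: Review the attached function for correctness and readability.
Format: A bulleted list of issues, ordered by severity.
```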
Quality Scoring System