npx claudepluginhub nityeshaga/claude-home-base --plugin tactical

This skill uses the workspace's default tool permissions.
This skill helps create high-quality prompts and instructions for AI systems by treating AI as a genius human teammate who needs clear, context-rich communication.
This skill is built on two foundational principles:
Modern AI models have both high intelligence and high emotional intelligence. They don't need tricks or "prompt engineering hacks"—they need what any smart remote teammate needs: clear written communication with sufficient context.
This means, by default:
I say "by default" because sometimes you actually do need to write a step-by-step tutorial, in which case you should be versatile enough to do so.
This skill includes four reference documents. Read them in full as needed:
Writing for AI Teammates - Core philosophy covering:
Load this file at the start of every prompt creation task. It's your primary reference.
Prompt Framework - Comprehensive guide covering:
Always read this detailed framework to understand best practices based on how Anthropic writes their prompts.
GPT-5 Prompting Guide - GPT-5-specific patterns:
Only load this file if the user explicitly mentions they're targeting GPT-5, OpenAI models, or asks for GPT-5 optimization after seeing the initial draft.
Prompting Philosophy - A practical guide showing:
When the user asks for help with a prompt, quickly assess:
Type of prompt needed:
Available context:
Domain Knowledge is Gold
Users often have valuable expertise that AI wouldn't naturally prioritize:
Output Specifications are Critical Context
Output format preferences are legitimate requirements, not over-engineering. AI knows how to build a table—it doesn't know which table you want.
AI has no way of knowing:
Know What to Cut vs Keep:
| Remove | Keep |
|---|---|
| Punishment/reward language | Table structures with column/row specs |
| Step-by-step discovery process | File naming conventions |
| Explaining basic formulas | Scoring rubrics with examples |
| Tutorial hand-holding | Exact output format examples |
| "If I see X, I will punish you" | Folder structures for deliverables |
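As a concrete illustration, here is a hypothetical before/after fragment showing the distinction in practice (the instructions, column names, and filename are invented for the example):

```markdown
<!-- Remove: punishment language and tutorial hand-holding -->
If you skip any quarter, you will be penalized. First, open the filing.
Then, find the income statement. Then, locate the revenue line...

<!-- Keep: the concrete output specification -->
Build a table with one row per quarter (last 8 quarters) and columns:
Quarter | Revenue | Revenue YoY % | EPS | EPS YoY %
Save it as `earnings-summary.md`.
```

The removed lines tell the AI things it already knows (or threaten it); the kept lines are requirements the AI could never infer on its own.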
Abbreviation Creates Ambiguity:
Bad: "Historical table (8Q: Revenue, EPS, acceleration in bps)"
Good: "Build a table showing the last 8 quarters:
- Revenue
- Revenue YoY Growth %
- Note if accelerating/decelerating (by how many basis points quarter-over-quarter)
- EPS
- EPS YoY Growth %"
The abbreviated version loses precision. The full version tells AI exactly what rows to include and how to annotate them.
"Assume Intelligence" Has Boundaries:
Don't hesitate to ask the user for output specifications—they're not fluff, they're requirements.
When you do ask the user, ask 1-2 questions at a time. Each question should be one sentence, followed by a second sentence explaining why you need that information.
Example:
"Do you use any specific formats for the earnings report? Asking because sometimes hedge funds have a specific format that they use in their work."
Load the Prompt Framework reference first, then:
Choose the right structure based on prompt type:
Apply universal principles:
Avoid anti-patterns:
Always create the prompt as a markdown file.
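As a sketch of what the finished markdown file might look like (the section names, ticker path, and task details are illustrative, not prescribed by this skill):

```markdown
# Earnings Summary Prompt (example)

## Context
You are reviewing quarterly earnings for a long/short equity fund.
The analyst cares about revenue acceleration, measured in basis points.

## Task
Summarize the latest 10-Q against the prior 8 quarters.

## Output format
- A markdown table: Quarter | Revenue | Revenue YoY % | EPS | EPS YoY %
- One paragraph noting acceleration/deceleration in bps
- Save as `earnings/<ticker>-<quarter>.md`
```

Note that most of the file is context and output specification, not step-by-step instructions, which is exactly the balance the principles above call for.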
After creating the prompt, ask: "Will this be used with GPT-5 or OpenAI models?"
If yes, you MUST load the GPT-5 Prompting Guide and perform a revision pass:
Official GPT-5 prompting guide
You're not doing "prompt engineering"—you're helping someone communicate clearly with an intelligent teammate. Focus on clarity, context, and decisiveness. Trust the AI to be smart; give them what they need to be effective in your specific context.