Orchestrate business name discovery: gather context, generate candidates, challenge them with a scoring sub-agent, then check domain availability. Use when the user asks to name a business, startup, app, or product — or wants brand name ideas. Trigger phrases: "help me name my business", "business name ideas for", "what should I call my company", "find a business name", "brand name generator", "suggest names for my startup", "name my app", "find an available domain name for my business". Also triggered by /namesmith. Examples: <example> user: "I'm building a project management SaaS, help me name it" assistant: "I'll use the namesmith skill to generate and evaluate name candidates, then check domain availability." <commentary>User describes a product and wants name ideas — namesmith triggers.</commentary> </example> <example> user: "I need a name for my pet care startup" assistant: "Let me run namesmith to brainstorm, score, and check domains for your pet care business." <commentary>Direct naming request for a startup — namesmith triggers.</commentary> </example> <example> user: "/namesmith I'm launching a coffee subscription service targeting remote workers" assistant: "Starting namesmith with your coffee subscription context." <commentary>Explicit /namesmith invocation with business description.</commentary> </example>
From the namesmith plugin. Install with: `npx claudepluginhub grixu/cc-toolkit --plugin namesmith`. This skill uses the workspace's default tool permissions.
The goal is not to generate impressive variety — it is to surface 5 names the user will actually want to register. Keep this in mind throughout.
Key calibrations that non-experts get wrong:
- Treat .io, .ai, .app as primary targets from the start, not as fallbacks. Short coined names (≤6 chars with unusual consonant clusters) are the exception — those are often unclaimed.

Before asking questions, extract as much as possible from $ARGUMENTS. Most users provide more signal than they realize — read between the lines:
If $ARGUMENTS contains a clear description of what the product/service does, who it serves, and what tone is desired (≥ 20 words covering these points), proceed to Phase 1 immediately.
Otherwise, use AskUserQuestion to gather:
Accept short answers — do not prompt for more detail than needed.
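The Phase 0 decision above can be sketched as a small predicate. This is an illustrative sketch only — the helper name and the way signal coverage is represented are assumptions, not part of the skill's actual interface:

```python
# Phase 0 sufficiency check: proceed to Phase 1 only when the user's
# $ARGUMENTS are long enough AND cover product, audience, and tone.
REQUIRED_SIGNALS = ("what", "who", "tone")  # product, audience, desired tone

def has_enough_context(arguments: str, covered_signals: set) -> bool:
    """True when >= 20 words are supplied and all required signals appear."""
    word_count = len(arguments.split())
    return word_count >= 20 and covered_signals >= set(REQUIRED_SIGNALS)
```

If the predicate is false, fall through to the AskUserQuestion flow instead.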
MANDATORY READ before generating: Load ${CLAUDE_PLUGIN_ROOT}/references/naming-criteria.md — use Section 1 only (Generation Archetypes).
Do NOT load mcp-fallback.md or Section 2 of naming-criteria.md at this phase — they are for later phases.
Generate 15–20 name candidates that cover all 6 archetype types. This phase is high freedom — the archetype constraints prevent clustering, not artistry. Apply full creative judgment within them.
The 5-second test: For each name you generate, ask: "If someone heard this name at a conference badge, would they still remember it by end of day?" If not, the name needs more distinctiveness before you include it.
| Archetype | Example | Max names |
|---|---|---|
| Invented / coined word | Kodak, Xerox, Etsy | 4 |
| Compound word | Dropbox, GitHub, Snapchat | 4 |
| Metaphorical / evocative | Amazon, Oracle, Stripe | 3 |
| Descriptive-but-memorable | Basecamp, Mailchimp | 3 |
| Short coined (≤6 chars) | Uber, Lyft, Fiverr | 3 |
| Domain-hack friendly (root ≤8 chars) | del.icio.us style | 3 |
Rules:
- Include only names whose .io or .ai is plausibly unclaimed (.com is assumed taken for most good coined words)

Store internally as a list: [name, archetype, 1-sentence rationale]
Do NOT display this list to the user yet.
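The internal candidate list and the per-archetype caps from the table above might be represented like this. A minimal sketch, assuming plain tuples and short archetype keys (the exact labels are illustrative):

```python
# Max names per archetype, mirroring the Phase 1 table (totals 20).
ARCHETYPE_CAPS = {
    "coined": 4, "compound": 4, "metaphorical": 3,
    "descriptive": 3, "short-coined": 3, "domain-hack": 3,
}

def within_caps(candidates: list) -> bool:
    """Check that no archetype exceeds its max-names budget.

    Each candidate is a (name, archetype, rationale) tuple.
    """
    counts = {}
    for _name, archetype, _rationale in candidates:
        counts[archetype] = counts.get(archetype, 0) + 1
    return all(counts[a] <= ARCHETYPE_CAPS.get(a, 0) for a in counts)
```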
Launch an Agent call to challenge and score all names from Phase 1.
Pass the following in the prompt:
- The full candidate list as a table: Name | Archetype | Rationale
- The scoring criteria file: ${CLAUDE_PLUGIN_ROOT}/references/naming-criteria.md

The agent is defined at ${CLAUDE_PLUGIN_ROOT}/agents/name-challenger.md. Instruct the agent to follow its scoring rubric and produce the required structured output.
Parse the agent's output:
- Keep only names marked Verdict: KEEP
- survivors = the final filtered list (5–15 names)
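One way this parse step could look, including the malformed-output fallback from the edge-case table. The pipe-delimited line format shown here is an assumption about the challenger's output, not a documented contract:

```python
def parse_challenger(output: str) -> list:
    """Return (name, score) pairs for every KEEP line.

    On malformed output, fall back to treating all names as KEEP
    with a neutral score of 5, as the edge-case table specifies.
    """
    survivors = []
    try:
        for line in output.strip().splitlines():
            name, score, verdict = (part.strip() for part in line.split("|"))
            if verdict == "Verdict: KEEP":
                survivors.append((name, int(score)))
    except ValueError:  # wrong field count or non-numeric score
        return [(line.split("|")[0].strip(), 5)
                for line in output.strip().splitlines()]
    return survivors
```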
MANDATORY READ before probing: Load ${CLAUDE_PLUGIN_ROOT}/references/mcp-fallback.md.
Do NOT re-read naming-criteria.md at this phase.
This phase is low freedom — follow the probe-then-fallback pattern exactly. Do not improvise alternatives to the probe step.
Probe: Attempt one search_domains call for the first name in survivors, with tlds: [".com"].
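The probe-then-fallback pattern this phase follows can be sketched end to end. The callables stand in for the MCP tools, and their signatures here are assumptions; the control flow (one probe, no retries, wholesale switch to fallback) is the fixed part:

```python
def check_domains(survivors, search_domains, check_availability, fallback):
    """Probe once with the first survivor; on any failure, switch the
    entire phase to the fallback path without retrying."""
    try:
        search_domains(survivors[0], tlds=[".com"])  # the single probe call
    except Exception:  # tool not found, HTTP error, or timeout
        return fallback(survivors)
    return {name: check_availability(name, [".com", ".io", ".co", ".app"])
            for name in survivors}
```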
If the probe succeeds (MCP healthy):
For each name in survivors, call check_domain_availability for .com, .io, .co, and .app.
- If .com is taken for a name, also call generate_domain_variations to surface creative alternatives

If the probe fails (tool not found, HTTP error, or timeout):
- Follow the instructions in mcp-fallback.md
- Mark availability cells as — (manual check)

Display a results table:
## Business Name Candidates
| Name | Type | .com | .io | .app | Notes |
|-----------|--------------------|------|-----|------|--------------------------------|
| Veltora | Coined | ✓ | ✓ | ✓ | |
| NestRun | Compound | ✗ | ✓ | ✓ | .com taken; nestrun.io free |
| ... | ... | ... | ... | ... | ... |
If domain check was unavailable, replace availability columns with a single Domain column containing the manual check URL from mcp-fallback.md.
If the challenger threshold was relaxed, add a note: * Threshold relaxed to top 5 by score — consider running another round for stronger candidates.
Then use AskUserQuestion to offer three options:
If option 1 (Explore):
Ask which name. Call generate_domain_variations for that name across multiple TLDs and suffix patterns. Present the variations with availability status. If generate_domain_variations is unavailable, list common TLD alternatives manually (.ai, .co, .app, .io, .dev, -hq.com, get[name].com).
If option 2 (Another round): Summarize the rejection reasons from the challenger output. Identify the most common failure dimension (e.g., "5 names failed distinctiveness, 3 failed context fit"). Adjust generation accordingly — if distinctiveness was the top failure, reduce metaphorical/evocative names and increase coined. Return to Phase 1.
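Identifying the most common failure dimension is a simple tally. A sketch of that step, assuming rejection reasons arrive as a flat list of dimension labels:

```python
from collections import Counter

def top_failure(rejections: list) -> str:
    """Return the most common failure dimension, e.g. 'distinctiveness',
    so the next generation round can compensate for it."""
    return Counter(rejections).most_common(1)[0][0]
```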
If option 3 (Done): End cleanly. Do not summarize unless the user asks.
| Scenario | Action |
|---|---|
| MCP tool not found | Follow mcp-fallback.md UNAVAILABLE template |
| MCP returns HTTP error or timeout | Follow mcp-fallback.md DEGRADED template; do not retry |
| All names rejected by challenger | Take top 5 by score; note threshold was relaxed |
| Challenger output is malformed / unparseable | Treat all names as KEEP with score 5; add note about parsing failure |
| User provides < 20 words of context | Phase 0 clarifying questions handle this |
| Generation produces near-identical names | Deduplicate before Phase 2 |
| generate_domain_variations tool unavailable | Use manual TLD alternatives listed in Phase 4 Explore path |
| Agent tool unavailable | Perform challenger scoring inline using the rubric from naming-criteria.md Section 2 |
| User hits "another round" 3+ times | Surface the pattern: ask user to revisit the brief before generating again |
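The "all names rejected" row above relaxes the threshold to the top 5 by score. A minimal sketch of that selection, with the returned note matching the flag Phase 4 adds to the results table:

```python
def relax_threshold(scored: list, top_n: int = 5):
    """Take the top-N (name, score) pairs by score and return them
    with a note that the threshold was relaxed."""
    survivors = sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
    return survivors, "Threshold relaxed to top %d by score" % top_n
```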