This skill should be used when the user uses ambiguous terminology like "make it talk", "we need an api", "make it portable", "check for gaps", asks meta-questions like "am I making sense?", "does this make sense?", mentions being a "non-technical user", uses vague action verbs ("make it work", "do the thing"), mixes domain languages, uses invented terms, or when detecting semantic drift between human natural language and technical precision. Provides semantic translation, disambiguation, and domain knowledge mapping across Autogen, Langroid, MCP (Model Context Protocol), UTCP (Universal Tool Calling Protocol), FastAPI, Git/Gitflow, SRE (Site Reliability Engineering), and Memory Graphs domains. Bridges the gap between user intent and technical specificity through ontological translation.
Detects ambiguous user language like "make it talk" or "does this make sense?" and translates it into precise technical concepts across 8 domains (Autogen, Langroid, MCP, UTCP, FastAPI, Git, SRE, Memory Graphs). Triggers when vague action verbs, meta-questions, or domain-crossing terminology appear, then queries knowledge sources to present clarification options before proceeding.
/plugin marketplace add jcmrs/jcmrs-plugins
/plugin install semantic-linguist@jcmrs-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
AI-DOMAIN-ADDITION-GUIDE.md
examples/autogen-mappings.md
examples/common-ambiguities.md
examples/fastapi-mappings.md
examples/git-gitflow-mappings.md
examples/langroid-mappings.md
examples/mcp-mappings.md
examples/memory-graphs-mappings.md
examples/sre-mappings.md
examples/utcp-mappings.md
knowledge/ambiguous-terms.json
knowledge/ontology-graph.json
knowledge/technical-mappings.json
references/cognitive-framework.md
references/decision-trees.md
references/domain-ontologies.md
references/translation-patterns.md

Prevent miscommunication, assumptions, and hallucinations by translating ambiguous user terminology into precise technical concepts across multiple domains. Act as a semantic bridge between human natural language and technical specificity, mapping concepts across Autogen, Langroid, MCP, UTCP, FastAPI, Git/Gitflow, SRE, and Memory Graphs ecosystems.
This skill provides semantic translation and disambiguation across 8 technical domains: Autogen, Langroid, MCP, UTCP, FastAPI, Git/Gitflow, SRE, and Memory Graphs.
Users frequently encounter the "abyss" between natural language intent and technical precision:
Without intervention, these ambiguities lead to:
Trigger semantic validation when detecting:
Ambiguous Action Verbs
Meta-Questions (User Seeking Validation)
Domain-Crossing Language
Unclear References
Scope Ambiguity
Analyze the user's message for ambiguity signals. Use pattern matching from scripts/detect-ambiguity.py or manual analysis.
High-confidence triggers (always validate):
Moderate-confidence triggers (validate if >80% confidence):
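As a rough sketch, trigger detection can be simple phrase-list matching. The phrases and scores below are illustrative stand-ins, not the contents of the skill's knowledge files:

```python
# Illustrative trigger phrases -- the real lists live in knowledge/ambiguous-terms.json.
HIGH_CONFIDENCE = ["am i making sense", "does this make sense", "make it talk"]
MODERATE_CONFIDENCE = ["make it work", "check for gaps", "make it portable"]

def ambiguity_confidence(message: str) -> float:
    """Return a rough confidence score that the message is ambiguous."""
    text = message.lower()
    if any(phrase in text for phrase in HIGH_CONFIDENCE):
        return 1.0   # always validate
    if any(phrase in text for phrase in MODERATE_CONFIDENCE):
        return 0.85  # validate if above the 80% threshold
    return 0.0       # no known trigger

print(ambiguity_confidence("Can you make it talk to the other agent?"))
```

In practice scripts/detect-ambiguity.py would own this logic; the sketch only shows the shape of the scoring.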
Query knowledge sources in this specific order for efficiency:
1. Static Domain Knowledge (First - Fastest)
knowledge/ambiguous-terms.json for known ambiguous phrases
knowledge/technical-mappings.json for domain-specific translations
knowledge/ontology-graph.json for conceptual relationships

2. External Documentation (Second - Authoritative)
3. Codebase Validation (Third - Context-Specific)
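Assuming each source exposes a lookup callable (a hypothetical interface, not the skill's actual API), the fastest-first ordering can be sketched as:

```python
def query_knowledge(term, sources):
    """Query sources fastest-first, stopping at the first one that answers."""
    for name in ("static", "external", "codebase"):
        lookup = sources.get(name)
        if lookup is None:
            continue
        result = lookup(term)
        if result:
            return name, result
    return None, None

# Usage with stub lookups standing in for the real sources:
sources = {
    "static": {"make it talk": ["ConversableAgent.send()"]}.get,
    "external": lambda term: None,   # e.g. documentation search
    "codebase": lambda term: None,   # e.g. grep over the project
}
print(query_knowledge("make it talk", sources))
```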
Map ambiguous terminology to precise technical concepts using domain knowledge.
Translation process:
Example translation:
Ambiguous: "make it talk"
Domain: Autogen
Possible translations:
- ConversableAgent.send() (confidence: 0.8, context: single message)
- register ConversableAgent (confidence: 0.7, context: enable conversation)
- GroupChat setup (confidence: 0.5, context: multi-agent conversation)
Never assume. Always verify understanding with the user.
Present options conversationally:
I notice "[ambiguous term]" could mean different things:
1. [Precise interpretation 1] - [Brief context]
2. [Precise interpretation 2] - [Brief context]
3. [Precise interpretation 3] - [Brief context]
[Ask clarifying question based on context]
Tone guidelines:
Wait for confirmation before proceeding with implementation.
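The template above can be rendered mechanically. A minimal sketch, where the closing question is a generic stand-in for the context-specific one:

```python
def format_clarification(term, options):
    """Render the clarification template for (interpretation, context) pairs."""
    lines = [f'I notice "{term}" could mean different things:']
    for i, (interpretation, context) in enumerate(options, 1):
        lines.append(f"{i}. {interpretation} - {context}")
    # In practice the closing question should be tailored to the context.
    lines.append("Which of these matches your intent?")
    return "\n".join(lines)

print(format_clarification("make it talk", [
    ("ConversableAgent.send()", "send a single message"),
    ("GroupChat setup", "multi-agent conversation"),
]))
```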
User message received
├── Contains meta-question? ("am I making sense?")
│ ├── Yes → HIGH confidence, validate immediately
│ └── No → Continue analysis
├── Contains ambiguous action verb? ("make it talk")
│ ├── Yes → Check domain context
│ │ ├── Clear domain → Query knowledge, translate
│ │ └── Unclear domain → Ask which framework/library
│ └── No → Continue analysis
├── Contains vague scope? ("check for gaps", "make it portable")
│ ├── Yes → Query knowledge for common interpretations
│ │ ├── Multiple viable → Present options
│ │ └── One clear match → Confirm with user
│ └── No → Continue analysis
├── Contains domain-crossing language?
│ ├── Yes → Identify conflicting domains, ask clarification
│ └── No → Proceed normally (low ambiguity)
Ambiguity detected
├── Confidence score > 80%?
│ ├── Yes → Trigger validation
│ └── No → Monitor, don't interrupt
├── Query knowledge sources (static → external → codebase)
├── Translation mappings found?
│ ├── Yes, single mapping → Confirm with user
│ ├── Yes, multiple mappings → Present options
│ └── No mappings found → Ask open-ended clarification
└── User confirms → Proceed with precise terminology
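The second tree can be sketched as a small dispatch function. The action names are illustrative labels, not the skill's API:

```python
def handle_ambiguity(confidence, mappings):
    """Decide the next action from a confidence score and candidate mappings,
    following the validation decision tree."""
    if confidence <= 0.8:
        return "monitor"          # below threshold: don't interrupt
    if not mappings:
        return "ask_open_ended"   # no known translation: open-ended clarification
    if len(mappings) == 1:
        return "confirm_single"   # one clear match: confirm with the user
    return "present_options"      # multiple mappings: let the user choose
```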
Key ambiguous terms across all 8 supported domains with precise translations:
"make it talk" → ConversableAgent.send() (single message) vs initiate_chat() (conversation) vs GroupChat setup (multi-agent) "agent" → ConversableAgent (base) vs AssistantAgent (LLM-powered) vs UserProxyAgent (human proxy)
"agent" → ChatAgent (conversation) vs ToolAgent (function-calling) "task" → Langroid Task object (orchestration) vs general task concept
"mcp server" → SSE server (web-based) vs stdio server (process-based) vs HTTP server vs WebSocket server "resource" → MCP resource (data/content exposed by server) vs system resource (CPU/memory) "prompt" → MCP prompt template (structured prompts) vs LLM prompt (text input)
"tool calling" → UTCP universal calling (framework-agnostic) vs framework-specific (OpenAI tools, Anthropic tools) "tool schema" → UTCP universal schema vs framework-specific schema
"dependency" → FastAPI Depends() (dependency injection) vs pip dependency (package) vs architectural dependency (service) "endpoint" → Path operation decorator (@app.get) vs external API endpoint "model" → Pydantic model (validation) vs database model (ORM) vs ML model
"merge" → Merge commit (preserves history) vs squash merge (single commit) vs rebase (linear history) "branch" → Gitflow branch type (feature/release/hotfix/develop/main) vs general branch name
"observability" → Logs (events) vs metrics (measurements) vs traces (request paths) - three pillars "sli" → Availability SLI (uptime %) vs latency SLI (response time) vs error rate SLI (failure %) "incident" → SEV-1 incident (critical outage) vs alert (automated notification) vs degradation (partial failure)
"memory" → Knowledge graph (structured entities/relationships) vs vector memory (embeddings) vs episodic memory (temporal context) vs system RAM "retrieval" → Semantic search (embedding similarity) vs graph traversal (relationship following) vs hybrid approach
"am I making sense?" → Trigger explicit semantic validation across all domains
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
Key concepts to validate:
Common ambiguities:
When domain unclear:
Use confidence scores to determine intervention:
High confidence (>80%): Always validate
Medium confidence (50-80%): Validate if multiple interpretations
Low confidence (<50%): Monitor but don't interrupt
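As a sketch, the three bands map to a simple policy lookup (policy names are illustrative):

```python
def intervention_level(score):
    """Map a confidence score (0-1) to an intervention policy."""
    if score > 0.8:
        return "always_validate"
    if score >= 0.5:
        return "validate_if_multiple_interpretations"
    return "monitor_only"
```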
Contains user phrases mapped to ambiguity scores and contexts.
Query pattern:
term = extract_key_phrase(user_message)
entry = load_json("knowledge/ambiguous-terms.json").get(term)
if entry and entry["ambiguity_score"] > 0.8:
    trigger_validation(term, entry["contexts"])
Contains precise technical translations organized by domain.
Query pattern:
mappings = load_json("knowledge/technical-mappings.json")
domain_mappings = mappings.get(domain, {})
translations = domain_mappings.get(ambiguous_term, [])
present_options(translations)
Contains conceptual relationships between terms across domains.
Query pattern:
graph = load_json("knowledge/ontology-graph.json")
related_concepts = graph.get(concept, {}).get("related", [])
cross_domain = graph.get(concept, {}).get("cross_domain_equivalents", {})
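Putting the three query patterns together, a self-contained sketch using in-memory stand-ins for the JSON files; the data shapes here are illustrative, not the actual file schemas:

```python
# In-memory stand-ins for the three knowledge files.
AMBIGUOUS_TERMS = {"make it talk": {"ambiguity_score": 0.9, "contexts": ["autogen"]}}
TECHNICAL_MAPPINGS = {"autogen": {"make it talk": ["ConversableAgent.send()"]}}
ONTOLOGY_GRAPH = {"agent": {"related": ["task"],
                            "cross_domain_equivalents": {"langroid": "ChatAgent"}}}

def resolve(term, domain):
    """Run the three query patterns in order against the stand-in data."""
    entry = AMBIGUOUS_TERMS.get(term)
    if not entry or entry["ambiguity_score"] <= 0.8:
        return None  # not ambiguous enough to trigger validation
    translations = TECHNICAL_MAPPINGS.get(domain, {}).get(term, [])
    related = ONTOLOGY_GRAPH.get(term, {}).get("related", [])
    return {"translations": translations, "related": related}

print(resolve("make it talk", "autogen"))
```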
Pattern matching utility for ambiguity detection.
Usage:
python scripts/detect-ambiguity.py --message "user message here"
# Returns: confidence score, detected patterns, suggested validation
When to use:
Term translation utility using knowledge files.
Usage:
python scripts/domain-mapper.py --term "make it talk" --domain autogen
# Returns: ranked translations with confidence scores
When to use:
Unified interface to all knowledge sources.
Usage:
python scripts/knowledge-query.py --term "api" --sources static,external,codebase
# Returns: results from each source in order
When to use:
For detailed domain knowledge and advanced patterns:
references/cognitive-framework.md - Complete AGENTS.md framework adapted for Claude Code
references/decision-trees.md - Detailed decision trees and flowcharts
references/domain-ontologies.md - Comprehensive domain knowledge graphs (Autogen, Langroid)
references/translation-patterns.md - Extensive ambiguous→precise mappings

Working examples of semantic validation:
examples/autogen-mappings.md - Autogen-specific ambiguity resolutions
examples/langroid-mappings.md - Langroid-specific examples
examples/common-ambiguities.md - Cross-domain frequent patterns

Domain knowledge JSON files:
knowledge/ambiguous-terms.json - User phrases with ambiguity scores
knowledge/technical-mappings.json - Domain-specific translations
knowledge/ontology-graph.json - Conceptual relationships

"Never ASSUME - it makes an ass out of u and me."
Users have different patterns for expressing uncertainty. Learn from settings:
.claude/semantic-linguist.local.md

Not every message needs validation:
Users can customize via .claude/semantic-linguist.local.md:
Respect user configuration when determining whether to trigger validation.
Core principle: Bridge the gap between human natural language and AI technical precision through systematic semantic validation and conversational clarification.