/plugin marketplace add jcmrs/jcmrs-plugins
/plugin install semantic-linguist@jcmrs/jcmrs-plugins

Defined in hooks/hooks.json
{
  "UserPromptSubmit": [
    {
      "hooks": [
        {
          "type": "prompt",
          "prompt": "# Semantic Ambiguity Detection Hook\n\nAnalyze the user's prompt for semantic ambiguity that could lead to misinterpretation, assumptions, or hallucinations.\n\n## User Prompt\n```\n$USER_PROMPT\n```\n\n## Detection Criteria\n\n### HIGH Confidence Triggers (Always Validate)\n1. **Meta-questions** - User seeking validation:\n - \"am I making sense?\"\n - \"does this make sense?\"\n - \"is this right?\"\n - \"am I doing this right?\"\n\n2. **User self-identification**:\n - \"non-technical user\"\n - \"I'm not technical\"\n - \"beginner\"\n\n3. **Known high-ambiguity terms** (from knowledge base):\n - \"make it talk\" / \"make it work\" / \"make it [X]\"\n - \"we need an api\"\n - \"make it portable\"\n - \"check for gaps\"\n\n### MODERATE Confidence (Validate if Multiple Signals)\n1. **Vague action verbs**:\n - \"do the thing\"\n - \"fix it\"\n - \"create X\" (without specifics)\n\n2. **Generic technical terms without context**:\n - \"agent\" (could be Autogen, Langroid, general)\n - \"task\" (could be Langroid Task, async task, general)\n - \"tool\" (could be function calling, utility, CLI)\n - \"component\", \"service\", \"module\"\n\n3. **Unclear scope**:\n - \"add validation\", \"improve performance\", \"add logging\"\n - Without specifying what type or metric\n\n4. **Domain confusion**:\n - Mixing framework-specific terms\n - Using terms from incompatible domains\n\n5. **Unclear references**:\n - \"that\", \"it\", \"the thing\"\n - Without clear antecedent\n\n## Analysis Process\n\n1. **Check for HIGH confidence triggers**:\n - If meta-question detected → VALIDATE IMMEDIATELY\n - If user self-identifies ambiguity → VALIDATE IMMEDIATELY\n - If known high-ambiguity term → VALIDATE IMMEDIATELY\n\n2. **Calculate confidence score** (0-100):\n - Meta-question: +100 (auto-trigger)\n - Known ambiguous term: +40\n - Vague action verb: +30\n - Generic term without context: +25\n - Domain confusion: +35\n - Unclear reference: +20\n - Recent conversation provides context: -20\n - Specific technical term used: -30\n\n3. **Trigger validation if**:\n - Confidence score > 80 (high confidence)\n - OR multiple MODERATE signals detected\n - OR meta-question detected\n - OR user trigger phrase detected\n\n## Output Format\n\nIf validation NOT needed (low ambiguity):\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"\"\n}\n```\n\nIf validation NEEDED (high ambiguity or trigger detected):\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"⚠️ Semantic validation triggered. Detected: [specific ambiguity]. Load 'semantic-validation' skill and clarify before proceeding. Conversational tone required - never assume, always verify.\"\n}\n```\n\n## Examples\n\n**Example 1: Meta-question (HIGH confidence)**\nUser: \"I want to build a multi-agent system. Am I making sense?\"\nOutput:\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"⚠️ Semantic validation triggered. User meta-question detected: 'Am I making sense?' - indicates user uncertainty. Analyze last 5-10 messages, identify ambiguities in 'multi-agent system' (Autogen GroupChat? Langroid Tasks? General concept?), and validate understanding with user before proceeding.\"\n}\n```\n\n**Example 2: High-ambiguity term (HIGH confidence)**\nUser: \"I need to make the agent talk to other agents\"\nOutput:\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"⚠️ Semantic validation triggered. Detected high-ambiguity term: 'make it talk' (score: 90). Unclear: which framework (Autogen/Langroid?), what type of communication (send()? GroupChat? Task delegation?). Load 'semantic-validation' skill, clarify framework and communication pattern before implementing.\"\n}\n```\n\n**Example 3: Generic term with context (MODERATE - don't trigger)**\nUser: \"In Autogen, I want to create a ConversableAgent that sends messages\"\nOutput:\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"\"\n}\n```\n(No validation needed - specific framework and class mentioned)\n\n**Example 4: Multiple moderate signals (MODERATE - trigger)**\nUser: \"I need to create an agent with tools\"\nOutput:\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"⚠️ Semantic validation triggered. Detected moderate ambiguity (score: 75): 'agent' and 'tools' are multi-domain terms. Could be Autogen (AssistantAgent + register_for_llm), Langroid (ToolAgent + ToolMessage), or other. Load 'semantic-validation' skill and clarify framework before proceeding.\"\n}\n```\n\n**Example 5: Vague scope (MODERATE - trigger)**\nUser: \"Check for gaps in the codebase\"\nOutput:\n```json\n{\n \"continue\": true,\n \"systemMessage\": \"⚠️ Semantic validation triggered. Detected unclear scope: 'check for gaps' (score: 82). Could mean: test coverage gaps, documentation gaps, feature gaps, security gaps, or data gaps. Load 'semantic-validation' skill and clarify which type of gap analysis before proceeding.\"\n}\n```\n\n## Critical Guidelines\n\n1. **Never assume** - If ambiguity detected, trigger validation\n2. **Conversational tone** - System messages should guide, not dictate\n3. **Be specific** - Identify exact ambiguous terms and possible interpretations\n4. **Load skill** - Always mention loading 'semantic-validation' skill when triggered\n5. **Don't block** - Set continue: true (provide guidance, don't stop execution)\n6. **Context-aware** - Consider recent conversation when scoring\n\n## Your Task\n\nAnalyze the user prompt above. If ambiguity is detected with score > 80, multiple moderate signals, or a meta-question/trigger phrase, output the validation trigger. Otherwise, output an empty systemMessage.",
          "timeout": 30
        }
      ],
      "matcher": "*"
    }
  ]
}

{
  "riskFlags": {
    "touchesBash": false,
    "matchAllTools": false,
    "touchesFileWrites": false
  },
  "typeStats": {
    "prompt": 1
  },
  "eventStats": {
    "UserPromptSubmit": 1
  },
  "originCounts": {
    "absolutePaths": 0,
    "pluginScripts": 0,
    "projectScripts": 0
  },
  "timeoutStats": {
    "commandsWithoutTimeout": 0
  }
}
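The additive scoring described in the hook prompt is evaluated by the model itself (the hook is a `prompt` type, not a script), but the rules can be sketched as a standalone function for clarity. Everything below is an illustrative assumption: the pattern lists are a small subset of the prompt's criteria, the weights are taken from its scoring table, and no such code ships with the plugin.

```python
import re

# Illustrative pattern subsets; the real hook prompt lists more of each.
META_QUESTIONS = [r"am i making sense", r"does this make sense",
                  r"is this right", r"am i doing this right"]
AMBIGUOUS_TERMS = [r"make it \w+", r"we need an api", r"check for gaps"]  # +40
VAGUE_VERBS = [r"\bdo the thing\b", r"\bfix it\b"]                        # +30
GENERIC_TERMS = [r"\bagent\b", r"\btask\b", r"\btool\b"]                  # +25
SPECIFIC_TERMS = [r"\bConversableAgent\b", r"\bGroupChat\b"]              # -30

def score_prompt(text: str) -> int:
    """Return a 0-100 ambiguity score mimicking the prompt's weights."""
    t = text.lower()
    if any(re.search(p, t) for p in META_QUESTIONS):
        return 100  # meta-question: auto-trigger
    score = 0
    if any(re.search(p, t) for p in AMBIGUOUS_TERMS):
        score += 40
    if any(re.search(p, t) for p in VAGUE_VERBS):
        score += 30
    if any(re.search(p, t) for p in GENERIC_TERMS):
        score += 25
    if any(re.search(p, text) for p in SPECIFIC_TERMS):  # case-sensitive
        score -= 30
    return max(0, min(100, score))

def should_validate(text: str) -> bool:
    # The real prompt also triggers on multiple MODERATE signals;
    # this sketch checks only the numeric threshold.
    return score_prompt(text) > 80
```

For example, `should_validate("Is this right?")` returns `True` via the meta-question auto-trigger, while a framework-specific request mentioning `ConversableAgent` scores at or near zero and passes through untouched.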